diff -Nru python-scipy-0.7.2+dfsg1/debian/changelog python-scipy-0.8.0+dfsg1/debian/changelog --- python-scipy-0.7.2+dfsg1/debian/changelog 2010-10-26 23:14:44.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/debian/changelog 2011-02-04 01:21:58.000000000 +0000 @@ -1,3 +1,26 @@ +python-scipy (0.8.0+dfsg1-1ubuntu1) natty; urgency=low + + * Merge from debian experimental (LP: #696403). Remaining changes: + - debian/patches/stdc_format_macros.patch: Fix FTBFS issue with python 2.7 + + -- Sameer Morar Thu, 03 Feb 2011 04:28:09 +0000 + +python-scipy (0.8.0+dfsg1-1) experimental; urgency=low + + [ Varun Hiremath ] + * New upstream release + * Build-Depend on python-numpy-* (>= 1:1.5.1) + * Update all the debian/patches/* + + [ Luca Falavigna ] + * Remove myself from Uploaders. + + [ Stefano Rivera ] + * debian/patches/blitz++.patch: Fix scipy.weave.inline compilations. Thanks + to Sameer Morar (Closes: #598520, LP: #302649) + + -- Varun Hiremath Fri, 24 Dec 2010 08:20:54 -0500 + python-scipy (0.7.2+dfsg1-1ubuntu1) natty; urgency=low * Merge from debian unstable (LP: #667001). Remaining changes: diff -Nru python-scipy-0.7.2+dfsg1/debian/control python-scipy-0.8.0+dfsg1/debian/control --- python-scipy-0.7.2+dfsg1/debian/control 2010-10-26 23:13:07.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/debian/control 2011-02-03 13:05:29.000000000 +0000 @@ -3,10 +3,10 @@ Priority: extra Maintainer: Ubuntu Developers XSBC-Original-Maintainer: Debian Python Modules Team -Uploaders: Alexandre Fayolle , Ondrej Certik , David Cournapeau , Luca Falavigna , Varun Hiremath +Uploaders: Alexandre Fayolle , Ondrej Certik , David Cournapeau , Varun Hiremath Build-Depends: debhelper (>= 7.0.50~), python-all-dev (>= 2.5.4-1~), python-all-dbg (>= 2.5.4-1~), python-central (>= 0.6.7), - python-numpy (>= 1:1.4.1-4~), python-numpy-dbg (>= 1:1.2.0), + python-numpy (>= 1:1.5.1), python-numpy-dbg (>= 1:1.5.1), gfortran, sharutils, swig, libsuitesparse-dev (>= 3.1.0-3), libblas-dev | libatlas-base-dev, liblapack-dev | libatlas-base-dev XS-Python-Version: all @@ -19,7 +19,7 @@ Package: python-scipy Architecture: any XB-Python-Version: ${python:Versions} -Depends: ${python:Depends}, python-numpy (>= 1:1.2.0), ${shlibs:Depends}, ${misc:Depends} +Depends: ${python:Depends}, ${shlibs:Depends}, ${misc:Depends} Provides: ${python:Provides} Recommends: g++ | c++-compiler Suggests: python-profiler @@ -38,7 +38,7 @@ Section: debug Architecture: any XB-Python-Version: ${python:Versions} -Depends: ${python:Depends}, python-dbg, ${shlibs:Depends}, ${misc:Depends}, python-scipy (= ${binary:Version}), python-numpy-dbg (>= 1:1.2.0) +Depends: ${python:Depends}, python-dbg, ${shlibs:Depends}, ${misc:Depends}, python-scipy (= ${binary:Version}), python-numpy-dbg (>= 1:1.5.1) Description: scientific tools for Python - debugging symbols SciPy supplements the popular NumPy module (python-numpy package), gathering a variety of high level science and engineering modules together as a single diff -Nru python-scipy-0.7.2+dfsg1/debian/patches/blitz++.patch python-scipy-0.8.0+dfsg1/debian/patches/blitz++.patch --- python-scipy-0.7.2+dfsg1/debian/patches/blitz++.patch 2010-10-26 23:13:07.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/debian/patches/blitz++.patch 2010-12-23 23:40:28.000000000 +0000 @@ -1,10 +1,13 @@ Description: Fixes scipy.weave.inline compalition with g++ 4.3 and upwards -Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/python-scipy/+bug/302649 +Author: Sameer Morar +Forwarded: http://projects.scipy.org/scipy/scipy/ticket/739 +Bug-Debian: 
http://bugs.debian.org/598520 +Bug-Ubuntu: https://launchpad.net/bugs/302649 -Index: python-scipy-0.7.2/scipy/weave/blitz/blitz/blitz.h +Index: python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/blitz.h =================================================================== ---- python-scipy-0.7.2.orig/scipy/weave/blitz/blitz/blitz.h 2010-09-29 13:17:01.539914362 +0200 -+++ python-scipy-0.7.2/scipy/weave/blitz/blitz/blitz.h 2010-09-29 13:17:02.091914362 +0200 +--- python-scipy-0.8.0+dfsg1.orig/scipy/weave/blitz/blitz/blitz.h 2010-07-26 10:48:37.000000000 -0400 ++++ python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/blitz.h 2010-12-23 18:39:27.000000000 -0500 @@ -65,6 +65,8 @@ #define BZ_THROW // Needed in @@ -14,23 +17,23 @@ BZ_NAMESPACE(blitz) #ifdef BZ_HAVE_STD -Index: python-scipy-0.7.2/scipy/weave/blitz/blitz/mathfunc.h +Index: python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/mathfunc.h =================================================================== ---- python-scipy-0.7.2.orig/scipy/weave/blitz/blitz/mathfunc.h 2010-09-29 13:17:01.507914362 +0200 -+++ python-scipy-0.7.2/scipy/weave/blitz/blitz/mathfunc.h 2010-09-29 13:17:02.099914362 +0200 -@@ -12,6 +12,8 @@ - #include - #endif +--- python-scipy-0.8.0+dfsg1.orig/scipy/weave/blitz/blitz/mathfunc.h 2010-07-26 10:48:37.000000000 -0400 ++++ python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/mathfunc.h 2010-12-23 18:39:27.000000000 -0500 +@@ -14,6 +14,8 @@ + + #include +#include + BZ_NAMESPACE(blitz) // abs(P_numtype1) Absolute value -Index: python-scipy-0.7.2/scipy/weave/blitz/blitz/prettyprint.h +Index: python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/prettyprint.h =================================================================== ---- python-scipy-0.7.2.orig/scipy/weave/blitz/blitz/prettyprint.h 2010-09-29 13:20:04.091914362 +0200 -+++ python-scipy-0.7.2/scipy/weave/blitz/blitz/prettyprint.h 2010-09-29 13:18:19.611914362 +0200 +--- python-scipy-0.8.0+dfsg1.orig/scipy/weave/blitz/blitz/prettyprint.h 2010-07-26 10:48:37.000000000 -0400 ++++ python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/prettyprint.h 2010-12-23 18:39:27.000000000 -0500 @@ -22,6 +22,8 @@ #ifndef BZ_PRETTYPRINT_H #define BZ_PRETTYPRINT_H diff -Nru python-scipy-0.7.2+dfsg1/debian/patches/restore_sys_argv.patch python-scipy-0.8.0+dfsg1/debian/patches/restore_sys_argv.patch --- python-scipy-0.7.2+dfsg1/debian/patches/restore_sys_argv.patch 2010-04-05 15:45:11.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/debian/patches/restore_sys_argv.patch 2010-12-23 23:40:28.000000000 +0000 @@ -1,11 +1,11 @@ Description: restore sys.argv in case of exception Bug-Debian: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=500814 -Index: scipy-0.7.1/scipy/weave/build_tools.py +Index: python-scipy-0.8.0+dfsg1/scipy/weave/build_tools.py =================================================================== ---- scipy-0.7.1.orig/scipy/weave/build_tools.py 2010-04-05 16:40:02.377973881 +0200 -+++ scipy-0.7.1/scipy/weave/build_tools.py 2010-04-05 16:41:02.685970432 +0200 -@@ -283,6 +283,9 @@ +--- python-scipy-0.8.0+dfsg1.orig/scipy/weave/build_tools.py 2010-12-23 18:39:55.000000000 -0500 ++++ python-scipy-0.8.0+dfsg1/scipy/weave/build_tools.py 2010-12-23 18:39:58.000000000 -0500 +@@ -284,6 +284,9 @@ configure_python_path(build_dir) except SyntaxError: #TypeError: success = 0 diff -Nru python-scipy-0.7.2+dfsg1/debian/patches/stdc_format_macros.patch python-scipy-0.8.0+dfsg1/debian/patches/stdc_format_macros.patch --- python-scipy-0.7.2+dfsg1/debian/patches/stdc_format_macros.patch 2010-10-26 
23:13:07.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/debian/patches/stdc_format_macros.patch 2011-02-03 03:21:15.000000000 +0000 @@ -1,9 +1,9 @@ Description: Fix FTBFS issue with python 2.7 Bug: http://projects.scipy.org/scipy/ticket/1180 -Index: python-scipy-0.7.2+dfsg1-1ubuntu1/scipy/sparse/sparsetools/SConscript +Index: python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/SConscript =================================================================== ---- python-scipy-0.7.2+dfsg1-1ubuntu1.orig/scipy/sparse/sparsetools/SConscript 2010-10-26 23:04:48.414646002 +0200 -+++ python-scipy-0.7.2+dfsg1-1ubuntu1/scipy/sparse/sparsetools/SConscript 2010-10-26 23:03:22.994646002 +0200 +--- python-scipy-0.8.0+dfsg1.orig/scipy/sparse/sparsetools/SConscript 2011-02-03 03:20:03.478724732 +0000 ++++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/SConscript 2011-02-03 03:18:41.761716263 +0000 @@ -3,6 +3,7 @@ from numscons import GetNumpyEnvironment @@ -12,17 +12,17 @@ for fmt in ['csr','csc','coo','bsr','dia']: sources = [ fmt + '_wrap.cxx' ] -Index: python-scipy-0.7.2+dfsg1-1ubuntu1/scipy/sparse/sparsetools/setup.py +Index: python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/setup.py =================================================================== ---- python-scipy-0.7.2+dfsg1-1ubuntu1.orig/scipy/sparse/sparsetools/setup.py 2010-10-26 23:04:48.522646002 +0200 -+++ python-scipy-0.7.2+dfsg1-1ubuntu1/scipy/sparse/sparsetools/setup.py 2010-10-26 23:04:21.870646002 +0200 -@@ -8,7 +8,8 @@ - +--- python-scipy-0.8.0+dfsg1.orig/scipy/sparse/sparsetools/setup.py 2011-02-03 03:20:03.542727579 +0000 ++++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/setup.py 2011-02-03 03:19:57.876628103 +0000 +@@ -9,7 +9,8 @@ for fmt in ['csr','csc','coo','bsr','dia']: sources = [ fmt + '_wrap.cxx' ] -- config.add_extension('_' + fmt, sources=sources) -+ config.add_extension('_' + fmt, sources=sources, -+ define_macros=[('__STDC_FORMAT_MACROS', 1)]) + depends = [ fmt + '.h' ] +- config.add_extension('_' + fmt, sources=sources, depends=depends) ++ config.add_extension('_' + fmt, sources=sources, ++ define_macros=[('__STDC_FORMAT_MACROS', 1)], depends=depends) return config diff -Nru python-scipy-0.7.2+dfsg1/debian/patches/string_exception.patch python-scipy-0.8.0+dfsg1/debian/patches/string_exception.patch --- python-scipy-0.7.2+dfsg1/debian/patches/string_exception.patch 2010-06-07 13:51:28.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/debian/patches/string_exception.patch 2010-12-23 23:40:28.000000000 +0000 @@ -1,29 +1,29 @@ Description: Do not use string exceptions, not supported by Python 2.6 Origin: Debian -Index: python-scipy-0.7.2/scipy/weave/blitz_tools.py +Index: python-scipy-0.8.0/scipy/weave/blitz_tools.py =================================================================== ---- python-scipy-0.7.2.orig/scipy/weave/blitz_tools.py 2010-06-07 12:48:10.000000000 +0000 -+++ python-scipy-0.7.2/scipy/weave/blitz_tools.py 2010-06-07 12:47:44.000000000 +0000 -@@ -32,7 +32,7 @@ - # of time. It also can cause core-dumps if the sizes of the inputs +--- python-scipy-0.8.0.orig/scipy/weave/blitz_tools.py 2010-07-26 10:48:37.000000000 -0400 ++++ python-scipy-0.8.0/scipy/weave/blitz_tools.py 2010-07-30 16:43:35.000000000 -0400 +@@ -33,7 +33,7 @@ # aren't compatible. if check_size and not size_check.check_expr(expr,local_dict,global_dict): -- raise 'inputs failed to pass size check.' -+ raise Exception('inputs failed to pass size check.') + if sys.version_info < (2, 6): +- raise "inputs failed to pass size check." 
++ raise Exception("inputs failed to pass size check.") + else: + raise ValueError("inputs failed to pass size check.") - # 2. try local cache - try: -Index: python-scipy-0.7.2/scipy/weave/bytecodecompiler.py +Index: python-scipy-0.8.0/scipy/weave/bytecodecompiler.py =================================================================== ---- python-scipy-0.7.2.orig/scipy/weave/bytecodecompiler.py 2010-06-07 12:48:10.000000000 +0000 -+++ python-scipy-0.7.2/scipy/weave/bytecodecompiler.py 2010-06-07 12:48:02.000000000 +0000 -@@ -237,7 +237,7 @@ - elif goto is None: +--- python-scipy-0.8.0.orig/scipy/weave/bytecodecompiler.py 2010-07-26 10:48:37.000000000 -0400 ++++ python-scipy-0.8.0/scipy/weave/bytecodecompiler.py 2010-07-30 16:44:30.000000000 -0400 +@@ -239,7 +239,7 @@ return next # Normal else: -- raise 'xx' -+ raise Exception('xx') + if sys.version_info < (2, 6): +- raise "Executing code failed." ++ raise Exception("Executing code failed.") + else: + raise ValueError("Executing code failed.") - symbols = { 0: 'less', 1: 'lesseq', 2: 'equal', 3: 'notequal', - 4: 'greater', 5: 'greatereq', 6: 'in', 7: 'not in', diff -Nru python-scipy-0.7.2+dfsg1/doc/Makefile python-scipy-0.8.0+dfsg1/doc/Makefile --- python-scipy-0.7.2+dfsg1/doc/Makefile 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/Makefile 2010-07-26 15:48:29.000000000 +0100 @@ -9,6 +9,8 @@ SPHINXBUILD = LANG=C sphinx-build PAPER = +NEED_AUTOSUMMARY = $(shell $(PYTHON) -c 'import sphinx; print sphinx.__version__ < "0.7" and "1" or ""') + # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter @@ -97,9 +99,11 @@ generate: build/generate-stamp build/generate-stamp: $(wildcard source/*.rst) mkdir -p build +ifeq ($(NEED_AUTOSUMMARY),1) $(PYTHON) \ ./sphinxext/autosummary_generate.py source/*.rst \ - -p dump.xml -o source/generated + -p dump.xml -o source/generated +endif touch build/generate-stamp html: generate diff -Nru python-scipy-0.7.2+dfsg1/doc/postprocess.py python-scipy-0.8.0+dfsg1/doc/postprocess.py --- python-scipy-0.7.2+dfsg1/doc/postprocess.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/postprocess.py 2010-07-26 15:48:29.000000000 +0100 @@ -39,7 +39,8 @@ def process_tex(lines): """ - Remove unnecessary section titles from the LaTeX file. + Remove unnecessary section titles from the LaTeX file, + and convert UTF-8 non-breaking spaces to Latex nbsps. """ new_lines = [] diff -Nru python-scipy-0.7.2+dfsg1/doc/release/0.7.1-notes.rst python-scipy-0.8.0+dfsg1/doc/release/0.7.1-notes.rst --- python-scipy-0.7.2+dfsg1/doc/release/0.7.1-notes.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/release/0.7.1-notes.rst 1970-01-01 01:00:00.000000000 +0100 @@ -1,88 +0,0 @@ -========================= -SciPy 0.7.1 Release Notes -========================= - -.. contents:: - -SciPy 0.7.1 is a bug-fix release with no new features compared to 0.7.0. - -scipy.io -======== - -Bugs fixed: - -- Several fixes in Matlab file IO - -scipy.odr -========= - -Bugs fixed: - -- Work around a failure with Python 2.6 - -scipy.signal -============ - -Memory leak in lfilter have been fixed, as well as support for array object - -Bugs fixed: - -- #880, #925: lfilter fixes -- #871: bicgstab fails on Win32 - - -scipy.sparse -============ - -Bugs fixed: - -- #883: scipy.io.mmread with scipy.sparse.lil_matrix broken -- lil_matrix and csc_matrix reject now unexpected sequences, - cf. 
http://thread.gmane.org/gmane.comp.python.scientific.user/19996 - -scipy.special -============= - -Several bugs of varying severity were fixed in the special functions: - -- #503, #640: iv: problems at large arguments fixed by new implementation -- #623: jv: fix errors at large arguments -- #679: struve: fix wrong output for v < 0 -- #803: pbdv produces invalid output -- #804: lqmn: fix crashes on some input -- #823: betainc: fix documentation -- #834: exp1 strange behavior near negative integer values -- #852: jn_zeros: more accurate results for large s, also in jnp/yn/ynp_zeros -- #853: jv, yv, iv: invalid results for non-integer v < 0, complex x -- #854: jv, yv, iv, kv: return nan more consistently when out-of-domain -- #927: ellipj: fix segfault on Windows -- #946: ellpj: fix segfault on Mac OS X/python 2.6 combination. -- ive, jve, yve, kv, kve: with real-valued input, return nan for out-of-domain - instead of returning only the real part of the result. - -Also, when ``scipy.special.errprint(1)`` has been enabled, warning -messages are now issued as Python warnings instead of printing them to -stderr. - - -scipy.stats -=========== - -- linregress, mannwhitneyu, describe: errors fixed -- kstwobign, norm, expon, exponweib, exponpow, frechet, genexpon, rdist, - truncexpon, planck: improvements to numerical accuracy in distributions - -Windows binaries for python 2.6 -=============================== - -python 2.6 binaries for windows are now included. The binary for python 2.5 -requires numpy 1.2.0 or above, and and the one for python 2.6 requires numpy -1.3.0 or above. - -Universal build for scipy -========================= - -Mac OS X binary installer is now a proper universal build, and does not depend -on gfortran anymore (libgfortran is statically linked). The python 2.5 version -of scipy requires numpy 1.2.0 or above, the python 2.6 version requires numpy -1.3.0 or above. diff -Nru python-scipy-0.7.2+dfsg1/doc/release/0.7.2-notes.rst python-scipy-0.8.0+dfsg1/doc/release/0.7.2-notes.rst --- python-scipy-0.7.2+dfsg1/doc/release/0.7.2-notes.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/release/0.7.2-notes.rst 1970-01-01 01:00:00.000000000 +0100 @@ -1,10 +0,0 @@ -========================= -SciPy 0.7.2 Release Notes -========================= - -.. contents:: - -SciPy 0.7.2 is a bug-fix release with no new features compared to 0.7.1. The -only change is that all C sources from Cython code have been regenerated with -Cython 0.12.1. This fixes the incompatibility between binaries of SciPy 0.7.1 -and NumPy 1.4. diff -Nru python-scipy-0.7.2+dfsg1/doc/release/0.8.0-notes.rst python-scipy-0.8.0+dfsg1/doc/release/0.8.0-notes.rst --- python-scipy-0.7.2+dfsg1/doc/release/0.8.0-notes.rst 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/release/0.8.0-notes.rst 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,263 @@ +========================= +SciPy 0.8.0 Release Notes +========================= + +.. contents:: + +SciPy 0.8.0 is the culmination of 17 months of hard work. It contains +many new features, numerous bug-fixes, improved test coverage and +better documentation. There have been a number of deprecations and +API changes in this release, which are documented below. All users +are encouraged to upgrade to this release, as there are a large number +of bug-fixes and optimizations. Moreover, our development attention +will now shift to bug-fix releases on the 0.8.x branch, and on adding +new features on the development trunk. 
This release requires Python +2.4 - 2.6 and NumPy 1.4.1 or greater. + +Please note that SciPy is still considered to have "Beta" status, as +we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a +major milestone in the development of SciPy, after which changing the +package structure or API will be much more difficult. Whilst these +pre-1.0 releases are considered to have "Beta" status, we are +committed to making them as bug-free as possible. + +However, until the 1.0 release, we are aggressively reviewing and +refining the functionality, organization, and interface. This is being +done in an effort to make the package as coherent, intuitive, and +useful as possible. To achieve this, we need help from the community +of users. Specifically, we need feedback regarding all aspects of the +project - everything - from which algorithms we implement, to details +about our function's call signatures. + +Python 3 +======== + +Python 3 compatibility is planned and is currently technically +feasible, since Numpy has been ported. However, since the Python 3 +compatible Numpy 1.5 has not been released yet, support for Python 3 +in Scipy is not yet included in Scipy 0.8. SciPy 0.9, planned for fall +2010, will very likely include experimental support for Python 3. + +Major documentation improvements +================================ + +SciPy documentation is greatly improved. + +Deprecated features +=================== + +Swapping inputs for correlation functions (scipy.signal) +-------------------------------------------------------- + +Concern correlate, correlate2d, convolve and convolve2d. If the second input is +larger than the first input, the inputs are swapped before calling the +underlying computation routine. This behavior is deprecated, and will be +removed in scipy 0.9.0. + +Obsolete code deprecated (scipy.misc) +------------------------------------- + +The modules `helpmod`, `ppimport` and `pexec` from `scipy.misc` are deprecated. +They will be removed from SciPy in version 0.9. + +Additional deprecations +----------------------- + +* linalg: The function `solveh_banded` currently returns a tuple containing + the Cholesky factorization and the solution to the linear system. In + SciPy 0.9, the return value will be just the solution. +* The function `constants.codata.find` will generate a DeprecationWarning. + In Scipy version 0.8.0, the keyword argument 'disp' was added to the + function, with the default value 'True'. In 0.9.0, the default will be + 'False'. +* The `qshape` keyword argument of `signal.chirp` is deprecated. Use + the argument `vertex_zero` instead. +* Passing the coefficients of a polynomial as the argument `f0` to + `signal.chirp` is deprecated. Use the function `signal.sweep_poly` + instead. +* The `io.recaster` module has been deprecated and will be removed in 0.9.0. + +New features +============ + +DCT support (scipy.fftpack) +--------------------------- + +New realtransforms have been added, namely dct and idct for Discrete Cosine +Transform; type I, II and III are available. + +Single precision support for fft functions (scipy.fftpack) +---------------------------------------------------------- + +fft functions can now handle single precision inputs as well: fft(x) will +return a single precision array if x is single precision. + +At the moment, for FFT sizes that are not composites of 2, 3, and 5, the +transform is computed internally in double precision to avoid rounding error in +FFTPACK. 
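
As a quick illustration of the fftpack additions described above, a minimal sketch assuming SciPy >= 0.8.0 and NumPy are importable; the array values are arbitrary and the round-trip check is only indicative::

    import numpy as np
    from scipy import fftpack

    x = np.array([4.0, 3.0, 5.0, 10.0, 5.0, 3.0], dtype=np.float32)

    # Single-precision input now yields a single-precision (complex64) result.
    X = fftpack.fft(x)
    print(X.dtype)

    # DCT-II and its inverse (DCT-III); norm='ortho' matches MATLAB's dct().
    y = fftpack.dct(x, type=2, norm='ortho')
    xr = fftpack.idct(y, type=2, norm='ortho')
    print(np.allclose(x, xr))
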
+ +Correlation functions now implement the usual definition (scipy.signal) +----------------------------------------------------------------------- + +The outputs should now correspond to their matlab and R counterparts, and do +what most people expect if the old_behavior=False argument is passed: + +* correlate, convolve and their 2d counterparts do not swap their inputs + depending on their relative shape anymore; +* correlation functions now conjugate their second argument while computing + the slided sum-products, which correspond to the usual definition of + correlation. + +Additions and modification to LTI functions (scipy.signal) +---------------------------------------------------------- + +* The functions `impulse2` and `step2` were added to `scipy.signal`. + They use the function `scipy.signal.lsim2` to compute the impulse and + step response of a system, respectively. +* The function `scipy.signal.lsim2` was changed to pass any additional + keyword arguments to the ODE solver. + +Improved waveform generators (scipy.signal) +------------------------------------------- + +Several improvements to the `chirp` function in `scipy.signal` were made: + +* The waveform generated when `method="logarithmic"` was corrected; it + now generates a waveform that is also known as an "exponential" or + "geometric" chirp. (See http://en.wikipedia.org/wiki/Chirp.) +* A new `chirp` method, "hyperbolic", was added. +* Instead of the keyword `qshape`, `chirp` now uses the keyword + `vertex_zero`, a boolean. +* `chirp` no longer handles an arbitrary polynomial. This functionality + has been moved to a new function, `sweep_poly`. + +A new function, `sweep_poly`, was added. + +New functions and other changes in scipy.linalg +----------------------------------------------- + +The functions `cho_solve_banded`, `circulant`, `companion`, `hadamard` and +`leslie` were added to `scipy.linalg`. + +The function `block_diag` was enhanced to accept scalar and 1D arguments, +along with the usual 2D arguments. + +New function and changes in scipy.optimize +------------------------------------------ + +The `curve_fit` function has been added; it takes a function and uses +non-linear least squares to fit that to the provided data. + +The `leastsq` and `fsolve` functions now return an array of size one instead of +a scalar when solving for a single parameter. + +New sparse least squares solver +------------------------------- + +The `lsqr` function was added to `scipy.sparse`. `This routine +`_ finds a +least-squares solution to a large, sparse, linear system of equations. + +ARPACK-based sparse SVD +----------------------- + +A naive implementation of SVD for sparse matrices is available in +scipy.sparse.linalg.eigen.arpack. It is based on using an symmetric solver on +, and as such may not be very precise. + +Alternative behavior available for `scipy.constants.find` +--------------------------------------------------------- + +The keyword argument `disp` was added to the function `scipy.constants.find`, +with the default value `True`. When `disp` is `True`, the behavior is the +same as in Scipy version 0.7. When `False`, the function returns the list of +keys instead of printing them. (In SciPy version 0.9, the default will be +reversed.) + +Incomplete sparse LU decompositions +----------------------------------- + +Scipy now wraps SuperLU version 4.0, which supports incomplete sparse LU +decompositions. These can be accessed via `scipy.sparse.linalg.spilu`. +Upgrade to SuperLU 4.0 also fixes some known bugs. 
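
As a concrete illustration of the new `curve_fit` function mentioned above, a small sketch assuming SciPy >= 0.8.0 and NumPy; the model, data and starting values are made up for the example::

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        # Illustrative model: exponential decay with two free parameters.
        return a * np.exp(-b * x)

    xdata = np.linspace(0.0, 4.0, 50)
    ydata = model(xdata, 2.5, 1.3) + 0.05 * np.random.randn(xdata.size)

    # Non-linear least squares fit; popt holds the fitted (a, b),
    # pcov their estimated covariance matrix.
    popt, pcov = curve_fit(model, xdata, ydata, p0=(1.0, 1.0))
    print(popt)
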
+ +Faster matlab file reader and default behavior change +------------------------------------------------------ + +We've rewritten the matlab file reader in Cython and it should now read +matlab files at around the same speed that Matlab does. + +The reader reads matlab named and anonymous functions, but it can't +write them. + +Until scipy 0.8.0 we have returned arrays of matlab structs as numpy +object arrays, where the objects have attributes named for the struct +fields. As of 0.8.0, we return matlab structs as numpy structured +arrays. You can get the older behavior by using the optional +``struct_as_record=False`` keyword argument to `scipy.io.loadmat` and +friends. + +There is an inconsistency in the matlab file writer, in that it writes +numpy 1D arrays as column vectors in matlab 5 files, and row vectors in +matlab 4 files. We will change this in the next version, so both write +row vectors. There is a `FutureWarning` when calling the writer to warn +of this change; for now we suggest using the ``oned_as='row'`` keyword +argument to `scipy.io.savemat` and friends. + +Faster evaluation of orthogonal polynomials +------------------------------------------- + +Values of orthogonal polynomials can be evaluated with new vectorized functions +in `scipy.special`: `eval_legendre`, `eval_chebyt`, `eval_chebyu`, +`eval_chebyc`, `eval_chebys`, `eval_jacobi`, `eval_laguerre`, +`eval_genlaguerre`, `eval_hermite`, `eval_hermitenorm`, +`eval_gegenbauer`, `eval_sh_legendre`, `eval_sh_chebyt`, +`eval_sh_chebyu`, `eval_sh_jacobi`. This is faster than constructing the +full coefficient representation of the polynomials, which was previously the +only available way. + +Note that the previous orthogonal polynomial routines will now also invoke this +feature, when possible. + +Lambert W function +------------------ + +`scipy.special.lambertw` can now be used for evaluating the Lambert W +function. + +Improved hypergeometric 2F1 function +------------------------------------ + +Implementation of `scipy.special.hyp2f1` for real parameters was revised. +The new version should produce accurate values for all real parameters. + +More flexible interface for Radial basis function interpolation +--------------------------------------------------------------- + +The `scipy.interpolate.Rbf` class now accepts a callable as input for the +"function" argument, in addition to the built-in radial basis functions which +can be selected with a string argument. + +Removed features +================ + +scipy.stsci: the package was removed + +The module `scipy.misc.limits` was removed. + +scipy.io +-------- + +The IO code in both NumPy and SciPy is being extensively +reworked. NumPy will be where basic code for reading and writing NumPy +arrays is located, while SciPy will house file readers and writers for +various data formats (data, audio, video, images, matlab, etc.). + +Several functions in `scipy.io` are removed in the 0.8.0 release including: +`npfile`, `save`, `load`, `create_module`, `create_shelf`, +`objload`, `objsave`, `fopen`, `read_array`, `write_array`, +`fread`, `fwrite`, `bswap`, `packbits`, `unpackbits`, and +`convert_objectarray`. Some of these functions have been replaced by NumPy's +raw reading and writing capabilities, memory-mapping capabilities, or array +methods. Others have been moved from SciPy to NumPy, since basic array reading +and writing capability is now handled by NumPy. 
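
To make the `scipy.io` keyword arguments described in the notes above concrete, a short sketch assuming SciPy >= 0.8.0, NumPy, and a writable working directory; 'example.mat' is just a placeholder file name::

    import numpy as np
    import scipy.io as sio

    a = np.arange(12)

    # Be explicit about how 1D arrays are written, which also avoids the
    # FutureWarning mentioned above: store them as row vectors.
    sio.savemat('example.mat', {'a': a}, oned_as='row')

    # struct_as_record=True is the new 0.8.0 default for MATLAB structs;
    # pass struct_as_record=False to recover the pre-0.8.0 object arrays.
    contents = sio.loadmat('example.mat', struct_as_record=True)
    print(contents['a'].shape)   # (1, 12) because of oned_as='row'
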
diff -Nru python-scipy-0.7.2+dfsg1/doc/source/conf.py python-scipy-0.8.0+dfsg1/doc/source/conf.py --- python-scipy-0.7.2+dfsg1/doc/source/conf.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/conf.py 2010-07-26 15:48:29.000000000 +0100 @@ -1,11 +1,11 @@ # -*- coding: utf-8 -*- -import sys, os +import sys, os, re # If your extensions are in another directory, add it here. If the directory # is relative to the documentation root, use os.path.abspath to make it # absolute, like shown here. -sys.path.append(os.path.abspath('../sphinxext')) +sys.path.insert(0, os.path.abspath('../sphinxext')) # Check Sphinx version import sphinx @@ -20,8 +20,14 @@ # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.pngmath', 'numpydoc', - 'phantom_import', 'autosummary', 'sphinx.ext.intersphinx', - 'sphinx.ext.coverage', 'only_directives', 'plot_directive'] + 'sphinx.ext.intersphinx', 'sphinx.ext.coverage', 'plot_directive'] + +if sphinx.__version__ >= "0.7": + extensions.append('sphinx.ext.autosummary') +else: + extensions.append('autosummary') + extensions.append('only_directives') + # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] @@ -34,15 +40,24 @@ # General substitutions. project = 'SciPy' -copyright = '2008-2010, The Scipy community' +copyright = '2008-2009, The Scipy community' # The default replacements for |version| and |release|, also used in various # other places throughout the built documents. # -# The short X.Y version. -version = '0.7' +import scipy +# The short X.Y version (including the .devXXXX suffix if present) +version = re.sub(r'^(\d+\.\d+)\.\d+(.*)', r'\1\2', scipy.__version__) +if 'dev' in version: + # retain the .dev suffix, but clean it up + version = re.sub(r'(\.dev\d*).*?$', r'\1', version) +else: + # strip all other suffixes + version = re.sub(r'^(\d+\.\d+).*?$', r'\1', version) # The full version, including alpha/beta/rc tags. -release = '0.7.2' +release = scipy.__version__ + +print "Scipy (VERSION %s) (RELEASE %s)" % (version, release) # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: @@ -168,6 +183,7 @@ # Additional stuff for the LaTeX preamble. 
latex_preamble = r''' \usepackage{amsmath} +\DeclareUnicodeCharacter{00A0}{\nobreakspace} % In the parameters section, place a newline after the Parameters % header @@ -214,6 +230,14 @@ #numpydoc_edit_link = '`Edit `__' # ----------------------------------------------------------------------------- +# Autosummary +# ----------------------------------------------------------------------------- + +if sphinx.__version__ >= "0.7": + import glob + autosummary_generate = glob.glob("*.rst") + +# ----------------------------------------------------------------------------- # Coverage checker # ----------------------------------------------------------------------------- coverage_ignore_modules = r""" @@ -234,8 +258,29 @@ # Plot #------------------------------------------------------------------------------ plot_pre_code = """ -import numpy -numpy.random.seed(123) +import numpy as np +import scipy as sp +np.random.seed(123) """ -plot_output_dir = '_static/plot_directive' plot_include_source = True +plot_formats = [('png', 100), 'pdf'] + +import math +phi = (math.sqrt(5) + 1)/2 + +import matplotlib +matplotlib.rcParams.update({ + 'font.size': 8, + 'axes.titlesize': 8, + 'axes.labelsize': 8, + 'xtick.labelsize': 8, + 'ytick.labelsize': 8, + 'legend.fontsize': 8, + 'figure.figsize': (3*phi, 3), + 'figure.subplot.bottom': 0.2, + 'figure.subplot.left': 0.2, + 'figure.subplot.right': 0.9, + 'figure.subplot.top': 0.85, + 'figure.subplot.wspace': 0.4, + 'text.usetex': False, +}) diff -Nru python-scipy-0.7.2+dfsg1/doc/source/constants.rst python-scipy-0.8.0+dfsg1/doc/source/constants.rst --- python-scipy-0.7.2+dfsg1/doc/source/constants.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/constants.rst 2010-07-26 15:48:29.000000000 +0100 @@ -567,9 +567,9 @@ ----- ==================== ======================================================= -``dyn`` one dyne in watts -``lbf`` one pound force in watts -``kgf`` one kilogram force in watts +``dyn`` one dyne in newtons +``lbf`` one pound force in newtons +``kgf`` one kilogram force in newtons ==================== ======================================================= Optics diff -Nru python-scipy-0.7.2+dfsg1/doc/source/fftpack.rst python-scipy-0.8.0+dfsg1/doc/source/fftpack.rst --- python-scipy-0.7.2+dfsg1/doc/source/fftpack.rst 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/doc/source/fftpack.rst 2010-07-26 15:48:29.000000000 +0100 @@ -1,11 +1,10 @@ -========================================= Fourier transforms (:mod:`scipy.fftpack`) ========================================= .. module:: scipy.fftpack Fast Fourier transforms -======================= +----------------------- .. autosummary:: :toctree: generated/ @@ -20,7 +19,7 @@ irfft Differential and pseudo-differential operators -============================================== +---------------------------------------------- .. autosummary:: :toctree: generated/ @@ -37,7 +36,7 @@ shift Helper functions -================ +---------------- .. autosummary:: :toctree: generated/ @@ -48,7 +47,7 @@ rfftfreq Convolutions (:mod:`scipy.fftpack.convolve`) -============================================ +-------------------------------------------- .. module:: scipy.fftpack.convolve @@ -61,8 +60,8 @@ destroy_convolve_cache -:mod:`scipy.fftpack._fftpack` -============================= +Other (:mod:`scipy.fftpack._fftpack`) +------------------------------------- .. 
module:: scipy.fftpack._fftpack diff -Nru python-scipy-0.7.2+dfsg1/doc/source/index.rst python-scipy-0.8.0+dfsg1/doc/source/index.rst --- python-scipy-0.7.2+dfsg1/doc/source/index.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/index.rst 2010-07-26 15:48:29.000000000 +0100 @@ -1,6 +1,5 @@ -##### SciPy -##### +===== :Release: |version| :Date: |today| @@ -42,5 +41,4 @@ spatial special stats - stsci weave diff -Nru python-scipy-0.7.2+dfsg1/doc/source/interpolate.rst python-scipy-0.8.0+dfsg1/doc/source/interpolate.rst --- python-scipy-0.7.2+dfsg1/doc/source/interpolate.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/interpolate.rst 2010-07-26 15:48:29.000000000 +0100 @@ -41,6 +41,7 @@ The above univariate spline classes have the following methods: + .. autosummary:: :toctree: generated/ @@ -53,6 +54,7 @@ UnivariateSpline.get_residual UnivariateSpline.set_smoothing_factor + Low-level interface to FITPACK functions: .. autosummary:: diff -Nru python-scipy-0.7.2+dfsg1/doc/source/linalg.rst python-scipy-0.8.0+dfsg1/doc/source/linalg.rst --- python-scipy-0.7.2+dfsg1/doc/source/linalg.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/linalg.rst 2010-07-26 15:48:29.000000000 +0100 @@ -20,8 +20,8 @@ pinv pinv2 -Eigenvalues and Decompositions -============================== +Eigenvalue Problem +================== .. autosummary:: :toctree: generated/ @@ -32,6 +32,13 @@ eigvalsh eig_banded eigvals_banded + +Decompositions +============== + +.. autosummary:: + :toctree: generated/ + lu lu_factor lu_solve @@ -43,6 +50,7 @@ cholesky_banded cho_factor cho_solve + cho_solve_banded qr schur rsf2csf @@ -68,16 +76,20 @@ sqrtm funm -Iterative linear systems solutions -================================== +Special Matrices +================ .. autosummary:: :toctree: generated/ - cg - cgs - qmr - gmres - bicg - bicgstab - + block_diag + circulant + companion + hadamard + hankel + kron + leslie + toeplitz + tri + tril + triu diff -Nru python-scipy-0.7.2+dfsg1/doc/source/maxentropy.rst python-scipy-0.8.0+dfsg1/doc/source/maxentropy.rst --- python-scipy-0.7.2+dfsg1/doc/source/maxentropy.rst 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/doc/source/maxentropy.rst 2010-07-26 15:48:29.000000000 +0100 @@ -4,36 +4,40 @@ .. automodule:: scipy.maxentropy - Models ====== +.. autoclass:: scipy.maxentropy.basemodel + +.. autosummary:: + :toctree: generated/ + + basemodel.beginlogging + basemodel.endlogging + basemodel.clearcache + basemodel.crossentropy + basemodel.dual + basemodel.fit + basemodel.grad + basemodel.log + basemodel.logparams + basemodel.normconst + basemodel.reset + basemodel.setcallback + basemodel.setparams + basemodel.setsmooth -.. autoclass:: model +.. autoclass:: scipy.maxentropy.model .. autosummary:: :toctree: generated/ - model.beginlogging - model.endlogging - model.clearcache - model.crossentropy - model.dual - model.fit - model.grad - model.log - model.logparams - model.normconst - model.reset - model.setcallback - model.setparams - model.setsmooth model.expectations model.lognormconst model.logpmf model.pmf_function model.setfeaturesandsamplespace -.. autoclass:: bigmodel +.. autoclass:: scipy.maxentropy.bigmodel .. autosummary:: :toctree: generated/ @@ -48,7 +52,7 @@ bigmodel.stochapprox bigmodel.test -.. autoclass:: conditionalmodel +.. autoclass:: scipy.maxentropy.conditionalmodel .. 
autosummary:: :toctree: generated/ diff -Nru python-scipy-0.7.2+dfsg1/doc/source/optimize.rst python-scipy-0.8.0+dfsg1/doc/source/optimize.rst --- python-scipy-0.7.2+dfsg1/doc/source/optimize.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/optimize.rst 2010-07-26 15:48:29.000000000 +0100 @@ -30,6 +30,7 @@ fmin_l_bfgs_b fmin_tnc fmin_cobyla + fmin_slsqp nnls Global @@ -52,6 +53,14 @@ bracket brent +Fitting +======= + +.. autosummary:: + :toctree: generated/ + + curve_fit + Root finding ============ diff -Nru python-scipy-0.7.2+dfsg1/doc/source/release.rst python-scipy-0.8.0+dfsg1/doc/source/release.rst --- python-scipy-0.7.2+dfsg1/doc/source/release.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/release.rst 2010-07-26 15:48:29.000000000 +0100 @@ -2,4 +2,4 @@ Release Notes ************* -.. include:: ../release/0.7.0-notes.rst +.. include:: ../release/0.8.0-notes.rst diff -Nru python-scipy-0.7.2+dfsg1/doc/source/signal.rst python-scipy-0.8.0+dfsg1/doc/source/signal.rst --- python-scipy-0.7.2+dfsg1/doc/source/signal.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/signal.rst 2010-07-26 15:48:29.000000000 +0100 @@ -39,18 +39,20 @@ order_filter medfilt - medfilt2 + medfilt2d wiener symiirorder1 symiirorder2 lfilter + lfiltic deconvolve hilbert get_window + decimate detrend resample @@ -60,12 +62,14 @@ .. autosummary:: :toctree: generated/ - remez + bilinear firwin - iirdesign - iirfilter freqs freqz + iirdesign + iirfilter + kaiserord + remez unique_roots residue @@ -96,11 +100,14 @@ lti lsim + lsim2 impulse + impulse2 step + step2 -LTI Reresentations -================== +LTI Representations +=================== .. autosummary:: :toctree: generated/ @@ -118,10 +125,11 @@ .. autosummary:: :toctree: generated/ + chirp + gausspulse sawtooth square - gausspulse - chirp + sweep_poly Window functions ================ @@ -129,22 +137,24 @@ .. autosummary:: :toctree: generated/ - boxcar - triang - parzen - bohman + get_window + barthann + bartlett blackman blackmanharris - nuttall + bohman + boxcar + chebwin flattop - bartlett - hann - barthann - hamming - kaiser gaussian general_gaussian + hamming + hann + kaiser + nuttall + parzen slepian + triang Wavelets ======== @@ -152,6 +162,7 @@ .. autosummary:: :toctree: generated/ + cascade daub + morlet qmf - cascade diff -Nru python-scipy-0.7.2+dfsg1/doc/source/special.rst python-scipy-0.8.0+dfsg1/doc/source/special.rst --- python-scipy-0.7.2+dfsg1/doc/source/special.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/special.rst 2010-07-26 15:48:29.000000000 +0100 @@ -160,7 +160,7 @@ sph_kn sph_inkn -Ricatti-Bessel Functions +Riccati-Bessel Functions ^^^^^^^^^^^^^^^^^^^^^^^^ These are not universal functions: @@ -244,6 +244,7 @@ psi rgamma polygamma + multigammaln Error Function and Fresnel Integrals @@ -292,24 +293,33 @@ Orthogonal polynomials ---------------------- -These functions all return a polynomial class which can then be -evaluated: ``vals = chebyt(n)(x)``. +The following functions evaluate values of orthogonal polynomials: -The class also has an attribute 'weights' which return the roots, -weights, and total weights for the appropriate form of Gaussian -quadrature. These are returned in an n x 3 array with roots in -the first column, weights in the second column, and total weights -in the final column. - -.. warning:: - - Evaluating large-order polynomials using these functions can be - numerically unstable. +.. 
autosummary:: + :toctree: generated/ - The reason is that the functions below return polynomials as - `numpy.poly1d` objects, which represent the polynomial in terms - of their coefficients, and this can result to loss of precision - when the polynomial terms are summed. + eval_legendre + eval_chebyt + eval_chebyu + eval_chebyc + eval_chebys + eval_jacobi + eval_laguerre + eval_genlaguerre + eval_hermite + eval_hermitenorm + eval_gegenbauer + eval_sh_legendre + eval_sh_chebyt + eval_sh_chebyu + eval_sh_jacobi + +The functions below, in turn, return :ref:`orthopoly1d` objects, which +functions similarly as :ref:`numpy.poly1d`. The :ref:`orthopoly1d` +class also has an attribute ``weights`` which returns the roots, weights, +and total weights for the appropriate form of Gaussian quadrature. +These are returned in an ``n x 3`` array with roots in the first column, +weights in the second column, and total weights in the final column. .. autosummary:: :toctree: generated/ @@ -330,6 +340,17 @@ sh_chebyu sh_jacobi +.. warning:: + + Large-order polynomials obtained from these functions + are numerically unstable. + + ``orthopoly1d`` objects are converted to ``poly1d``, when doing + arithmetic. ``numpy.poly1d`` works in power basis and cannot + represent high-order polynomials accurately, which can cause + significant inaccuracy. + + Hypergeometric Functions ------------------------ @@ -467,6 +488,7 @@ shichi sici spence + lambertw zeta zetac diff -Nru python-scipy-0.7.2+dfsg1/doc/source/stats.rst python-scipy-0.8.0+dfsg1/doc/source/stats.rst --- python-scipy-0.7.2+dfsg1/doc/source/stats.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/stats.rst 2010-07-26 15:48:29.000000000 +0100 @@ -33,14 +33,6 @@ rv_discrete.isf rv_discrete.stats - -Masked statistics functions -=========================== - -.. toctree:: - - stats.mstats - Continuous distributions ======================== @@ -219,7 +211,6 @@ :toctree: generated/ f_oneway - paired pearsonr spearmanr pointbiserialr @@ -235,7 +226,7 @@ kstest chisquare ks_2samp - meanwhitneyu + mannwhitneyu tiecorrect ranksums wilcoxon @@ -272,6 +263,15 @@ ppcc_max ppcc_plot + +Masked statistics functions +=========================== + +.. toctree:: + + stats.mstats + + Univariate and multivariate kernel density estimation (:mod:`scipy.stats.kde`) ============================================================================== diff -Nru python-scipy-0.7.2+dfsg1/doc/source/stsci.rst python-scipy-0.8.0+dfsg1/doc/source/stsci.rst --- python-scipy-0.7.2+dfsg1/doc/source/stsci.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/stsci.rst 1970-01-01 01:00:00.000000000 +0100 @@ -1,40 +0,0 @@ -============================================================= -Image Array Manipulation and Convolution (:mod:`scipy.stsci`) -============================================================= - -.. module:: scipy.stsci - -Image Array manipulation Functions (:mod:`scipy.stsci.image`) -============================================================= - -.. module:: scipy.stsci.image - -.. autosummary:: - :toctree: generated/ - - average - combine - median - minimum - threshhold - translate - - -Image Array Convolution Functions (:mod:`scipy.stsci.convolve`) -=============================================================== - -.. module:: scipy.stsci.convolve - -.. 
autosummary:: - :toctree: generated/ - - boxcar - convolution_modes - convolve - convolve2d - correlate - correlate2d - cross_correlate - dft - iraf_frame - pix_modes diff -Nru python-scipy-0.7.2+dfsg1/doc/source/_templates/autosummary/class.rst python-scipy-0.8.0+dfsg1/doc/source/_templates/autosummary/class.rst --- python-scipy-0.7.2+dfsg1/doc/source/_templates/autosummary/class.rst 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/_templates/autosummary/class.rst 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,23 @@ +{% extends "!autosummary/class.rst" %} + +{% block methods %} +{% if methods %} + .. HACK + .. autosummary:: + :toctree: + {% for item in methods %} + {{ name }}.{{ item }} + {%- endfor %} +{% endif %} +{% endblock %} + +{% block attributes %} +{% if attributes %} + .. HACK + .. autosummary:: + :toctree: + {% for item in attributes %} + {{ name }}.{{ item }} + {%- endfor %} +{% endif %} +{% endblock %} diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/basic.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/basic.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/basic.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/basic.rst 2010-07-26 15:48:29.000000000 +0100 @@ -138,7 +138,7 @@ [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], - [0, 1, 2, 3, 4]]]) + [0, 1, 2, 3, 4]]]) >>> mgrid[0:5:4j,0:5:4j] array([[[ 0. , 0. , 0. , 0. ], [ 1.6667, 1.6667, 1.6667, 1.6667], @@ -254,7 +254,7 @@ functions :func:`angle`, and :obj:`unwrap` are also useful. Also, the :obj:`linspace` and :obj:`logspace` functions return equally spaced samples in a linear or log scale. Finally, it's useful to be aware of the indexing -capabilities of Numpy.mention should be made of the new +capabilities of Numpy. Mention should be made of the new function :obj:`select` which extends the functionality of :obj:`where` to include multiple conditions and multiple choices. 
The calling convention is ``select(condlist,choicelist,default=0).`` :obj:`select` is diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/examples/normdiscr_plot1.py python-scipy-0.8.0+dfsg1/doc/source/tutorial/examples/normdiscr_plot1.py --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/examples/normdiscr_plot1.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/examples/normdiscr_plot1.py 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,45 @@ +import numpy as np +import matplotlib.pyplot as plt +from scipy import stats + +npoints = 20 # number of integer support points of the distribution minus 1 +npointsh = npoints / 2 +npointsf = float(npoints) +nbound = 4 #bounds for the truncated normal +normbound = (1 + 1 / npointsf) * nbound #actual bounds of truncated normal +grid = np.arange(-npointsh, npointsh+2, 1) #integer grid +gridlimitsnorm = (grid-0.5) / npointsh * nbound #bin limits for the truncnorm +gridlimits = grid - 0.5 +grid = grid[:-1] +probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound)) +gridint = grid +normdiscrete = stats.rv_discrete( + values=(gridint, np.round(probs, decimals=7)), + name='normdiscrete') + + +n_sample = 500 +np.random.seed(87655678) #fix the seed for replicability +rvs = normdiscrete.rvs(size=n_sample) +rvsnd=rvs +f,l = np.histogram(rvs, bins=gridlimits) +sfreq = np.vstack([gridint, f, probs*n_sample]).T +fs = sfreq[:,1] / float(n_sample) +ft = sfreq[:,2] / float(n_sample) +nd_std = np.sqrt(normdiscrete.stats(moments='v')) + +ind = gridint # the x locations for the groups +width = 0.35 # the width of the bars + +plt.subplot(111) +rects1 = plt.bar(ind, ft, width, color='b') +rects2 = plt.bar(ind+width, fs, width, color='r') +normline = plt.plot(ind+width/2.0, stats.norm.pdf(ind, scale=nd_std), + color='b') + +plt.ylabel('Frequency') +plt.title('Frequency and Probability of normdiscrete') +plt.xticks(ind+width, ind ) +plt.legend((rects1[0], rects2[0]), ('true', 'sample')) + +plt.show() diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/examples/normdiscr_plot2.py python-scipy-0.8.0+dfsg1/doc/source/tutorial/examples/normdiscr_plot2.py --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/examples/normdiscr_plot2.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/examples/normdiscr_plot2.py 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,48 @@ +import numpy as np +import matplotlib.pyplot as plt +from scipy import stats + +npoints = 20 # number of integer support points of the distribution minus 1 +npointsh = npoints / 2 +npointsf = float(npoints) +nbound = 4 #bounds for the truncated normal +normbound = (1 + 1 / npointsf) * nbound #actual bounds of truncated normal +grid = np.arange(-npointsh, npointsh+2,1) #integer grid +gridlimitsnorm = (grid - 0.5) / npointsh * nbound #bin limits for the truncnorm +gridlimits = grid - 0.5 +grid = grid[:-1] +probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound)) +gridint = grid +normdiscrete = stats.rv_discrete( + values=(gridint, np.round(probs, decimals=7)), + name='normdiscrete') + +n_sample = 500 +np.random.seed(87655678) #fix the seed for replicability +rvs = normdiscrete.rvs(size=n_sample) +rvsnd = rvs +f,l = np.histogram(rvs,bins=gridlimits) +sfreq = np.vstack([gridint,f,probs*n_sample]).T +fs = sfreq[:,1] / float(n_sample) +ft = sfreq[:,2] / float(n_sample) +fs = sfreq[:,1].cumsum() / float(n_sample) +ft = sfreq[:,2].cumsum() / float(n_sample) +nd_std = np.sqrt(normdiscrete.stats(moments='v')) + + +ind = 
gridint # the x locations for the groups +width = 0.35 # the width of the bars + +plt.figure() +plt.subplot(111) +rects1 = plt.bar(ind, ft, width, color='b') +rects2 = plt.bar(ind+width, fs, width, color='r') +normline = plt.plot(ind+width/2.0, stats.norm.cdf(ind+0.5,scale=nd_std), + color='b') + +plt.ylabel('cdf') +plt.title('Cumulative Frequency and CDF of normdiscrete') +plt.xticks(ind+width, ind ) +plt.legend( (rects1[0], rects2[0]), ('true', 'sample') ) + +plt.show() diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/fftpack.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/fftpack.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/fftpack.rst 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/fftpack.rst 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,145 @@ +Fourier Transforms (:mod:`scipy.fftpack`) +========================================= + +.. sectionauthor:: Scipy Developers + +.. currentmodule:: scipy.fftpack + +.. warning:: + + This is currently a stub page + + +.. contents:: + + +Fourier analysis is fundamentally a method for expressing a function as a +sum of periodic components, and for recovering the signal from those +components. When both the function and its Fourier transform are +replaced with discretized counterparts, it is called the discrete Fourier +transform (DFT). The DFT has become a mainstay of numerical computing in +part because of a very fast algorithm for computing it, called the Fast +Fourier Transform (FFT), which was known to Gauss (1805) and was brought +to light in its current form by Cooley and Tukey [CT]_. Press et al. [NR]_ +provide an accessible introduction to Fourier analysis and its +applications. + + +Fast Fourier transforms +----------------------- + +One dimensional discrete Fourier transforms +------------------------------------------- + +fft, ifft, rfft, irfft + + +Two and n dimensional discrete Fourier transforms +------------------------------------------------- + +fft in more than one dimension + + +Discrete Cosine Transforms +-------------------------- + + +Return the Discrete Cosine Transform [Mak]_ of arbitrary type sequence ``x``. + +For a single dimension array ``x``, ``dct(x, norm='ortho')`` is equal to +matlab ``dct(x)``. + +There are theoretically 8 types of the DCT [WP]_, only the first 3 types are +implemented in scipy. 'The' DCT generally refers to DCT type 2, and 'the' +Inverse DCT generally refers to DCT type 3. + +type I +~~~~~~ + +There are several definitions of the DCT-I; we use the following +(for ``norm=None``): + +.. math:: + :nowrap: + + \[ y_k = x_0 + (-1)^k x_{N-1} + 2\sum_{n=1}^{N-2} x_n + \cos\left({\pi nk\over N-1}\right), + \qquad 0 \le k < N. \] + +Only None is supported as normalization mode for DCT-I. Note also that the +DCT-I is only supported for input size > 1 + +type II +~~~~~~~ + +There are several definitions of the DCT-II; we use the following +(for ``norm=None``): + +.. math:: + :nowrap: + + \[ y_k = 2 \sum_{n=0}^{N-1} x_n + \cos \left({\pi(2n+1)k \over 2N} \right) + \qquad 0 \le k < N.\] + +If ``norm='ortho'``, :math:`y_k` is multiplied by a scaling factor `f`: + +.. math:: + :nowrap: + + \[f = \begin{cases} \sqrt{1/(4N)}, & \text{if $k = 0$} \\ + \sqrt{1/(2N)}, & \text{otherwise} \end{cases} \] + + +Which makes the corresponding matrix of coefficients orthonormal +(`OO' = Id`). + +type III +~~~~~~~~ + +There are several definitions, we use the following +(for ``norm=None``): + +.. 
math:: + :nowrap: + + \[ y_k = x_0 + 2 \sum_{n=1}^{N-1} x_n + \cos\left({\pi n(2k+1) \over 2N}\right) + \qquad 0 \le k < N,\] + +or, for ``norm='ortho'``: + +.. math:: + :nowrap: + + \[ y_k = {x_0\over\sqrt{N}} + {1\over\sqrt{N}} \sum_{n=1}^{N-1} + x_n \cos\left({\pi n(2k+1) \over 2N}\right) + \qquad 0 \le k < N.\] + +The (unnormalized) DCT-III is the inverse of the (unnormalized) DCT-II, up +to a factor `2N`. The orthonormalized DCT-III is exactly the inverse of the +orthonormalized DCT-II. + +References +~~~~~~~~~~ + +.. [CT] Cooley, James W., and John W. Tukey, 1965, "An algorithm for the + machine calculation of complex Fourier series," *Math. Comput.* + 19: 297-301. + +.. [NR] Press, W., Teukolsky, S., Vetterline, W.T., and Flannery, B.P., + 2007, *Numerical Recipes: The Art of Scientific Computing*, ch. + 12-13. Cambridge Univ. Press, Cambridge, UK. + +.. [Mak] J. Makhoul, 1980, 'A Fast Cosine Transform in One and Two Dimensions', + `IEEE Transactions on acoustics, speech and signal processing` + vol. 28(1), pp. 27-34, http://dx.doi.org/10.1109/TASSP.1980.1163351 + +.. [WP] http://en.wikipedia.org/wiki/Discrete_cosine_transform + + +FFT convolution +--------------- + +scipy.fftpack.convolve performs a convolution of two one-dimensional +arrays in frequency domain. diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/general.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/general.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/general.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/general.rst 2010-07-26 15:48:29.000000000 +0100 @@ -52,9 +52,9 @@ .. currentmodule:: scipy -================== ===================================================================== +================== ====================================================== Subpackage Description -================== ===================================================================== +================== ====================================================== :mod:`cluster` Clustering algorithms :mod:`constants` Physical and mathematical constants :mod:`fftpack` Fast Fourier Transform routines @@ -72,7 +72,7 @@ :mod:`special` Special functions :mod:`stats` Statistical distributions and functions :mod:`weave` C/C++ integration -================== ===================================================================== +================== ====================================================== Scipy sub-packages need to be imported separately, for example:: @@ -92,10 +92,10 @@ Scipy and Numpy have HTML and PDF versions of their documentation available at http://docs.scipy.org/, which currently details nearly all available functionality. However, this documentation is still -work-in-progress, and some parts may be incomplete or sparse. As +work-in-progress, and some parts may be incomplete or sparse. As we are a volunteer organization and depend on the community for -growth, your participation--everything from providing feedback to -improving the documentation and code--is welcome and actively +growth, your participation - everything from providing feedback to +improving the documentation and code - is welcome and actively encouraged. Python also provides the facility of documentation strings. The @@ -127,4 +127,3 @@ an algorithm or understanding exactly what a function is doing with its arguments. Also don't forget about the Python command ``dir`` which can be used to look at the namespace of a module or package. 
- diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/index.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/index.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/index.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/index.rst 2010-07-26 15:48:29.000000000 +0100 @@ -13,7 +13,10 @@ integrate optimize interpolate + fftpack signal linalg stats ndimage + io + weave diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/interpolate.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/interpolate.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/interpolate.rst 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/interpolate.rst 2010-07-26 15:48:29.000000000 +0100 @@ -11,7 +11,8 @@ first facility is an interpolation class which performs linear 1-dimensional interpolation. The second facility is based on the FORTRAN library FITPACK and provides functions for 1- and -2-dimensional (smoothed) cubic-spline interpolation. +2-dimensional (smoothed) cubic-spline interpolation. There are both +procedural and object-oriented interfaces for the FITPACK library. Linear 1-d interpolation (:class:`interp1d`) @@ -22,11 +23,11 @@ anywhere within the domain defined by the given data using linear interpolation. An instance of this class is created by passing the 1-d vectors comprising the data. The instance of this class defines a -:meth:`__call__ ` method and can therefore by -treated like a function which interpolates between known data values -to obtain unknown values (it also has a docstring for help). Behavior -at the boundary can be specified at instantiation time. The following -example demonstrates it's use. +__call__ method and can therefore by treated like a function which +interpolates between known data values to obtain unknown values (it +also has a docstring for help). Behavior at the boundary can be +specified at instantiation time. The following example demonstrates +it's use. .. plot:: @@ -45,13 +46,13 @@ .. class :obj:`interpolate.interp1d` -Spline interpolation in 1-d (interpolate.splXXX) ------------------------------------------------- +Spline interpolation in 1-d: Procedural (interpolate.splXXX) +------------------------------------------------------------ Spline interpolation requires two essential steps: (1) a spline representation of the curve is computed, and (2) the spline is evaluated at the desired points. In order to find the spline -representation, there are two different was to represent a curve and +representation, there are two different ways to represent a curve and obtain (smoothing) spline coefficients: directly and parametrically. The direct method finds the spline representation of a curve in a two- dimensional plane using the function :obj:`splrep`. The @@ -84,7 +85,7 @@ Once the spline representation of the data has been determined, functions are available for evaluating the spline (:func:`splev`) and its derivatives -(:func:`splev`, :func:`splade`) at any point +(:func:`splev`, :func:`spalde`) at any point and the integral of the spline between any two points ( :func:`splint`). 
In addition, for cubic splines ( :math:`k=3` ) with 8 or more knots, the roots of the spline can be estimated ( @@ -160,9 +161,80 @@ >>> plt.title('Spline of parametrically-defined curve') >>> plt.show() +Spline interpolation in 1-d: Object-oriented (:class:`UnivariateSpline`) +----------------------------------------------------------------------------- -Two-dimensional spline representation (:func:`bisplrep`) --------------------------------------------------------- +The spline-fitting capabilities described above are also available via +an objected-oriented interface. The one dimensional splines are +objects of the `UnivariateSpline` class, and are created with the +:math:`x` and :math:`y` components of the curve provided as arguments +to the constructor. The class defines __call__, allowing the object +to be called with the x-axis values at which the spline should be +evaluated, returning the interpolated y-values. This is shown in +the example below for the subclass `InterpolatedUnivariateSpline`. +The methods :meth:`integral `, +:meth:`derivatives `, and +:meth:`roots ` methods are also available +on `UnivariateSpline` objects, allowing definite integrals, +derivatives, and roots to be computed for the spline. + +The UnivariateSpline class can also be used to smooth data by +providing a non-zero value of the smoothing parameter `s`, with the +same meaning as the `s` keyword of the :obj:`splrep` function +described above. This results in a spline that has fewer knots +than the number of data points, and hence is no longer strictly +an interpolating spline, but rather a smoothing spline. If this +is not desired, the `InterpolatedUnivariateSpline` class is available. +It is a subclass of `UnivariateSpline` that always passes through all +points (equivalent to forcing the smoothing parameter to 0). This +class is demonstrated in the example below. + +The `LSQUnivarateSpline` is the other subclass of `UnivarateSpline`. +It allows the user to specify the number and location of internal +knots as explicitly with the parameter `t`. This allows creation +of customized splines with non-linear spacing, to interpolate in +some domains and smooth in others, or change the character of the +spline. + + +.. plot:: + + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy import interpolate + + InterpolatedUnivariateSpline + + >>> x = np.arange(0,2*np.pi+np.pi/4,2*np.pi/8) + >>> y = np.sin(x) + >>> s = interpolate.InterpolatedUnivariateSpline(x,y) + >>> xnew = np.arange(0,2*np.pi,np.pi/50) + >>> ynew = s(xnew) + + >>> plt.figure() + >>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b') + >>> plt.legend(['Linear','InterpolatedUnivariateSpline', 'True']) + >>> plt.axis([-0.05,6.33,-1.05,1.05]) + >>> plt.title('InterpolatedUnivariateSpline') + >>> plt.show() + + LSQUnivarateSpline with non-uniform knots + + >>> t = [np.pi/2-.1,np.pi/2-.1,3*np.pi/2-.1,3*np.pi/2+.1] + >>> s = interpolate.LSQUnivariateSpline(x,y,t) + >>> ynew = s(xnew) + + >>> plt.figure() + >>> plt.plot(x,y,'x',xnew,ynew,xnew,np.sin(xnew),x,y,'b') + >>> plt.legend(['Linear','LSQUnivariateSpline', 'True']) + >>> plt.axis([-0.05,6.33,-1.05,1.05]) + >>> plt.title('Spline with Specified Interior Knots') + >>> plt.show() + + + +Two-dimensional spline representation: Procedural (:func:`bisplrep`) +-------------------------------------------------------------------- For (smooth) spline-fitting to a two dimensional surface, the function :func:`bisplrep` is available. 
This function takes as required inputs @@ -234,6 +306,18 @@ .. :caption: Example of two-dimensional spline interpolation. + +Two-dimensional spline representation: Object-oriented (:class:`BivariateSpline`) +--------------------------------------------------------------------------------- + +The :class:`BivariateSpline` class is the 2-dimensional analog of the +:class:`UnivariateSpline` class. It and its subclasses implement +the FITPACK functions described above in an object oriented fashion, +allowing objects to be instantiated that can be called to compute +the spline value by passing in the two coordinates as the two +arguments. + + Using radial basis functions for smoothing/interpolation --------------------------------------------------------- @@ -274,7 +358,7 @@ >>> plt.subplot(2, 1, 2) >>> plt.plot(x, y, 'bo') - >>> plt.plot(xi, yi, 'g') + >>> plt.plot(xi, fi, 'g') >>> plt.plot(xi, np.sin(xi), 'r') >>> plt.title('Interpolation using RBF - multiquadrics') >>> plt.show() @@ -313,4 +397,3 @@ >>> plt.xlim(-2, 2) >>> plt.ylim(-2, 2) >>> plt.colorbar() - diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/io.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/io.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/io.rst 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/io.rst 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,376 @@ +File IO (:mod:`scipy.io`) +========================= + +.. sectionauthor:: Matthew Brett + +.. currentmodule:: scipy.io + +.. seealso:: :ref:`numpy-reference.routines.io` (in numpy) + +Matlab files +------------ + +.. autosummary:: + :toctree: generated/ + + loadmat + savemat + +Getting started: + + >>> import scipy.io as sio + +If you are using IPython, try tab completing on ``sio``. You'll find:: + + sio.loadmat + sio.savemat + +These are the high-level functions you will most likely use. You'll also find:: + + sio.matlab + +This is the package from which ``loadmat`` and ``savemat`` are imported. +Within ``sio.matlab``, you will find the ``mio`` module - containing +the machinery that ``loadmat`` and ``savemat`` use. From time to time +you may find yourself re-using this machinery. + +How do I start? +``````````````` + +You may have a ``.mat`` file that you want to read into Scipy. Or, you +want to pass some variables from Scipy / Numpy into Matlab. + +To save us using a Matlab license, let's start in Octave_. Octave has +Matlab-compatible save / load functions. Start Octave (``octave`` at +the command line for me): + +.. sourcecode:: octave + + octave:1> a = 1:12 + a = + + 1 2 3 4 5 6 7 8 9 10 11 12 + + octave:2> a = reshape(a, [1 3 4]) + a = + + ans(:,:,1) = + + 1 2 3 + + ans(:,:,2) = + + 4 5 6 + + ans(:,:,3) = + + 7 8 9 + + ans(:,:,4) = + + 10 11 12 + + + + octave:3> save -6 octave_a.mat a % Matlab 6 compatible + octave:4> ls octave_a.mat + octave_a.mat + +Now, to Python: + + >>> mat_contents = sio.loadmat('octave_a.mat') + >>> print mat_contents + {'a': array([[[ 1., 4., 7., 10.], + [ 2., 5., 8., 11.], + [ 3., 6., 9., 12.]]]), '__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, written by Octave 3.2.3, 2010-05-30 02:13:40 UTC', '__globals__': []} + >>> oct_a = mat_contents['a'] + >>> print oct_a + [[[ 1. 4. 7. 10.] + [ 2. 5. 8. 11.] + [ 3. 6. 9. 
12.]]] + >>> print oct_a.shape + (1, 3, 4) + +Now let's try the other way round: + + >>> import numpy as np + >>> vect = np.arange(10) + >>> print vect.shape + (10,) + >>> sio.savemat('np_vector.mat', {'vect':vect}) + /Users/mb312/usr/local/lib/python2.6/site-packages/scipy/io/matlab/mio.py:196: FutureWarning: Using oned_as default value ('column') This will change to 'row' in future versions + oned_as=oned_as) + +Then back to Octave: + +.. sourcecode:: octave + + octave:5> load np_vector.mat + octave:6> vect + vect = + + 0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + + octave:7> size(vect) + ans = + + 10 1 + +Note the deprecation warning. The ``oned_as`` keyword determines the way in +which one-dimensional vectors are stored. In the future, this will default +to ``row`` instead of ``column``: + + >>> sio.savemat('np_vector.mat', {'vect':vect}, oned_as='row') + +We can load this in Octave or Matlab: + +.. sourcecode:: octave + + octave:8> load np_vector.mat + octave:9> vect + vect = + + 0 1 2 3 4 5 6 7 8 9 + + octave:10> size(vect) + ans = + + 1 10 + + +Matlab structs +`````````````` + +Matlab structs are a little bit like Python dicts, except the field +names must be strings. Any Matlab object can be a value of a field. As +for all objects in Matlab, structs are in fact arrays of structs, where +a single struct is an array of shape (1, 1). + +.. sourcecode:: octave + + octave:11> my_struct = struct('field1', 1, 'field2', 2) + my_struct = + { + field1 = 1 + field2 = 2 + } + + octave:12> save -6 octave_struct.mat my_struct + +We can load this in Python: + + >>> mat_contents = sio.loadmat('octave_struct.mat') + >>> print mat_contents + {'my_struct': array([[([[1.0]], [[2.0]])]], + dtype=[('field1', '|O8'), ('field2', '|O8')]), '__version__': '1.0', '__header__': 'MATLAB 5.0 MAT-file, written by Octave 3.2.3, 2010-05-30 02:00:26 UTC', '__globals__': []} + >>> oct_struct = mat_contents['my_struct'] + >>> print oct_struct.shape + (1, 1) + >>> val = oct_struct[0,0] + >>> print val + ([[1.0]], [[2.0]]) + >>> print val['field1'] + [[ 1.]] + >>> print val['field2'] + [[ 2.]] + >>> print val.dtype + [('field1', '|O8'), ('field2', '|O8')] + +In this version of Scipy (0.8.0), Matlab structs come back as numpy +structured arrays, with fields named for the struct fields. You can see +the field names in the ``dtype`` output above. Note also: + + >>> val = oct_struct[0,0] + +and: + +.. sourcecode:: octave + + octave:13> size(my_struct) + ans = + + 1 1 + +So, in Matlab, the struct array must be at least 2D, and we replicate +that when we read into Scipy. If you want all length 1 dimensions +squeezed out, try this: + + >>> mat_contents = sio.loadmat('octave_struct.mat', squeeze_me=True) + >>> oct_struct = mat_contents['my_struct'] + >>> oct_struct.shape + () + +Sometimes, it's more convenient to load the matlab structs as python +objects rather than numpy structured arrarys - it can make the access +syntax in python a bit more similar to that in matlab. In order to do +this, use the ``struct_as_record=False`` parameter to ``loadmat``. 
+ + >>> mat_contents = sio.loadmat('octave_struct.mat', struct_as_record=False) + >>> oct_struct = mat_contents['my_struct'] + >>> oct_struct[0,0].field1 + array([[ 1.]]) + +``struct_as_record=False`` works nicely with ``squeeze_me``: + + >>> mat_contents = sio.loadmat('octave_struct.mat', struct_as_record=False, squeeze_me=True) + >>> oct_struct = mat_contents['my_struct'] + >>> oct_struct.shape # but no - it's a scalar + Traceback (most recent call last): + File "", line 1, in + AttributeError: 'mat_struct' object has no attribute 'shape' + >>> print type(oct_struct) + + >>> print oct_struct.field1 + 1.0 + +Saving struct arrays can be done in various ways. One simple method is +to use dicts: + + >>> a_dict = {'field1': 0.5, 'field2': 'a string'} + >>> sio.savemat('saved_struct.mat', {'a_dict': a_dict}) + +loaded as: + +.. sourcecode:: octave + + octave:21> load saved_struct + octave:22> a_dict + a_dict = + { + field2 = a string + field1 = 0.50000 + } + +You can also save structs back again to Matlab (or Octave in our case) +like this: + + >>> dt = [('f1', 'f8'), ('f2', 'S10')] + >>> arr = np.zeros((2,), dtype=dt) + >>> print arr + [(0.0, '') (0.0, '')] + >>> arr[0]['f1'] = 0.5 + >>> arr[0]['f2'] = 'python' + >>> arr[1]['f1'] = 99 + >>> arr[1]['f2'] = 'not perl' + >>> sio.savemat('np_struct_arr.mat', {'arr': arr}) + +Matlab cell arrays +`````````````````` + +Cell arrays in Matlab are rather like python lists, in the sense that +the elements in the arrays can contain any type of Matlab object. In +fact they are most similar to numpy object arrays, and that is how we +load them into numpy. + +.. sourcecode:: octave + + octave:14> my_cells = {1, [2, 3]} + my_cells = + + { + [1,1] = 1 + [1,2] = + + 2 3 + + } + + octave:15> save -6 octave_cells.mat my_cells + +Back to Python: + + >>> mat_contents = sio.loadmat('octave_cells.mat') + >>> oct_cells = mat_contents['my_cells'] + >>> print oct_cells.dtype + object + >>> val = oct_cells[0,0] + >>> print val + [[ 1.]] + >>> print val.dtype + float64 + +Saving to a Matlab cell array just involves making a numpy object array: + + >>> obj_arr = np.zeros((2,), dtype=np.object) + >>> obj_arr[0] = 1 + >>> obj_arr[1] = 'a string' + >>> print obj_arr + [1 a string] + >>> sio.savemat('np_cells.mat', {'obj_arr':obj_arr}) + +.. sourcecode:: octave + + octave:16> load np_cells.mat + octave:17> obj_arr + obj_arr = + + { + [1,1] = 1 + [2,1] = a string + } + + +Matrix Market files +------------------- + +.. autosummary:: + :toctree: generated/ + + mminfo + mmread + mmwrite + +Other +----- + +.. autosummary:: + :toctree: generated/ + + save_as_module + +Wav sound files (:mod:`scipy.io.wavfile`) +----------------------------------------- + +.. module:: scipy.io.wavfile + +.. autosummary:: + :toctree: generated/ + + read + write + +Arff files (:mod:`scipy.io.arff`) +--------------------------------- + +.. automodule:: scipy.io.arff + +.. autosummary:: + :toctree: generated/ + + loadarff + +Netcdf (:mod:`scipy.io.netcdf`) +------------------------------- + +.. module:: scipy.io.netcdf + +.. autosummary:: + :toctree: generated/ + + netcdf_file + +Allows reading of NetCDF files (version of pupynere_ package) + + +.. _pupynere: http://pypi.python.org/pypi/pupynere/ +.. _octave: http://www.gnu.org/software/octave +.. 
_matlab: http://www.mathworks.com/ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/doc/source/tutorial/octave_a.mat and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/doc/source/tutorial/octave_a.mat differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/doc/source/tutorial/octave_cells.mat and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/doc/source/tutorial/octave_cells.mat differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/doc/source/tutorial/octave_struct.mat and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/doc/source/tutorial/octave_struct.mat differ diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/optimize.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/optimize.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/optimize.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/optimize.rst 2010-07-26 15:48:29.000000000 +0100 @@ -42,14 +42,14 @@ >>> def rosen(x): ... """The Rosenbrock function""" ... return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0) - + >>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2] >>> xopt = fmin(rosen, x0, xtol=1e-8) Optimization terminated successfully. Current function value: 0.000000 Iterations: 339 Function evaluations: 571 - + >>> print xopt [ 1. 1. 1. 1. 1.] @@ -95,14 +95,14 @@ ... der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0]) ... der[-1] = 200*(x[-1]-x[-2]**2) ... return der - + The calling signature for the BFGS minimization algorithm is similar to :obj:`fmin` with the addition of the *fprime* argument. An example usage of :obj:`fmin_bfgs` is shown in the following example which minimizes the Rosenbrock function. >>> from scipy.optimize import fmin_bfgs - + >>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2] >>> xopt = fmin_bfgs(rosen, x0, fprime=rosen_der) Optimization terminated successfully. @@ -187,7 +187,7 @@ ... diagonal[1:-1] = 202 + 1200*x[1:-1]**2 - 400*x[2:] ... H = H + diag(diagonal) ... return H - + >>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2] >>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess=rosen_hess, avextol=1e-8) Optimization terminated successfully. @@ -235,7 +235,7 @@ ... -400*x[1:-1]*p[2:] ... Hp[-1] = -400*x[-2]*p[-2] + 200*p[-1] ... return Hp - + >>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2] >>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess_p=rosen_hess_p, avextol=1e-8) Optimization terminated successfully. @@ -255,8 +255,8 @@ solve a least-squares problem provided the appropriate objective function is constructed. For example, suppose it is desired to fit a set of data :math:`\left\{\mathbf{x}_{i}, \mathbf{y}_{i}\right\}` -to a known model, -:math:`\mathbf{y}=\mathbf{f}\left(\mathbf{x},\mathbf{p}\right)` +to a known model, +:math:`\mathbf{y}=\mathbf{f}\left(\mathbf{x},\mathbf{p}\right)` where :math:`\mathbf{p}` is a vector of parameters for the model that need to be found. A common method for determining which parameter vector gives the best fit to the data is to minimize the sum of squares @@ -341,10 +341,184 @@ >>> plt.legend(['Fit', 'Noisy', 'True']) >>> plt.show() -.. :caption: Least-square fitting to noisy data using +.. :caption: Least-square fitting to noisy data using .. :obj:`scipy.optimize.leastsq` +.. _tutorial-sqlsp: + +Sequential Least-square fitting with constraints (:func:`fmin_slsqp`) +--------------------------------------------------------------------- + +This module implements the Sequential Least SQuares Programming optimization algorithm (SLSQP). + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \min F(x) \\ \text{subject to } & C_j(X) = 0 , &j = 1,...,\text{MEQ}\\ + & C_j(x) \geq 0 , &j = \text{MEQ}+1,...,M\\ + & XL \leq x \leq XU , &I = 1,...,N. \end{eqnarray*} + +The following script shows examples for how constraints can be specified. + +:: + + """ + This script tests fmin_slsqp using Example 14.4 from Numerical Methods for + Engineers by Steven Chapra and Raymond Canale. This example maximizes the + function f(x) = 2*x*y + 2*x - x**2 - 2*y**2, which has a maximum at x=2,y=1. + """ + + from scipy.optimize import fmin_slsqp + from numpy import array, asfarray, finfo,ones, sqrt, zeros + + + def testfunc(d,*args): + """ + Arguments: + d - A list of two elements, where d[0] represents x and + d[1] represents y in the following equation. + sign - A multiplier for f. Since we want to optimize it, and the scipy + optimizers can only minimize functions, we need to multiply it by + -1 to achieve the desired solution + Returns: + 2*x*y + 2*x - x**2 - 2*y**2 + + """ + try: + sign = args[0] + except: + sign = 1.0 + x = d[0] + y = d[1] + return sign*(2*x*y + 2*x - x**2 - 2*y**2) + + def testfunc_deriv(d,*args): + """ This is the derivative of testfunc, returning a numpy array + representing df/dx and df/dy + + """ + try: + sign = args[0] + except: + sign = 1.0 + x = d[0] + y = d[1] + dfdx = sign*(-2*x + 2*y + 2) + dfdy = sign*(2*x - 4*y) + return array([ dfdx, dfdy ],float) + + + from time import time + + print '\n\n' + + print "Unbounded optimization. Derivatives approximated." + t0 = time() + x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,), iprint=2, full_output=1) + print "Elapsed time:", 1000*(time()-t0), "ms" + print "Results",x + print "\n\n" + + print "Unbounded optimization. Derivatives provided." + t0 = time() + x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,), iprint=2, full_output=1) + print "Elapsed time:", 1000*(time()-t0), "ms" + print "Results",x + print "\n\n" + + print "Bound optimization. Derivatives approximated." + t0 = time() + x = fmin_slsqp(testfunc, [-1.0,1.0], args=(-1.0,), + eqcons=[lambda x, y: x[0]-x[1] ], iprint=2, full_output=1) + print "Elapsed time:", 1000*(time()-t0), "ms" + print "Results",x + print "\n\n" + + print "Bound optimization (equality constraints). Derivatives provided." + t0 = time() + x = fmin_slsqp(testfunc, [-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,), + eqcons=[lambda x, y: x[0]-x[1] ], iprint=2, full_output=1) + print "Elapsed time:", 1000*(time()-t0), "ms" + print "Results",x + print "\n\n" + + print "Bound optimization (equality and inequality constraints)." + print "Derivatives provided." + + t0 = time() + x = fmin_slsqp(testfunc,[-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,), + eqcons=[lambda x, y: x[0]-x[1] ], + ieqcons=[lambda x, y: x[0]-.5], iprint=2, full_output=1) + print "Elapsed time:", 1000*(time()-t0), "ms" + print "Results",x + print "\n\n" + + + def test_eqcons(d,*args): + try: + sign = args[0] + except: + sign = 1.0 + x = d[0] + y = d[1] + return array([ x**3-y ]) + + + def test_ieqcons(d,*args): + try: + sign = args[0] + except: + sign = 1.0 + x = d[0] + y = d[1] + return array([ y-1 ]) + + print "Bound optimization (equality and inequality constraints)." + print "Derivatives provided via functions." 
+ t0 = time() + x = fmin_slsqp(testfunc, [-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,), + f_eqcons=test_eqcons, f_ieqcons=test_ieqcons, + iprint=2, full_output=1) + print "Elapsed time:", 1000*(time()-t0), "ms" + print "Results",x + print "\n\n" + + + def test_fprime_eqcons(d,*args): + try: + sign = args[0] + except: + sign = 1.0 + x = d[0] + y = d[1] + return array([ 3.0*(x**2.0), -1.0 ]) + + + def test_fprime_ieqcons(d,*args): + try: + sign = args[0] + except: + sign = 1.0 + x = d[0] + y = d[1] + return array([ 0.0, 1.0 ]) + + print "Bound optimization (equality and inequality constraints)." + print "Derivatives provided via functions." + print "Constraint jacobians provided via functions" + t0 = time() + x = fmin_slsqp(testfunc,[-1.0,1.0], fprime=testfunc_deriv, args=(-1.0,), + f_eqcons=test_eqcons, f_ieqcons=test_ieqcons, + fprime_eqcons=test_fprime_eqcons, + fprime_ieqcons=test_fprime_ieqcons, iprint=2, full_output=1) + print "Elapsed time:", 1000*(time()-t0), "ms" + print "Results",x + print "\n\n" + + + + Scalar function minimizers -------------------------- @@ -420,21 +594,21 @@ >>> def func(x): ... return x + 2*cos(x) - + >>> def func2(x): ... out = [x[0]*cos(x[1]) - 4] ... out.append(x[1]*x[0] - x[1] - 5) ... return out - + >>> from scipy.optimize import fsolve >>> x0 = fsolve(func, 0.3) >>> print x0 -1.02986652932 - + >>> x02 = fsolve(func2, [1, 1]) >>> print x02 [ 6.50409711 0.90841421] - + Scalar function root finding @@ -461,4 +635,3 @@ :obj:`fixed_point` provides a simple iterative method using Aitkens sequence acceleration to estimate the fixed point of :math:`g` given a starting point. - diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/signal.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/signal.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/signal.rst 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/signal.rst 2010-07-26 15:48:29.000000000 +0100 @@ -159,10 +159,10 @@ of the flattened Numpy array by an appropriate matrix resulting in another flattened Numpy array. Of course, this is not usually the best way to compute the filter as the matrices and vectors involved may be -huge. For example filtering a :math:`512\times512` image with this -method would require multiplication of a :math:`512^{2}x512^{2}` -matrix with a :math:`512^{2}` vector. Just trying to store the -:math:`512^{2}\times512^{2}` matrix using a standard Numpy array would +huge. For example filtering a :math:`512 \times 512` image with this +method would require multiplication of a :math:`512^2 \times 512^2` +matrix with a :math:`512^2` vector. Just trying to store the +:math:`512^2 \times 512^2` matrix using a standard Numpy array would require :math:`68,719,476,736` elements. At 4 bytes per element this would require :math:`256\textrm{GB}` of memory. In most applications most of the elements of this matrix are zero and a different method @@ -439,106 +439,106 @@ .. .. Detrend .. """"""" -.. +.. .. Filter design .. ------------- -.. -.. +.. +.. .. Finite-impulse response design .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Inifinite-impulse response design .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Analog filter frequency response .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Digital filter frequency response .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Linear Time-Invariant Systems .. ----------------------------- -.. -.. +.. +.. .. LTI Object .. ^^^^^^^^^^ -.. -.. +.. +.. .. Continuous-Time Simulation .. 
^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Step response .. ^^^^^^^^^^^^^ -.. -.. +.. +.. .. Impulse response .. ^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Input/Output .. ============ -.. -.. +.. +.. .. Binary .. ------ -.. -.. +.. +.. .. Arbitrary binary input and output (fopen) .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Read and write Matlab .mat files .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Saving workspace .. ^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Text-file .. --------- -.. -.. +.. +.. .. Read text-files (read_array) .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Write a text-file (write_array) .. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -.. -.. +.. +.. .. Fourier Transforms .. ================== -.. -.. +.. +.. .. One-dimensional .. --------------- -.. -.. +.. +.. .. Two-dimensional .. --------------- -.. -.. +.. +.. .. N-dimensional .. ------------- -.. -.. +.. +.. .. Shifting .. -------- -.. -.. +.. +.. .. Sample frequencies .. ------------------ -.. -.. +.. +.. .. Hilbert transform .. ----------------- -.. -.. +.. +.. .. Tilbert transform .. ----------------- diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/special.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/special.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/special.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/special.rst 2010-07-26 15:48:29.000000000 +0100 @@ -14,10 +14,10 @@ provided by the ``stats`` module. Most of these functions can take array arguments and return array results following the same broadcasting rules as other math functions in Numerical Python. Many -of these functions also accept complex-numbers as input. For a +of these functions also accept complex numbers as input. For a complete list of the available functions with a one-line description -type ``>>> help(special).`` Each function also has it's own +type ``>>> help(special).`` Each function also has its own documentation accessible using help. If you don't see a function you need, consider writing it and contributing it to the library. You can write the function in either C, Fortran, or Python. Look in the source -code of the library for examples of each of these kind of functions. +code of the library for examples of each of these kinds of functions. diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/stats/continuous.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/stats/continuous.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/stats/continuous.rst 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/stats/continuous.rst 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,2564 @@ +.. _continuous-random-variables: + +==================================== +Continuous Statistical Distributions +==================================== + +Overview +======== + +All distributions will have location (L) and Scale (S) parameters +along with any shape parameters needed, the names for the shape +parameters will vary. Standard form for the distributions will be +given where :math:`L=0.0` and :math:`S=1.0.` The nonstandard forms can be obtained for the various functions using +(note :math:`U` is a standard uniform random variate). 
+ + +====================================== ============================================================================================================================== ========================================================================================================================================= +Function Name Standard Function Transformation +====================================== ============================================================================================================================== ========================================================================================================================================= +Cumulative Distribution Function (CDF) :math:`F\left(x\right)` :math:`F\left(x;L,S\right)=F\left(\frac{\left(x-L\right)}{S}\right)` +Probability Density Function (PDF) :math:`f\left(x\right)=F^{\prime}\left(x\right)` :math:`f\left(x;L,S\right)=\frac{1}{S}f\left(\frac{\left(x-L\right)}{S}\right)` +Percent Point Function (PPF) :math:`G\left(q\right)=F^{-1}\left(q\right)` :math:`G\left(q;L,S\right)=L+SG\left(q\right)` +Probability Sparsity Function (PSF) :math:`g\left(q\right)=G^{\prime}\left(q\right)` :math:`g\left(q;L,S\right)=Sg\left(q\right)` +Hazard Function (HF) :math:`h_{a}\left(x\right)=\frac{f\left(x\right)}{1-F\left(x\right)}` :math:`h_{a}\left(x;L,S\right)=\frac{1}{S}h_{a}\left(\frac{\left(x-L\right)}{S}\right)` +Cumulative Hazard Functon (CHF) :math:`H_{a}\left(x\right)=` :math:`\log\frac{1}{1-F\left(x\right)}` :math:`H_{a}\left(x;L,S\right)=H_{a}\left(\frac{\left(x-L\right)}{S}\right)` +Survival Function (SF) :math:`S\left(x\right)=1-F\left(x\right)` :math:`S\left(x;L,S\right)=S\left(\frac{\left(x-L\right)}{S}\right)` +Inverse Survival Function (ISF) :math:`Z\left(\alpha\right)=S^{-1}\left(\alpha\right)=G\left(1-\alpha\right)` :math:`Z\left(\alpha;L,S\right)=L+SZ\left(\alpha\right)` +Moment Generating Function (MGF) :math:`M_{Y}\left(t\right)=E\left[e^{Yt}\right]` :math:`M_{X}\left(t\right)=e^{Lt}M_{Y}\left(St\right)` +Random Variates :math:`Y=G\left(U\right)` :math:`X=L+SY` +(Differential) Entropy :math:`h\left[Y\right]=-\int f\left(y\right)\log f\left(y\right)dy` :math:`h\left[X\right]=h\left[Y\right]+\log S` +(Non-central) Moments :math:`\mu_{n}^{\prime}=E\left[Y^{n}\right]` :math:`E\left[X^{n}\right]=L^{n}\sum_{k=0}^{N}\left(\begin{array}{c} n\\ k\end{array}\right)\left(\frac{S}{L}\right)^{k}\mu_{k}^{\prime}` +Central Moments :math:`\mu_{n}=E\left[\left(Y-\mu\right)^{n}\right]` :math:`E\left[\left(X-\mu_{X}\right)^{n}\right]=S^{n}\mu_{n}` +mean (mode, median), var :math:`\mu,\,\mu_{2}` :math:`L+S\mu,\, S^{2}\mu_{2}` +skewness, kurtosis :math:`\gamma_{1}=\frac{\mu_{3}}{\left(\mu_{2}\right)^{3/2}},\,` :math:`\gamma_{2}=\frac{\mu_{4}}{\left(\mu_{2}\right)^{2}}-3` :math:`\gamma_{1},\,\gamma_{2}` +====================================== ============================================================================================================================== ========================================================================================================================================= + + + + + + +Moments +------- + +Non-central moments are defined using the PDF + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\int_{-\infty}^{\infty}x^{n}f\left(x\right)dx.\] + +Note, that these can always be computed using the PPF. Substitute :math:`x=G\left(q\right)` in the above equation and get + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\int_{0}^{1}G^{n}\left(q\right)dq\] + +which may be easier to compute numerically. 
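As a small sketch of that numerical approach (the standard normal and :math:`n=2` are assumed purely for illustration, with :obj:`scipy.integrate.quad` doing the quadrature):

    >>> from scipy import stats, integrate
    >>> n = 2
    >>> # G is the percent point function (PPF) of the chosen distribution
    >>> mu_n, abserr = integrate.quad(lambda q: stats.norm.ppf(q)**n, 0, 1)
    >>> round(mu_n, 4)                    # E[X^2] = 1 for the standard normal
    1.0
    >>> round(stats.norm.moment(n), 4)    # the distribution's own moment method agrees
    1.0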
Note that :math:`q=F\left(x\right)` so that :math:`dq=f\left(x\right)dx.` Central moments are computed similarly :math:`\mu=\mu_{1}^{\prime}` + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu_{n} & = & \int_{-\infty}^{\infty}\left(x-\mu\right)^{n}f\left(x\right)dx\\ & = & \int_{0}^{1}\left(G\left(q\right)-\mu\right)^{n}dq\\ & = & \sum_{k=0}^{n}\left(\begin{array}{c} n\\ k\end{array}\right)\left(-\mu\right)^{k}\mu_{n-k}^{\prime}\end{eqnarray*} + +In particular + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu_{3} & = & \mu_{3}^{\prime}-3\mu\mu_{2}^{\prime}+2\mu^{3}\\ & = & \mu_{3}^{\prime}-3\mu\mu_{2}-\mu^{3}\\ \mu_{4} & = & \mu_{4}^{\prime}-4\mu\mu_{3}^{\prime}+6\mu^{2}\mu_{2}^{\prime}-3\mu^{4}\\ & = & \mu_{4}^{\prime}-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}\end{eqnarray*} + +Skewness is defined as + +.. math:: + :nowrap: + + \[ \gamma_{1}=\sqrt{\beta_{1}}=\frac{\mu_{3}}{\mu_{2}^{3/2}}\] + +while (Fisher) kurtosis is + +.. math:: + :nowrap: + + \[ \gamma_{2}=\frac{\mu_{4}}{\mu_{2}^{2}}-3,\] + +so that a normal distribution has a kurtosis of zero. + + +Median and mode +--------------- + +The median, :math:`m_{n}` is defined as the point at which half of the density is on one side +and half on the other. In other words, :math:`F\left(m_{n}\right)=\frac{1}{2}` so that + +.. math:: + :nowrap: + + \[ m_{n}=G\left(\frac{1}{2}\right).\] + +In addition, the mode, :math:`m_{d}` , is defined as the value for which the probability density function +reaches it's peak + +.. math:: + :nowrap: + + \[ m_{d}=\arg\max_{x}f\left(x\right).\] + + + + +Fitting data +------------ + +To fit data to a distribution, maximizing the likelihood function is +common. Alternatively, some distributions have well-known minimum +variance unbiased estimators. These will be chosen by default, but the +likelihood function will always be available for minimizing. + +If :math:`f\left(x;\boldsymbol{\theta}\right)` is the PDF of a random-variable where :math:`\boldsymbol{\theta}` is a vector of parameters ( *e.g.* :math:`L` and :math:`S` ), then for a collection of :math:`N` independent samples from this distribution, the joint distribution the +random vector :math:`\mathbf{x}` is + +.. math:: + :nowrap: + + \[ f\left(\mathbf{x};\boldsymbol{\theta}\right)=\prod_{i=1}^{N}f\left(x_{i};\boldsymbol{\theta}\right).\] + +The maximum likelihood estimate of the parameters :math:`\boldsymbol{\theta}` are the parameters which maximize this function with :math:`\mathbf{x}` fixed and given by the data: + +.. math:: + :nowrap: + + \begin{eqnarray*} \boldsymbol{\theta}_{es} & = & \arg\max_{\boldsymbol{\theta}}f\left(\mathbf{x};\boldsymbol{\theta}\right)\\ & = & \arg\min_{\boldsymbol{\theta}}l_{\mathbf{x}}\left(\boldsymbol{\theta}\right).\end{eqnarray*} + +Where + +.. math:: + :nowrap: + + \begin{eqnarray*} l_{\mathbf{x}}\left(\boldsymbol{\theta}\right) & = & -\sum_{i=1}^{N}\log f\left(x_{i};\boldsymbol{\theta}\right)\\ & = & -N\overline{\log f\left(x_{i};\boldsymbol{\theta}\right)}\end{eqnarray*} + +Note that if :math:`\boldsymbol{\theta}` includes only shape parameters, the location and scale-parameters can +be fit by replacing :math:`x_{i}` with :math:`\left(x_{i}-L\right)/S` in the log-likelihood function adding :math:`N\log S` and minimizing, thus + +.. 
math:: + :nowrap: + + \begin{eqnarray*} l_{\mathbf{x}}\left(L,S;\boldsymbol{\theta}\right) & = & N\log S-\sum_{i=1}^{N}\log f\left(\frac{x_{i}-L}{S};\boldsymbol{\theta}\right)\\ & = & N\log S+l_{\frac{\mathbf{x}-S}{L}}\left(\boldsymbol{\theta}\right)\end{eqnarray*} + +If desired, sample estimates for :math:`L` and :math:`S` (not necessarily maximum likelihood estimates) can be obtained from +samples estimates of the mean and variance using + +.. math:: + :nowrap: + + \begin{eqnarray*} \hat{S} & = & \sqrt{\frac{\hat{\mu}_{2}}{\mu_{2}}}\\ \hat{L} & = & \hat{\mu}-\hat{S}\mu\end{eqnarray*} + +where :math:`\mu` and :math:`\mu_{2}` are assumed known as the mean and variance of the **untransformed** distribution (when :math:`L=0` and :math:`S=1` ) and + +.. math:: + :nowrap: + + \begin{eqnarray*} \hat{\mu} & = & \frac{1}{N}\sum_{i=1}^{N}x_{i}=\bar{\mathbf{x}}\\ \hat{\mu}_{2} & = & \frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i}-\hat{\mu}\right)^{2}=\frac{N}{N-1}\overline{\left(\mathbf{x}-\bar{\mathbf{x}}\right)^{2}}\end{eqnarray*} + + + + +Standard notation for mean +-------------------------- + +We will use + +.. math:: + :nowrap: + + \[ \overline{y\left(\mathbf{x}\right)}=\frac{1}{N}\sum_{i=1}^{N}y\left(x_{i}\right)\] + +where :math:`N` should be clear from context as the number of samples :math:`x_{i}` + + +Alpha +===== + +One shape parameters :math:`\alpha>0` (paramter :math:`\beta` in DATAPLOT is a scale-parameter). Standard form is :math:`x>0:` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\alpha\right) & = & \frac{1}{x^{2}\Phi\left(\alpha\right)\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\alpha-\frac{1}{x}\right)^{2}\right)\\ F\left(x;\alpha\right) & = & \frac{\Phi\left(\alpha-\frac{1}{x}\right)}{\Phi\left(\alpha\right)}\\ G\left(q;\alpha\right) & = & \left[\alpha-\Phi^{-1}\left(q\Phi\left(\alpha\right)\right)\right]^{-1}\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\frac{1}{\Phi\left(a\right)\sqrt{2\pi}}\int_{0}^{\infty}\frac{e^{xt}}{x^{2}}\exp\left(-\frac{1}{2}\left(\alpha-\frac{1}{x}\right)^{2}\right)dx\] + + + +No moments? + +.. math:: + :nowrap: + + \[ l_{\mathbf{x}}\left(\alpha\right)=N\log\left[\Phi\left(\alpha\right)\sqrt{2\pi}\right]+2N\overline{\log\mathbf{x}}+\frac{N}{2}\alpha^{2}-\alpha\overline{\mathbf{x}^{-1}}+\frac{1}{2}\overline{\mathbf{x}^{-2}}\] + + + + +Anglit +====== + +Defined over :math:`x\in\left[-\frac{\pi}{4},\frac{\pi}{4}\right]` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \sin\left(2x+\frac{\pi}{2}\right)=\cos\left(2x\right)\\ F\left(x\right) & = & \sin^{2}\left(x+\frac{\pi}{4}\right)\\ G\left(q\right) & = & \arcsin\left(\sqrt{q}\right)-\frac{\pi}{4}\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & 0\\ \mu_{2} & = & \frac{\pi^{2}}{16}-\frac{1}{2}\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & -2\frac{\pi^{4}-96}{\left(\pi^{2}-8\right)^{2}}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & 1-\log2\\ & \approx & 0.30685281944005469058\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} M\left(t\right) & = & \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}}\cos\left(2x\right)e^{xt}dx\\ & = & \frac{4\cosh\left(\frac{\pi t}{4}\right)}{t^{2}+4}\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ l_{\mathbf{x}}\left(\cdot\right)=-N\overline{\log\left[\cos\left(2\mathbf{x}\right)\right]}\] + + + + +Arcsine +======= + +Defined over :math:`x\in\left(0,1\right)` . To get the JKB definition put :math:`x=\frac{u+1}{2}.` i.e. 
:math:`L=-1` and :math:`S=2.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{\pi\sqrt{x\left(1-x\right)}}\\ F\left(x\right) & = & \frac{2}{\pi}\arcsin\left(\sqrt{x}\right)\\ G\left(q\right) & = & \sin^{2}\left(\frac{\pi}{2}q\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=E^{t/2}I_{0}\left(\frac{t}{2}\right)\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu_{n}^{\prime} & = & \frac{1}{\pi}\int_{0}^{1}dx\, x^{n-1/2}\left(1-x\right)^{-1/2}\\ & = & \frac{1}{\pi}B\left(\frac{1}{2},n+\frac{1}{2}\right)=\frac{\left(2n-1\right)!!}{2^{n}n!}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{1}{2}\\ \mu_{2} & = & \frac{1}{8}\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & -\frac{3}{2}\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]\approx-0.24156447527049044468\] + + + + + +.. math:: + :nowrap: + + \[ l_{\mathbf{x}}\left(\cdot\right)=N\log\pi+\frac{N}{2}\overline{\log\mathbf{x}}+\frac{N}{2}\overline{\log\left(1-\mathbf{x}\right)}\] + + + + +Beta +==== + +Two shape parameters + + + +.. math:: + :nowrap: + + \[ a,b>0\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a,b\right) & = & \frac{\Gamma\left(a+b\right)}{\Gamma\left(a\right)\Gamma\left(b\right)}x^{a-1}\left(1-x\right)^{b-1}I_{\left(0,1\right)}\left(x\right)\\ F\left(x;a,b\right) & = & \int_{0}^{x}f\left(y;a,b\right)dy=I\left(x,a,b\right)\\ G\left(\alpha;a,b\right) & = & I^{-1}\left(\alpha;a,b\right)\\ M\left(t\right) & = & \frac{\Gamma\left(a\right)\Gamma\left(b\right)}{\Gamma\left(a+b\right)}\,_{1}F_{1}\left(a;a+b;t\right)\\ \mu & = & \frac{a}{a+b}\\ \mu_{2} & = & \frac{ab\left(a+b+1\right)}{\left(a+b\right)^{2}}\\ \gamma_{1} & = & 2\frac{b-a}{a+b+2}\sqrt{\frac{a+b+1}{ab}}\\ \gamma_{2} & = & \frac{6\left(a^{3}+a^{2}\left(1-2b\right)+b^{2}\left(b+1\right)-2ab\left(b+2\right)\right)}{ab\left(a+b+2\right)\left(a+b+3\right)}\\ m_{d} & = & \frac{\left(a-1\right)}{\left(a+b-2\right)}\, a+b\neq2\end{eqnarray*} + + + +:math:`f\left(x;a,1\right)` is also called the Power-function distribution. + + + +.. math:: + :nowrap: + + \[ l_{\mathbf{x}}\left(a,b\right)=-N\log\Gamma\left(a+b\right)+N\log\Gamma\left(a\right)+N\log\Gamma\left(b\right)-N\left(a-1\right)\overline{\log\mathbf{x}}-N\left(b-1\right)\overline{\log\left(1-\mathbf{x}\right)}\] + +All of the :math:`x_{i}\in\left[0,1\right]` + + +Beta Prime +========== + +Defined over :math:`00.` (Note the CDF evaluation uses Eq. 3.194.1 on pg. 313 of Gradshteyn & +Ryzhik (sixth edition). + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\alpha,\beta\right) & = & \frac{\Gamma\left(\alpha+\beta\right)}{\Gamma\left(\alpha\right)\Gamma\left(\beta\right)}x^{\alpha-1}\left(1+x\right)^{-\alpha-\beta}\\ F\left(x;\alpha,\beta\right) & = & \frac{\Gamma\left(\alpha+\beta\right)}{\alpha\Gamma\left(\alpha\right)\Gamma\left(\beta\right)}x^{\alpha}\,_{2}F_{1}\left(\alpha+\beta,\alpha;1+\alpha;-x\right)\\ G\left(q;\alpha,\beta\right) & = & F^{-1}\left(x;\alpha,\beta\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\left\{ \begin{array}{ccc} \frac{\Gamma\left(n+\alpha\right)\Gamma\left(\beta-n\right)}{\Gamma\left(\alpha\right)\Gamma\left(\beta\right)}=\frac{\left(\alpha\right)_{n}}{\left(\beta-n\right)_{n}} & & \beta>n\\ \infty & & \textrm{otherwise}\end{array}\right.\] + +Therefore, + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{\alpha}{\beta-1}\quad\beta>1\\ \mu_{2} & = & \frac{\alpha\left(\alpha+1\right)}{\left(\beta-2\right)\left(\beta-1\right)}-\frac{\alpha^{2}}{\left(\beta-1\right)^{2}}\quad\beta>2\\ \gamma_{1} & = & \frac{\frac{\alpha\left(\alpha+1\right)\left(\alpha+2\right)}{\left(\beta-3\right)\left(\beta-2\right)\left(\beta-1\right)}-3\mu\mu_{2}-\mu^{3}}{\mu_{2}^{3/2}}\quad\beta>3\\ \gamma_{2} & = & \frac{\mu_{4}}{\mu_{2}^{2}}-3\\ \mu_{4} & = & \frac{\alpha\left(\alpha+1\right)\left(\alpha+2\right)\left(\alpha+3\right)}{\left(\beta-4\right)\left(\beta-3\right)\left(\beta-2\right)\left(\beta-1\right)}-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}\quad\beta>4\end{eqnarray*} + + + + +Bradford +======== + + + +.. math:: + :nowrap: + + \begin{eqnarray*} c & > & 0\\ k & = & \log\left(1+c\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{c}{k\left(1+cx\right)}I_{\left(0,1\right)}\left(x\right)\\ F\left(x;c\right) & = & \frac{\log\left(1+cx\right)}{k}\\ G\left(\alpha\; c\right) & = & \frac{\left(1+c\right)^{\alpha}-1}{c}\\ M\left(t\right) & = & \frac{1}{k}e^{-t/c}\left[\textrm{Ei}\left(t+\frac{t}{c}\right)-\textrm{Ei}\left(\frac{t}{c}\right)\right]\\ \mu & = & \frac{c-k}{ck}\\ \mu_{2} & = & \frac{\left(c+2\right)k-2c}{2ck^{2}}\\ \gamma_{1} & = & \frac{\sqrt{2}\left(12c^{2}-9kc\left(c+2\right)+2k^{2}\left(c\left(c+3\right)+3\right)\right)}{\sqrt{c\left(c\left(k-2\right)+2k\right)}\left(3c\left(k-2\right)+6k\right)}\\ \gamma_{2} & = & \frac{c^{3}\left(k-3\right)\left(k\left(3k-16\right)+24\right)+12kc^{2}\left(k-4\right)\left(k-3\right)+6ck^{2}\left(3k-14\right)+12k^{3}}{3c\left(c\left(k-2\right)+2k\right)^{2}}\\ m_{d} & = & 0\\ m_{n} & = & \sqrt{1+c}-1\end{eqnarray*} + +where :math:`\textrm{Ei}\left(\textrm{z}\right)` is the exponential integral function. Also + +.. math:: + :nowrap: + + \[ h\left[X\right]=\frac{1}{2}\log\left(1+c\right)-\log\left(\frac{c}{\log\left(1+c\right)}\right)\] + + + + +Burr +==== + + + +.. math:: + :nowrap: + + \begin{eqnarray*} c & > & 0\\ d & > & 0\\ k & = & \Gamma\left(d\right)\Gamma\left(1-\frac{2}{c}\right)\Gamma\left(\frac{2}{c}+d\right)-\Gamma^{2}\left(1-\frac{1}{c}\right)\Gamma^{2}\left(\frac{1}{c}+d\right)\end{eqnarray*} + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c,d\right) & = & \frac{cd}{x^{c+1}\left(1+x^{-c}\right)^{d+1}}I_{\left(0,\infty\right)}\left(x\right)\\ F\left(x;c,d\right) & = & \left(1+x^{-c}\right)^{-d}\\ G\left(\alpha;c,d\right) & = & \left(\alpha^{-1/d}-1\right)^{-1/c}\\ \mu & = & \frac{\Gamma\left(1-\frac{1}{c}\right)\Gamma\left(\frac{1}{c}+d\right)}{\Gamma\left(d\right)}\\ \mu_{2} & = & \frac{k}{\Gamma^{2}\left(d\right)}\\ \gamma_{1} & = & \frac{1}{\sqrt{k^{3}}}\left[2\Gamma^{3}\left(1-\frac{1}{c}\right)\Gamma^{3}\left(\frac{1}{c}+d\right)+\Gamma^{2}\left(d\right)\Gamma\left(1-\frac{3}{c}\right)\Gamma\left(\frac{3}{c}+d\right)\right.\\ & & \left.-3\Gamma\left(d\right)\Gamma\left(1-\frac{2}{c}\right)\Gamma\left(1-\frac{1}{c}\right)\Gamma\left(\frac{1}{c}+d\right)\Gamma\left(\frac{2}{c}+d\right)\right]\\ \gamma_{2} & = & -3+\frac{1}{k^{2}}\left[6\Gamma\left(d\right)\Gamma\left(1-\frac{2}{c}\right)\Gamma^{2}\left(1-\frac{1}{c}\right)\Gamma^{2}\left(\frac{1}{c}+d\right)\Gamma\left(\frac{2}{c}+d\right)\right.\\ & & -3\Gamma^{4}\left(1-\frac{1}{c}\right)\Gamma^{4}\left(\frac{1}{c}+d\right)+\Gamma^{3}\left(d\right)\Gamma\left(1-\frac{4}{c}\right)\Gamma\left(\frac{4}{c}+d\right)\\ & & \left.-4\Gamma^{2}\left(d\right)\Gamma\left(1-\frac{3}{c}\right)\Gamma\left(1-\frac{1}{c}\right)\Gamma\left(\frac{1}{c}+d\right)\Gamma\left(\frac{3}{c}+d\right)\right]\\ m_{d} & = & \left(\frac{cd-1}{c+1}\right)^{1/c}\,\textrm{if }cd>1\,\textrm{otherwise }0\\ m_{n} & = & \left(2^{1/d}-1\right)^{-1/c}\end{eqnarray*} + + + + +Cauchy +====== + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{\pi\left(1+x^{2}\right)}\\ F\left(x\right) & = & \frac{1}{2}+\frac{1}{\pi}\tan^{-1}x\\ G\left(\alpha\right) & = & \tan\left(\pi\alpha-\frac{\pi}{2}\right)\\ m_{d} & = & 0\\ m_{n} & = & 0\end{eqnarray*} + +No finite moments. This is the t distribution with one degree of +freedom. + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & \log\left(4\pi\right)\\ & \approx & 2.5310242469692907930.\end{eqnarray*} + + + + +Chi +=== + +Generated by taking the (positive) square-root of chi-squared +variates. + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\nu\right) & = & \frac{x^{\nu-1}e^{-x^{2}/2}}{2^{\nu/2-1}\Gamma\left(\frac{\nu}{2}\right)}I_{\left(0,\infty\right)}\left(x\right)\\ F\left(x;\nu\right) & = & \Gamma\left(\frac{\nu}{2},\frac{x^{2}}{2}\right)\\ G\left(\alpha;\nu\right) & = & \sqrt{2\Gamma^{-1}\left(\frac{\nu}{2},\alpha\right)}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\Gamma\left(\frac{v}{2}\right)\,_{1}F_{1}\left(\frac{v}{2};\frac{1}{2};\frac{t^{2}}{2}\right)+\frac{t}{\sqrt{2}}\Gamma\left(\frac{1+\nu}{2}\right)\,_{1}F_{1}\left(\frac{1+\nu}{2};\frac{3}{2};\frac{t^{2}}{2}\right)\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{\sqrt{2}\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)}\\ \mu_{2} & = & \nu-\mu^{2}\\ \gamma_{1} & = & \frac{2\mu^{3}+\mu\left(1-2\nu\right)}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{2\nu\left(1-\nu\right)-6\mu^{4}+4\mu^{2}\left(2\nu-1\right)}{\mu_{2}^{2}}\\ m_{d} & = & \sqrt{\nu-1}\quad\nu\geq1\\ m_{n} & = & \sqrt{2\Gamma^{-1}\left(\frac{\nu}{2},\frac{1}{2}\right)}\end{eqnarray*} + + + + +Chi-squared +=========== + +This is the gamma distribution with :math:`L=0.0` and :math:`S=2.0` and :math:`\alpha=\nu/2` where :math:`\nu` is called the degrees of freedom. 
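This identity is easy to check numerically with the ``scipy.stats`` distributions; the degrees of freedom and evaluation points below are arbitrary illustrative choices:

    >>> import numpy as np
    >>> from scipy import stats
    >>> nu = 5
    >>> x = np.linspace(0.1, 10, 50)
    >>> # chi-square(nu) is the gamma distribution with shape alpha = nu/2, location L = 0, scale S = 2
    >>> np.allclose(stats.chi2.pdf(x, nu), stats.gamma.pdf(x, nu/2.0, loc=0, scale=2))
    True
    >>> np.allclose(stats.chi2.cdf(x, nu), stats.gamma.cdf(x, nu/2.0, loc=0, scale=2))
    True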
If :math:`Z_{1}\ldots Z_{\nu}` are all standard normal distributions, then :math:`W=\sum_{k}Z_{k}^{2}` has (standard) chi-square distribution with :math:`\nu` degrees of freedom. + +The standard form (most often used in standard form only) is :math:`x>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\alpha\right) & = & \frac{1}{2\Gamma\left(\frac{\nu}{2}\right)}\left(\frac{x}{2}\right)^{\nu/2-1}e^{-x/2}\\ F\left(x;\alpha\right) & = & \Gamma\left(\frac{\nu}{2},\frac{x}{2}\right)\\ G\left(q;\alpha\right) & = & 2\Gamma^{-1}\left(\frac{\nu}{2},q\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\frac{\Gamma\left(\frac{\nu}{2}\right)}{\left(\frac{1}{2}-t\right)^{\nu/2}}\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \nu\\ \mu_{2} & = & 2\nu\\ \gamma_{1} & = & \frac{2\sqrt{2}}{\sqrt{\nu}}\\ \gamma_{2} & = & \frac{12}{\nu}\\ m_{d} & = & \frac{\nu}{2}-1\end{eqnarray*} + + + + +Cosine +====== + +Approximation to the normal distribution. + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{2\pi}\left[1+\cos x\right]I_{\left[-\pi,\pi\right]}\left(x\right)\\ F\left(x\right) & = & \frac{1}{2\pi}\left[\pi+x+\sin x\right]I_{\left[-\pi,\pi\right]}\left(x\right)+I_{\left(\pi,\infty\right)}\left(x\right)\\ G\left(\alpha\right) & = & F^{-1}\left(\alpha\right)\\ M\left(t\right) & = & \frac{\sinh\left(\pi t\right)}{\pi t\left(1+t^{2}\right)}\\ \mu=m_{d}=m_{n} & = & 0\\ \mu_{2} & = & \frac{\pi^{2}}{3}-2\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & \frac{-6\left(\pi^{4}-90\right)}{5\left(\pi^{2}-6\right)^{2}}\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & \log\left(4\pi\right)-1\\ & \approx & 1.5310242469692907930.\end{eqnarray*} + + + + +Double Gamma +============ + +The double gamma is the signed version of the Gamma distribution. For :math:`\alpha>0:` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\alpha\right) & = & \frac{1}{2\Gamma\left(\alpha\right)}\left|x\right|^{\alpha-1}e^{-\left|x\right|}\\ F\left(x;\alpha\right) & = & \left\{ \begin{array}{ccc} \frac{1}{2}-\frac{1}{2}\Gamma\left(\alpha,\left|x\right|\right) & & x\leq0\\ \frac{1}{2}+\frac{1}{2}\Gamma\left(\alpha,\left|x\right|\right) & & x>0\end{array}\right.\\ G\left(q;\alpha\right) & = & \left\{ \begin{array}{ccc} -\Gamma^{-1}\left(\alpha,\left|2q-1\right|\right) & & q\leq\frac{1}{2}\\ \Gamma^{-1}\left(\alpha,\left|2q-1\right|\right) & & q>\frac{1}{2}\end{array}\right.\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\frac{1}{2\left(1-t\right)^{a}}+\frac{1}{2\left(1+t\right)^{a}}\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu=m_{n} & = & 0\\ \mu_{2} & = & \alpha\left(\alpha+1\right)\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & \frac{\left(\alpha+2\right)\left(\alpha+3\right)}{\alpha\left(\alpha+1\right)}-3\\ m_{d} & = & \textrm{NA}\end{eqnarray*} + + + + +Doubly Non-central F* +===================== + + +Doubly Non-central t* +===================== + + +Double Weibull +============== + +This is a signed form of the Weibull distribution. + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{c}{2}\left|x\right|^{c-1}\exp\left(-\left|x\right|^{c}\right)\\ F\left(x;c\right) & = & \left\{ \begin{array}{ccc} \frac{1}{2}\exp\left(-\left|x\right|^{c}\right) & & x\leq0\\ 1-\frac{1}{2}\exp\left(-\left|x\right|^{c}\right) & & x>0\end{array}\right.\\ G\left(q;c\right) & = & \left\{ \begin{array}{ccc} -\log^{1/c}\left(\frac{1}{2q}\right) & & q\leq\frac{1}{2}\\ \log^{1/c}\left(\frac{1}{2q-1}\right) & & q>\frac{1}{2}\end{array}\right.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\mu_{n}=\begin{cases} \Gamma\left(1+\frac{n}{c}\right) & n\textrm{ even}\\ 0 & n\textrm{ odd}\end{cases}\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} m_{d}=\mu & = & 0\\ \mu_{2} & = & \Gamma\left(\frac{c+2}{c}\right)\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & \frac{\Gamma\left(1+\frac{4}{c}\right)}{\Gamma^{2}\left(1+\frac{2}{c}\right)}\\ m_{d} & = & \textrm{NA bimodal}\end{eqnarray*} + + + + +Erlang +====== + +This is just the Gamma distribution with shape parameter :math:`\alpha=n` an integer. + + +Exponential +=========== + +This is a special case of the Gamma (and Erlang) distributions with +shape parameter :math:`\left(\alpha=1\right)` and the same location and scale parameters. The standard form is +therefore ( :math:`x\geq0` ) + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & e^{-x}\\ F\left(x\right) & = & \Gamma\left(1,x\right)=1-e^{-x}\\ G\left(q\right) & = & -\log\left(1-q\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=n!\] + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\frac{1}{1-t}\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & 1\\ \mu_{2} & = & 1\\ \gamma_{1} & = & 2\\ \gamma_{2} & = & 6\\ m_{d} & = & 0\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=1.\] + + + + +Exponentiated Weibull +===================== + +Two positive shape parameters :math:`a` and :math:`c` and :math:`x\in\left(0,\infty\right)` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a,c\right) & = & ac\left[1-\exp\left(-x^{c}\right)\right]^{a-1}\exp\left(-x^{c}\right)x^{c-1}\\ F\left(x;a,c\right) & = & \left[1-\exp\left(-x^{c}\right)\right]^{a}\\ G\left(q;a,c\right) & = & \left[-\log\left(1-q^{1/a}\right)\right]^{1/c}\end{eqnarray*} + + + + +Exponential Power +================= + +One positive shape parameter :math:`b` . Defined for :math:`x\geq0.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;b\right) & = & ebx^{b-1}\exp\left[x^{b}-e^{x^{b}}\right]\\ F\left(x;b\right) & = & 1-\exp\left[1-e^{x^{b}}\right]\\ G\left(q;b\right) & = & \log^{1/b}\left[1-\log\left(1-q\right)\right]\end{eqnarray*} + + + + +Fatigue Life (Birnbaum-Sanders) +=============================== + +This distribution's pdf is the average of the inverse-Gaussian :math:`\left(\mu=1\right)` and reciprocal inverse-Gaussian pdf :math:`\left(\mu=1\right)` . We follow the notation of JKB here with :math:`\beta=S.` for :math:`x>0` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{x+1}{2c\sqrt{2\pi x^{3}}}\exp\left(-\frac{\left(x-1\right)^{2}}{2xc^{2}}\right)\\ F\left(x;c\right) & = & \Phi\left(\frac{1}{c}\left(\sqrt{x}-\frac{1}{\sqrt{x}}\right)\right)\\ G\left(q;c\right) & = & \frac{1}{4}\left[c\Phi^{-1}\left(q\right)+\sqrt{c^{2}\left(\Phi^{-1}\left(q\right)\right)^{2}+4}\right]^{2}\end{eqnarray*} + + + +.. 
math:: + :nowrap: + + \[ M\left(t\right)=c\sqrt{2\pi}\exp\left[\frac{1}{c^{2}}\left(1-\sqrt{1-2c^{2}t}\right)\right]\left(1+\frac{1}{\sqrt{1-2c^{2}t}}\right)\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{c^{2}}{2}+1\\ \mu_{2} & = & c^{2}\left(\frac{5}{4}c^{2}+1\right)\\ \gamma_{1} & = & \frac{4c\sqrt{11c^{2}+6}}{\left(5c^{2}+4\right)^{3/2}}\\ \gamma_{2} & = & \frac{6c^{2}\left(93c^{2}+41\right)}{\left(5c^{2}+4\right)^{2}}\end{eqnarray*} + + + + +Fisk (Log Logistic) +=================== + +Special case of the Burr distribution with :math:`d=1` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} c & > & 0\\ k & = & \Gamma\left(1-\frac{2}{c}\right)\Gamma\left(\frac{2}{c}+1\right)-\Gamma^{2}\left(1-\frac{1}{c}\right)\Gamma^{2}\left(\frac{1}{c}+1\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c,d\right) & = & \frac{cx^{c-1}}{\left(1+x^{c}\right)^{2}}I_{\left(0,\infty\right)}\left(x\right)\\ F\left(x;c,d\right) & = & \left(1+x^{-c}\right)^{-1}\\ G\left(\alpha;c,d\right) & = & \left(\alpha^{-1}-1\right)^{-1/c}\\ \mu & = & \Gamma\left(1-\frac{1}{c}\right)\Gamma\left(\frac{1}{c}+1\right)\\ \mu_{2} & = & k\\ \gamma_{1} & = & \frac{1}{\sqrt{k^{3}}}\left[2\Gamma^{3}\left(1-\frac{1}{c}\right)\Gamma^{3}\left(\frac{1}{c}+1\right)+\Gamma\left(1-\frac{3}{c}\right)\Gamma\left(\frac{3}{c}+1\right)\right.\\ & & \left.-3\Gamma\left(1-\frac{2}{c}\right)\Gamma\left(1-\frac{1}{c}\right)\Gamma\left(\frac{1}{c}+1\right)\Gamma\left(\frac{2}{c}+1\right)\right]\\ \gamma_{2} & = & -3+\frac{1}{k^{2}}\left[6\Gamma\left(1-\frac{2}{c}\right)\Gamma^{2}\left(1-\frac{1}{c}\right)\Gamma^{2}\left(\frac{1}{c}+1\right)\Gamma\left(\frac{2}{c}+1\right)\right.\\ & & -3\Gamma^{4}\left(1-\frac{1}{c}\right)\Gamma^{4}\left(\frac{1}{c}+1\right)+\Gamma\left(1-\frac{4}{c}\right)\Gamma\left(\frac{4}{c}+1\right)\\ & & \left.-4\Gamma\left(1-\frac{3}{c}\right)\Gamma\left(1-\frac{1}{c}\right)\Gamma\left(\frac{1}{c}+1\right)\Gamma\left(\frac{3}{c}+1\right)\right]\\ m_{d} & = & \left(\frac{c-1}{c+1}\right)^{1/c}\,\textrm{if }c>1\,\textrm{otherwise }0\\ m_{n} & = & 1\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=2-\log c.\] + + + + +Folded Cauchy +============= + +This formula can be expressed in terms of the standard formulas for +the Cauchy distribution (call the cdf :math:`C\left(x\right)` and the pdf :math:`d\left(x\right)` ). if :math:`Y` is cauchy then :math:`\left|Y\right|` is folded cauchy. Note that :math:`x\geq0.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{1}{\pi\left(1+\left(x-c\right)^{2}\right)}+\frac{1}{\pi\left(1+\left(x+c\right)^{2}\right)}\\ F\left(x;c\right) & = & \frac{1}{\pi}\tan^{-1}\left(x-c\right)+\frac{1}{\pi}\tan^{-1}\left(x+c\right)\\ G\left(q;c\right) & = & F^{-1}\left(x;c\right)\end{eqnarray*} + + + +No moments + + +Folded Normal +============= + +If :math:`Z` is Normal with mean :math:`L` and :math:`\sigma=S` , then :math:`\left|Z\right|` is a folded normal with shape parameter :math:`c=\left|L\right|/S` , location parameter :math:`0` and scale parameter :math:`S` . This is a special case of the non-central chi distribution with one- +degree of freedom and non-centrality parameter :math:`c^{2}.` Note that :math:`c\geq0` . The standard form of the folded normal is + +.. 
math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \sqrt{\frac{2}{\pi}}\cosh\left(cx\right)\exp\left(-\frac{x^{2}+c^{2}}{2}\right)\\ F\left(x;c\right) & = & \Phi\left(x-c\right)-\Phi\left(-x-c\right)=\Phi\left(x-c\right)+\Phi\left(x+c\right)-1\\ G\left(\alpha;c\right) & = & F^{-1}\left(x;c\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\exp\left[\frac{t}{2}\left(t-2c\right)\right]\left(1+e^{2ct}\right)\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} k & = & \textrm{erf}\left(\frac{c}{\sqrt{2}}\right)\\ p & = & \exp\left(-\frac{c^{2}}{2}\right)\\ \mu & = & \sqrt{\frac{2}{\pi}}p+ck\\ \mu_{2} & = & c^{2}+1-\mu^{2}\\ \gamma_{1} & = & \frac{\sqrt{\frac{2}{\pi}}p^{3}\left(4-\frac{\pi}{p^{2}}\left(2c^{2}+1\right)\right)+2ck\left(6p^{2}+3cpk\sqrt{2\pi}+\pi c\left(k^{2}-1\right)\right)}{\pi\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{c^{4}+6c^{2}+3+6\left(c^{2}+1\right)\mu^{2}-3\mu^{4}-4p\mu\left(\sqrt{\frac{2}{\pi}}\left(c^{2}+2\right)+\frac{ck}{p}\left(c^{2}+3\right)\right)}{\mu_{2}^{2}}\end{eqnarray*} + + + + +Fratio (or F) +============= + +Defined for :math:`x>0` . The distribution of :math:`\left(X_{1}/X_{2}\right)\left(\nu_{2}/\nu_{1}\right)` if :math:`X_{1}` is chi-squared with :math:`v_{1}` degrees of freedom and :math:`X_{2}` is chi-squared with :math:`v_{2}` degrees of freedom. + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\nu_{1},\nu_{2}\right) & = & \frac{\nu_{2}^{\nu_{2}/2}\nu_{1}^{\nu_{1}/2}x^{\nu_{1}/2-1}}{\left(\nu_{2}+\nu_{1}x\right)^{\left(\nu_{1}+\nu_{2}\right)/2}B\left(\frac{\nu_{1}}{2},\frac{\nu_{2}}{2}\right)}\\ F\left(x;v_{1},v_{2}\right) & = & I\left(\frac{\nu_{1}}{2},\frac{\nu_{2}}{2},\frac{\nu_{2}x}{\nu_{2}+\nu_{1}x}\right)\\ G\left(q;\nu_{1},\nu_{2}\right) & = & \left[\frac{\nu_{2}}{I^{-1}\left(\nu_{1}/2,\nu_{2}/2,q\right)}-\frac{\nu_{1}}{\nu_{2}}\right]^{-1}.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{\nu_{2}}{\nu_{2}-2}\quad\nu_{2}>2\\ \mu_{2} & = & \frac{2\nu_{2}^{2}\left(\nu_{1}+\nu_{2}-2\right)}{\nu_{1}\left(\nu_{2}-2\right)^{2}\left(\nu_{2}-4\right)}\quad v_{2}>4\\ \gamma_{1} & = & \frac{2\left(2\nu_{1}+\nu_{2}-2\right)}{\nu_{2}-6}\sqrt{\frac{2\left(\nu_{2}-4\right)}{\nu_{1}\left(\nu_{1}+\nu_{2}-2\right)}}\quad\nu_{2}>6\\ \gamma_{2} & = & \frac{3\left[8+\left(\nu_{2}-6\right)\gamma_{1}^{2}\right]}{2\nu-16}\quad\nu_{2}>8\end{eqnarray*} + + + + +Fréchet (ExtremeLB, Extreme Value II, Weibull minimum) +======================================================= + +A type of extreme-value distribution with a lower bound. Defined for :math:`x>0` and :math:`c>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & cx^{c-1}\exp\left(-x^{c}\right)\\ F\left(x;c\right) & = & 1-\exp\left(-x^{c}\right)\\ G\left(q;c\right) & = & \left[-\log\left(1-q\right)\right]^{1/c}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\Gamma\left(1+\frac{n}{c}\right)\] + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \Gamma\left(1+\frac{1}{c}\right)\\ \mu_{2} & = & \Gamma\left(1+\frac{2}{c}\right)-\Gamma^{2}\left(1-\frac{1}{c}\right)\\ \gamma_{1} & = & \frac{\Gamma\left(1+\frac{3}{c}\right)-3\Gamma\left(1+\frac{2}{c}\right)\Gamma\left(1+\frac{1}{c}\right)+2\Gamma^{3}\left(1+\frac{1}{c}\right)}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\Gamma\left(1+\frac{4}{c}\right)-4\Gamma\left(1+\frac{1}{c}\right)\Gamma\left(1+\frac{3}{c}\right)+6\Gamma^{2}\left(1+\frac{1}{c}\right)\Gamma\left(1+\frac{2}{c}\right)-\Gamma^{4}\left(1+\frac{1}{c}\right)}{\mu_{2}^{2}}-3\\ m_{d} & = & \left(\frac{c}{1+c}\right)^{1/c}\\ m_{n} & = & G\left(\frac{1}{2};c\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=-\frac{\gamma}{c}-\log\left(c\right)+\gamma+1\] + +where :math:`\gamma` is Euler's constant and equal to + +.. math:: + :nowrap: + + \[ \gamma\approx0.57721566490153286061.\] + + + + +Fréchet (left-skewed, Extreme Value Type III, Weibull maximum) +=============================================================== + +Defined for :math:`x<0` and :math:`c>0` . + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & c\left(-x\right)^{c-1}\exp\left(-\left(-x\right)^{c}\right)\\ F\left(x;c\right) & = & \exp\left(-\left(-x\right)^{c}\right)\\ G\left(q;c\right) & = & -\left(-\log q\right)^{1/c}\end{eqnarray*} + + + +The mean is the negative of the right-skewed Frechet distribution +given above, and the other statistical parameters can be computed from + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\left(-1\right)^{n}\Gamma\left(1+\frac{n}{c}\right).\] + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=-\frac{\gamma}{c}-\log\left(c\right)+\gamma+1\] + +where :math:`\gamma` is Euler's constant and equal to + +.. math:: + :nowrap: + + \[ \gamma\approx0.57721566490153286061.\] + + + + +Gamma +===== + +The standard form for the gamma distribution is :math:`\left(\alpha>0\right)` valid for :math:`x\geq0` . + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\alpha\right) & = & \frac{1}{\Gamma\left(\alpha\right)}x^{\alpha-1}e^{-x}\\ F\left(x;\alpha\right) & = & \Gamma\left(\alpha,x\right)\\ G\left(q;\alpha\right) & = & \Gamma^{-1}\left(\alpha,q\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\frac{1}{\left(1-t\right)^{\alpha}}\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \alpha\\ \mu_{2} & = & \alpha\\ \gamma_{1} & = & \frac{2}{\sqrt{\alpha}}\\ \gamma_{2} & = & \frac{6}{\alpha}\\ m_{d} & = & \alpha-1\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\Psi\left(a\right)\left[1-a\right]+a+\log\Gamma\left(a\right)\] + +where + +.. math:: + :nowrap: + + \[ \Psi\left(a\right)=\frac{\Gamma^{\prime}\left(a\right)}{\Gamma\left(a\right)}.\] + + + + +Generalized Logistic +==================== + +Has been used in the analysis of extreme values. Has one shape +parameter :math:`c>0.` And :math:`x>0` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{c\exp\left(-x\right)}{\left[1+\exp\left(-x\right)\right]^{c+1}}\\ F\left(x;c\right) & = & \frac{1}{\left[1+\exp\left(-x\right)\right]^{c}}\\ G\left(q;c\right) & = & -\log\left(q^{-1/c}-1\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\frac{c}{1-t}\,_{2}F_{1}\left(1+c,\,1-t\,;\,2-t\,;-1\right)\] + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \gamma+\psi_{0}\left(c\right)\\ \mu_{2} & = & \frac{\pi^{2}}{6}+\psi_{1}\left(c\right)\\ \gamma_{1} & = & \frac{\psi_{2}\left(c\right)+2\zeta\left(3\right)}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\left(\frac{\pi^{4}}{15}+\psi_{3}\left(c\right)\right)}{\mu_{2}^{2}}\\ m_{d} & = & \log c\\ m_{n} & = & -\log\left(2^{1/c}-1\right)\end{eqnarray*} + +Note that the polygamma function is + +.. math:: + :nowrap: + + \begin{eqnarray*} \psi_{n}\left(z\right) & = & \frac{d^{n+1}}{dz^{n+1}}\log\Gamma\left(z\right)\\ & = & \left(-1\right)^{n+1}n!\sum_{k=0}^{\infty}\frac{1}{\left(z+k\right)^{n+1}}\\ & = & \left(-1\right)^{n+1}n!\zeta\left(n+1,z\right)\end{eqnarray*} + +where :math:`\zeta\left(k,x\right)` is a generalization of the Riemann zeta function called the Hurwitz +zeta function. Note that :math:`\zeta\left(n\right)\equiv\zeta\left(n,1\right)` . + + +Generalized Pareto +================== + +Shape parameter :math:`c\neq0` and defined for :math:`x\geq0` for all :math:`c` and :math:`x<\frac{1}{\left|c\right|}` if :math:`c` is negative. + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \left(1+cx\right)^{-1-\frac{1}{c}}\\ F\left(x;c\right) & = & 1-\frac{1}{\left(1+cx\right)^{1/c}}\\ G\left(q;c\right) & = & \frac{1}{c}\left[\left(\frac{1}{1-q}\right)^{c}-1\right]\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\left\{ \begin{array}{cc} \left(-\frac{t}{c}\right)^{\frac{1}{c}}e^{-\frac{t}{c}}\left[\Gamma\left(1-\frac{1}{c}\right)+\Gamma\left(-\frac{1}{c},-\frac{t}{c}\right)-\pi\csc\left(\frac{\pi}{c}\right)/\Gamma\left(\frac{1}{c}\right)\right] & c>0\\ \left(\frac{\left|c\right|}{t}\right)^{1/\left|c\right|}\Gamma\left[\frac{1}{\left|c\right|},\frac{t}{\left|c\right|}\right] & c<0\end{array}\right.\] + + + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\frac{\left(-1\right)^{n}}{c^{n}}\sum_{k=0}^{n}\left(\begin{array}{c} n\\ k\end{array}\right)\frac{\left(-1\right)^{k}}{1-ck}\quad cn<1\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu_{1}^{\prime} & = & \frac{1}{1-c}\quad c<1\\ \mu_{2}^{\prime} & = & \frac{2}{\left(1-2c\right)\left(1-c\right)}\quad c<\frac{1}{2}\\ \mu_{3}^{\prime} & = & \frac{6}{\left(1-c\right)\left(1-2c\right)\left(1-3c\right)}\quad c<\frac{1}{3}\\ \mu_{4}^{\prime} & = & \frac{24}{\left(1-c\right)\left(1-2c\right)\left(1-3c\right)\left(1-4c\right)}\quad c<\frac{1}{4}\end{eqnarray*} + +Thus, + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \mu_{1}^{\prime}\\ \mu_{2} & = & \mu_{2}^{\prime}-\mu^{2}\\ \gamma_{1} & = & \frac{\mu_{3}^{\prime}-3\mu\mu_{2}-\mu^{3}}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\mu_{4}^{\prime}-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}}{\mu_{2}^{2}}-3\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=1+c\quad c>0\] + + + + +Generalized Exponential +======================= + +Three positive shape parameters for :math:`x\geq0.` Note that :math:`a,b,` and :math:`c` are all :math:`>0.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a,b,c\right) & = & \left(a+b\left(1-e^{-cx}\right)\right)\exp\left[-ax-bx+\frac{b}{c}\left(1-e^{-cx}\right)\right]\\ F\left(x;a,b,c\right) & = & 1-\exp\left[-ax-bx+\frac{b}{c}\left(1-e^{-cx}\right)\right]\\ G\left(q;a,b,c\right) & = & F^{-1}\end{eqnarray*} + + + + +Generalized Extreme Value +========================= + +Extreme value distributions with shape parameter :math:`c` . + +For :math:`c>0` defined on :math:`-\infty<x\leq\frac{1}{c}.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \exp\left[-\left(1-cx\right)^{1/c}\right]\left(1-cx\right)^{1/c-1}\\ F\left(x;c\right) & = & \exp\left[-\left(1-cx\right)^{1/c}\right]\\ G\left(q;c\right) & = & \frac{1}{c}\left[1-\left(-\log q\right)^{c}\right]\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\frac{1}{c^{n}}\sum_{k=0}^{n}\left(\begin{array}{c} n\\ k\end{array}\right)\left(-1\right)^{k}\Gamma\left(1+ck\right)\quad cn>-1\] + +So, + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu_{1}^{\prime} & = & \frac{1}{c}\left(1-\Gamma\left(1+c\right)\right)\quad c>-1\\ \mu_{2}^{\prime} & = & \frac{1}{c^{2}}\left(1-2\Gamma\left(1+c\right)+\Gamma\left(1+2c\right)\right)\quad c>-\frac{1}{2}\\ \mu_{3}^{\prime} & = & \frac{1}{c^{3}}\left(1-3\Gamma\left(1+c\right)+3\Gamma\left(1+2c\right)-\Gamma\left(1+3c\right)\right)\quad c>-\frac{1}{3}\\ \mu_{4}^{\prime} & = & \frac{1}{c^{4}}\left(1-4\Gamma\left(1+c\right)+6\Gamma\left(1+2c\right)-4\Gamma\left(1+3c\right)+\Gamma\left(1+4c\right)\right)\quad c>-\frac{1}{4}\end{eqnarray*} + +For :math:`c<0` defined on :math:`\frac{1}{c}\leq x<\infty.` For :math:`c=0` defined over all space + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;0\right) & = & \exp\left[-e^{-x}\right]e^{-x}\\ F\left(x;0\right) & = & \exp\left[-e^{-x}\right]\\ G\left(q;0\right) & = & -\log\left(-\log q\right)\end{eqnarray*} + +This is just the (left-skewed) Gumbel distribution for c=0. + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \gamma=-\psi_{0}\left(1\right)\\ \mu_{2} & = & \frac{\pi^{2}}{6}\\ \gamma_{1} & = & \frac{12\sqrt{6}}{\pi^{3}}\zeta\left(3\right)\\ \gamma_{2} & = & \frac{12}{5}\end{eqnarray*} + + + + +Generalized Gamma +================= + +A general probability form that reduces to many common distributions: :math:`x>0` :math:`a>0` and :math:`c\neq0.` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a,c\right) & = & \frac{\left|c\right|x^{ca-1}}{\Gamma\left(a\right)}\exp\left(-x^{c}\right)\\ F\left(x;a,c\right) & = & \begin{array}{cc} \frac{\Gamma\left(a,x^{c}\right)}{\Gamma\left(a\right)} & c>0\\ 1-\frac{\Gamma\left(a,x^{c}\right)}{\Gamma\left(a\right)} & c<0\end{array}\\ G\left(q;a,c\right) & = & \left\{ \Gamma^{-1}\left[a,\Gamma\left(a\right)q\right]\right\} ^{1/c}\quad c>0\\ & & \left\{ \Gamma^{-1}\left[a,\Gamma\left(a\right)\left(1-q\right)\right]\right\} ^{1/c}\quad c<0\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\frac{\Gamma\left(a+\frac{n}{c}\right)}{\Gamma\left(a\right)}\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{\Gamma\left(a+\frac{1}{c}\right)}{\Gamma\left(a\right)}\\ \mu_{2} & = & \frac{\Gamma\left(a+\frac{2}{c}\right)}{\Gamma\left(a\right)}-\mu^{2}\\ \gamma_{1} & = & \frac{\Gamma\left(a+\frac{3}{c}\right)/\Gamma\left(a\right)-3\mu\mu_{2}-\mu^{3}}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\Gamma\left(a+\frac{4}{c}\right)/\Gamma\left(a\right)-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}}{\mu_{2}^{2}}-3\\ m_{d} & = & \left(\frac{ac-1}{c}\right)^{1/c}.\end{eqnarray*} + +Special cases are Weibull :math:`\left(a=1\right)` , half-normal :math:`\left(a=1/2,c=2\right)` and ordinary gamma distributions :math:`c=1.` If :math:`c=-1` then it is the inverted gamma distribution. + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=a-a\Psi\left(a\right)+\frac{1}{c}\Psi\left(a\right)+\log\Gamma\left(a\right)-\log\left|c\right|.\] + + + + +Generalized Half-Logistic +========================= + +For :math:`x\in\left[0,1/c\right]` and :math:`c>0` we have + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{2\left(1-cx\right)^{\frac{1}{c}-1}}{\left(1+\left(1-cx\right)^{1/c}\right)^{2}}\\ F\left(x;c\right) & = & \frac{1-\left(1-cx\right)^{1/c}}{1+\left(1-cx\right)^{1/c}}\\ G\left(q;c\right) & = & \frac{1}{c}\left[1-\left(\frac{1-q}{1+q}\right)^{c}\right]\end{eqnarray*} + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & 2-\left(2c+1\right)\log2.\end{eqnarray*} + + + + +Gilbrat +======= + +Special case of the log-normal with :math:`\sigma=1` and :math:`S=1.0` (typically also :math:`L=0.0` ) + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\sigma\right) & = & \frac{1}{x\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\log x\right)^{2}\right]\\ F\left(x;\sigma\right) & = & \Phi\left(\log x\right)=\frac{1}{2}\left[1+\textrm{erf}\left(\frac{\log x}{\sqrt{2}}\right)\right]\\ G\left(q;\sigma\right) & = & \exp\left\{ \Phi^{-1}\left(q\right)\right\} \end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \sqrt{e}\\ \mu_{2} & = & e\left[e-1\right]\\ \gamma_{1} & = & \sqrt{e-1}\left(2+e\right)\\ \gamma_{2} & = & e^{4}+2e^{3}+3e^{2}-6\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & \log\left(\sqrt{2\pi e}\right)\\ & \approx & 1.4189385332046727418\end{eqnarray*} + + + + +Gompertz (Truncated Gumbel) +=========================== + +For :math:`x\geq0` and :math:`c>0` . In JKB the two shape parameters :math:`b,a` are reduced to the single shape-parameter :math:`c=b/a` . As :math:`a` is just a scale parameter when :math:`a\neq0` . If :math:`a=0,` the distribution reduces to the exponential distribution scaled by :math:`1/b.` Thus, the standard form is given as + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & ce^{x}\exp\left[-c\left(e^{x}-1\right)\right]\\ F\left(x;c\right) & = & 1-\exp\left[-c\left(e^{x}-1\right)\right]\\ G\left(q;c\right) & = & \log\left[1-\frac{1}{c}\log\left(1-q\right)\right]\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=1-\log\left(c\right)-e^{c}\textrm{Ei}\left(1,c\right),\] + +where + +.. math:: + :nowrap: + + \[ \textrm{Ei}\left(n,x\right)=\int_{1}^{\infty}t^{-n}\exp\left(-xt\right)dt\] + + + + +Gumbel (LogWeibull, Fisher-Tippetts, Type I Extreme Value) +========================================================== + +One of a clase of extreme value distributions (right-skewed). + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \exp\left(-\left(x+e^{-x}\right)\right)\\ F\left(x\right) & = & \exp\left(-e^{-x}\right)\\ G\left(q\right) & = & -\log\left(-\log\left(q\right)\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\Gamma\left(1-t\right)\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \gamma=-\psi_{0}\left(1\right)\\ \mu_{2} & = & \frac{\pi^{2}}{6}\\ \gamma_{1} & = & \frac{12\sqrt{6}}{\pi^{3}}\zeta\left(3\right)\\ \gamma_{2} & = & \frac{12}{5}\\ m_{d} & = & 0\\ m_{n} & = & -\log\left(\log2\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]\approx1.0608407169541684911\] + + + + +Gumbel Left-skewed (for minimum order statistic) +================================================ + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \exp\left(x-e^{x}\right)\\ F\left(x\right) & = & 1-\exp\left(-e^{x}\right)\\ G\left(q\right) & = & \log\left(-\log\left(1-q\right)\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\Gamma\left(1+t\right)\] + +Note, that :math:`\mu` is negative the mean for the right-skewed distribution. Similar for +median and mode. All other moments are the same. + + + +.. math:: + :nowrap: + + \[ h\left[X\right]\approx1.0608407169541684911.\] + + + + +HalfCauchy +========== + +If :math:`Z` is Hyperbolic Secant distributed then :math:`e^{Z}` is Half-Cauchy distributed. 
Also, if :math:`W` is (standard) Cauchy distributed, then :math:`\left|W\right|` is Half-Cauchy distributed. Special case of the Folded Cauchy +distribution with :math:`c=0.` The standard form is + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{2}{\pi\left(1+x^{2}\right)}I_{[0,\infty)}\left(x\right)\\ F\left(x\right) & = & \frac{2}{\pi}\arctan\left(x\right)I_{\left[0,\infty\right]}\left(x\right)\\ G\left(q\right) & = & \tan\left(\frac{\pi}{2}q\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\cos t+\frac{2}{\pi}\left[\textrm{Si}\left(t\right)\cos t-\textrm{Ci}\left(\textrm{-}t\right)\sin t\right]\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} m_{d} & = & 0\\ m_{n} & = & \tan\left(\frac{\pi}{4}\right)\end{eqnarray*} + +No moments, as the integrals diverge. + + + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & \log\left(2\pi\right)\\ & \approx & 1.8378770664093454836.\end{eqnarray*} + + + + +HalfNormal +========== + +This is a special case of the chi distribution with :math:`L=a` and :math:`S=b` and :math:`\nu=1.` This is also a special case of the folded normal with shape parameter :math:`c=0` and :math:`S=S.` If :math:`Z` is (standard) normally distributed then, :math:`\left|Z\right|` is half-normal. The standard form is + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \sqrt{\frac{2}{\pi}}e^{-x^{2}/2}I_{\left(0,\infty\right)}\left(x\right)\\ F\left(x\right) & = & 2\Phi\left(x\right)-1\\ G\left(q\right) & = & \Phi^{-1}\left(\frac{1+q}{2}\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\sqrt{2\pi}e^{t^{2}/2}\Phi\left(t\right)\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \sqrt{\frac{2}{\pi}}\\ \mu_{2} & = & 1-\frac{2}{\pi}\\ \gamma_{1} & = & \frac{\sqrt{2}\left(4-\pi\right)}{\left(\pi-2\right)^{3/2}}\\ \gamma_{2} & = & \frac{8\left(\pi-3\right)}{\left(\pi-2\right)^{2}}\\ m_{d} & = & 0\\ m_{n} & = & \Phi^{-1}\left(\frac{3}{4}\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & \log\left(\sqrt{\frac{\pi e}{2}}\right)\\ & \approx & 0.72579135264472743239.\end{eqnarray*} + + + + +Half-Logistic +============= + +In the limit as :math:`c\rightarrow\infty` for the generalized half-logistic we have the half-logistic defined +over :math:`x\geq0.` Also, the distribution of :math:`\left|X\right|` where :math:`X` has logistic distribtution. + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{2e^{-x}}{\left(1+e^{-x}\right)^{2}}=\frac{1}{2}\textrm{sech}^{2}\left(\frac{x}{2}\right)\\ F\left(x\right) & = & \frac{1-e^{-x}}{1+e^{-x}}=\tanh\left(\frac{x}{2}\right)\\ G\left(q\right) & = & \log\left(\frac{1+q}{1-q}\right)=2\textrm{arctanh}\left(q\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=1-t\psi_{0}\left(\frac{1}{2}-\frac{t}{2}\right)+t\psi_{0}\left(1-\frac{t}{2}\right)\] + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=2\left(1-2^{1-n}\right)n!\zeta\left(n\right)\quad n\neq1\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu_{1}^{\prime} & = & 2\log\left(2\right)\\ \mu_{2}^{\prime} & = & 2\zeta\left(2\right)=\frac{\pi^{2}}{3}\\ \mu_{3}^{\prime} & = & 9\zeta\left(3\right)\\ \mu_{4}^{\prime} & = & 42\zeta\left(4\right)=\frac{7\pi^{4}}{15}\end{eqnarray*} + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & 2-\log\left(2\right)\\ & \approx & 1.3068528194400546906.\end{eqnarray*} + + + + +Hyperbolic Secant +================= + +Related to the logistic distribution and used in lifetime analysis. +Standard form is (defined over all :math:`x` ) + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{\pi}\textrm{sech}\left(x\right)\\ F\left(x\right) & = & \frac{2}{\pi}\arctan\left(e^{x}\right)\\ G\left(q\right) & = & \log\left(\tan\left(\frac{\pi}{2}q\right)\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\sec\left(\frac{\pi}{2}t\right)\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu_{n}^{\prime} & = & \frac{1+\left(-1\right)^{n}}{2\pi2^{2n}}n!\left[\zeta\left(n+1,\frac{1}{4}\right)-\zeta\left(n+1,\frac{3}{4}\right)\right]\\ & = & \left\{ \begin{array}{cc} 0 & n\textrm{ odd}\\ C_{n/2}\frac{\pi^{n}}{2^{n}} & n\textrm{ even}\end{array}\right.\end{eqnarray*} + +where :math:`C_{m}` is an integer given by + +.. math:: + :nowrap: + + \begin{eqnarray*} C_{m} & = & \frac{\left(2m\right)!\left[\zeta\left(2m+1,\frac{1}{4}\right)-\zeta\left(2m+1,\frac{3}{4}\right)\right]}{\pi^{2m+1}2^{2m}}\\ & = & 4\left(-1\right)^{m-1}\frac{16^{m}}{2m+1}B_{2m+1}\left(\frac{1}{4}\right)\end{eqnarray*} + +where :math:`B_{2m+1}\left(\frac{1}{4}\right)` is the Bernoulli polynomial of order :math:`2m+1` evaluated at :math:`1/4.` Thus + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\left\{ \begin{array}{cc} 0 & n\textrm{ odd}\\ 4\left(-1\right)^{n/2-1}\frac{\left(2\pi\right)^{n}}{n+1}B_{n+1}\left(\frac{1}{4}\right) & n\textrm{ even}\end{array}\right.\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} m_{d}=m_{n}=\mu & = & 0\\ \mu_{2} & = & \frac{\pi^{2}}{4}\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & 2\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\log\left(2\pi\right).\] + + + + +Gauss Hypergeometric +==================== + +:math:`x\in\left[0,1\right]` , :math:`\alpha>0,\,\beta>0` + +.. math:: + :nowrap: + + \[ C^{-1}=B\left(\alpha,\beta\right)\,_{2}F_{1}\left(\gamma,\alpha;\alpha+\beta;-z\right)\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\alpha,\beta,\gamma,z\right) & = & Cx^{\alpha-1}\frac{\left(1-x\right)^{\beta-1}}{\left(1+zx\right)^{\gamma}}\\ \mu_{n}^{\prime} & = & \frac{B\left(n+\alpha,\beta\right)}{B\left(\alpha,\beta\right)}\frac{\,_{2}F_{1}\left(\gamma,\alpha+n;\alpha+\beta+n;-z\right)}{\,_{2}F_{1}\left(\gamma,\alpha;\alpha+\beta;-z\right)}\end{eqnarray*} + + + + +Inverted Gamma +============== + +Special case of the generalized Gamma distribution with :math:`c=-1` and :math:`a>0` , :math:`x>0` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a\right) & = & \frac{x^{-a-1}}{\Gamma\left(a\right)}\exp\left(-\frac{1}{x}\right)\\ F\left(x;a\right) & = & \frac{\Gamma\left(a,\frac{1}{x}\right)}{\Gamma\left(a\right)}\\ G\left(q;a\right) & = & \left\{ \Gamma^{-1}\left[a,\Gamma\left(a\right)q\right]\right\} ^{-1}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\frac{\Gamma\left(a-n\right)}{\Gamma\left(a\right)}\quad a>n\] + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{1}{a-1}\quad a>1\\ \mu_{2} & = & \frac{1}{\left(a-2\right)\left(a-1\right)}-\mu^{2}\quad a>2\\ \gamma_{1} & = & \frac{\frac{1}{\left(a-3\right)\left(a-2\right)\left(a-1\right)}-3\mu\mu_{2}-\mu^{3}}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\frac{1}{\left(a-4\right)\left(a-3\right)\left(a-2\right)\left(a-1\right)}-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}}{\mu_{2}^{2}}-3\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ m_{d}=\frac{1}{a+1}\] + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=a-\left(a+1\right)\Psi\left(a\right)+\log\Gamma\left(a\right).\] + + + + +Inverse Normal (Inverse Gaussian) +================================= + +The standard form involves the shape parameter :math:`\mu` (in most definitions, :math:`L=0.0` is used). (In terms of the regress documentation :math:`\mu=A/B` ) and :math:`B=S` and :math:`L` is not a parameter in that distribution. A standard form is :math:`x>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\mu\right) & = & \frac{1}{\sqrt{2\pi x^{3}}}\exp\left(-\frac{\left(x-\mu\right)^{2}}{2x\mu^{2}}\right).\\ F\left(x;\mu\right) & = & \Phi\left(\frac{1}{\sqrt{x}}\frac{x-\mu}{\mu}\right)+\exp\left(\frac{2}{\mu}\right)\Phi\left(-\frac{1}{\sqrt{x}}\frac{x+\mu}{\mu}\right)\\ G\left(q;\mu\right) & = & F^{-1}\left(q;\mu\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \mu\\ \mu_{2} & = & \mu^{3}\\ \gamma_{1} & = & 3\sqrt{\mu}\\ \gamma_{2} & = & 15\mu\\ m_{d} & = & \frac{\mu}{2}\left(\sqrt{9\mu^{2}+4}-3\mu\right)\end{eqnarray*} + + + +This is related to the canonical form or JKB "two-parameter "inverse Gaussian when written in it's full form with scale parameter :math:`S` and location parameter :math:`L` by taking :math:`L=0` and :math:`S\equiv\lambda,` then :math:`\mu S` is equal to :math:`\mu_{2}` where :math:`\mu_{2}` is the parameter used by JKB. We prefer this form because of it's +consistent use of the scale parameter. Notice that in JKB the skew :math:`\left(\sqrt{\beta_{1}}\right)` and the kurtosis ( :math:`\beta_{2}-3` ) are both functions only of :math:`\mu_{2}/\lambda=\mu S/S=\mu` as shown here, while the variance and mean of the standard form here +are transformed appropriately. + + +Inverted Weibull +================ + +Shape parameter :math:`c>0` and :math:`x>0` . Then + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & cx^{-c-1}\exp\left(-x^{-c}\right)\\ F\left(x;c\right) & = & \exp\left(-x^{-c}\right)\\ G\left(q;c\right) & = & \left(-\log q\right)^{-1/c}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=1+\gamma+\frac{\gamma}{c}-\log\left(c\right)\] + +where :math:`\gamma` is Euler's constant. + + +Johnson SB +========== + +Defined for :math:`x\in\left(0,1\right)` with two shape parameters :math:`a` and :math:`b>0.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a,b\right) & = & \frac{b}{x\left(1-x\right)}\phi\left(a+b\log\frac{x}{1-x}\right)\\ F\left(x;a,b\right) & = & \Phi\left(a+b\log\frac{x}{1-x}\right)\\ G\left(q;a,b\right) & = & \frac{1}{1+\exp\left[-\frac{1}{b}\left(\Phi^{-1}\left(q\right)-a\right)\right]}\end{eqnarray*} + + + + +Johnson SU +========== + +Defined for all :math:`x` with two shape parameters :math:`a` and :math:`b>0` . + +.. 
math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a,b\right) & = & \frac{b}{\sqrt{x^{2}+1}}\phi\left(a+b\log\left(x+\sqrt{x^{2}+1}\right)\right)\\ F\left(x;a,b\right) & = & \Phi\left(a+b\log\left(x+\sqrt{x^{2}+1}\right)\right)\\ G\left(q;a,b\right) & = & \sinh\left[\frac{\Phi^{-1}\left(q\right)-a}{b}\right]\end{eqnarray*} + + + + +KSone +===== + + +KStwo +===== + + +Laplace (Double Exponential, Bilateral Exponential) +====================================================== + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{2}e^{-\left|x\right|}\\ F\left(x\right) & = & \left\{ \begin{array}{ccc} \frac{1}{2}e^{x} & & x\leq0\\ 1-\frac{1}{2}e^{-x} & & x>0\end{array}\right.\\ G\left(q\right) & = & \left\{ \begin{array}{ccc} \log\left(2q\right) & & q\leq\frac{1}{2}\\ -\log\left(2-2q\right) & & q>\frac{1}{2}\end{array}\right.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} m_{d}=m_{n}=\mu & = & 0\\ \mu_{2} & = & 2\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & 3\end{eqnarray*} + + + +The ML estimator of the location parameter is + +.. math:: + :nowrap: + + \[ \hat{L}=\textrm{median}\left(X_{i}\right)\] + +where :math:`X_{i}` is a sequence of :math:`N` mutually independent Laplace RV's and the median is some number +between the :math:`\frac{1}{2}N\textrm{th}` and the :math:`(N/2+1)\textrm{th}` order statistic ( *e.g.* take the average of these two) when :math:`N` is even. Also, + +.. math:: + :nowrap: + + \[ \hat{S}=\frac{1}{N}\sum_{j=1}^{N}\left|X_{j}-\hat{L}\right|.\] + +Replace :math:`\hat{L}` with :math:`L` if it is known. If :math:`L` is known then this estimator is distributed as :math:`\left(2N\right)^{-1}S\cdot\chi_{2N}^{2}` . + + + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & \log\left(2e\right)\\ & \approx & 1.6931471805599453094.\end{eqnarray*} + + + + +Left-skewed Lévy +================= + +Special case of the Lévy-stable distribution with :math:`\alpha=\frac{1}{2}` and :math:`\beta=-1` ; the support is :math:`x<0` . In standard form + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{\left|x\right|\sqrt{2\pi\left|x\right|}}\exp\left(-\frac{1}{2\left|x\right|}\right)\\ F\left(x\right) & = & 2\Phi\left(\frac{1}{\sqrt{\left|x\right|}}\right)-1\\ G\left(q\right) & = & -\left[\Phi^{-1}\left(\frac{q+1}{2}\right)\right]^{-2}.\end{eqnarray*} + +No moments. + + +Lévy +===== + +A special case of Lévy-stable distributions with :math:`\alpha=\frac{1}{2}` and :math:`\beta=1` . In standard form it is defined for :math:`x>0` as + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{x\sqrt{2\pi x}}\exp\left(-\frac{1}{2x}\right)\\ F\left(x\right) & = & 2\left[1-\Phi\left(\frac{1}{\sqrt{x}}\right)\right]\\ G\left(q\right) & = & \left[\Phi^{-1}\left(1-\frac{q}{2}\right)\right]^{-2}.\end{eqnarray*} + +It has no finite moments. + + +Logistic (Sech-squared) +======================= + +A special case of the Generalized Logistic distribution with :math:`c=1.` Defined for all :math:`x` . + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{\exp\left(-x\right)}{\left[1+\exp\left(-x\right)\right]^{2}}\\ F\left(x\right) & = & \frac{1}{1+\exp\left(-x\right)}\\ G\left(q\right) & = & -\log\left(1/q-1\right)\end{eqnarray*} + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \gamma+\psi_{0}\left(1\right)=0\\ \mu_{2} & = & \frac{\pi^{2}}{6}+\psi_{1}\left(1\right)=\frac{\pi^{2}}{3}\\ \gamma_{1} & = & \frac{\psi_{2}\left(1\right)+2\zeta\left(3\right)}{\mu_{2}^{3/2}}=0\\ \gamma_{2} & = & \frac{\left(\frac{\pi^{4}}{15}+\psi_{3}\left(1\right)\right)}{\mu_{2}^{2}}=\frac{6}{5}\\ m_{d} & = & \log1=0\\ m_{n} & = & -\log\left(2-1\right)=0\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=1.\] + + + + +Log Double Exponential (Log-Laplace) +==================================== + +Defined over :math:`x>0` with :math:`c>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \left\{ \begin{array}{ccc} \frac{c}{2}x^{c-1} & & 0<x<1\\ \frac{c}{2}x^{-c-1} & & x\geq1\end{array}\right.\\ F\left(x;c\right) & = & \left\{ \begin{array}{ccc} \frac{1}{2}x^{c} & & 0<x<1\\ 1-\frac{1}{2}x^{-c} & & x\geq1\end{array}\right.\\ G\left(q;c\right) & = & \left\{ \begin{array}{ccc} \left(2q\right)^{1/c} & & 0\leq q<\frac{1}{2}\\ \left(2-2q\right)^{-1/c} & & \frac{1}{2}\leq q\leq1\end{array}\right.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\log\left(\frac{2e}{c}\right)\] + + + + +Log Gamma +========= + +A single shape parameter :math:`c>0` (Defined for all :math:`x` ) + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{\exp\left(cx-e^{x}\right)}{\Gamma\left(c\right)}\\ F\left(x;c\right) & = & \frac{\Gamma\left(c,e^{x}\right)}{\Gamma\left(c\right)}\\ G\left(q;c\right) & = & \log\left[\Gamma^{-1}\left[c,q\Gamma\left(c\right)\right]\right]\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\int_{0}^{\infty}\left[\log y\right]^{n}y^{c-1}\exp\left(-y\right)dy.\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \mu_{1}^{\prime}\\ \mu_{2} & = & \mu_{2}^{\prime}-\mu^{2}\\ \gamma_{1} & = & \frac{\mu_{3}^{\prime}-3\mu\mu_{2}-\mu^{3}}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\mu_{4}^{\prime}-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}}{\mu_{2}^{2}}-3\end{eqnarray*} + + + + +Log Normal (Cobb-Douglas) +========================== + +Has one shape parameter :math:`\sigma>0` . (Notice that the "Regress" parameter :math:`A=\log S` where :math:`S` is the scale parameter and :math:`A` is the mean of the underlying normal distribution). The standard form +is :math:`x>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\sigma\right) & = & \frac{1}{\sigma x\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{\log x}{\sigma}\right)^{2}\right]\\ F\left(x;\sigma\right) & = & \Phi\left(\frac{\log x}{\sigma}\right)\\ G\left(q;\sigma\right) & = & \exp\left\{ \sigma\Phi^{-1}\left(q\right)\right\} \end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \exp\left(\sigma^{2}/2\right)\\ \mu_{2} & = & \exp\left(\sigma^{2}\right)\left[\exp\left(\sigma^{2}\right)-1\right]\\ \gamma_{1} & = & \sqrt{p-1}\left(2+p\right)\\ \gamma_{2} & = & p^{4}+2p^{3}+3p^{2}-6\quad\quad p=e^{\sigma^{2}}\end{eqnarray*} + + + +Notice that using JKB notation we have :math:`\theta=L,` :math:`\zeta=\log S` and we have given the so-called antilognormal form of the +distribution. This is more consistent with the location, scale +parameter description of general probability distributions. + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\frac{1}{2}\left[1+\log\left(2\pi\right)+2\log\left(\sigma\right)\right].\] + + + +Also, note that if :math:`X` is a log-normally distributed random-variable with :math:`L=0` and :math:`S` and shape parameter :math:`\sigma` , then :math:`\log X` is normally distributed with variance :math:`\sigma^{2}` and mean :math:`\log S.` + + +Nakagami +======== + +Generalization of the chi distribution. Shape parameter is :math:`\nu>0.` Defined for :math:`x>0.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\nu\right) & = & \frac{2\nu^{\nu}}{\Gamma\left(\nu\right)}x^{2\nu-1}\exp\left(-\nu x^{2}\right)\\ F\left(x;\nu\right) & = & \Gamma\left(\nu,\nu x^{2}\right)\\ G\left(q;\nu\right) & = & \sqrt{\frac{1}{\nu}\Gamma^{-1}\left(\nu,q\right)}\end{eqnarray*} + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{\Gamma\left(\nu+\frac{1}{2}\right)}{\sqrt{\nu}\Gamma\left(\nu\right)}\\ \mu_{2} & = & \left[1-\mu^{2}\right]\\ \gamma_{1} & = & \frac{\mu\left(1-4v\mu_{2}\right)}{2\nu\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{-6\mu^{4}\nu+\left(8\nu-2\right)\mu^{2}-2\nu+1}{\nu\mu_{2}^{2}}\end{eqnarray*} + + + + +Noncentral beta* +================ + +Defined over :math:`x\in\left[0,1\right]` with :math:`a>0` and :math:`b>0` and :math:`c\geq0` + + + +.. math:: + :nowrap: + + \[ F\left(x;a,b,c\right)=\sum_{j=0}^{\infty}\frac{e^{-c/2}\left(\frac{c}{2}\right)^{j}}{j!}I_{B}\left(a+j,b;0\right)\] + + + + +Noncentral chi* +=============== + + +Noncentral chi-squared +====================== + +The distribution of :math:`\sum_{i=1}^{\nu}\left(Z_{i}+\delta_{i}\right)^{2}` where :math:`Z_{i}` are independent standard normal variables and :math:`\delta_{i}` are constants. :math:`\lambda=\sum_{i=1}^{\nu}\delta_{i}^{2}>0.` (In communications it is called the Marcum-Q function). Can be thought +of as a Generalized Rayleigh-Rice distribution. For :math:`x>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\nu,\lambda\right) & = & e^{-\left(\lambda+x\right)/2}\frac{1}{2}\left(\frac{x}{\lambda}\right)^{\left(\nu-2\right)/4}I_{\left(\nu-2\right)/2}\left(\sqrt{\lambda x}\right)\\ F\left(x;\nu,\lambda\right) & = & \sum_{j=0}^{\infty}\left\{ \frac{\left(\lambda/2\right)^{j}}{j!}e^{-\lambda/2}\right\} \textrm{Pr}\left[\chi_{\nu+2j}^{2}\leq x\right]\\ G\left(q;\nu,\lambda\right) & = & F^{-1}\left(x;\nu,\lambda\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \nu+\lambda\\ \mu_{2} & = & 2\left(\nu+2\lambda\right)\\ \gamma_{1} & = & \frac{\sqrt{8}\left(\nu+3\lambda\right)}{\left(\nu+2\lambda\right)^{3/2}}\\ \gamma_{2} & = & \frac{12\left(\nu+4\lambda\right)}{\left(\nu+2\lambda\right)^{2}}\end{eqnarray*} + + + + +Noncentral F +============ + +Let :math:`\lambda>0` and :math:`\nu_{1}>0` and :math:`\nu_{2}>0.` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\lambda,\nu_{1},\nu_{2}\right) & = & \exp\left[\frac{\lambda}{2}+\frac{\left(\lambda\nu_{1}x\right)}{2\left(\nu_{1}x+\nu_{2}\right)}\right]\nu_{1}^{\nu_{1}/2}\nu_{2}^{\nu_{2}/2}x^{\nu_{1}/2-1}\\ & & \times\left(\nu_{2}+\nu_{1}x\right)^{-\left(\nu_{1}+\nu_{2}\right)/2}\frac{\Gamma\left(\frac{\nu_{1}}{2}\right)\Gamma\left(1+\frac{\nu_{2}}{2}\right)L_{\nu_{2}/2}^{\nu_{1}/2-1}\left(-\frac{\lambda\nu_{1}x}{2\left(\nu_{1}x+\nu_{2}\right)}\right)}{B\left(\frac{\nu_{1}}{2},\frac{\nu_{2}}{2}\right)\Gamma\left(\frac{\nu_{1}+\nu_{2}}{2}\right)}\end{eqnarray*} + + + + +Noncentral t +============ + +The distribution of the ratio + +.. math:: + :nowrap: + + \[ \frac{U+\lambda}{\chi_{\nu}/\sqrt{\nu}}\] + +where :math:`U` and :math:`\chi_{\nu}` are independent and distributed as a standard normal and chi with :math:`\nu` degrees of freedom. Note :math:`\lambda>0` and :math:`\nu>0` . + +.. 
math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\lambda,\nu\right) & = & \frac{\nu^{\nu/2}\Gamma\left(\nu+1\right)}{2^{\nu}e^{\lambda^{2}/2}\left(\nu+x^{2}\right)^{\nu/2}\Gamma\left(\nu/2\right)}\\ & & \times\left\{ \frac{\sqrt{2}\lambda x\,_{1}F_{1}\left(\frac{\nu}{2}+1;\frac{3}{2};\frac{\lambda^{2}x^{2}}{2\left(\nu+x^{2}\right)}\right)}{\left(\nu+x^{2}\right)\Gamma\left(\frac{\nu+1}{2}\right)}\right.\\ & & -\left.\frac{\,_{1}F_{1}\left(\frac{\nu+1}{2};\frac{1}{2};\frac{\lambda^{2}x^{2}}{2\left(\nu+x^{2}\right)}\right)}{\sqrt{\nu+x^{2}}\Gamma\left(\frac{\nu}{2}+1\right)}\right\} \\ & = & \frac{\Gamma\left(\nu+1\right)}{2^{\left(\nu-1\right)/2}\sqrt{\pi\nu}\Gamma\left(\nu/2\right)}\exp\left[-\frac{\nu\lambda^{2}}{\nu+x^{2}}\right]\\ & & \times\left(\frac{\nu}{\nu+x^{2}}\right)^{\left(\nu-1\right)/2}Hh_{\nu}\left(-\frac{\lambda x}{\sqrt{\nu+x^{2}}}\right)\\ F\left(x;\lambda,\nu\right) & =\end{eqnarray*} + + + + +Normal +====== + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{e^{-x^{2}/2}}{\sqrt{2\pi}}\\ F\left(x\right) & = & \Phi\left(x\right)=\frac{1}{2}+\frac{1}{2}\textrm{erf}\left(\frac{\textrm{x}}{\sqrt{2}}\right)\\ G\left(q\right) & = & \Phi^{-1}\left(q\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} m_{d}=m_{n}=\mu & = & 0\\ \mu_{2} & = & 1\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & 0\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} h\left[X\right] & = & \log\left(\sqrt{2\pi e}\right)\\ & \approx & 1.4189385332046727418\end{eqnarray*} + + + + +Maxwell +======= + +This is a special case of the Chi distribution with :math:`L=0` and :math:`S=S=\frac{1}{\sqrt{a}}` and :math:`\nu=3.` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \sqrt{\frac{2}{\pi}}x^{2}e^{-x^{2}/2}I_{\left(0,\infty\right)}\left(x\right)\\ F\left(x\right) & = & \Gamma\left(\frac{3}{2},\frac{x^{2}}{2}\right)\\ G\left(\alpha\right) & = & \sqrt{2\Gamma^{-1}\left(\frac{3}{2},\alpha\right)}\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & 2\sqrt{\frac{2}{\pi}}\\ \mu_{2} & = & 3-\frac{8}{\pi}\\ \gamma_{1} & = & \sqrt{2}\frac{32-10\pi}{\left(3\pi-8\right)^{3/2}}\\ \gamma_{2} & = & \frac{-12\pi^{2}+160\pi-384}{\left(3\pi-8\right)^{2}}\\ m_{d} & = & \sqrt{2}\\ m_{n} & = & \sqrt{2\Gamma^{-1}\left(\frac{3}{2},\frac{1}{2}\right)}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\log\left(\sqrt{\frac{2\pi}{e}}\right)+\gamma.\] + + + + +Mielke's Beta-Kappa +=================== + +A generalized F distribution. Two shape parameters :math:`\kappa` and :math:`\theta` , and :math:`x>0` . The :math:`\beta` in the DATAPLOT reference is a scale parameter. + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\kappa,\theta\right) & = & \frac{\kappa x^{\kappa-1}}{\left(1+x^{\theta}\right)^{1+\frac{\kappa}{\theta}}}\\ F\left(x;\kappa,\theta\right) & = & \frac{x^{\kappa}}{\left(1+x^{\theta}\right)^{\kappa/\theta}}\\ G\left(q;\kappa,\theta\right) & = & \left(\frac{q^{\theta/\kappa}}{1-q^{\theta/\kappa}}\right)^{1/\theta}\end{eqnarray*} + + + + +Pareto +====== + +For :math:`x\geq1` and :math:`b>0` . Standard form is + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;b\right) & = & \frac{b}{x^{b+1}}\\ F\left(x;b\right) & = & 1-\frac{1}{x^{b}}\\ G\left(q;b\right) & = & \left(1-q\right)^{-1/b}\end{eqnarray*} + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{b}{b-1}\quad b>1\\ \mu_{2} & = & \frac{b}{\left(b-2\right)\left(b-1\right)^{2}}\quad b>2\\ \gamma_{1} & = & \frac{2\left(b+1\right)\sqrt{b-2}}{\left(b-3\right)\sqrt{b}}\quad b>3\\ \gamma_{2} & = & \frac{6\left(b^{3}+b^{2}-6b-2\right)}{b\left(b^{2}-7b+12\right)}\quad b>4\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ h\left(X\right)=\frac{1}{c}+1-\log\left(c\right)\] + + + + +Pareto Second Kind (Lomax) +========================== + +:math:`c>0.` This is Pareto of the first kind with :math:`L=-1.0` so :math:`x\geq0` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{c}{\left(1+x\right)^{c+1}}\\ F\left(x;c\right) & = & 1-\frac{1}{\left(1+x\right)^{c}}\\ G\left(q;c\right) & = & \left(1-q\right)^{-1/c}-1\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\frac{1}{c}+1-\log\left(c\right).\] + + + + +Power Log Normal +================ + +A generalization of the log-normal distribution :math:`\sigma>0` and :math:`c>0` and :math:`x>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\sigma,c\right) & = & \frac{c}{x\sigma}\phi\left(\frac{\log x}{\sigma}\right)\left(\Phi\left(-\frac{\log x}{\sigma}\right)\right)^{c-1}\\ F\left(x;\sigma,c\right) & = & 1-\left(\Phi\left(-\frac{\log x}{\sigma}\right)\right)^{c}\\ G\left(q;\sigma,c\right) & = & \exp\left[-\sigma\Phi^{-1}\left[\left(1-q\right)^{1/c}\right]\right]\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\int_{0}^{1}\exp\left[-n\sigma\Phi^{-1}\left(y^{1/c}\right)\right]dy\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \mu_{1}^{\prime}\\ \mu_{2} & = & \mu_{2}^{\prime}-\mu^{2}\\ \gamma_{1} & = & \frac{\mu_{3}^{\prime}-3\mu\mu_{2}-\mu^{3}}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\mu_{4}^{\prime}-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}}{\mu_{2}^{2}}-3\end{eqnarray*} + +This distribution reduces to the log-normal distribution when :math:`c=1.` + + +Power Normal +============ + +A generalization of the normal distribution, :math:`c>0` for + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & c\phi\left(x\right)\left(\Phi\left(-x\right)\right)^{c-1}\\ F\left(x;c\right) & = & 1-\left(\Phi\left(-x\right)\right)^{c}\\ G\left(q;c\right) & = & -\Phi^{-1}\left[\left(1-q\right)^{1/c}\right]\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\left(-1\right)^{n}\int_{0}^{1}\left[\Phi^{-1}\left(y^{1/c}\right)\right]^{n}dy\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \mu_{1}^{\prime}\\ \mu_{2} & = & \mu_{2}^{\prime}-\mu^{2}\\ \gamma_{1} & = & \frac{\mu_{3}^{\prime}-3\mu\mu_{2}-\mu^{3}}{\mu_{2}^{3/2}}\\ \gamma_{2} & = & \frac{\mu_{4}^{\prime}-4\mu\mu_{3}-6\mu^{2}\mu_{2}-\mu^{4}}{\mu_{2}^{2}}-3\end{eqnarray*} + +For :math:`c=1` this reduces to the normal distribution. + + +Power-function +============== + +A special case of the beta distribution with :math:`b=1` : defined for :math:`x\in\left[0,1\right]` + + + +.. math:: + :nowrap: + + \[ a>0\] + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a\right) & = & ax^{a-1}\\ F\left(x;a\right) & = & x^{a}\\ G\left(q;a\right) & = & q^{1/a}\\ \mu & = & \frac{a}{a+1}\\ \mu_{2} & = & \frac{a\left(a+2\right)}{\left(a+1\right)^{2}}\\ \gamma_{1} & = & 2\left(1-a\right)\sqrt{\frac{a+2}{a\left(a+3\right)}}\\ \gamma_{2} & = & \frac{6\left(a^{3}-a^{2}-6a+2\right)}{a\left(a+3\right)\left(a+4\right)}\\ m_{d} & = & 1\end{eqnarray*} + + + +.. 
math:: + :nowrap: + + \[ h\left[X\right]=1-\frac{1}{a}-\log\left(a\right)\] + + + + +R-distribution +============== + +A general-purpose distribution with a variety of shapes controlled by :math:`c>0.` Range of standard distribution is :math:`x\in\left[-1,1\right]` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{\left(1-x^{2}\right)^{c/2-1}}{B\left(\frac{1}{2},\frac{c}{2}\right)}\\ F\left(x;c\right) & = & \frac{1}{2}+\frac{x}{B\left(\frac{1}{2},\frac{c}{2}\right)}\,_{2}F_{1}\left(\frac{1}{2},1-\frac{c}{2};\frac{3}{2};x^{2}\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\frac{\left(1+\left(-1\right)^{n}\right)}{2}B\left(\frac{n+1}{2},\frac{c}{2}\right)\] + +The R-distribution with parameter :math:`n` is the distribution of the correlation coefficient of a random sample +of size :math:`n` drawn from a bivariate normal distribution with :math:`\rho=0.` The mean of the standard distribution is always zero and as the sample +size grows, the distribution's mass concentrates more closely about +this mean. + + +Rayleigh +======== + +This is Chi distribution with :math:`L=0.0` and :math:`\nu=2` and :math:`S=S` (no location parameter is generally used), the mode of the +distribution is :math:`S.` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(r\right) & = & re^{-r^{2}/2}I_{[0,\infty)}\left(x\right)\\ F\left(r\right) & = & 1-e^{-r^{2}/2}I_{[0,\infty)}\left(x\right)\\ G\left(q\right) & = & \sqrt{-2\log\left(1-q\right)}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \sqrt{\frac{\pi}{2}}\\ \mu_{2} & = & \frac{4-\pi}{2}\\ \gamma_{1} & = & \frac{2\left(\pi-3\right)\sqrt{\pi}}{\left(4-\pi\right)^{3/2}}\\ \gamma_{2} & = & \frac{24\pi-6\pi^{2}-16}{\left(4-\pi\right)^{2}}\\ m_{d} & = & 1\\ m_{n} & = & \sqrt{2\log\left(2\right)}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\frac{\gamma}{2}+\log\left(\frac{e}{\sqrt{2}}\right).\] + + + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\sqrt{2^{n}}\Gamma\left(\frac{n}{2}+1\right)\] + + + + +Rice* +===== + +Defined for :math:`x>0` and :math:`b>0` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;b\right) & = & x\exp\left(-\frac{x^{2}+b^{2}}{2}\right)I_{0}\left(xb\right)\\ F\left(x;b\right) & = & \int_{0}^{x}\alpha\exp\left(-\frac{\alpha^{2}+b^{2}}{2}\right)I_{0}\left(\alpha b\right)d\alpha\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\sqrt{2^{n}}\Gamma\left(1+\frac{n}{2}\right)\,_{1}F_{1}\left(-\frac{n}{2};1;-\frac{b^{2}}{2}\right)\] + + + + +Reciprocal +========== + +Shape parameters :math:`a,b>0` :math:`x\in\left[a,b\right]` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;a,b\right) & = & \frac{1}{x\log\left(b/a\right)}\\ F\left(x;a,b\right) & = & \frac{\log\left(x/a\right)}{\log\left(b/a\right)}\\ G\left(q;a,b\right) & = & a\exp\left(q\log\left(b/a\right)\right)=a\left(\frac{b}{a}\right)^{q}\end{eqnarray*} + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} d & = & \log\left(a/b\right)\\ \mu & = & \frac{a-b}{d}\\ \mu_{2} & = & \mu\frac{a+b}{2}-\mu^{2}=\frac{\left(a-b\right)\left[a\left(d-2\right)+b\left(d+2\right)\right]}{2d^{2}}\\ \gamma_{1} & = & \frac{\sqrt{2}\left[12d\left(a-b\right)^{2}+d^{2}\left(a^{2}\left(2d-9\right)+2abd+b^{2}\left(2d+9\right)\right)\right]}{3d\sqrt{a-b}\left[a\left(d-2\right)+b\left(d+2\right)\right]^{3/2}}\\ \gamma_{2} & = & \frac{-36\left(a-b\right)^{3}+36d\left(a-b\right)^{2}\left(a+b\right)-16d^{2}\left(a^{3}-b^{3}\right)+3d^{3}\left(a^{2}+b^{2}\right)\left(a+b\right)}{3\left(a-b\right)\left[a\left(d-2\right)+b\left(d+2\right)\right]^{2}}-3\\ m_{d} & = & a\\ m_{n} & = & \sqrt{ab}\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\frac{1}{2}\log\left(ab\right)+\log\left[\log\left(\frac{b}{a}\right)\right].\] + + + + +Reciprocal Inverse Gaussian +=========================== + +The pdf is found from the inverse gaussian (IG), :math:`f_{RIG}\left(x;\mu\right)=\frac{1}{x^{2}}f_{IG}\left(\frac{1}{x};\mu\right)` defined for :math:`x\geq0` as + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f_{IG}\left(x;\mu\right) & = & \frac{1}{\sqrt{2\pi x^{3}}}\exp\left(-\frac{\left(x-\mu\right)^{2}}{2x\mu^{2}}\right).\\ F_{IG}\left(x;\mu\right) & = & \Phi\left(\frac{1}{\sqrt{x}}\frac{x-\mu}{\mu}\right)+\exp\left(\frac{2}{\mu}\right)\Phi\left(-\frac{1}{\sqrt{x}}\frac{x+\mu}{\mu}\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f_{RIG}\left(x;\mu\right) & = & \frac{1}{\sqrt{2\pi x}}\exp\left(-\frac{\left(1-\mu x\right)^{2}}{2x\mu^{2}}\right)\\ F_{RIG}\left(x;\mu\right) & = & 1-F_{IG}\left(\frac{1}{x},\mu\right)\\ & = & 1-\Phi\left(\frac{1}{\sqrt{x}}\frac{1-\mu x}{\mu}\right)-\exp\left(\frac{2}{\mu}\right)\Phi\left(-\frac{1}{\sqrt{x}}\frac{1+\mu x}{\mu}\right)\end{eqnarray*} + + + + +Semicircular +============ + +Defined on :math:`x\in\left[-1,1\right]` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{2}{\pi}\sqrt{1-x^{2}}\\ F\left(x\right) & = & \frac{1}{2}+\frac{1}{\pi}\left[x\sqrt{1-x^{2}}+\arcsin x\right]\\ G\left(q\right) & = & F^{-1}\left(q\right)\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} m_{d}=m_{n}=\mu & = & 0\\ \mu_{2} & = & \frac{1}{4}\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & -1\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=0.64472988584940017414.\] + + + + +Studentized Range* +================== + + +Student t +========= + +Shape parameter :math:`\nu>0.` :math:`I\left(a,b,x\right)` is the incomplete beta integral and :math:`I^{-1}\left(a,b,I\left(a,b,x\right)\right)=x` + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\nu\right) & = & \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\pi\nu}\Gamma\left(\frac{\nu}{2}\right)\left[1+\frac{x^{2}}{\nu}\right]^{\frac{\nu+1}{2}}}\\ F\left(x;\nu\right) & = & \left\{ \begin{array}{ccc} \frac{1}{2}I\left(\frac{\nu}{2},\frac{1}{2},\frac{\nu}{\nu+x^{2}}\right) & & x\leq0\\ 1-\frac{1}{2}I\left(\frac{\nu}{2},\frac{1}{2},\frac{\nu}{\nu+x^{2}}\right) & & x\geq0\end{array}\right.\\ G\left(q;\nu\right) & = & \left\{ \begin{array}{ccc} -\sqrt{\frac{\nu}{I^{-1}\left(\frac{\nu}{2},\frac{1}{2},2q\right)}-\nu} & & q\leq\frac{1}{2}\\ \sqrt{\frac{\nu}{I^{-1}\left(\frac{\nu}{2},\frac{1}{2},2-2q\right)}-\nu} & & q\geq\frac{1}{2}\end{array}\right.\end{eqnarray*} + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} m_{n}=m_{d}=\mu & = & 0\\ \mu_{2} & = & \frac{\nu}{\nu-2}\quad\nu>2\\ \gamma_{1} & = & 0\quad\nu>3\\ \gamma_{2} & = & \frac{6}{\nu-4}\quad\nu>4\end{eqnarray*} + +As :math:`\nu\rightarrow\infty,` this distribution approaches the standard normal distribution. + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\frac{1}{4}\log\left(\frac{\pi c\Gamma^{2}\left(\frac{c}{2}\right)}{\Gamma^{2}\left(\frac{c+1}{2}\right)}\right)-\frac{\left(c+1\right)}{4}\left[\Psi\left(\frac{c}{2}\right)-cZ\left(c\right)+\pi\tan\left(\frac{\pi c}{2}\right)+\gamma+2\log2\right]\] + +where + +.. math:: + :nowrap: + + \[ Z\left(c\right)=\,_{3}F_{2}\left(1,1,1+\frac{c}{2};\frac{3}{2},2;1\right)=\sum_{k=0}^{\infty}\frac{k!}{k+1}\frac{\Gamma\left(\frac{c}{2}+1+k\right)}{\Gamma\left(\frac{c}{2}+1\right)}\frac{\Gamma\left(\frac{3}{2}\right)}{\Gamma\left(\frac{3}{2}+k\right)}\] + + + + +Student Z +========= + +The Student Z distribution is defined over all space with one shape +parameter :math:`\nu>0` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;\nu\right) & = & \frac{\Gamma\left(\frac{\nu}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{\nu-1}{2}\right)}\left(1+x^{2}\right)^{-\nu/2}\\ F\left(x;\nu\right) & = & \left\{ \begin{array}{ccc} Q\left(x;\nu\right) & & x\leq0\\ 1-Q\left(x;\nu\right) & & x\geq0\end{array}\right.\\ Q\left(x;\nu\right) & = & \frac{\left|x\right|^{1-n}\Gamma\left(\frac{n}{2}\right)\,_{2}F_{1}\left(\frac{n-1}{2},\frac{n}{2};\frac{n+1}{2};-\frac{1}{x^{2}}\right)}{2\sqrt{\pi}\Gamma\left(\frac{n+1}{2}\right)}\end{eqnarray*} + +Interesting moments are + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & 0\\ \sigma^{2} & = & \frac{1}{\nu-3}\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & \frac{6}{\nu-5}.\end{eqnarray*} + +The moment generating function is + +.. math:: + :nowrap: + + \[ \theta\left(t\right)=2\sqrt{\left|\frac{t}{2}\right|^{\nu-1}}\frac{K_{\left(n-1\right)/2}\left(\left|t\right|\right)}{\Gamma\left(\frac{\nu-1}{2}\right)}.\] + + + + +Symmetric Power* +================ + + +Triangular +========== + +One shape parameter :math:`c\in[0,1]` giving the distance to the peak as a percentage of the total extent of +the non-zero portion. The location parameter is the start of the +non-zero portion, and the scale parameter is the width of the non-zero +portion. In standard form we have :math:`x\in\left[0,1\right].` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \left\{ \begin{array}{ccc} 2\frac{x}{c} & & x<c\\ 2\frac{1-x}{1-c} & & x\geq c\end{array}\right.\\ F\left(x;c\right) & = & \left\{ \begin{array}{ccc} \frac{x^{2}}{c} & & x<c\\ \frac{x^{2}-2x+c}{c-1} & & x\geq c\end{array}\right.\\ G\left(q;c\right) & = & \left\{ \begin{array}{ccc} \sqrt{cq} & & q<c\\ 1-\sqrt{\left(1-c\right)\left(1-q\right)} & & q\geq c\end{array}\right.\end{eqnarray*} + + + + +Von Mises +========= + +There is one shape parameter :math:`b>0` . Note that the PDF and CDF functions are periodic and are always defined +over :math:`x\in\left[-\pi,\pi\right]` regardless of the location parameter. Thus, if an input beyond this +range is given, it is converted to the equivalent angle in this range. +For values of :math:`b<100` the PDF and CDF formulas below are used. Otherwise, a normal +approximation with variance :math:`1/b` is used. + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;b\right) & = & \frac{e^{b\cos x}}{2\pi I_{0}\left(b\right)}\\ F\left(x;b\right) & = & \frac{1}{2}+\frac{x}{2\pi}+\sum_{k=1}^{\infty}\frac{I_{k}\left(b\right)\sin\left(kx\right)}{I_{0}\left(b\right)\pi k}\\ G\left(q;b\right) & = & F^{-1}\left(x;b\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & 0\\ \mu_{2} & = & \int_{-\pi}^{\pi}x^{2}f\left(x;b\right)dx\\ \gamma_{1} & = & 0\\ \gamma_{2} & = & \frac{\int_{-\pi}^{\pi}x^{4}f\left(x;b\right)dx}{\mu_{2}^{2}}-3\end{eqnarray*} + +This can be used for defining circular variance.
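+
+For illustration only (this example is not part of the original reference text), the second
+moment :math:`\mu_{2}` above can be evaluated numerically with standard SciPy tools; the
+shape value ``b = 2.0`` and the choice of ``scipy.integrate.quad`` with ``scipy.special.i0``
+are assumptions of this sketch, not part of the distribution definition::
+
+    # Sketch: numerically evaluate mu_2 = int_{-pi}^{pi} x**2 f(x; b) dx
+    # for the Von Mises pdf f(x; b) = exp(b*cos(x)) / (2*pi*I0(b)) given above.
+    import numpy as np
+    from scipy.integrate import quad
+    from scipy.special import i0
+
+    b = 2.0  # example shape parameter (assumed value)
+    pdf = lambda x: np.exp(b * np.cos(x)) / (2 * np.pi * i0(b))
+
+    mu2, _ = quad(lambda x: x ** 2 * pdf(x), -np.pi, np.pi)
+    print(mu2)  # spread about the mean direction mu = 0
+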
+ + +Wald +==== + +Special case of the Inverse Normal with shape parameter set to :math:`1.0` . Defined for :math:`x>0` . + + + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x\right) & = & \frac{1}{\sqrt{2\pi x^{3}}}\exp\left(-\frac{\left(x-1\right)^{2}}{2x}\right).\\ F\left(x\right) & = & \Phi\left(\frac{x-1}{\sqrt{x}}\right)+\exp\left(2\right)\Phi\left(-\frac{x+1}{\sqrt{x}}\right)\\ G\left(q\right) & = & F^{-1}\left(q\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & 1\\ \mu_{2} & = & 1\\ \gamma_{1} & = & 3\\ \gamma_{2} & = & 15\\ m_{d} & = & \frac{1}{2}\left(\sqrt{13}-3\right)\end{eqnarray*} + + + + +Wishart* +======== + + +Wrapped Cauchy +============== + +For :math:`x\in\left[0,2\pi\right]` and :math:`c\in\left(0,1\right)` + +.. math:: + :nowrap: + + \begin{eqnarray*} f\left(x;c\right) & = & \frac{1-c^{2}}{2\pi\left(1+c^{2}-2c\cos x\right)}\\ g_{c}\left(x\right) & = & \frac{1}{\pi}\arctan\left[\frac{1+c}{1-c}\tan\left(\frac{x}{2}\right)\right]\\ r_{c}\left(q\right) & = & 2\arctan\left[\frac{1-c}{1+c}\tan\left(\pi q\right)\right]\\ F\left(x;c\right) & = & \left\{ \begin{array}{ccc} g_{c}\left(x\right) & & 0\leq x<\pi\\ 1-g_{c}\left(2\pi-x\right) & & \pi\leq x\leq2\pi\end{array}\right.\\ G\left(q;c\right) & = & \left\{ \begin{array}{ccc} r_{c}\left(q\right) & & 0\leq q<\frac{1}{2}\\ 2\pi-r_{c}\left(1-q\right) & & \frac{1}{2}\leq q\leq1\end{array}\right.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\log\left(2\pi\left(1-c^{2}\right)\right).\] diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/stats/discrete.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/stats/discrete.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/stats/discrete.rst 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/stats/discrete.rst 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,690 @@ +.. _discrete-random-variables: + + +================================== +Discrete Statistical Distributions +================================== + +Discrete random variables take on only a countable number of values. +The commonly used distributions are included in SciPy and described in +this document. Each discrete distribution can take one extra integer +parameter: :math:`L.` The relationship between the general distribution +:math:`p` and the standard distribution :math:`p_{0}` is + +.. math:: + :nowrap: + + \[ p\left(x\right)=p_{0}\left(x-L\right)\] + +which allows for shifting of the input. When a distribution generator +is initialized, the discrete distribution can either specify the +beginning and ending (integer) values :math:`a` and :math:`b` which must be such that + +.. math:: + :nowrap: + + \[ p_{0}\left(x\right)=0\quad x<a\textrm{ or }x>b\] + +in which case, it is assumed that the pdf function is specified on the +integers :math:`a+mk\leq b` where :math:`k` is a non-negative integer ( :math:`0,1,2,\ldots` ) and :math:`m` is a positive integer multiplier. Alternatively, the two lists :math:`x_{k}` and :math:`p\left(x_{k}\right)` can be provided directly in which case a dictionary is set up +internally to evaluate probabilities and generate random variates. + + +Probability Mass Function (PMF) +------------------------------- + +The probability mass function of a random variable X is defined as the +probability that the random variable takes on a particular value. + +.. 
math:: + :nowrap: + + \[ p\left(x_{k}\right)=P\left[X=x_{k}\right]\] + +This is also sometimes called the probability density function, +although technically + +.. math:: + :nowrap: + + \[ f\left(x\right)=\sum_{k}p\left(x_{k}\right)\delta\left(x-x_{k}\right)\] + +is the probability density function for a discrete distribution [#]_ . + + + +.. [#] + XXX: Unknown layout Plain Layout: Note that we will be using :math:`p` to represent the probability mass function and a parameter (a + XXX: probability). The usage should be obvious from context. + + + +Cumulative Distribution Function (CDF) +-------------------------------------- + +The cumulative distribution function is + +.. math:: + :nowrap: + + \[ F\left(x\right)=P\left[X\leq x\right]=\sum_{x_{k}\leq x}p\left(x_{k}\right)\] + +and is also useful to be able to compute. Note that + +.. math:: + :nowrap: + + \[ F\left(x_{k}\right)-F\left(x_{k-1}\right)=p\left(x_{k}\right)\] + + + + +Survival Function +----------------- + +The survival function is just + +.. math:: + :nowrap: + + \[ S\left(x\right)=1-F\left(x\right)=P\left[X>k\right]\] + +the probability that the random variable is strictly larger than :math:`k` . + + +Percent Point Function (Inverse CDF) +------------------------------------ + +The percent point function is the inverse of the cumulative +distribution function and is + +.. math:: + :nowrap: + + \[ G\left(q\right)=F^{-1}\left(q\right)\] + +for discrete distributions, this must be modified for cases where +there is no :math:`x_{k}` such that :math:`F\left(x_{k}\right)=q.` In these cases we choose :math:`G\left(q\right)` to be the smallest value :math:`x_{k}=G\left(q\right)` for which :math:`F\left(x_{k}\right)\geq q` . If :math:`q=0` then we define :math:`G\left(0\right)=a-1` . This definition allows random variates to be defined in the same way +as with continuous rv's using the inverse cdf on a uniform +distribution to generate random variates. + + +Inverse survival function +------------------------- + +The inverse survival function is the inverse of the survival function + +.. math:: + :nowrap: + + \[ Z\left(\alpha\right)=S^{-1}\left(\alpha\right)=G\left(1-\alpha\right)\] + +and is thus the smallest non-negative integer :math:`k` for which :math:`F\left(k\right)\geq1-\alpha` or the smallest non-negative integer :math:`k` for which :math:`S\left(k\right)\leq\alpha.` + + +Hazard functions +---------------- + +If desired, the hazard function and the cumulative hazard function +could be defined as + +.. math:: + :nowrap: + + \[ h\left(x_{k}\right)=\frac{p\left(x_{k}\right)}{1-F\left(x_{k}\right)}\] + +and + +.. math:: + :nowrap: + + \[ H\left(x\right)=\sum_{x_{k}\leq x}h\left(x_{k}\right)=\sum_{x_{k}\leq x}\frac{F\left(x_{k}\right)-F\left(x_{k-1}\right)}{1-F\left(x_{k}\right)}.\] + + + + +Moments +------- + +Non-central moments are defined using the PDF + +.. math:: + :nowrap: + + \[ \mu_{m}^{\prime}=E\left[X^{m}\right]=\sum_{k}x_{k}^{m}p\left(x_{k}\right).\] + +Central moments are computed similarly :math:`\mu=\mu_{1}^{\prime}` + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu_{m}=E\left[\left(X-\mu\right)^{m}\right] & = & \sum_{k}\left(x_{k}-\mu\right)^{m}p\left(x_{k}\right)\\ & = & \sum_{k=0}^{m}\left(-1\right)^{m-k}\left(\begin{array}{c} m\\ k\end{array}\right)\mu^{m-k}\mu_{k}^{\prime}\end{eqnarray*} + +The mean is the first moment + +.. math:: + :nowrap: + + \[ \mu=\mu_{1}^{\prime}=E\left[X\right]=\sum_{k}x_{k}p\left(x_{k}\right)\] + +the variance is the second central moment + +.. 
math:: + :nowrap: + + \[ \mu_{2}=E\left[\left(X-\mu\right)^{2}\right]=\sum_{x_{k}}x_{k}^{2}p\left(x_{k}\right)-\mu^{2}.\] + +Skewness is defined as + +.. math:: + :nowrap: + + \[ \gamma_{1}=\frac{\mu_{3}}{\mu_{2}^{3/2}}\] + +while (Fisher) kurtosis is + +.. math:: + :nowrap: + + \[ \gamma_{2}=\frac{\mu_{4}}{\mu_{2}^{2}}-3,\] + +so that a normal distribution has a kurtosis of zero. + + +Moment generating function +-------------------------- + +The moment generating funtion is defined as + +.. math:: + :nowrap: + + \[ M_{X}\left(t\right)=E\left[e^{Xt}\right]=\sum_{x_{k}}e^{x_{k}t}p\left(x_{k}\right)\] + +Moments are found as the derivatives of the moment generating function +evaluated at :math:`0.` + + +Fitting data +------------ + +To fit data to a distribution, maximizing the likelihood function is +common. Alternatively, some distributions have well-known minimum +variance unbiased estimators. These will be chosen by default, but the +likelihood function will always be available for minimizing. + +If :math:`f_{i}\left(k;\boldsymbol{\theta}\right)` is the PDF of a random-variable where :math:`\boldsymbol{\theta}` is a vector of parameters ( *e.g.* :math:`L` and :math:`S` ), then for a collection of :math:`N` independent samples from this distribution, the joint distribution the +random vector :math:`\mathbf{k}` is + +.. math:: + :nowrap: + + \[ f\left(\mathbf{k};\boldsymbol{\theta}\right)=\prod_{i=1}^{N}f_{i}\left(k_{i};\boldsymbol{\theta}\right).\] + +The maximum likelihood estimate of the parameters :math:`\boldsymbol{\theta}` are the parameters which maximize this function with :math:`\mathbf{x}` fixed and given by the data: + +.. math:: + :nowrap: + + \begin{eqnarray*} \hat{\boldsymbol{\theta}} & = & \arg\max_{\boldsymbol{\theta}}f\left(\mathbf{k};\boldsymbol{\theta}\right)\\ & = & \arg\min_{\boldsymbol{\theta}}l_{\mathbf{k}}\left(\boldsymbol{\theta}\right).\end{eqnarray*} + +Where + +.. math:: + :nowrap: + + \begin{eqnarray*} l_{\mathbf{k}}\left(\boldsymbol{\theta}\right) & = & -\sum_{i=1}^{N}\log f\left(k_{i};\boldsymbol{\theta}\right)\\ & = & -N\overline{\log f\left(k_{i};\boldsymbol{\theta}\right)}\end{eqnarray*} + + + + +Standard notation for mean +-------------------------- + +We will use + +.. math:: + :nowrap: + + \[ \overline{y\left(\mathbf{x}\right)}=\frac{1}{N}\sum_{i=1}^{N}y\left(x_{i}\right)\] + +where :math:`N` should be clear from context. + + +Combinations +------------ + +Note that + +.. math:: + :nowrap: + + \[ k!=k\cdot\left(k-1\right)\cdot\left(k-2\right)\cdot\cdots\cdot1=\Gamma\left(k+1\right)\] + +and has special cases of + +.. math:: + :nowrap: + + \begin{eqnarray*} 0! & \equiv & 1\\ k! & \equiv & 0\quad k<0\end{eqnarray*} + +and + +.. math:: + :nowrap: + + \[ \left(\begin{array}{c} n\\ k\end{array}\right)=\frac{n!}{\left(n-k\right)!k!}.\] + +If :math:`n<0` or :math:`k<0` or :math:`k>n` we define :math:`\left(\begin{array}{c} n\\ k\end{array}\right)=0` + + +Bernoulli +========= + +A Bernoulli random variable of parameter :math:`p` takes one of only two values :math:`X=0` or :math:`X=1` . The probability of success ( :math:`X=1` ) is :math:`p` , and the probability of failure ( :math:`X=0` ) is :math:`1-p.` It can be thought of as a binomial random variable with :math:`n=1` . The PMF is :math:`p\left(k\right)=0` for :math:`k\neq0,1` and + +.. 
math:: + :nowrap: + + \begin{eqnarray*} p\left(k;p\right) & = & \begin{cases} 1-p & k=0\\ p & k=1\end{cases}\\ F\left(x;p\right) & = & \begin{cases} 0 & x<0\\ 1-p & 0\le x<1\\ 1 & 1\leq x\end{cases}\\ G\left(q;p\right) & = & \begin{cases} 0 & 0\leq q<1-p\\ 1 & 1-p\leq q\leq1\end{cases}\\ \mu & = & p\\ \mu_{2} & = & p\left(1-p\right)\\ \gamma_{3} & = & \frac{1-2p}{\sqrt{p\left(1-p\right)}}\\ \gamma_{4} & = & \frac{1-6p\left(1-p\right)}{p\left(1-p\right)}\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=1-p\left(1-e^{t}\right)\] + + + + + +.. math:: + :nowrap: + + \[ \mu_{m}^{\prime}=p\] + + + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=p\log p+\left(1-p\right)\log\left(1-p\right)\] + + + + +Binomial +======== + +A binomial random variable with parameters :math:`\left(n,p\right)` can be described as the sum of :math:`n` independent Bernoulli random variables of parameter :math:`p;` + +.. math:: + :nowrap: + + \[ Y=\sum_{i=1}^{n}X_{i}.\] + +Therefore, this random variable counts the number of successes in :math:`n` independent trials of a random experiment where the probability of +success is :math:`p.` + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;n,p\right) & = & \left(\begin{array}{c} n\\ k\end{array}\right)p^{k}\left(1-p\right)^{n-k}\,\, k\in\left\{ 0,1,\ldots n\right\} ,\\ F\left(x;n,p\right) & = & \sum_{k\leq x}\left(\begin{array}{c} n\\ k\end{array}\right)p^{k}\left(1-p\right)^{n-k}=I_{1-p}\left(n-\left\lfloor x\right\rfloor ,\left\lfloor x\right\rfloor +1\right)\quad x\geq0\end{eqnarray*} + +where the incomplete beta integral is + +.. math:: + :nowrap: + + \[ I_{x}\left(a,b\right)=\frac{\Gamma\left(a+b\right)}{\Gamma\left(a\right)\Gamma\left(b\right)}\int_{0}^{x}t^{a-1}\left(1-t\right)^{b-1}dt.\] + +Now + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & np\\ \mu_{2} & = & np\left(1-p\right)\\ \gamma_{1} & = & \frac{1-2p}{\sqrt{np\left(1-p\right)}}\\ \gamma_{2} & = & \frac{1-6p\left(1-p\right)}{np\left(1-p\right)}.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\left[1-p\left(1-e^{t}\right)\right]^{n}\] + + + + +Boltzmann (truncated Planck) +============================ + + + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;N,\lambda\right) & = & \frac{1-e^{-\lambda}}{1-e^{-\lambda N}}\exp\left(-\lambda k\right)\quad k\in\left\{ 0,1,\ldots,N-1\right\} \\ F\left(x;N,\lambda\right) & = & \left\{ \begin{array}{cc} 0 & x<0\\ \frac{1-\exp\left[-\lambda\left(\left\lfloor x\right\rfloor +1\right)\right]}{1-\exp\left(-\lambda N\right)} & 0\leq x\leq N-1\\ 1 & x\geq N-1\end{array}\right.\\ G\left(q,\lambda\right) & = & \left\lceil -\frac{1}{\lambda}\log\left[1-q\left(1-e^{-\lambda N}\right)\right]-1\right\rceil \end{eqnarray*} + +Define :math:`z=e^{-\lambda}` + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{z}{1-z}-\frac{Nz^{N}}{1-z^{N}}\\ \mu_{2} & = & \frac{z}{\left(1-z\right)^{2}}-\frac{N^{2}z^{N}}{\left(1-z^{N}\right)^{2}}\\ \gamma_{1} & = & \frac{z\left(1+z\right)\left(\frac{1-z^{N}}{1-z}\right)^{3}-N^{3}z^{N}\left(1+z^{N}\right)}{\left[z\left(\frac{1-z^{N}}{1-z}\right)^{2}-N^{2}z^{N}\right]^{3/2}}\\ \gamma_{2} & = & \frac{z\left(1+4z+z^{2}\right)\left(\frac{1-z^{N}}{1-z}\right)^{4}-N^{4}z^{N}\left(1+4z^{N}+z^{2N}\right)}{\left[z\left(\frac{1-z^{N}}{1-z}\right)^{2}-N^{2}z^{N}\right]^{2}}\end{eqnarray*} + + + +.. 
math:: + :nowrap: + + \[ M\left(t\right)=\frac{1-e^{N\left(t-\lambda\right)}}{1-e^{t-\lambda}}\frac{1-e^{-\lambda}}{1-e^{-\lambda N}}\] + + + + +Planck (discrete exponential) +============================= + +Named Planck because of its relationship to the black-body problem he +solved. + + + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;\lambda\right) & = & \left(1-e^{-\lambda}\right)e^{-\lambda k}\quad k\lambda\geq0\\ F\left(x;\lambda\right) & = & 1-e^{-\lambda\left(\left\lfloor x\right\rfloor +1\right)}\quad x\lambda\geq0\\ G\left(q;\lambda\right) & = & \left\lceil -\frac{1}{\lambda}\log\left[1-q\right]-1\right\rceil .\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \frac{1}{e^{\lambda}-1}\\ \mu_{2} & = & \frac{e^{-\lambda}}{\left(1-e^{-\lambda}\right)^{2}}\\ \gamma_{1} & = & 2\cosh\left(\frac{\lambda}{2}\right)\\ \gamma_{2} & = & 4+2\cosh\left(\lambda\right)\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\frac{1-e^{-\lambda}}{1-e^{t-\lambda}}\] + + + +.. math:: + :nowrap: + + \[ h\left[X\right]=\frac{\lambda e^{-\lambda}}{1-e^{-\lambda}}-\log\left(1-e^{-\lambda}\right)\] + + + + +Poisson +======= + +The Poisson random variable counts the number of successes in :math:`n` independent Bernoulli trials in the limit as :math:`n\rightarrow\infty` and :math:`p\rightarrow0` where the probability of success in each trial is :math:`p` and :math:`np=\lambda\geq0` is a constant. It can be used to approximate the Binomial random +variable or in it's own right to count the number of events that occur +in the interval :math:`\left[0,t\right]` for a process satisfying certain "sparsity "constraints. The functions are + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;\lambda\right) & = & e^{-\lambda}\frac{\lambda^{k}}{k!}\quad k\geq0,\\ F\left(x;\lambda\right) & = & \sum_{n=0}^{\left\lfloor x\right\rfloor }e^{-\lambda}\frac{\lambda^{n}}{n!}=\frac{1}{\Gamma\left(\left\lfloor x\right\rfloor +1\right)}\int_{\lambda}^{\infty}t^{\left\lfloor x\right\rfloor }e^{-t}dt,\\ \mu & = & \lambda\\ \mu_{2} & = & \lambda\\ \gamma_{1} & = & \frac{1}{\sqrt{\lambda}}\\ \gamma_{2} & = & \frac{1}{\lambda}.\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \[ M\left(t\right)=\exp\left[\lambda\left(e^{t}-1\right)\right].\] + + + + +Geometric +========= + +The geometric random variable with parameter :math:`p\in\left(0,1\right)` can be defined as the number of trials required to obtain a success +where the probability of success on each trial is :math:`p` . Thus, + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;p\right) & = & \left(1-p\right)^{k-1}p\quad k\geq1\\ F\left(x;p\right) & = & 1-\left(1-p\right)^{\left\lfloor x\right\rfloor }\quad x\geq1\\ G\left(q;p\right) & = & \left\lceil \frac{\log\left(1-q\right)}{\log\left(1-p\right)}\right\rceil \\ \mu & = & \frac{1}{p}\\ \mu_{2} & = & \frac{1-p}{p^{2}}\\ \gamma_{1} & = & \frac{2-p}{\sqrt{1-p}}\\ \gamma_{2} & = & \frac{p^{2}-6p+6}{1-p}.\end{eqnarray*} + + + + + +.. 
math:: + :nowrap: + + \begin{eqnarray*} M\left(t\right) & = & \frac{p}{e^{-t}-\left(1-p\right)}\end{eqnarray*} + + + + +Negative Binomial +================= + +The negative binomial random variable with parameters :math:`n` and :math:`p\in\left(0,1\right)` can be defined as the number of *extra* independent trials (beyond :math:`n` ) required to accumulate a total of :math:`n` successes where the probability of a success on each trial is :math:`p.` Equivalently, this random variable is the number of failures +encoutered while accumulating :math:`n` successes during independent trials of an experiment that succeeds +with probability :math:`p.` Thus, + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;n,p\right) & = & \left(\begin{array}{c} k+n-1\\ n-1\end{array}\right)p^{n}\left(1-p\right)^{k}\quad k\geq0\\ F\left(x;n,p\right) & = & \sum_{i=0}^{\left\lfloor x\right\rfloor }\left(\begin{array}{c} i+n-1\\ i\end{array}\right)p^{n}\left(1-p\right)^{i}\quad x\geq0\\ & = & I_{p}\left(n,\left\lfloor x\right\rfloor +1\right)\quad x\geq0\\ \mu & = & n\frac{1-p}{p}\\ \mu_{2} & = & n\frac{1-p}{p^{2}}\\ \gamma_{1} & = & \frac{2-p}{\sqrt{n\left(1-p\right)}}\\ \gamma_{2} & = & \frac{p^{2}+6\left(1-p\right)}{n\left(1-p\right)}.\end{eqnarray*} + +Recall that :math:`I_{p}\left(a,b\right)` is the incomplete beta integral. + + +Hypergeometric +============== + +The hypergeometric random variable with parameters :math:`\left(M,n,N\right)` counts the number of "good "objects in a sample of size :math:`N` chosen without replacement from a population of :math:`M` objects where :math:`n` is the number of "good "objects in the total population. + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;N,n,M\right) & = & \frac{\left(\begin{array}{c} n\\ k\end{array}\right)\left(\begin{array}{c} M-n\\ N-k\end{array}\right)}{\left(\begin{array}{c} M\\ N\end{array}\right)}\quad N-\left(M-n\right)\leq k\leq\min\left(n,N\right)\\ F\left(x;N,n,M\right) & = & \sum_{k=0}^{\left\lfloor x\right\rfloor }\frac{\left(\begin{array}{c} m\\ k\end{array}\right)\left(\begin{array}{c} N-m\\ n-k\end{array}\right)}{\left(\begin{array}{c} N\\ n\end{array}\right)},\\ \mu & = & \frac{nN}{M}\\ \mu_{2} & = & \frac{nN\left(M-n\right)\left(M-N\right)}{M^{2}\left(M-1\right)}\\ \gamma_{1} & = & \frac{\left(M-2n\right)\left(M-2N\right)}{M-2}\sqrt{\frac{M-1}{nN\left(M-m\right)\left(M-n\right)}}\\ \gamma_{2} & = & \frac{g\left(N,n,M\right)}{nN\left(M-n\right)\left(M-3\right)\left(M-2\right)\left(N-M\right)}\end{eqnarray*} + +where (defining :math:`m=M-n` ) + +.. math:: + :nowrap: + + \begin{eqnarray*} g\left(N,n,M\right) & = & m^{3}-m^{5}+3m^{2}n-6m^{3}n+m^{4}n+3mn^{2}\\ & & -12m^{2}n^{2}+8m^{3}n^{2}+n^{3}-6mn^{3}+8m^{2}n^{3}\\ & & +mn^{4}-n^{5}-6m^{3}N+6m^{4}N+18m^{2}nN\\ & & -6m^{3}nN+18mn^{2}N-24m^{2}n^{2}N-6n^{3}N\\ & & -6mn^{3}N+6n^{4}N+6m^{2}N^{2}-6m^{3}N^{2}-24mnN^{2}\\ & & +12m^{2}nN^{2}+6n^{2}N^{2}+12mn^{2}N^{2}-6n^{3}N^{2}.\end{eqnarray*} + + + + +Zipf (Zeta) +=========== + +A random variable has the zeta distribution (also called the zipf +distribution) with parameter :math:`\alpha>1` if it's probability mass function is given by + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;\alpha\right) & = & \frac{1}{\zeta\left(\alpha\right)k^{\alpha}}\quad k\geq1\end{eqnarray*} + +where + +.. math:: + :nowrap: + + \[ \zeta\left(\alpha\right)=\sum_{n=1}^{\infty}\frac{1}{n^{\alpha}}\] + +is the Riemann zeta function. Other functions of this distribution are + +.. 
math:: + :nowrap: + + \begin{eqnarray*} F\left(x;\alpha\right) & = & \frac{1}{\zeta\left(\alpha\right)}\sum_{k=1}^{\left\lfloor x\right\rfloor }\frac{1}{k^{\alpha}}\\ \mu & = & \frac{\zeta_{1}}{\zeta_{0}}\quad\alpha>2\\ \mu_{2} & = & \frac{\zeta_{2}\zeta_{0}-\zeta_{1}^{2}}{\zeta_{0}^{2}}\quad\alpha>3\\ \gamma_{1} & = & \frac{\zeta_{3}\zeta_{0}^{2}-3\zeta_{0}\zeta_{1}\zeta_{2}+2\zeta_{1}^{3}}{\left[\zeta_{2}\zeta_{0}-\zeta_{1}^{2}\right]^{3/2}}\quad\alpha>4\\ \gamma_{2} & = & \frac{\zeta_{4}\zeta_{0}^{3}-4\zeta_{3}\zeta_{1}\zeta_{0}^{2}+12\zeta_{2}\zeta_{1}^{2}\zeta_{0}-6\zeta_{1}^{4}-3\zeta_{2}^{2}\zeta_{0}^{2}}{\left(\zeta_{2}\zeta_{0}-\zeta_{1}^{2}\right)^{2}}.\end{eqnarray*} + + + + + +.. math:: + :nowrap: + + \begin{eqnarray*} M\left(t\right) & = & \frac{\textrm{Li}_{\alpha}\left(e^{t}\right)}{\zeta\left(\alpha\right)}\end{eqnarray*} + +where :math:`\zeta_{i}=\zeta\left(\alpha-i\right)` and :math:`\textrm{Li}_{n}\left(z\right)` is the :math:`n^{\textrm{th}}` polylogarithm function of :math:`z` defined as + +.. math:: + :nowrap: + + \[ \textrm{Li}_{n}\left(z\right)\equiv\sum_{k=1}^{\infty}\frac{z^{k}}{k^{n}}\] + + + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\left.M^{\left(n\right)}\left(t\right)\right|_{t=0}=\left.\frac{\textrm{Li}_{\alpha-n}\left(e^{t}\right)}{\zeta\left(a\right)}\right|_{t=0}=\frac{\zeta\left(\alpha-n\right)}{\zeta\left(\alpha\right)}\] + + + + +Logarithmic (Log-Series, Series) +================================ + +The logarimthic distribution with parameter :math:`p` has a probability mass function with terms proportional to the Taylor +series expansion of :math:`\log\left(1-p\right)` + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;p\right) & = & -\frac{p^{k}}{k\log\left(1-p\right)}\quad k\geq1\\ F\left(x;p\right) & = & -\frac{1}{\log\left(1-p\right)}\sum_{k=1}^{\left\lfloor x\right\rfloor }\frac{p^{k}}{k}=1+\frac{p^{1+\left\lfloor x\right\rfloor }\Phi\left(p,1,1+\left\lfloor x\right\rfloor \right)}{\log\left(1-p\right)}\end{eqnarray*} + +where + +.. math:: + :nowrap: + + \[ \Phi\left(z,s,a\right)=\sum_{k=0}^{\infty}\frac{z^{k}}{\left(a+k\right)^{s}}\] + +is the Lerch Transcendent. Also define :math:`r=\log\left(1-p\right)` + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & -\frac{p}{\left(1-p\right)r}\\ \mu_{2} & = & -\frac{p\left[p+r\right]}{\left(1-p\right)^{2}r^{2}}\\ \gamma_{1} & = & -\frac{2p^{2}+3pr+\left(1+p\right)r^{2}}{r\left(p+r\right)\sqrt{-p\left(p+r\right)}}r\\ \gamma_{2} & = & -\frac{6p^{3}+12p^{2}r+p\left(4p+7\right)r^{2}+\left(p^{2}+4p+1\right)r^{3}}{p\left(p+r\right)^{2}}.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} M\left(t\right) & = & -\frac{1}{\log\left(1-p\right)}\sum_{k=1}^{\infty}\frac{e^{tk}p^{k}}{k}\\ & = & \frac{\log\left(1-pe^{t}\right)}{\log\left(1-p\right)}\end{eqnarray*} + +Thus, + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=\left.M^{\left(n\right)}\left(t\right)\right|_{t=0}=\left.\frac{\textrm{Li}_{1-n}\left(pe^{t}\right)}{\log\left(1-p\right)}\right|_{t=0}=-\frac{\textrm{Li}_{1-n}\left(p\right)}{\log\left(1-p\right)}.\] + + + + +Discrete Uniform (randint) +========================== + +The discrete uniform distribution with parameters :math:`\left(a,b\right)` constructs a random variable that has an equal probability of being +any one of the integers in the half-open range :math:`[a,b).` If :math:`a` is not given it is assumed to be zero and the only parameter is :math:`b.` Therefore, + +.. math:: + :nowrap: + + \begin{eqnarray*} p\left(k;a,b\right) & = & \frac{1}{b-a}\quad a\leq k0` + +.. 
math:: + :nowrap: + + \begin{eqnarray*} p\left(k\right) & = & \tanh\left(\frac{a}{2}\right)e^{-a\left|k\right|},\\ F\left(x\right) & = & \left\{ \begin{array}{cc} \frac{e^{a\left(\left\lfloor x\right\rfloor +1\right)}}{e^{a}+1} & \left\lfloor x\right\rfloor <0,\\ 1-\frac{e^{-a\left\lfloor x\right\rfloor }}{e^{a}+1} & \left\lfloor x\right\rfloor \geq0.\end{array}\right.\\ G\left(q\right) & = & \left\{ \begin{array}{cc} \left\lceil \frac{1}{a}\log\left[q\left(e^{a}+1\right)\right]-1\right\rceil & q<\frac{1}{1+e^{-a}},\\ \left\lceil -\frac{1}{a}\log\left[\left(1-q\right)\left(1+e^{a}\right)\right]\right\rceil & q\geq\frac{1}{1+e^{-a}}.\end{array}\right.\end{eqnarray*} + + + +.. math:: + :nowrap: + + \begin{eqnarray*} M\left(t\right) & = & \tanh\left(\frac{a}{2}\right)\sum_{k=-\infty}^{\infty}e^{tk}e^{-a\left|k\right|}\\ & = & C\left(1+\sum_{k=1}^{\infty}e^{-\left(t+a\right)k}+\sum_{1}^{\infty}e^{\left(t-a\right)k}\right)\\ & = & \tanh\left(\frac{a}{2}\right)\left(1+\frac{e^{-\left(t+a\right)}}{1-e^{-\left(t+a\right)}}+\frac{e^{t-a}}{1-e^{t-a}}\right)\\ & = & \frac{\tanh\left(\frac{a}{2}\right)\sinh a}{\cosh a-\cosh t}.\end{eqnarray*} + +Thus, + +.. math:: + :nowrap: + + \[ \mu_{n}^{\prime}=M^{\left(n\right)}\left(0\right)=\left[1+\left(-1\right)^{n}\right]\textrm{Li}_{-n}\left(e^{-a}\right)\] + +where :math:`\textrm{Li}_{-n}\left(z\right)` is the polylogarithm function of order :math:`-n` evaluated at :math:`z.` + +.. math:: + :nowrap: + + \[ h\left[X\right]=-\log\left(\tanh\left(\frac{a}{2}\right)\right)+\frac{a}{\sinh a}\] + + + + +Discrete Gaussian* +================== + +Defined for all :math:`\mu` and :math:`\lambda>0` and :math:`k` + +.. math:: + :nowrap: + + \[ p\left(k;\mu,\lambda\right)=\frac{1}{Z\left(\lambda\right)}\exp\left[-\lambda\left(k-\mu\right)^{2}\right]\] + +where + +.. math:: + :nowrap: + + \[ Z\left(\lambda\right)=\sum_{k=-\infty}^{\infty}\exp\left[-\lambda k^{2}\right]\] + + + +.. math:: + :nowrap: + + \begin{eqnarray*} \mu & = & \mu\\ \mu_{2} & = & -\frac{\partial}{\partial\lambda}\log Z\left(\lambda\right)\\ & = & G\left(\lambda\right)e^{-\lambda}\end{eqnarray*} + +where :math:`G\left(0\right)\rightarrow\infty` and :math:`G\left(\infty\right)\rightarrow2` with a minimum less than 2 near :math:`\lambda=1` + +.. math:: + :nowrap: + + \[ G\left(\lambda\right)=\frac{1}{Z\left(\lambda\right)}\sum_{k=-\infty}^{\infty}k^{2}\exp\left[-\lambda\left(k+1\right)\left(k-1\right)\right]\] diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/stats.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/stats.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/stats.rst 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/stats.rst 2010-07-26 15:48:29.000000000 +0100 @@ -3,20 +3,577 @@ .. sectionauthor:: Travis E. Oliphant +Introduction +------------ + SciPy has a tremendous number of basic statistics routines with more easily added by the end user (if you create one please contribute it). All of the statistics functions are located in the sub-package :mod:`scipy.stats` and a fairly complete listing of these functions can be had using ``info(stats)``. - Random Variables ----------------- +^^^^^^^^^^^^^^^^ There are two general distribution classes that have been implemented -for encapsulating continuous random variables and discrete random -variables. Over 80 continuous random variables and 10 discrete random +for encapsulating +:ref:`continuous random variables ` +and +:ref:`discrete random variables ` +. 
Over 80 continuous random variables and 10 discrete random variables have been implemented using these classes. The list of the random variables available is in the docstring for the stats sub- -package. A detailed description of each of them is also located in the -files continuous.lyx and discrete.lyx in the stats sub-directories. +package. + + +Note: The following is work in progress + +Distributions +------------- + + +First some imports + + >>> import numpy as np + >>> from scipy import stats + >>> import warnings + >>> warnings.simplefilter('ignore', DeprecationWarning) + +We can obtain the list of available distribution through introspection: + + >>> dist_continu = [d for d in dir(stats) if + ... isinstance(getattr(stats,d), stats.rv_continuous)] + >>> dist_discrete = [d for d in dir(stats) if + ... isinstance(getattr(stats,d), stats.rv_discrete)] + >>> print 'number of continuous distributions:', len(dist_continu) + number of continuous distributions: 84 + >>> print 'number of discrete distributions: ', len(dist_discrete) + number of discrete distributions: 12 + + + + +Distributions can be used in one of two ways, either by passing all distribution +parameters to each method call or by freezing the parameters for the instance +of the distribution. As an example, we can get the median of the distribution by using +the percent point function, ppf, which is the inverse of the cdf: + + >>> print stats.nct.ppf(0.5, 10, 2.5) + 2.56880722561 + >>> my_nct = stats.nct(10, 2.5) + >>> print my_nct.ppf(0.5) + 2.56880722561 + +``help(stats.nct)`` prints the complete docstring of the distribution. Instead +we can print just some basic information:: + + >>> print stats.nct.extradoc #contains the distribution specific docs + Non-central Student T distribution + + df**(df/2) * gamma(df+1) + nct.pdf(x,df,nc) = -------------------------------------------------- + 2**df*exp(nc**2/2)*(df+x**2)**(df/2) * gamma(df/2) + for df > 0, nc > 0. + + + >>> print 'number of arguments: %d, shape parameters: %s'% (stats.nct.numargs, + ... stats.nct.shapes) + number of arguments: 2, shape parameters: df,nc + >>> print 'bounds of distribution lower: %s, upper: %s' % (stats.nct.a, + ... stats.nct.b) + bounds of distribution lower: -1.#INF, upper: 1.#INF + +We can list all methods and properties of the distribution with +``dir(stats.nct)``. Some of the methods are private methods, that are +not named as such, i.e. no leading underscore, for example veccdf or +xa and xb are for internal calculation. 
The main methods we can see +when we list the methods of the frozen distribution: + + >>> print dir(my_nct) #reformatted + ['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__', + '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', + '__repr__', '__setattr__', '__str__', '__weakref__', 'args', 'cdf', 'dist', + 'entropy', 'isf', 'kwds', 'moment', 'pdf', 'pmf', 'ppf', 'rvs', 'sf', 'stats'] + + +The main public methods are: + +* rvs: Random Variates +* pdf: Probability Density Function +* cdf: Cumulative Distribution Function +* sf: Survival Function (1-CDF) +* ppf: Percent Point Function (Inverse of CDF) +* isf: Inverse Survival Function (Inverse of SF) +* stats: Return mean, variance, (Fisher's) skew, or (Fisher's) kurtosis +* moment: non-central moments of the distribution + +The main additional methods of the not frozen distribution are related to the estimation +of distrition parameters: + +* fit: maximum likelihood estimation of distribution parameters, including location + and scale +* fit_loc_scale: estimation of location and scale when shape parameters are given +* nnlf: negative log likelihood function +* expect: Calculate the expectation of a function against the pdf or pmf + +All continuous distributions take `loc` and `scale` as keyword +parameters to adjust the location and scale of the distribution, +e.g. for the standard normal distribution location is the mean and +scale is the standard deviation. The standardized distribution for a +random variable `x` is obtained through ``(x - loc) / scale``. + +Discrete distribution have most of the same basic methods, however +pdf is replaced the probability mass function `pmf`, no estimation +methods, such as fit, are available, and scale is not a valid +keyword parameter. The location parameter, keyword `loc` can be used +to shift the distribution. + +The basic methods, pdf, cdf, sf, ppf, and isf are vectorized with +``np.vectorize``, and the usual numpy broadcasting is applied. For +example, we can calculate the critical values for the upper tail of +the t distribution for different probabilites and degrees of freedom. + + >>> stats.t.isf([0.1, 0.05, 0.01], [[10], [11]]) + array([[ 1.37218364, 1.81246112, 2.76376946], + [ 1.36343032, 1.79588482, 2.71807918]]) + +Here, the first row are the critical values for 10 degrees of freedom and the second row +is for 11 d.o.f., i.e. this is the same as + + >>> stats.t.isf([0.1, 0.05, 0.01], 10) + array([ 1.37218364, 1.81246112, 2.76376946]) + >>> stats.t.isf([0.1, 0.05, 0.01], 11) + array([ 1.36343032, 1.79588482, 2.71807918]) + +If both, probabilities and degrees of freedom have the same array shape, then element +wise matching is used. As an example, we can obtain the 10% tail for 10 d.o.f., the 5% tail +for 11 d.o.f. and the 1% tail for 12 d.o.f. by + + >>> stats.t.isf([0.1, 0.05, 0.01], [10, 11, 12]) + array([ 1.37218364, 1.79588482, 2.68099799]) + + + +Performance and Remaining Issues +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The performance of the individual methods, in terms of speed, varies +widely by distribution and method. The results of a method are +obtained in one of two ways, either by explicit calculation or by a +generic algorithm that is independent of the specific distribution. +Explicit calculation, requires that the method is directly specified +for the given distribution, either through analytic formulas or +through special functions in scipy.special or numpy.random for +`rvs`. These are usually relatively fast calculations. 
The generic +methods are used if the distribution does not specify any explicit +calculation. To define a distribution, only one of pdf or cdf is +necessary, all other methods can be derived using numeric integration +and root finding. These indirect methods can be very slow. As an +example, ``rgh = stats.gausshyper.rvs(0.5, 2, 2, 2, size=100)`` creates +random variables in a very indirect way and takes about 19 seconds +for 100 random variables on my computer, while one million random +variables from the standard normal or from the t distribution take +just above one second. + + +The distributions in scipy.stats have recently been corrected and improved +and gained a considerable test suite, however a few issues remain: + +* skew and kurtosis, 3rd and 4th moments and entropy are not thoroughly + tested and some coarse testing indicates that there are still some + incorrect results left. +* the distributions have been tested over some range of parameters, + however in some corner ranges, a few incorrect results may remain. +* the maximum likelihood estimation in `fit` does not work with + default starting parameters for all distributions and the user + needs to supply good starting parameters. Also, for some + distribution using a maximum likelihood estimator might + inherently not be the best choice. + + +The next example shows how to build our own discrete distribution, +and more examples for the usage of the distributions are shown below +together with the statistical tests. + + + +Example: discrete distribution rv_discrete +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +In the following we use stats.rv_discrete to generate a discrete distribution +that has the probabilites of the truncated normal for the intervalls +centered around the integers. + + + >>> npoints = 20 # number of integer support points of the distribution minus 1 + >>> npointsh = npoints / 2 + >>> npointsf = float(npoints) + >>> nbound = 4 # bounds for the truncated normal + >>> normbound = (1+1/npointsf) * nbound # actual bounds of truncated normal + >>> grid = np.arange(-npointsh, npointsh+2, 1) # integer grid + >>> gridlimitsnorm = (grid-0.5) / npointsh * nbound # bin limits for the truncnorm + >>> gridlimits = grid - 0.5 + >>> grid = grid[:-1] + >>> probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound)) + >>> gridint = grid + >>> normdiscrete = stats.rv_discrete(values = (gridint, + ... np.round(probs, decimals=7)), name='normdiscrete') + +From the docstring of rv_discrete: + "You can construct an aribtrary discrete rv where P{X=xk} = pk by + passing to the rv_discrete initialization method (through the values= + keyword) a tuple of sequences (xk, pk) which describes only those + values of X (xk) that occur with nonzero probability (pk)." + +There are some requirements for this distribution to work. The +keyword `name` is required. The support points of the distribution +xk have to be integers. Also, I needed to limit the number of +decimals. If the last two requirements are not satisfied an +exception may be raised or the resulting numbers may be incorrect. + +After defining the distribution, we obtain access to all methods of +discrete distributions. + + >>> print 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'% \ + ... 
normdiscrete.stats(moments = 'mvsk') + mean = -0.0000, variance = 6.3302, skew = 0.0000, kurtosis = -0.0076 + + >>> nd_std = np.sqrt(normdiscrete.stats(moments = 'v')) + +**Generate a random sample and compare observed frequencies with probabilities** + + >>> n_sample = 500 + >>> np.random.seed(87655678) # fix the seed for replicability + >>> rvs = normdiscrete.rvs(size=n_sample) + >>> rvsnd = rvs + >>> f, l = np.histogram(rvs, bins=gridlimits) + >>> sfreq = np.vstack([gridint, f, probs*n_sample]).T + >>> print sfreq + [[ -1.00000000e+01 0.00000000e+00 2.95019349e-02] + [ -9.00000000e+00 0.00000000e+00 1.32294142e-01] + [ -8.00000000e+00 0.00000000e+00 5.06497902e-01] + [ -7.00000000e+00 2.00000000e+00 1.65568919e+00] + [ -6.00000000e+00 1.00000000e+00 4.62125309e+00] + [ -5.00000000e+00 9.00000000e+00 1.10137298e+01] + [ -4.00000000e+00 2.60000000e+01 2.24137683e+01] + [ -3.00000000e+00 3.70000000e+01 3.89503370e+01] + [ -2.00000000e+00 5.10000000e+01 5.78004747e+01] + [ -1.00000000e+00 7.10000000e+01 7.32455414e+01] + [ 0.00000000e+00 7.40000000e+01 7.92618251e+01] + [ 1.00000000e+00 8.90000000e+01 7.32455414e+01] + [ 2.00000000e+00 5.50000000e+01 5.78004747e+01] + [ 3.00000000e+00 5.00000000e+01 3.89503370e+01] + [ 4.00000000e+00 1.70000000e+01 2.24137683e+01] + [ 5.00000000e+00 1.10000000e+01 1.10137298e+01] + [ 6.00000000e+00 4.00000000e+00 4.62125309e+00] + [ 7.00000000e+00 3.00000000e+00 1.65568919e+00] + [ 8.00000000e+00 0.00000000e+00 5.06497902e-01] + [ 9.00000000e+00 0.00000000e+00 1.32294142e-01] + [ 1.00000000e+01 0.00000000e+00 2.95019349e-02]] + + +.. plot:: examples/normdiscr_plot1.py + :align: center + :include-source: 0 + + +.. plot:: examples/normdiscr_plot2.py + :align: center + :include-source: 0 + + +Next, we can test, whether our sample was generated by our normdiscrete +distribution. This also verifies, whether the random numbers are generated +correctly + +The chisquare test requires that there are a minimum number of observations +in each bin. We combine the tail bins into larger bins so that they contain +enough observations. + + >>> f2 = np.hstack([f[:5].sum(), f[5:-5], f[-5:].sum()]) + >>> p2 = np.hstack([probs[:5].sum(), probs[5:-5], probs[-5:].sum()]) + >>> ch2, pval = stats.chisquare(f2, p2*n_sample) + + >>> print 'chisquare for normdiscrete: chi2 = %6.3f pvalue = %6.4f' % (ch2, pval) + chisquare for normdiscrete: chi2 = 12.466 pvalue = 0.4090 + +The pvalue in this case is high, so we can be quite confident that +our random sample was actually generated by the distribution. + + + +Analysing One Sample +-------------------- + +First, we create some random variables. We set a seed so that in each run +we get identical results to look at. As an example we take a sample from +the Student t distribution: + + >>> np.random.seed(282629734) + >>> x = stats.t.rvs(10, size=1000) + +Here, we set the required shape parameter of the t distribution, which +in statistics corresponds to the degrees of freedom, to 10. Using size=100 means +that our sample consists of 1000 independently drawn (pseudo) random numbers. +Since we did not specify the keyword arguments `loc` and `scale`, those are +set to their default values zero and one. + +Descriptive Statistics +^^^^^^^^^^^^^^^^^^^^^^ + +`x` is a numpy array, and we have direct access to all array methods, e.g. 
+ + >>> print x.max(), x.min() # equivalent to np.max(x), np.min(x) + 5.26327732981 -3.78975572422 + >>> print x.mean(), x.var() # equivalent to np.mean(x), np.var(x) + 0.0140610663985 1.28899386208 + + +How do the some sample properties compare to their theoretical counterparts? + + >>> m, v, s, k = stats.t.stats(10, moments='mvsk') + >>> n, (smin, smax), sm, sv, ss, sk = stats.describe(x) + + >>> print 'distribution:', + distribution: + >>> sstr = 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f' + >>> print sstr %(m, v, s ,k) + mean = 0.0000, variance = 1.2500, skew = 0.0000, kurtosis = 1.0000 + >>> print 'sample: ', + sample: + >>> print sstr %(sm, sv, ss, sk) + mean = 0.0141, variance = 1.2903, skew = 0.2165, kurtosis = 1.0556 + +Note: stats.describe uses the unbiased estimator for the variance, while +np.var is the biased estimator. + + +For our sample the sample statistics differ a by a small amount from +their theoretical counterparts. + + +T-test and KS-test +^^^^^^^^^^^^^^^^^^ + +We can use the t-test to test whether the mean of our sample differs +in a statistcally significant way from the theoretical expectation. + + >>> print 't-statistic = %6.3f pvalue = %6.4f' % stats.ttest_1samp(x, m) + t-statistic = 0.391 pvalue = 0.6955 + +The pvalue is 0.7, this means that with an alpha error of, for +example, 10%, we cannot reject the hypothesis that the sample mean +is equal to zero, the expectation of the standard t-distribution. + + +As an exercise, we can calculate our ttest also directly without +using the provided function, which should give us the same answer, +and so it does: + + >>> tt = (sm-m)/np.sqrt(sv/float(n)) # t-statistic for mean + >>> pval = stats.t.sf(np.abs(tt), n-1)*2 # two-sided pvalue = Prob(abs(t)>tt) + >>> print 't-statistic = %6.3f pvalue = %6.4f' % (tt, pval) + t-statistic = 0.391 pvalue = 0.6955 + +The Kolmogorov-Smirnov test can be used to test the hypothesis that +the sample comes from the standard t-distribution + + >>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 't', (10,)) + KS-statistic D = 0.016 pvalue = 0.9606 + +Again the p-value is high enough that we cannot reject the +hypothesis that the random sample really is distributed according to the +t-distribution. In real applications, we don't know what the +underlying distribution is. If we perform the Kolmogorov-Smirnov +test of our sample against the standard normal distribution, then we +also cannot reject the hypothesis that our sample was generated by the +normal distribution given that in this example the p-value is almost 40%. + + >>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x,'norm') + KS-statistic D = 0.028 pvalue = 0.3949 + +However, the standard normal distribution has a variance of 1, while our +sample has a variance of 1.29. If we standardize our sample and test it +against the normal distribution, then the p-value is again large enough +that we cannot reject the hypothesis that the sample came form the +normal distribution. + + >>> d, pval = stats.kstest((x-x.mean())/x.std(), 'norm') + >>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % (d, pval) + KS-statistic D = 0.032 pvalue = 0.2402 + +Note: The Kolmogorov-Smirnov test assumes that we test against a +distribution with given parameters, since in the last case we +estimated mean and variance, this assumption is violated, and the +distribution of the test statistic on which the p-value is based, is +not correct. 
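+
+A rough way to see this effect (a hedged sketch added for illustration, not
+part of the original tutorial; the names ``nsim``, ``sims`` and ``pvals`` are
+made up here) is to apply the same standardize-then-test procedure to many
+samples that really are normally distributed and inspect the p-values:
+
+ >>> nsim = 1000   # number of simulated normal samples
+ >>> sims = [stats.norm.rvs(size=100) for i in range(nsim)]
+ >>> pvals = [stats.kstest((y - y.mean()) / y.std(), 'norm')[1] for y in sims]
+
+If the null distribution of the test statistic were correct, these p-values
+would be uniformly distributed on [0, 1]; because mean and standard deviation
+are estimated from each sample, they instead pile up near 1, i.e. the test
+becomes conservative.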
+ +Tails of the distribution +^^^^^^^^^^^^^^^^^^^^^^^^^ + +Finally, we can check the upper tail of the distribution. We can use +the percent point function ppf, which is the inverse of the cdf +function, to obtain the critical values, or, more directly, we can use +the inverse of the survival function + + >>> crit01, crit05, crit10 = stats.t.ppf([1-0.01, 1-0.05, 1-0.10], 10) + >>> print 'critical values from ppf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% (crit01, crit05, crit10) + critical values from ppf at 1%, 5% and 10% 2.7638 1.8125 1.3722 + >>> print 'critical values from isf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% tuple(stats.t.isf([0.01,0.05,0.10],10)) + critical values from isf at 1%, 5% and 10% 2.7638 1.8125 1.3722 + + >>> freq01 = np.sum(x>crit01) / float(n) * 100 + >>> freq05 = np.sum(x>crit05) / float(n) * 100 + >>> freq10 = np.sum(x>crit10) / float(n) * 100 + >>> print 'sample %%-frequency at 1%%, 5%% and 10%% tail %8.4f %8.4f %8.4f'% (freq01, freq05, freq10) + sample %-frequency at 1%, 5% and 10% tail 1.4000 5.8000 10.5000 + +In all three cases, our sample has more weight in the top tail than the +underlying distribution. +We can briefly check a larger sample to see if we get a closer match. In this +case the empirical frequency is quite close to the theoretical probability, +but if we repeat this several times the fluctuations are still pretty large. + + >>> freq05l = np.sum(stats.t.rvs(10, size=10000) > crit05) / 10000.0 * 100 + >>> print 'larger sample %%-frequency at 5%% tail %8.4f'% freq05l + larger sample %-frequency at 5% tail 4.8000 + +We can also compare it with the tail of the normal distribution, which +has less weight in the tails: + + >>> print 'tail prob. of normal at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f'% \ + ... tuple(stats.norm.sf([crit01, crit05, crit10])*100) + tail prob. of normal at 1%, 5% and 10% 0.2857 3.4957 8.5003 + +The chisquare test can be used to test, whether for a finite number of bins, +the observed frequencies differ significantly from the probabilites of the +hypothesized distribution. + + >>> quantiles = [0.0, 0.01, 0.05, 0.1, 1-0.10, 1-0.05, 1-0.01, 1.0] + >>> crit = stats.t.ppf(quantiles, 10) + >>> print crit + [ -Inf -2.76376946 -1.81246112 -1.37218364 1.37218364 1.81246112 + 2.76376946 Inf] + >>> n_sample = x.size + >>> freqcount = np.histogram(x, bins=crit)[0] + >>> tprob = np.diff(quantiles) + >>> nprob = np.diff(stats.norm.cdf(crit)) + >>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample) + >>> nch, npval = stats.chisquare(freqcount, nprob*n_sample) + >>> print 'chisquare for t: chi2 = %6.3f pvalue = %6.4f' % (tch, tpval) + chisquare for t: chi2 = 2.300 pvalue = 0.8901 + >>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval) + chisquare for normal: chi2 = 64.605 pvalue = 0.0000 + +We see that the standard normal distribution is clearly rejected while the +standard t-distribution cannot be rejected. Since the variance of our sample +differs from both standard distribution, we can again redo the test taking +the estimate for scale and location into account. + +The fit method of the distributions can be used to estimate the parameters +of the distribution, and the test is repeated using probabilites of the +estimated distribution. 
+ + >>> tdof, tloc, tscale = stats.t.fit(x) + >>> nloc, nscale = stats.norm.fit(x) + >>> tprob = np.diff(stats.t.cdf(crit, tdof, loc=tloc, scale=tscale)) + >>> nprob = np.diff(stats.norm.cdf(crit, loc=nloc, scale=nscale)) + >>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample) + >>> nch, npval = stats.chisquare(freqcount, nprob*n_sample) + >>> print 'chisquare for t: chi2 = %6.3f pvalue = %6.4f' % (tch, tpval) + chisquare for t: chi2 = 1.577 pvalue = 0.9542 + >>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval) + chisquare for normal: chi2 = 11.084 pvalue = 0.0858 + +Taking account of the estimated parameters, we can still reject the +hypothesis that our sample came from a normal distribution (at the 5% level), +but again, with a p-value of 0.95, we cannot reject the t distribution. + + + +Special tests for normal distributions +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Since the normal distribution is the most common distribution in statistics, +there are several additional functions available to test whether a sample +could have been drawn from a normal distribution + +First we can test if skew and kurtosis of our sample differ significantly from +those of a normal distribution: + + >>> print 'normal skewtest teststat = %6.3f pvalue = %6.4f' % stats.skewtest(x) + normal skewtest teststat = 2.785 pvalue = 0.0054 + >>> print 'normal kurtosistest teststat = %6.3f pvalue = %6.4f' % stats.kurtosistest(x) + normal kurtosistest teststat = 4.757 pvalue = 0.0000 + +These two tests are combined in the normality test + + >>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(x) + normaltest teststat = 30.379 pvalue = 0.0000 + +In all three tests the p-values are very low and we can reject the hypothesis +that the our sample has skew and kurtosis of the normal distribution. + +Since skew and kurtosis of our sample are based on central moments, we get +exactly the same results if we test the standardized sample: + + >>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % \ + ... stats.normaltest((x-x.mean())/x.std()) + normaltest teststat = 30.379 pvalue = 0.0000 + +Because normality is rejected so strongly, we can check whether the +normaltest gives reasonable results for other cases: + + >>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(stats.t.rvs(10, size=100)) + normaltest teststat = 4.698 pvalue = 0.0955 + >>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(stats.norm.rvs(size=1000)) + normaltest teststat = 0.613 pvalue = 0.7361 + +When testing for normality of a small sample of t-distributed observations +and a large sample of normal distributed observation, then in neither case +can we reject the null hypothesis that the sample comes from a normal +distribution. In the first case this is because the test is not powerful +enough to distinguish a t and a normally distributed random variable in a +small sample. + + +Comparing two samples +--------------------- + +In the following, we are given two samples, which can come either from the +same or from different distribution, and we want to test whether these +samples have the same statistical properties. 
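+
+The examples in this section draw fresh (pseudo) random samples on each run,
+so the numbers shown below will differ slightly from run to run. To make a
+run repeatable you can fix the seed first; the seed value in this one-line
+sketch is arbitrary and only for illustration:
+
+ >>> np.random.seed(1234567)   # any fixed integer gives reproducible samples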
+ +Comparing means +^^^^^^^^^^^^^^^ + +Test with sample with identical means: + + >>> rvs1 = stats.norm.rvs(loc=5, scale=10, size=500) + >>> rvs2 = stats.norm.rvs(loc=5, scale=10, size=500) + >>> stats.ttest_ind(rvs1, rvs2) + (-0.54890361750888583, 0.5831943748663857) + + +Test with sample with different means: + + >>> rvs3 = stats.norm.rvs(loc=8, scale=10, size=500) + >>> stats.ttest_ind(rvs1, rvs3) + (-4.5334142901750321, 6.507128186505895e-006) + + + +Kolmogorov-Smirnov test for two samples ks_2samp +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +For the example where both samples are drawn from the same distribution, +we cannot reject the null hypothesis since the pvalue is high + + >>> stats.ks_2samp(rvs1, rvs2) + (0.025999999999999995, 0.99541195173064878) + +In the second example, with different location, i.e. means, we can +reject the null hypothesis since the pvalue is below 1% + + >>> stats.ks_2samp(rvs1, rvs3) + (0.11399999999999999, 0.0027132103661283141) diff -Nru python-scipy-0.7.2+dfsg1/doc/source/tutorial/weave.rst python-scipy-0.8.0+dfsg1/doc/source/tutorial/weave.rst --- python-scipy-0.7.2+dfsg1/doc/source/tutorial/weave.rst 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/source/tutorial/weave.rst 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,2538 @@ +.. sectionauthor:: Eric Jones eric@enthought.com + +***** +Weave +***** + +======= +Outline +======= + +.. contents:: + + +============ +Introduction +============ + +The :mod:`scipy.weave` (below just :mod:`weave`) package provides tools for +including C/C++ code within in +Python code. This offers both another level of optimization to those who need +it, and an easy way to modify and extend any supported extension libraries +such as wxPython and hopefully VTK soon. Inlining C/C++ code within Python +generally results in speed ups of 1.5x to 30x speed-up over algorithms +written in pure Python (However, it is also possible to slow things down...). +Generally algorithms that require a large number of calls to the Python API +don't benefit as much from the conversion to C/C++ as algorithms that have +inner loops completely convertable to C. + +There are three basic ways to use ``weave``. The ``weave.inline()`` function +executes C code directly within Python, and ``weave.blitz()`` translates +Python NumPy expressions to C++ for fast execution. ``blitz()`` was the +original reason ``weave`` was built. For those interested in building +extension libraries, the ``ext_tools`` module provides classes for building +extension modules within Python. + +Most of ``weave's`` functionality should work on Windows and Unix, although +some of its functionality requires ``gcc`` or a similarly modern C++ compiler +that handles templates well. Up to now, most testing has been done on Windows +2000 with Microsoft's C++ compiler (MSVC) and with gcc (mingw32 2.95.2 and +2.95.3-6). All tests also pass on Linux (RH 7.1 with gcc 2.96), and I've had +reports that it works on Debian also (thanks Pearu). + +The ``inline`` and ``blitz`` provide new functionality to Python (although +I've recently learned about the `PyInline`_ project which may offer similar +functionality to ``inline``). On the other hand, tools for building Python +extension modules already exists (SWIG, SIP, pycpp, CXX, and others). As of +yet, I'm not sure where ``weave`` fits in this spectrum. It is closest in +flavor to CXX in that it makes creating new C/C++ extension modules pretty +easy. 
However, if you're wrapping a gaggle of legacy functions or classes, +SWIG and friends are definitely the better choice. ``weave`` is set up so +that you can customize how Python types are converted to C types in +``weave``. This is great for ``inline()``, but, for wrapping legacy code, it +is more flexible to specify things the other way around -- that is how C +types map to Python types. This ``weave`` does not do. I guess it would be +possible to build such a tool on top of ``weave``, but with good tools like +SWIG around, I'm not sure the effort produces any new capabilities. Things +like function overloading are probably easily implemented in ``weave`` and it +might be easier to mix Python/C code in function calls, but nothing beyond +this comes to mind. So, if you're developing new extension modules or +optimizing Python functions in C, ``weave.ext_tools()`` might be the tool for +you. If you're wrapping legacy code, stick with SWIG. + +The next several sections give the basics of how to use ``weave``. We'll +discuss what's happening under the covers in more detail later on. Serious +users will need to at least look at the type conversion section to understand +how Python variables map to C/C++ types and how to customize this behavior. +One other note. If you don't know C or C++ then these docs are probably of +very little help to you. Further, it'd be helpful if you know something about +writing Python extensions. ``weave`` does quite a bit for you, but for +anything complex, you'll need to do some conversions, reference counting, +etc. + +.. note:: + + ``weave`` is actually part of the `SciPy`_ package. However, it + also works fine as a standalone package (you can check out the sources using + ``svn co http://svn.scipy.org/svn/scipy/trunk/Lib/weave weave`` and install as + python setup.py install). The examples here are given as if it is used as a + stand alone package. If you are using from within scipy, you can use `` from + scipy import weave`` and the examples will work identically. + + +============== + Requirements +============== + +- Python + + I use 2.1.1. Probably 2.0 or higher should work. + +- C++ compiler + + ``weave`` uses ``distutils`` to actually build extension modules, so + it uses whatever compiler was originally used to build Python. ``weave`` + itself requires a C++ compiler. If you used a C++ compiler to build + Python, your probably fine. + + On Unix gcc is the preferred choice because I've done a little + testing with it. All testing has been done with gcc, but I expect the + majority of compilers should work for ``inline`` and ``ext_tools``. The + one issue I'm not sure about is that I've hard coded things so that + compilations are linked with the ``stdc++`` library. *Is this standard + across Unix compilers, or is this a gcc-ism?* + + For ``blitz()``, you'll need a reasonably recent version of gcc. + 2.95.2 works on windows and 2.96 looks fine on Linux. Other versions are + likely to work. Its likely that KAI's C++ compiler and maybe some others + will work, but I haven't tried. My advice is to use gcc for now unless + your willing to tinker with the code some. + + On Windows, either MSVC or gcc (`mingw32`_) should work. Again, + you'll need gcc for ``blitz()`` as the MSVC compiler doesn't handle + templates well. + + I have not tried Cygwin, so please report success if it works for + you. + +- NumPy + + The python `NumPy`_ module is required for ``blitz()`` to + work and for numpy.distutils which is used by weave. 
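+
+Since ``weave`` hands the actual build off to ``distutils``, a quick sanity
+check (an illustrative snippet, not part of the original text) is to ask
+``distutils`` which compiler class it would use on your machine::
+
+    # prints the distutils compiler name, e.g. 'unix' (gcc) or 'msvc'
+    from distutils import ccompiler
+    print ccompiler.get_default_compiler()
+
+The result only reflects the ``distutils`` default for your platform; it can
+still be overridden in the usual ``distutils`` ways.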
+ + +============== + Installation +============== + +There are currently two ways to get ``weave``. First, ``weave`` is part of +SciPy and installed automatically (as a sub- package) whenever SciPy is +installed. Second, since ``weave`` is useful outside of the scientific +community, it has been setup so that it can be used as a stand-alone module. + +The stand-alone version can be downloaded from `here`_. Instructions for +installing should be found there as well. setup.py file to simplify +installation. + + +========= + Testing +========= + +Once ``weave`` is installed, fire up python and run its unit tests. + +:: + + >>> import weave + >>> weave.test() + runs long time... spews tons of output and a few warnings + . + . + . + .............................................................. + ................................................................ + .................................................. + ---------------------------------------------------------------------- + Ran 184 tests in 158.418s + OK + >>> + + +This takes a while, usually several minutes. On Unix with remote file +systems, I've had it take 15 or so minutes. In the end, it should run about +180 tests and spew some speed results along the way. If you get errors, +they'll be reported at the end of the output. Please report errors that you +find. Some tests are known to fail at this point. + + +If you only want to test a single module of the package, you can do this by +running test() for that specific module. + +:: + + >>> import weave.scalar_spec + >>> weave.scalar_spec.test() + ....... + ---------------------------------------------------------------------- + Ran 7 tests in 23.284s + + +Testing Notes: +============== + + +- Windows 1 + + I've had some test fail on windows machines where I have msvc, + gcc-2.95.2 (in c:\gcc-2.95.2), and gcc-2.95.3-6 (in c:\gcc) all + installed. My environment has c:\gcc in the path and does not have + c:\gcc-2.95.2 in the path. The test process runs very smoothly until the + end where several test using gcc fail with cpp0 not found by g++. If I + check os.system('gcc -v') before running tests, I get gcc-2.95.3-6. If I + check after running tests (and after failure), I get gcc-2.95.2. ??huh??. + The os.environ['PATH'] still has c:\gcc first in it and is not corrupted + (msvc/distutils messes with the environment variables, so we have to undo + its work in some places). If anyone else sees this, let me know - - it + may just be an quirk on my machine (unlikely). Testing with the gcc- + 2.95.2 installation always works. + +- Windows 2 + + If you run the tests from PythonWin or some other GUI tool, you'll + get a ton of DOS windows popping up periodically as ``weave`` spawns the + compiler multiple times. Very annoying. Anyone know how to fix this? + +- wxPython + + wxPython tests are not enabled by default because importing wxPython + on a Unix machine without access to a X-term will cause the program to + exit. Anyone know of a safe way to detect whether wxPython can be + imported and whether a display exists on a machine? + +============ + Benchmarks +============ + +This section has not been updated from old scipy weave and Numeric.... + +This section has a few benchmarks -- thats all people want to see anyway +right? These are mostly taken from running files in the ``weave/example`` +directory and also from the test scripts. Without more information about what +the test actually do, their value is limited. Still, their here for the +curious. 
Look at the example scripts for more specifics about what problem +was actually solved by each run. These examples are run under windows 2000 +using Microsoft Visual C++ and python2.1 on a 850 MHz PIII laptop with 320 MB +of RAM. Speed up is the improvement (degredation) factor of ``weave`` +compared to conventional Python functions. ``The blitz()`` comparisons are +shown compared to NumPy. + +.. table:: inline and ext_tools + + ====================== =========== + Algorithm Speed up + ====================== =========== + binary search 1.50 + fibonacci (recursive) 82.10 + fibonacci (loop) 9.17 + return None 0.14 + map 1.20 + dictionary sort 2.54 + vector quantization 37.40 + ====================== =========== + +.. table:: blitz -- double precision + + ==================================== ============= + Algorithm Speed up + ==================================== ============= + a = b + c 512x512 3.05 + a = b + c + d 512x512 4.59 + 5 pt avg. filter, 2D Image 512x512 9.01 + Electromagnetics (FDTD) 100x100x100 8.61 + ==================================== ============= + +The benchmarks shown ``blitz`` in the best possible light. NumPy (at least on +my machine) is significantly worse for double precision than it is for single +precision calculations. If your interested in single precision results, you +can pretty much divide the double precision speed up by 3 and you'll be +close. + + +======== + Inline +======== + +``inline()`` compiles and executes C/C++ code on the fly. Variables in the +local and global Python scope are also available in the C/C++ code. Values +are passed to the C/C++ code by assignment much like variables are passed +into a standard Python function. Values are returned from the C/C++ code +through a special argument called return_val. Also, the contents of mutable +objects can be changed within the C/C++ code and the changes remain after the +C code exits and returns to Python. (more on this later) + +Here's a trivial ``printf`` example using ``inline()``:: + + >>> import weave + >>> a = 1 + >>> weave.inline('printf("%d\\n",a);',['a']) + 1 + +In this, its most basic form, ``inline(c_code, var_list)`` requires two +arguments. ``c_code`` is a string of valid C/C++ code. ``var_list`` is a list +of variable names that are passed from Python into C/C++. Here we have a +simple ``printf`` statement that writes the Python variable ``a`` to the +screen. The first time you run this, there will be a pause while the code is +written to a .cpp file, compiled into an extension module, loaded into +Python, cataloged for future use, and executed. On windows (850 MHz PIII), +this takes about 1.5 seconds when using Microsoft's C++ compiler (MSVC) and +6-12 seconds using gcc (mingw32 2.95.2). All subsequent executions of the +code will happen very quickly because the code only needs to be compiled +once. If you kill and restart the interpreter and then execute the same code +fragment again, there will be a much shorter delay in the fractions of +seconds range. This is because ``weave`` stores a catalog of all previously +compiled functions in an on disk cache. When it sees a string that has been +compiled, it loads the already compiled module and executes the appropriate +function. + +.. note:: + If you try the ``printf`` example in a GUI shell such as IDLE, + PythonWin, PyShell, etc., you're unlikely to see the output. This is because + the C code is writing to stdout, instead of to the GUI window. 
This doesn't + mean that inline doesn't work in these environments -- it only means that + standard out in C is not the same as the standard out for Python in these + cases. Non input/output functions will work as expected. + +Although effort has been made to reduce the overhead associated with calling +inline, it is still less efficient for simple code snippets than using +equivalent Python code. The simple ``printf`` example is actually slower by +30% or so than using Python ``print`` statement. And, it is not difficult to +create code fragments that are 8-10 times slower using inline than equivalent +Python. However, for more complicated algorithms, the speed up can be worth +while -- anywhwere from 1.5- 30 times faster. Algorithms that have to +manipulate Python objects (sorting a list) usually only see a factor of 2 or +so improvement. Algorithms that are highly computational or manipulate NumPy +arrays can see much larger improvements. The examples/vq.py file shows a +factor of 30 or more improvement on the vector quantization algorithm that is +used heavily in information theory and classification problems. + + +More with printf +================ + +MSVC users will actually see a bit of compiler output that distutils does not +supress the first time the code executes:: + + >>> weave.inline(r'printf("%d\n",a);',['a']) + sc_e013937dbc8c647ac62438874e5795131.cpp + Creating library C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp + \Release\sc_e013937dbc8c647ac62438874e5795131.lib and + object C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_e013937dbc8c647ac62438874e5795131.exp + 1 + +Nothing bad is happening, its just a bit annoying. * Anyone know how to turn +this off?* + +This example also demonstrates using 'raw strings'. The ``r`` preceeding the +code string in the last example denotes that this is a 'raw string'. In raw +strings, the backslash character is not interpreted as an escape character, +and so it isn't necessary to use a double backslash to indicate that the '\n' +is meant to be interpreted in the C ``printf`` statement instead of by +Python. If your C code contains a lot of strings and control characters, raw +strings might make things easier. Most of the time, however, standard strings +work just as well. + +The ``printf`` statement in these examples is formatted to print out +integers. What happens if ``a`` is a string? ``inline`` will happily, compile +a new version of the code to accept strings as input, and execute the code. +The result? + +:: + + >>> a = 'string' + >>> weave.inline(r'printf("%d\n",a);',['a']) + 32956972 + + +In this case, the result is non-sensical, but also non-fatal. In other +situations, it might produce a compile time error because ``a`` is required +to be an integer at some point in the code, or it could produce a +segmentation fault. Its possible to protect against passing ``inline`` +arguments of the wrong data type by using asserts in Python. + +:: + + >>> a = 'string' + >>> def protected_printf(a): + ... assert(type(a) == type(1)) + ... weave.inline(r'printf("%d\n",a);',['a']) + >>> protected_printf(1) + 1 + >>> protected_printf('string') + AssertError... + + +For printing strings, the format statement needs to be changed. Also, weave +doesn't convert strings to char*. Instead it uses CXX Py::String type, so you +have to do a little more work. Here we convert it to a C++ std::string and +then ask cor the char* version. 
+ +:: + + >>> a = 'string' + >>> weave.inline(r'printf("%s\n",std::string(a).c_str());',['a']) + string + +.. admonition:: XXX + + This is a little convoluted. Perhaps strings should convert to ``std::string`` + objects instead of CXX objects. Or maybe to ``char*``. + +As in this case, C/C++ code fragments often have to change to accept +different types. For the given printing task, however, C++ streams provide a +way of a single statement that works for integers and strings. By default, +the stream objects live in the std (standard) namespace and thus require the +use of ``std::``. + +:: + + >>> weave.inline('std::cout << a << std::endl;',['a']) + 1 + >>> a = 'string' + >>> weave.inline('std::cout << a << std::endl;',['a']) + string + + +Examples using ``printf`` and ``cout`` are included in +examples/print_example.py. + + +More examples +============= + +This section shows several more advanced uses of ``inline``. It includes a +few algorithms from the `Python Cookbook`_ that have been re-written in +inline C to improve speed as well as a couple examples using NumPy and +wxPython. + +Binary search +------------- + +Lets look at the example of searching a sorted list of integers for a value. +For inspiration, we'll use Kalle Svensson's `binary_search()`_ algorithm +from the Python Cookbook. His recipe follows:: + + def binary_search(seq, t): + min = 0; max = len(seq) - 1 + while 1: + if max < min: + return -1 + m = (min + max) / 2 + if seq[m] < t: + min = m + 1 + elif seq[m] > t: + max = m - 1 + else: + return m + + +This Python version works for arbitrary Python data types. The C version +below is specialized to handle integer values. There is a little type +checking done in Python to assure that we're working with the correct data +types before heading into C. The variables ``seq`` and ``t`` don't need to be +declared beacuse ``weave`` handles converting and declaring them in the C +code. All other temporary variables such as ``min, max``, etc. must be +declared -- it is C after all. Here's the new mixed Python/C function:: + + def c_int_binary_search(seq,t): + # do a little type checking in Python + assert(type(t) == type(1)) + assert(type(seq) == type([])) + + # now the C code + code = """ + #line 29 "binary_search.py" + int val, m, min = 0; + int max = seq.length() - 1; + PyObject *py_val; + for(;;) + { + if (max < min ) + { + return_val = Py::new_reference_to(Py::Int(-1)); + break; + } + m = (min + max) /2; + val = py_to_int(PyList_GetItem(seq.ptr(),m),"val"); + if (val < t) + min = m + 1; + else if (val > t) + max = m - 1; + else + { + return_val = Py::new_reference_to(Py::Int(m)); + break; + } + } + """ + return inline(code,['seq','t']) + +We have two variables ``seq`` and ``t`` passed in. ``t`` is guaranteed (by +the ``assert``) to be an integer. Python integers are converted to C int +types in the transition from Python to C. ``seq`` is a Python list. By +default, it is translated to a CXX list object. Full documentation for the +CXX library can be found at its `website`_. The basics are that the CXX +provides C++ class equivalents for Python objects that simplify, or at least +object orientify, working with Python objects in C/C++. For example, +``seq.length()`` returns the length of the list. A little more about CXX and +its class methods, etc. is in the ** type conversions ** section. + +.. note:: + CXX uses templates and therefore may be a little less portable than + another alternative by Gordan McMillan called SCXX which was + inspired by CXX. 
It doesn't use templates so it should compile + faster and be more portable. SCXX has a few less features, but it + appears to me that it would mesh with the needs of weave quite well. + Hopefully xxx_spec files will be written for SCXX in the future, and + we'll be able to compare on a more empirical basis. Both sets of + spec files will probably stick around, it just a question of which + becomes the default. + +Most of the algorithm above looks similar in C to the original Python code. +There are two main differences. The first is the setting of ``return_val`` +instead of directly returning from the C code with a ``return`` statement. +``return_val`` is an automatically defined variable of type ``PyObject*`` +that is returned from the C code back to Python. You'll have to handle +reference counting issues when setting this variable. In this example, CXX +classes and functions handle the dirty work. All CXX functions and classes +live in the namespace ``Py::``. The following code converts the integer ``m`` +to a CXX ``Int()`` object and then to a ``PyObject*`` with an incremented +reference count using ``Py::new_reference_to()``. + +:: + + return_val = Py::new_reference_to(Py::Int(m)); + + +The second big differences shows up in the retrieval of integer values from +the Python list. The simple Python ``seq[i]`` call balloons into a C Python +API call to grab the value out of the list and then a separate call to +``py_to_int()`` that converts the PyObject* to an integer. ``py_to_int()`` +includes both a NULL cheack and a ``PyInt_Check()`` call as well as the +conversion call. If either of the checks fail, an exception is raised. The +entire C++ code block is executed with in a ``try/catch`` block that handles +exceptions much like Python does. This removes the need for most error +checking code. + +It is worth note that CXX lists do have indexing operators that result in +code that looks much like Python. However, the overhead in using them appears +to be relatively high, so the standard Python API was used on the +``seq.ptr()`` which is the underlying ``PyObject*`` of the List object. + +The ``#line`` directive that is the first line of the C code block isn't +necessary, but it's nice for debugging. If the compilation fails because of +the syntax error in the code, the error will be reported as an error in the +Python file "binary_search.py" with an offset from the given line number (29 +here). + +So what was all our effort worth in terms of efficiency? Well not a lot in +this case. The examples/binary_search.py file runs both Python and C versions +of the functions As well as using the standard ``bisect`` module. If we run +it on a 1 million element list and run the search 3000 times (for 0- 2999), +here are the results we get:: + + C:\home\ej\wrk\scipy\weave\examples> python binary_search.py + Binary search for 3000 items in 1000000 length list of integers: + speed in python: 0.159999966621 + speed of bisect: 0.121000051498 + speed up: 1.32 + speed in c: 0.110000014305 + speed up: 1.45 + speed in c(no asserts): 0.0900000333786 + speed up: 1.78 + + +So, we get roughly a 50-75% improvement depending on whether we use the +Python asserts in our C version. If we move down to searching a 10000 element +list, the advantage evaporates. Even smaller lists might result in the Python +version being faster. I'd like to say that moving to NumPy lists (and getting +rid of the GetItem() call) offers a substantial speed up, but my preliminary +efforts didn't produce one. 
I think the log(N) algorithm is to blame. Because +the algorithm is nice, there just isn't much time spent computing things, so +moving to C isn't that big of a win. If there are ways to reduce conversion +overhead of values, this may improve the C/Python speed up. Anyone have other +explanations or faster code, please let me know. + + +Dictionary Sort +--------------- + +The demo in examples/dict_sort.py is another example from the Python +CookBook. `This submission`_, by Alex Martelli, demonstrates how to return +the values from a dictionary sorted by their keys: + +:: + + def sortedDictValues3(adict): + keys = adict.keys() + keys.sort() + return map(adict.get, keys) + + +Alex provides 3 algorithms and this is the 3rd and fastest of the set. The C +version of this same algorithm follows:: + + def c_sort(adict): + assert(type(adict) == type({})) + code = """ + #line 21 "dict_sort.py" + Py::List keys = adict.keys(); + Py::List items(keys.length()); keys.sort(); + PyObject* item = NULL; + for(int i = 0; i < keys.length();i++) + { + item = PyList_GET_ITEM(keys.ptr(),i); + item = PyDict_GetItem(adict.ptr(),item); + Py_XINCREF(item); + PyList_SetItem(items.ptr(),i,item); + } + return_val = Py::new_reference_to(items); + """ + return inline_tools.inline(code,['adict'],verbose=1) + + +Like the original Python function, the C++ version can handle any Python +dictionary regardless of the key/value pair types. It uses CXX objects for +the most part to declare python types in C++, but uses Python API calls to +manipulate their contents. Again, this choice is made for speed. The C++ +version, while more complicated, is about a factor of 2 faster than Python. + +:: + + C:\home\ej\wrk\scipy\weave\examples> python dict_sort.py + Dict sort of 1000 items for 300 iterations: + speed in python: 0.319999933243 + [0, 1, 2, 3, 4] + speed in c: 0.151000022888 + speed up: 2.12 + [0, 1, 2, 3, 4] + + + +NumPy -- cast/copy/transpose +---------------------------- + +CastCopyTranspose is a function called quite heavily by Linear Algebra +routines in the NumPy library. Its needed in part because of the row-major +memory layout of multi-demensional Python (and C) arrays vs. the col-major +order of the underlying Fortran algorithms. For small matrices (say 100x100 +or less), a significant portion of the common routines such as LU +decompisition or singular value decompostion are spent in this setup routine. +This shouldn't happen. Here is the Python version of the function using +standard NumPy operations. + +:: + + def _castCopyAndTranspose(type, array): + if a.typecode() == type: + cast_array = copy.copy(NumPy.transpose(a)) + else: + cast_array = copy.copy(NumPy.transpose(a).astype(type)) + return cast_array + + +And the following is a inline C version of the same function:: + + from weave.blitz_tools import blitz_type_factories + from weave import scalar_spec + from weave import inline + def _cast_copy_transpose(type,a_2d): + assert(len(shape(a_2d)) == 2) + new_array = zeros(shape(a_2d),type) + NumPy_type = scalar_spec.NumPy_to_blitz_type_mapping[type] + code = \ + """ + for(int i = 0;i < _Na_2d[0]; i++) + for(int j = 0; j < _Na_2d[1]; j++) + new_array(i,j) = (%s) a_2d(j,i); + """ % NumPy_type + inline(code,['new_array','a_2d'], + type_factories = blitz_type_factories,compiler='gcc') + return new_array + + +This example uses blitz++ arrays instead of the standard representation of +NumPy arrays so that indexing is simplier to write. 
This is accomplished by +passing in the blitz++ "type factories" to override the standard Python to +C++ type conversions. Blitz++ arrays allow you to write clean, fast code, but +they also are sloooow to compile (20 seconds or more for this snippet). This +is why they aren't the default type used for Numeric arrays (and also because +most compilers can't compile blitz arrays...). ``inline()`` is also forced to +use 'gcc' as the compiler because the default compiler on Windows (MSVC) will +not compile blitz code. ('gcc' I think will use the standard compiler on +Unix machine instead of explicitly forcing gcc (check this)) Comparisons of +the Python vs inline C++ code show a factor of 3 speed up. Also shown are the +results of an "inplace" transpose routine that can be used if the output of +the linear algebra routine can overwrite the original matrix (this is often +appropriate). This provides another factor of 2 improvement. + +:: + + #C:\home\ej\wrk\scipy\weave\examples> python cast_copy_transpose.py + # Cast/Copy/Transposing (150,150)array 1 times + # speed in python: 0.870999932289 + # speed in c: 0.25 + # speed up: 3.48 + # inplace transpose c: 0.129999995232 + # speed up: 6.70 + +wxPython +-------- + +``inline`` knows how to handle wxPython objects. Thats nice in and of itself, +but it also demonstrates that the type conversion mechanism is reasonably +flexible. Chances are, it won't take a ton of effort to support special types +you might have. The examples/wx_example.py borrows the scrolled window +example from the wxPython demo, accept that it mixes inline C code in the +middle of the drawing function. + +:: + + def DoDrawing(self, dc): + + red = wxNamedColour("RED"); + blue = wxNamedColour("BLUE"); + grey_brush = wxLIGHT_GREY_BRUSH; + code = \ + """ + #line 108 "wx_example.py" + dc->BeginDrawing(); + dc->SetPen(wxPen(*red,4,wxSOLID)); + dc->DrawRectangle(5,5,50,50); + dc->SetBrush(*grey_brush); + dc->SetPen(wxPen(*blue,4,wxSOLID)); + dc->DrawRectangle(15, 15, 50, 50); + """ + inline(code,['dc','red','blue','grey_brush']) + + dc.SetFont(wxFont(14, wxSWISS, wxNORMAL, wxNORMAL)) + dc.SetTextForeground(wxColour(0xFF, 0x20, 0xFF)) + te = dc.GetTextExtent("Hello World") + dc.DrawText("Hello World", 60, 65) + + dc.SetPen(wxPen(wxNamedColour('VIOLET'), 4)) + dc.DrawLine(5, 65+te[1], 60+te[0], 65+te[1]) + ... + +Here, some of the Python calls to wx objects were just converted to C++ +calls. There isn't any benefit, it just demonstrates the capabilities. You +might want to use this if you have a computationally intensive loop in your +drawing code that you want to speed up. On windows, you'll have to use the +MSVC compiler if you use the standard wxPython DLLs distributed by Robin +Dunn. Thats because MSVC and gcc, while binary compatible in C, are not +binary compatible for C++. In fact, its probably best, no matter what +platform you're on, to specify that ``inline`` use the same compiler that was +used to build wxPython to be on the safe side. There isn't currently a way to +learn this info from the library -- you just have to know. Also, at least on +the windows platform, you'll need to install the wxWindows libraries and link +to them. I think there is a way around this, but I haven't found it yet -- I +get some linking errors dealing with wxString. One final note. You'll +probably have to tweak weave/wx_spec.py or weave/wx_info.py for your +machine's configuration to point at the correct directories etc. There. That +should sufficiently scare people into not even looking at this... 
:)
+
+Keyword Option
+==============
+
+The basic definition of the ``inline()`` function has a slew of optional
+variables. It also takes keyword arguments that are passed to ``distutils``
+as compiler options. The following is a formatted cut/paste of the argument
+section of ``inline``'s doc-string. It explains all of the variables. Some
+examples using various options will follow.
+
+::
+
+    def inline(code,arg_names,local_dict = None, global_dict = None,
+               force = 0,
+               compiler='',
+               verbose = 0,
+               support_code = None,
+               customize=None,
+               type_factories = None,
+               auto_downcast=1,
+               **kw):
+
+
+``inline`` has quite a few options as listed below. Also, the keyword
+arguments for distutils extension modules are accepted to specify extra
+information needed for compiling.
+
+Inline Arguments
+================
+
+code
+    string. A string of valid C++ code. It should not specify a return
+    statement. Instead it should assign results that need to be returned to
+    Python in the ``return_val``.
+arg_names
+    list of strings. A list of Python variable names that should be
+    transferred from Python into the C/C++ code.
+local_dict
+    optional. dictionary. If specified, it is a dictionary of values that
+    should be used as the local scope for the C/C++ code. If local_dict is
+    not specified the local dictionary of the calling function is used.
+global_dict
+    optional. dictionary. If specified, it is a dictionary of values that
+    should be used as the global scope for the C/C++ code. If global_dict is
+    not specified the global dictionary of the calling function is used.
+force
+    optional. 0 or 1. default 0. If 1, the C++ code is compiled every time
+    inline is called. This is really only useful for debugging, and probably
+    only useful if you're editing support_code a lot.
+compiler
+    optional. string. The name of the compiler to use when compiling. On
+    windows, it understands 'msvc' and 'gcc' as well as all the compiler
+    names understood by distutils. On Unix, it'll only understand the values
+    understood by distutils. (I should add 'gcc' though to this.)
+
+    On windows, the compiler defaults to the Microsoft C++ compiler. If this
+    isn't available, it looks for mingw32 (the gcc compiler).
+
+    On Unix, it'll probably use the same compiler that was used when
+    compiling Python. Cygwin's behavior should be similar.
+verbose
+    optional. 0, 1, or 2. default 0. Specifies how much information is
+    printed during the compile phase of inlining code. 0 is silent (except on
+    windows with msvc where it still prints some garbage). 1 informs you when
+    compiling starts, finishes, and how long it took. 2 prints out the
+    command lines for the compilation process and can be useful if you're
+    having problems getting code to work. It's handy for finding the name of
+    the .cpp file if you need to examine it. verbose has no effect if the
+    compilation isn't necessary.
+support_code
+    optional. string. A string of valid C++ code declaring extra code that
+    might be needed by your compiled function. This could be declarations of
+    functions, classes, or structures.
+customize
+    optional. base_info.custom_info object. An alternative way to specify
+    support_code, headers, etc. needed by the function. See the
+    weave.base_info module for more details. (Not sure this'll be used much.)
+type_factories
+    optional. list of type specification factories. These guys are what
+    convert Python data types to C/C++ data types. If you'd like to use a
+    different set of type conversions than the default, specify them here.
+    Look in the type conversions section of the main documentation for
+    examples.
+auto_downcast
+    optional. 0 or 1. default 1. This only affects functions that have
+    Numeric arrays as input variables. Setting this to 1 will cause all
+    floating point values to be cast as float instead of double if all the
+    NumPy arrays are of type float. If even one of the arrays has type double
+    or double complex, all variables maintain their standard types.
+
+
+Distutils keywords
+==================
+
+``inline()`` also accepts a number of ``distutils`` keywords for
+controlling how the code is compiled. The following descriptions have been
+copied from Greg Ward's ``distutils.extension.Extension`` class doc-strings
+for convenience:
+
+sources
+    [string] list of source filenames, relative to the distribution root
+    (where the setup script lives), in Unix form (slash-separated) for
+    portability. Source files may be C, C++, SWIG (.i), platform-specific
+    resource files, or whatever else is recognized by the "build_ext" command
+    as source for a Python extension. Note: The module_path file is always
+    appended to the front of this list.
+include_dirs
+    [string] list of directories to search for C/C++ header files (in Unix
+    form for portability).
+define_macros
+    [(name : string, value : string|None)] list of macros to define; each
+    macro is defined using a 2-tuple, where 'value' is either the string to
+    define it to or None to define it without a particular value (equivalent
+    of "#define FOO" in source or -DFOO on Unix C compiler command line).
+undef_macros
+    [string] list of macros to undefine explicitly.
+library_dirs
+    [string] list of directories to search for C/C++ libraries at link time.
+libraries
+    [string] list of library names (not filenames or paths) to link against.
+runtime_library_dirs
+    [string] list of directories to search for C/C++ libraries at run time
+    (for shared extensions, this is when the extension is loaded).
+extra_objects
+    [string] list of extra files to link with (eg. object files not implied
+    by 'sources', static library that must be explicitly specified, binary
+    resource files, etc.).
+extra_compile_args
+    [string] any extra platform- and compiler-specific information to use
+    when compiling the source files in 'sources'. For platforms and compilers
+    where "command line" makes sense, this is typically a list of
+    command-line arguments, but for other platforms it could be anything.
+extra_link_args
+    [string] any extra platform- and compiler-specific information to use
+    when linking object files together to create the extension (or to create
+    a new static Python interpreter). Similar interpretation as for
+    'extra_compile_args'.
+export_symbols
+    [string] list of symbols to be exported from a shared extension. Not used
+    on all platforms, and not generally necessary for Python extensions,
+    which typically export exactly one symbol: "init" + extension_name.
+
+
+Keyword Option Examples
+-----------------------
+
+We'll walk through several examples here to demonstrate the behavior of
+``inline`` and also how the various arguments are used. In the simplest
+(most) cases, ``code`` and ``arg_names`` are the only arguments that need to
+be specified. Here's a simple example run on a Windows machine that has
+Microsoft VC++ installed.
+
+::
+
+    >>> from weave import inline
+    >>> a = 'string'
+    >>> code = """
+    ... int l = a.length();
+    ... return_val = Py::new_reference_to(Py::Int(l));
+    ... 
""" + >>> inline(code,['a']) + sc_86e98826b65b047ffd2cd5f479c627f12.cpp + Creating + library C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f12.lib + and object C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ff + d2cd5f479c627f12.exp + 6 + >>> inline(code,['a']) + 6 + + +When ``inline`` is first run, you'll notice that pause and some trash printed +to the screen. The "trash" is acutually part of the compilers output that +distutils does not supress. The name of the extension file, +``sc_bighonkingnumber.cpp``, is generated from the md5 check sum of the C/C++ +code fragment. On Unix or windows machines with only gcc installed, the trash +will not appear. On the second call, the code fragment is not compiled since +it already exists, and only the answer is returned. Now kill the interpreter +and restart, and run the same code with a different string. + +:: + + >>> from weave import inline + >>> a = 'a longer string' + >>> code = """ + ... int l = a.length(); + ... return_val = Py::new_reference_to(Py::Int(l)); + ... """ + >>> inline(code,['a']) + 15 + + +Notice this time, ``inline()`` did not recompile the code because it found +the compiled function in the persistent catalog of functions. There is a +short pause as it looks up and loads the function, but it is much shorter +than compiling would require. + +You can specify the local and global dictionaries if you'd like (much like +``exec`` or ``eval()`` in Python), but if they aren't specified, the +"expected" ones are used -- i.e. the ones from the function that called +``inline()``. This is accomplished through a little call frame trickery. +Here is an example where the local_dict is specified using the same code +example from above:: + + >>> a = 'a longer string' + >>> b = 'an even longer string' + >>> my_dict = {'a':b} + >>> inline(code,['a']) + 15 + >>> inline(code,['a'],my_dict) + 21 + + +Everytime, the ``code`` is changed, ``inline`` does a recompile. However, +changing any of the other options in inline does not force a recompile. The +``force`` option was added so that one could force a recompile when tinkering +with other variables. In practice, it is just as easy to change the ``code`` +by a single character (like adding a space some place) to force the +recompile. + +.. note:: + It also might be nice to add some methods for purging the + cache and on disk catalogs. + +I use ``verbose`` sometimes for debugging. When set to 2, it'll output all +the information (including the name of the .cpp file) that you'd expect from +running a make file. This is nice if you need to examine the generated code +to see where things are going haywire. Note that error messages from failed +compiles are printed to the screen even if ``verbose`` is set to 0. + +The following example demonstrates using gcc instead of the standard msvc +compiler on windows using same code fragment as above. Because the example +has already been compiled, the ``force=1`` flag is needed to make +``inline()`` ignore the previously compiled version and recompile using gcc. 
+The verbose flag is added to show what is printed out:: + + >>>inline(code,['a'],compiler='gcc',verbose=2,force=1) + running build_ext + building 'sc_86e98826b65b047ffd2cd5f479c627f13' extension + c:\gcc-2.95.2\bin\g++.exe -mno-cygwin -mdll -O2 -w -Wstrict-prototypes -IC: + \home\ej\wrk\scipy\weave -IC:\Python21\Include -c C:\DOCUME~1\eric\LOCAL + S~1\Temp\python21_compiled\sc_86e98826b65b047ffd2cd5f479c627f13.cpp + -o C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b04ffd2cd5f479c627f13.o + skipping C:\home\ej\wrk\scipy\weave\CXX\cxxextensions.c + (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxextensions.o up-to-date) + skipping C:\home\ej\wrk\scipy\weave\CXX\cxxsupport.cxx + (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxsupport.o up-to-date) + skipping C:\home\ej\wrk\scipy\weave\CXX\IndirectPythonInterface.cxx + (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\indirectpythoninterface.o up-to-date) + skipping C:\home\ej\wrk\scipy\weave\CXX\cxx_extensions.cxx + (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxx_extensions.o + up-to-date) + writing C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f13.def + c:\gcc-2.95.2\bin\dllwrap.exe --driver-name g++ -mno-cygwin + -mdll -static --output-lib + C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\libsc_86e98826b65b047ffd2cd5f479c627f13.a --def + C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f13.def + -sC:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f13.o + C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxextensions.o + C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxsupport.o + C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\indirectpythoninterface.o + C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxx_extensions.o -LC:\Python21\libs + -lpython21 -o + C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\sc_86e98826b65b047ffd2cd5f479c627f13.pyd + 15 + +That's quite a bit of output. ``verbose=1`` just prints the compile time. + +:: + + >>>inline(code,['a'],compiler='gcc',verbose=1,force=1) + Compiling code... + finished compiling (sec): 6.00800001621 + 15 + + +.. note:: + I've only used the ``compiler`` option for switching between 'msvc' + and 'gcc' on windows. It may have use on Unix also, but I don't know yet. + +The ``support_code`` argument is likely to be used a lot. It allows you to +specify extra code fragments such as function, structure or class definitions +that you want to use in the ``code`` string. Note that changes to +``support_code`` do *not* force a recompile. The catalog only relies on +``code`` (for performance reasons) to determine whether recompiling is +necessary. So, if you make a change to support_code, you'll need to alter +``code`` in some way or use the ``force`` argument to get the code to +recompile. I usually just add some inocuous whitespace to the end of one of +the lines in ``code`` somewhere. Here's an example of defining a separate +method for calculating the string length: + +:: + + >>> from weave import inline + >>> a = 'a longer string' + >>> support_code = """ + ... PyObject* length(Py::String a) + ... { + ... int l = a.length(); + ... return Py::new_reference_to(Py::Int(l)); + ... } + ... """ + >>> inline("return_val = length(a);",['a'], + ... 
support_code = support_code) + 15 + + +``customize`` is a left over from a previous way of specifying compiler +options. It is a ``custom_info`` object that can specify quite a bit of +information about how a file is compiled. These ``info`` objects are the +standard way of defining compile information for type conversion classes. +However, I don't think they are as handy here, especially since we've exposed +all the keyword arguments that distutils can handle. Between these keywords, +and the ``support_code`` option, I think ``customize`` may be obsolete. We'll +see if anyone cares to use it. If not, it'll get axed in the next version. + +The ``type_factories`` variable is important to people who want to customize +the way arguments are converted from Python to C. We'll talk about this in +the next chapter **xx** of this document when we discuss type conversions. + +``auto_downcast`` handles one of the big type conversion issues that is +common when using NumPy arrays in conjunction with Python scalar values. If +you have an array of single precision values and multiply that array by a +Python scalar, the result is upcast to a double precision array because the +scalar value is double precision. This is not usually the desired behavior +because it can double your memory usage. ``auto_downcast`` goes some distance +towards changing the casting precedence of arrays and scalars. If your only +using single precision arrays, it will automatically downcast all scalar +values from double to single precision when they are passed into the C++ +code. This is the default behavior. If you want all values to keep there +default type, set ``auto_downcast`` to 0. + + +Returning Values +---------------- + +Python variables in the local and global scope transfer seemlessly from +Python into the C++ snippets. And, if ``inline`` were to completely live up +to its name, any modifications to variables in the C++ code would be +reflected in the Python variables when control was passed back to Python. For +example, the desired behavior would be something like:: + + # THIS DOES NOT WORK + >>> a = 1 + >>> weave.inline("a++;",['a']) + >>> a + 2 + + +Instead you get:: + + >>> a = 1 + >>> weave.inline("a++;",['a']) + >>> a + 1 + + +Variables are passed into C++ as if you are calling a Python function. +Python's calling convention is sometimes called "pass by assignment". This +means its as if a ``c_a = a`` assignment is made right before ``inline`` call +is made and the ``c_a`` variable is used within the C++ code. Thus, any +changes made to ``c_a`` are not reflected in Python's ``a`` variable. Things +do get a little more confusing, however, when looking at variables with +mutable types. Changes made in C++ to the contents of mutable types *are* +reflected in the Python variables. + +:: + + >>> a= [1,2] + >>> weave.inline("PyList_SetItem(a.ptr(),0,PyInt_FromLong(3));",['a']) + >>> print a + [3, 2] + + +So modifications to the contents of mutable types in C++ are seen when +control is returned to Python. Modifications to immutable types such as +tuples, strings, and numbers do not alter the Python variables. If you need +to make changes to an immutable variable, you'll need to assign the new value +to the "magic" variable ``return_val`` in C++. This value is returned by the +``inline()`` function:: + + >>> a = 1 + >>> a = weave.inline("return_val = Py::new_reference_to(Py::Int(a+1));",['a']) + >>> a + 2 + + +The ``return_val`` variable can also be used to return newly created values. 
+This is possible by returning a tuple. The following trivial example +illustrates how this can be done:: + + # python version + def multi_return(): + return 1, '2nd' + + # C version. + def c_multi_return(): + code = """ + py::tuple results(2); + results[0] = 1; + results[1] = "2nd"; + return_val = results; + """ + return inline_tools.inline(code) + +The example is available in ``examples/tuple_return.py``. It also has the +dubious honor of demonstrating how much ``inline()`` can slow things down. +The C version here is about 7-10 times slower than the Python version. Of +course, something so trivial has no reason to be written in C anyway. + + +The issue with ``locals()`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``inline`` passes the ``locals()`` and ``globals()`` dictionaries from Python +into the C++ function from the calling function. It extracts the variables +that are used in the C++ code from these dictionaries, converts then to C++ +variables, and then calculates using them. It seems like it would be trivial, +then, after the calculations were finished to then insert the new values back +into the ``locals()`` and ``globals()`` dictionaries so that the modified +values were reflected in Python. Unfortunately, as pointed out by the Python +manual, the locals() dictionary is not writable. + +I suspect ``locals()`` is not writable because there are some optimizations +done to speed lookups of the local namespace. I'm guessing local lookups +don't always look at a dictionary to find values. Can someone "in the know" +confirm or correct this? Another thing I'd like to know is whether there is a +way to write to the local namespace of another stack frame from C/C++. If so, +it would be possible to have some clean up code in compiled functions that +wrote final values of variables in C++ back to the correct Python stack +frame. I think this goes a long way toward making ``inline`` truely live up +to its name. I don't think we'll get to the point of creating variables in +Python for variables created in C -- although I suppose with a C/C++ parser +you could do that also. + + +A quick look at the code +------------------------ + +``weave`` generates a C++ file holding an extension function for each +``inline`` code snippet. These file names are generated using from the md5 +signature of the code snippet and saved to a location specified by the +PYTHONCOMPILED environment variable (discussed later). The cpp files are +generally about 200-400 lines long and include quite a few functions to +support type conversions, etc. However, the actual compiled function is +pretty simple. Below is the familiar ``printf`` example: + +:: + + >>> import weave + >>> a = 1 + >>> weave.inline('printf("%d\\n",a);',['a']) + 1 + + +And here is the extension function generated by ``inline``:: + + static PyObject* compiled_func(PyObject*self, PyObject* args) + { + py::object return_val; + int exception_occured = 0; + PyObject *py__locals = NULL; + PyObject *py__globals = NULL; + PyObject *py_a; + py_a = NULL; + + if(!PyArg_ParseTuple(args,"OO:compiled_func",&py__locals,&py__globals)) + return NULL; + try + { + PyObject* raw_locals = py_to_raw_dict(py__locals,"_locals"); + PyObject* raw_globals = py_to_raw_dict(py__globals,"_globals"); + /* argument conversion code */ + py_a = get_variable("a",raw_locals,raw_globals); + int a = convert_to_int(py_a,"a"); + /* inline code */ + /* NDARRAY API VERSION 90907 */ + printf("%d\n",a); /*I would like to fill in changed locals and globals here...*/ + } + catch(...) 
+ { + return_val = py::object(); + exception_occured = 1; + } + /* cleanup code */ + if(!(PyObject*)return_val && !exception_occured) + { + return_val = Py_None; + } + return return_val.disown(); + } + +Every inline function takes exactly two arguments -- the local and global +dictionaries for the current scope. All variable values are looked up out of +these dictionaries. The lookups, along with all ``inline`` code execution, +are done within a C++ ``try`` block. If the variables aren't found, or there +is an error converting a Python variable to the appropriate type in C++, an +exception is raised. The C++ exception is automatically converted to a Python +exception by SCXX and returned to Python. The ``py_to_int()`` function +illustrates how the conversions and exception handling works. py_to_int first +checks that the given PyObject* pointer is not NULL and is a Python integer. +If all is well, it calls the Python API to convert the value to an ``int``. +Otherwise, it calls ``handle_bad_type()`` which gathers information about +what went wrong and then raises a SCXX TypeError which returns to Python as a +TypeError. + +:: + + int py_to_int(PyObject* py_obj,char* name) + { + if (!py_obj || !PyInt_Check(py_obj)) + handle_bad_type(py_obj,"int", name); + return (int) PyInt_AsLong(py_obj); + } + + +:: + + void handle_bad_type(PyObject* py_obj, char* good_type, char* var_name) + { + char msg[500]; + sprintf(msg,"received '%s' type instead of '%s' for variable '%s'", + find_type(py_obj),good_type,var_name); + throw Py::TypeError(msg); + } + + char* find_type(PyObject* py_obj) + { + if(py_obj == NULL) return "C NULL value"; + if(PyCallable_Check(py_obj)) return "callable"; + if(PyString_Check(py_obj)) return "string"; + if(PyInt_Check(py_obj)) return "int"; + if(PyFloat_Check(py_obj)) return "float"; + if(PyDict_Check(py_obj)) return "dict"; + if(PyList_Check(py_obj)) return "list"; + if(PyTuple_Check(py_obj)) return "tuple"; + if(PyFile_Check(py_obj)) return "file"; + if(PyModule_Check(py_obj)) return "module"; + + //should probably do more interagation (and thinking) on these. + if(PyCallable_Check(py_obj) && PyInstance_Check(py_obj)) return "callable"; + if(PyInstance_Check(py_obj)) return "instance"; + if(PyCallable_Check(py_obj)) return "callable"; + return "unkown type"; + } + +Since the ``inline`` is also executed within the ``try/catch`` block, you can +use CXX exceptions within your code. It is usually a bad idea to directly +``return`` from your code, even if an error occurs. This skips the clean up +section of the extension function. In this simple example, there isn't any +clean up code, but in more complicated examples, there may be some reference +counting that needs to be taken care of here on converted variables. To avoid +this, either uses exceptions or set ``return_val`` to NULL and use +``if/then's`` to skip code after errors. + +Technical Details +================= + +There are several main steps to using C/C++ code withing Python: + +1. Type conversion +2. Generating C/C++ code +3. Compile the code to an extension module +4. Catalog (and cache) the function for future use + +Items 1 and 2 above are related, but most easily discussed separately. Type +conversions are customizable by the user if needed. Understanding them is +pretty important for anything beyond trivial uses of ``inline``. Generating +the C/C++ code is handled by ``ext_function`` and ``ext_module`` classes and +. For the most part, compiling the code is handled by distutils. 
Some +customizations were needed, but they were relatively minor and do not require +changes to distutils itself. Cataloging is pretty simple in concept, but +surprisingly required the most code to implement (and still likely needs some +work). So, this section covers items 1 and 4 from the list. Item 2 is covered +later in the chapter covering the ``ext_tools`` module, and distutils is +covered by a completely separate document xxx. + + +Passing Variables in/out of the C/C++ code +========================================== + +.. note:: + Passing variables into the C code is pretty straight forward, but + there are subtlties to how variable modifications in C are returned to + Python. see `Returning Values`_ for a more thorough discussion of this issue. + +Type Conversions +================ + +.. note:: + Maybe ``xxx_converter`` instead of ``xxx_specification`` is a more + descriptive name. Might change in future version? + +By default, ``inline()`` makes the following type conversions between Python +and C++ types. + +.. table:: Default Data Type Conversions + + ============= ======= + Python C++ + ============= ======= + int int + float double + complex std::complex + string py::string + list py::list + dict py::dict + tuple py::tuple + file FILE* + callable py::object + instance py::object + numpy.ndarray PyArrayObject* + wxXXX wxXXX* + ============= ======= + +The ``Py::`` namespace is defined by the SCXX library which has C++ class +equivalents for many Python types. ``std::`` is the namespace of the standard +library in C++. + + +.. note:: + - I haven't figured out how to handle ``long int`` yet (I think they + are currenlty converted to int - - check this). + - Hopefully VTK will be added to the list soon + +Python to C++ conversions fill in code in several locations in the generated +``inline`` extension function. Below is the basic template for the function. +This is actually the exact code that is generated by calling +``weave.inline("")``. + + +The ``/* inline code */`` section is filled with the code passed to the +``inline()`` function call. The ``/*argument convserion code*/`` and ``/* +cleanup code */`` sections are filled with code that handles conversion from +Python to C++ types and code that deallocates memory or manipulates reference +counts before the function returns. The following sections demostrate how +these two areas are filled in by the default conversion methods. * Note: I'm +not sure I have reference counting correct on a few of these. The only thing +I increase/decrease the ref count on is NumPy arrays. If you see an issue, +please let me know. + +NumPy Argument Conversion +------------------------- + +Integer, floating point, and complex arguments are handled in a very similar +fashion. Consider the following inline function that has a single integer +variable passed in:: + + >>> a = 1 + >>> inline("",['a']) + + +The argument conversion code inserted for ``a`` is:: + + /* argument conversion code */ + int a = py_to_int (get_variable("a",raw_locals,raw_globals),"a"); + +``get_variable()`` reads the variable ``a`` from the local and global +namespaces. 
``py_to_int()`` has the following form:: + + static int py_to_int(PyObject* py_obj,char* name) + { + if (!py_obj || !PyInt_Check(py_obj)) + handle_bad_type(py_obj,"int", name); + return (int) PyInt_AsLong(py_obj); + } + + +Similarly, the float and complex conversion routines look like:: + + static double py_to_float(PyObject* py_obj,char* name) + { + if (!py_obj || !PyFloat_Check(py_obj)) + handle_bad_type(py_obj,"float", name); + return PyFloat_AsDouble(py_obj); + } + + static std::complex py_to_complex(PyObject* py_obj,char* name) + { + if (!py_obj || !PyComplex_Check(py_obj)) + handle_bad_type(py_obj,"complex", name); + return std::complex(PyComplex_RealAsDouble(py_obj), + PyComplex_ImagAsDouble(py_obj)); + } + +NumPy conversions do not require any clean up code. + +String, List, Tuple, and Dictionary Conversion +---------------------------------------------- + +Strings, Lists, Tuples and Dictionary conversions are all converted to SCXX +types by default. For the following code, + +:: + + >>> a = [1] + >>> inline("",['a']) + + +The argument conversion code inserted for ``a`` is:: + + /* argument conversion code */ + Py::List a = py_to_list(get_variable("a",raw_locals,raw_globals),"a"); + + +``get_variable()`` reads the variable ``a`` from the local and global +namespaces. ``py_to_list()`` and its friends has the following form:: + + static Py::List py_to_list(PyObject* py_obj,char* name) + { + if (!py_obj || !PyList_Check(py_obj)) + handle_bad_type(py_obj,"list", name); + return Py::List(py_obj); + } + + static Py::String py_to_string(PyObject* py_obj,char* name) + { + if (!PyString_Check(py_obj)) + handle_bad_type(py_obj,"string", name); + return Py::String(py_obj); + } + + static Py::Dict py_to_dict(PyObject* py_obj,char* name) + { + if (!py_obj || !PyDict_Check(py_obj)) + handle_bad_type(py_obj,"dict", name); + return Py::Dict(py_obj); + } + + static Py::Tuple py_to_tuple(PyObject* py_obj,char* name) + { + if (!py_obj || !PyTuple_Check(py_obj)) + handle_bad_type(py_obj,"tuple", name); + return Py::Tuple(py_obj); + } + +SCXX handles reference counts on for strings, lists, tuples, and +dictionaries, so clean up code isn't necessary. + +File Conversion +--------------- + +For the following code, + +:: + + >>> a = open("bob",'w') + >>> inline("",['a']) + + +The argument conversion code is:: + + /* argument conversion code */ + PyObject* py_a = get_variable("a",raw_locals,raw_globals); + FILE* a = py_to_file(py_a,"a"); + + +``get_variable()`` reads the variable ``a`` from the local and global +namespaces. ``py_to_file()`` converts PyObject* to a FILE* and increments the +reference count of the PyObject*:: + + FILE* py_to_file(PyObject* py_obj, char* name) + { + if (!py_obj || !PyFile_Check(py_obj)) + handle_bad_type(py_obj,"file", name); + + Py_INCREF(py_obj); + return PyFile_AsFile(py_obj); + } + +Because the PyObject* was incremented, the clean up code needs to decrement +the counter + +:: + + /* cleanup code */ + Py_XDECREF(py_a); + + +Its important to understand that file conversion only works on actual files +-- i.e. ones created using the ``open()`` command in Python. It does not +support converting arbitrary objects that support the file interface into C +``FILE*`` pointers. This can affect many things. For example, in initial +``printf()`` examples, one might be tempted to solve the problem of C and +Python IDE's (PythonWin, PyCrust, etc.) writing to different stdout and +stderr by using ``fprintf()`` and passing in ``sys.stdout`` and +``sys.stderr``. 
For example, instead of + +:: + + >>> weave.inline('printf("hello\\n");') + + +You might try: + +:: + + >>> buf = sys.stdout + >>> weave.inline('fprintf(buf,"hello\\n");',['buf']) + + +This will work as expected from a standard python interpreter, but in +PythonWin, the following occurs: + +:: + + >>> buf = sys.stdout + >>> weave.inline('fprintf(buf,"hello\\n");',['buf']) + Traceback (most recent call last): + File "", line 1, in ? + File "C:\Python21\weave\inline_tools.py", line 315, in inline + auto_downcast = auto_downcast, + File "C:\Python21\weave\inline_tools.py", line 386, in compile_function + type_factories = type_factories) + File "C:\Python21\weave\ext_tools.py", line 197, in __init__ + auto_downcast, type_factories) + File "C:\Python21\weave\ext_tools.py", line 390, in assign_variable_types + raise TypeError, format_error_msg(errors) + TypeError: {'buf': "Unable to convert variable 'buf' to a C++ type."} + + +The traceback tells us that ``inline()`` was unable to convert 'buf' to a C++ +type (If instance conversion was implemented, the error would have occurred +at runtime instead). Why is this? Let's look at what the ``buf`` object +really is:: + + >>> buf + pywin.framework.interact.InteractiveView instance at 00EAD014 + + +PythonWin has reassigned ``sys.stdout`` to a special object that implements +the Python file interface. This works great in Python, but since the special +object doesn't have a FILE* pointer underlying it, fprintf doesn't know what +to do with it (well this will be the problem when instance conversion is +implemented...). + +Callable, Instance, and Module Conversion +----------------------------------------- + + +.. note:: + Need to look into how ref counts should be handled. Also, Instance and + Module conversion are not currently implemented. + +:: + + >>> def a(): + pass + >>> inline("",['a']) + + +Callable and instance variables are converted to PyObject*. Nothing is done +to there reference counts. + +:: + + /* argument conversion code */ + PyObject* a = py_to_callable(get_variable("a",raw_locals,raw_globals),"a"); + + +``get_variable()`` reads the variable ``a`` from the local and global +namespaces. The ``py_to_callable()`` and ``py_to_instance()`` don't currently +increment the ref count. + +:: + + PyObject* py_to_callable(PyObject* py_obj, char* name) + { + if (!py_obj || !PyCallable_Check(py_obj)) + handle_bad_type(py_obj,"callable", name); + return py_obj; + } + + PyObject* py_to_instance(PyObject* py_obj, char* name) + { + if (!py_obj || !PyFile_Check(py_obj)) + handle_bad_type(py_obj,"instance", name); + return py_obj; + } + +There is no cleanup code for callables, modules, or instances. + +Customizing Conversions +----------------------- + +Converting from Python to C++ types is handled by xxx_specification classes. +A type specification class actually serve in two related but different roles. +The first is in determining whether a Python variable that needs to be +converted should be represented by the given class. The second is as a code +generator that generate C++ code needed to convert from Python to C++ types +for a specific variable. + +When + +:: + + >>> a = 1 + >>> weave.inline('printf("%d",a);',['a']) + + +is called for the first time, the code snippet has to be compiled. In this +process, the variable 'a' is tested against a list of type specifications +(the default list is stored in weave/ext_tools.py). The *first* specification +in the list is used to represent the variable. 
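+
+The first-match rule is easy to picture with a small, purely illustrative
+Python sketch. The class and function names below are invented for this
+example -- they are not weave's actual ``xxx_specification`` API -- but the
+selection logic is the same idea::
+
+    # hypothetical stand-ins for weave's type specification classes
+    class int_spec:
+        def type_match(self, value):
+            # claim Python integers, using the same type check idiom as
+            # the asserts earlier in this document
+            return type(value) == type(1)
+
+    class default_spec:
+        def type_match(self, value):
+            # fallback that accepts anything
+            return 1
+
+    def choose_spec(value, factories):
+        # walk the list in order; the *first* spec that matches wins
+        for spec in factories:
+            if spec.type_match(value):
+                return spec
+        raise TypeError("no specification matches " + repr(value))
+
+    specs = [int_spec(), default_spec()]
+    print choose_spec(1, specs).__class__.__name__        # -> int_spec
+    print choose_spec('abc', specs).__class__.__name__    # -> default_spec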
+ +Examples of ``xxx_specification`` are scattered throughout numerous +"xxx_spec.py" files in the ``weave`` package. Closely related to the +``xxx_specification`` classes are ``yyy_info`` classes. These classes contain +compiler, header, and support code information necessary for including a +certain set of capabilities (such as blitz++ or CXX support) in a compiled +module. ``xxx_specification`` classes have one or more ``yyy_info`` classes +associated with them. If you'd like to define your own set of type +specifications, the current best route is to examine some of the existing +spec and info files. Maybe looking over sequence_spec.py and cxx_info.py are +a good place to start. After defining specification classes, you'll need to +pass them into ``inline`` using the ``type_factories`` argument. A lot of +times you may just want to change how a specific variable type is +represented. Say you'd rather have Python strings converted to +``std::string`` or maybe ``char*`` instead of using the CXX string object, +but would like all other type conversions to have default behavior. This +requires that a new specification class that handles strings is written and +then prepended to a list of the default type specifications. Since it is +closer to the front of the list, it effectively overrides the default string +specification. The following code demonstrates how this is done: ... + + +The Catalog +=========== + +``catalog.py`` has a class called ``catalog`` that helps keep track of +previously compiled functions. This prevents ``inline()`` and related +functions from having to compile functions everytime they are called. +Instead, catalog will check an in memory cache to see if the function has +already been loaded into python. If it hasn't, then it starts searching +through persisent catalogs on disk to see if it finds an entry for the given +function. By saving information about compiled functions to disk, it isn't +necessary to re-compile functions everytime you stop and restart the +interpreter. Functions are compiled once and stored for future use. + +When ``inline(cpp_code)`` is called the following things happen: + +1. A fast local cache of functions is checked for the last function + called for ``cpp_code``. If an entry for ``cpp_code`` doesn't exist in + the cache or the cached function call fails (perhaps because the function + doesn't have compatible types) then the next step is to check the + catalog. + +2. The catalog class also keeps an in-memory cache with a list of all + the functions compiled for ``cpp_code``. If ``cpp_code`` has ever been + called, then this cache will be present (loaded from disk). If the cache + isn't present, then it is loaded from disk. + + If the cache is present, each function in the cache is called until + one is found that was compiled for the correct argument types. If none of + the functions work, a new function is compiled with the given argument + types. This function is written to the on-disk catalog as well as into + the in-memory cache. + +3. When a lookup for ``cpp_code`` fails, the catalog looks through the + on-disk function catalogs for the entries. The PYTHONCOMPILED variable + determines where to search for these catalogs and in what order. If + PYTHONCOMPILED is not present several platform dependent locations are + searched. All functions found for ``cpp_code`` in the path are loaded + into the in-memory cache with functions found earlier in the search path + closer to the front of the call list. 
+ + If the function isn't found in the on-disk catalog, then the function + is compiled, written to the first writable directory in the + PYTHONCOMPILED path, and also loaded into the in-memory cache. + + +Function Storage +---------------- + +Function caches are stored as dictionaries where the key is the entire C++ +code string and the value is either a single function (as in the "level 1" +cache) or a list of functions (as in the main catalog cache). On disk +catalogs are stored in the same manor using standard Python shelves. + +Early on, there was a question as to whether md5 check sums of the C++ code +strings should be used instead of the actual code strings. I think this is +the route inline Perl took. Some (admittedly quick) tests of the md5 vs. the +entire string showed that using the entire string was at least a factor of 3 +or 4 faster for Python. I think this is because it is more time consuming to +compute the md5 value than it is to do look-ups of long strings in the +dictionary. Look at the examples/md5_speed.py file for the test run. + + +Catalog search paths and the PYTHONCOMPILED variable +---------------------------------------------------- + +The default location for catalog files on Unix is is ~/.pythonXX_compiled +where XX is version of Python being used. If this directory doesn't exist, it +is created the first time a catalog is used. The directory must be writable. +If, for any reason it isn't, then the catalog attempts to create a directory +based on your user id in the /tmp directory. The directory permissions are +set so that only you have access to the directory. If this fails, I think +you're out of luck. I don't think either of these should ever fail though. On +Windows, a directory called pythonXX_compiled is created in the user's +temporary directory. + +The actual catalog file that lives in this directory is a Python shelve with +a platform specific name such as "nt21compiled_catalog" so that multiple OSes +can share the same file systems without trampling on each other. Along with +the catalog file, the .cpp and .so or .pyd files created by inline will live +in this directory. The catalog file simply contains keys which are the C++ +code strings with values that are lists of functions. The function lists +point at functions within these compiled modules. Each function in the lists +executes the same C++ code string, but compiled for different input +variables. + +You can use the PYTHONCOMPILED environment variable to specify alternative +locations for compiled functions. On Unix this is a colon (':') separated +list of directories. On windows, it is a (';') separated list of directories. +These directories will be searched prior to the default directory for a +compiled function catalog. Also, the first writable directory in the list is +where all new compiled function catalogs, .cpp and .so or .pyd files are +written. Relative directory paths ('.' and '..') should work fine in the +PYTHONCOMPILED variable as should environement variables. + +There is a "special" path variable called MODULE that can be placed in the +PYTHONCOMPILED variable. It specifies that the compiled catalog should reside +in the same directory as the module that called it. This is useful if an +admin wants to build a lot of compiled functions during the build of a +package and then install them in site-packages along with the package. User's +who specify MODULE in their PYTHONCOMPILED variable will have access to these +compiled functions. 
Note, however, that if they call the function with a set +of argument types that it hasn't previously been built for, the new function +will be stored in their default directory (or some other writable directory +in the PYTHONCOMPILED path) because the user will not have write access to +the site-packages directory. + +An example of using the PYTHONCOMPILED path on bash follows:: + + PYTHONCOMPILED=MODULE:/some/path;export PYTHONCOMPILED; + + +If you are using python21 on linux, and the module bob.py in site-packages +has a compiled function in it, then the catalog search order when calling +that function for the first time in a python session would be:: + + /usr/lib/python21/site-packages/linuxpython_compiled + /some/path/linuxpython_compiled + ~/.python21_compiled/linuxpython_compiled + + +The default location is always included in the search path. + +.. note:: + hmmm. see a possible problem here. I should probably make a sub- + directory such as /usr/lib/python21/site- + packages/python21_compiled/linuxpython_compiled so that library files + compiled with python21 are tried to link with python22 files in some strange + scenarios. Need to check this. + +The in-module cache (in ``weave.inline_tools`` reduces the overhead of +calling inline functions by about a factor of 2. It can be reduced a little +more for type loop calls where the same function is called over and over +again if the cache was a single value instead of a dictionary, but the +benefit is very small (less than 5%) and the utility is quite a bit less. So, +we'll stick with a dictionary as the cache. + + +======= + Blitz +======= + +.. note:: + most of this section is lifted from old documentation. It should be + pretty accurate, but there may be a few discrepancies. + +``weave.blitz()`` compiles NumPy Python expressions for fast execution. For +most applications, compiled expressions should provide a factor of 2-10 +speed-up over NumPy arrays. Using compiled expressions is meant to be as +unobtrusive as possible and works much like pythons exec statement. As an +example, the following code fragment takes a 5 point average of the 512x512 +2d image, b, and stores it in array, a:: + + from scipy import * # or from NumPy import * + a = ones((512,512), Float64) + b = ones((512,512), Float64) + # ...do some stuff to fill in b... + # now average + a[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] \ + + b[1:-1,2:] + b[1:-1,:-2]) / 5. + + +To compile the expression, convert the expression to a string by putting +quotes around it and then use ``weave.blitz``:: + + import weave + expr = "a[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]" \ + "+ b[1:-1,2:] + b[1:-1,:-2]) / 5." + weave.blitz(expr) + + +The first time ``weave.blitz`` is run for a given expression and set of +arguements, C++ code that accomplishes the exact same task as the Python +expression is generated and compiled to an extension module. This can take up +to a couple of minutes depending on the complexity of the function. +Subsequent calls to the function are very fast. Futher, the generated module +is saved between program executions so that the compilation is only done once +for a given expression and associated set of array types. If the given +expression is executed with a new set of array types, the code most be +compiled again. This does not overwrite the previously compiled function -- +both of them are saved and available for exectution. 
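+
+As a small illustration of this per-type caching (an illustrative sketch
+only -- it assumes a working compiler and uses the same old Numeric-style
+type names as the rest of this document), running one expression at two
+precisions triggers two separate compiles, after which both compiled
+versions are reused::
+
+    import weave
+    from scipy import ones, Float32, Float64   # or from NumPy import *
+
+    expr = "a = 2. * b + c"
+    for typecode in [Float64, Float32]:
+        a = ones((512, 512), typecode)
+        b = ones((512, 512), typecode)
+        c = ones((512, 512), typecode)
+        weave.blitz(expr)    # compiles the first time each array type is seen
+        weave.blitz(expr)    # immediately reused from the catalog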
+
+The following table compares the run times for standard NumPy code and
+compiled code for the 5 point averaging.
+
+============================  ==============================
+Method                        Run Time (seconds)
+============================  ==============================
+Standard NumPy                0.46349
+blitz (1st time compiling)    78.95526
+blitz (subsequent calls)      0.05843 (factor of 8 speedup)
+============================  ==============================
+
+These numbers are for a 512x512 double precision image run on a 400 MHz
+Celeron processor under RedHat Linux 6.2.
+
+Because of the slow compile times, it's probably most effective to develop
+algorithms as you usually do using the capabilities of scipy or the NumPy
+module. Once the algorithm is perfected, put quotes around it and execute it
+using ``weave.blitz``. This provides the standard rapid prototyping strengths
+of Python and results in algorithms that run close to that of hand-coded C or
+Fortran.
+
+
+Requirements
+============
+
+Currently, ``weave.blitz`` has only been tested under Linux with
+gcc-2.95-3 and on Windows with Mingw32 (2.95.2). Its compiler requirements
+are pretty heavy duty (see the `blitz++ home page`_), so it won't work with
+just any compiler. Particularly MSVC++ isn't up to snuff. A number of other
+compilers such as KAI++ will also work, but my suspicions are that gcc will
+get the most use.
+
+Limitations
+===========
+
+1. Currently, ``weave.blitz`` handles all standard mathematical operators
+   except for the ** power operator. The built-in trigonometric, log,
+   floor/ceil, and fabs functions might work (but haven't been tested). It
+   also handles all types of array indexing supported by the NumPy module.
+   numarray's NumPy compatible array indexing modes are likewise supported,
+   but numarray's enhanced (array based) indexing modes are not supported.
+
+   ``weave.blitz`` does not currently support operations that use array
+   broadcasting, nor have any of the special purpose functions in NumPy such
+   as take, compress, etc. been implemented. Note that there are no obvious
+   reasons why most of this functionality cannot be added to scipy.weave, so
+   it will likely trickle into future versions. Using ``slice()`` objects
+   directly instead of ``start:stop:step`` is also not supported.
+
+2. Currently, ``weave.blitz`` only works on expressions that include
+   assignment such as
+
+   ::
+
+       >>> result = b + c + d
+
+   This means that the result array must exist before calling
+   ``weave.blitz``. Future versions will allow the following::
+
+       >>> result = weave.blitz_eval("b + c + d")
+
+3. ``weave.blitz`` works best when algorithms can be expressed in a
+   "vectorized" form. Algorithms that have a large number of if/thens and
+   other conditions are better hand-written in C or Fortran. Further, the
+   restrictions imposed by requiring vectorized expressions sometimes
+   preclude the use of more efficient data structures or algorithms. For
+   maximum speed in these cases, hand-coded C or Fortran code is the only
+   way to go.
+
+4. ``weave.blitz`` can produce different results than NumPy in certain
+   situations. It can happen when the array receiving the results of a
+   calculation is also used during the calculation. The NumPy behavior is to
+   carry out the entire calculation on the right hand side of an equation
+   and store it in a temporary array. This temporary array is assigned to
+   the array on the left hand side of the equation. blitz, on the other
+   hand, does a "running" calculation of the array elements, assigning values
+   from the right hand side to the elements on the left hand side
+   immediately after they are calculated.
+   Here is an example, provided by Prabhu Ramachandran, where this
+   happens::
+
+       # 4 point average.
+       >>> expr = "u[1:-1, 1:-1] = (u[0:-2, 1:-1] + u[2:, 1:-1] + " \
+       ...        "u[1:-1,0:-2] + u[1:-1, 2:])*0.25"
+       >>> u = zeros((5, 5), 'd'); u[0,:] = 100
+       >>> exec (expr)
+       >>> u
+       array([[ 100.,  100.,  100.,  100.,  100.],
+              [   0.,   25.,   25.,   25.,    0.],
+              [   0.,    0.,    0.,    0.,    0.],
+              [   0.,    0.,    0.,    0.,    0.],
+              [   0.,    0.,    0.,    0.,    0.]])
+
+       >>> u = zeros((5, 5), 'd'); u[0,:] = 100
+       >>> weave.blitz (expr)
+       >>> u
+       array([[ 100.       ,  100.       ,  100.       ,  100.       ,  100.       ],
+              [   0.       ,   25.       ,   31.25     ,   32.8125   ,    0.       ],
+              [   0.       ,    6.25     ,    9.375    ,   10.546875 ,    0.       ],
+              [   0.       ,    1.5625   ,    2.734375 ,    3.3203125,    0.       ],
+              [   0.       ,    0.       ,    0.       ,    0.       ,    0.       ]])
+
+   You can prevent this behavior by using a temporary array.
+
+   ::
+
+       >>> u = zeros((5, 5), 'd'); u[0,:] = 100
+       >>> temp = zeros((4, 4), 'd');
+       >>> expr = "temp = (u[0:-2, 1:-1] + u[2:, 1:-1] + " \
+       ...        "u[1:-1,0:-2] + u[1:-1, 2:])*0.25;" \
+       ...        "u[1:-1,1:-1] = temp"
+       >>> weave.blitz (expr)
+       >>> u
+       array([[ 100.,  100.,  100.,  100.,  100.],
+              [   0.,   25.,   25.,   25.,    0.],
+              [   0.,    0.,    0.,    0.,    0.],
+              [   0.,    0.,    0.,    0.,    0.],
+              [   0.,    0.,    0.,    0.,    0.]])
+
+5. One other point deserves mention lest people be confused.
+   ``weave.blitz`` is not a general purpose Python->C compiler. It only
+   works for expressions that contain NumPy arrays and/or Python scalar
+   values. This focused scope concentrates effort on the computationally
+   intensive regions of the program and sidesteps the difficult issues
+   associated with a general purpose Python->C compiler.
+
+
+NumPy efficiency issues: What compilation buys you
+==================================================
+
+Some might wonder why compiling NumPy expressions to C++ is beneficial, since
+operations on NumPy arrays are already executed within C loops. The
+problem is that anything other than the simplest expression is executed in
+a less than optimal fashion. Consider the following NumPy expression::
+
+    a = 1.2 * b + c * d
+
+
+When NumPy calculates the value for the 2d array, ``a``, it does the
+following steps::
+
+    temp1 = 1.2 * b
+    temp2 = c * d
+    a = temp1 + temp2
+
+
+Two things to note. Since ``b`` is a (perhaps large) array, a large
+temporary array must be created to store the results of ``1.2 * b``. The same
+is true for ``temp2``. Allocation is slow. The second thing is that we have 3
+loops executing, one to calculate ``temp1``, one for ``temp2`` and one for
+adding them up. A C loop for the same problem might look like::
+
+    for(int i = 0; i < M; i++)
+        for(int j = 0; j < N; j++)
+            a[i][j] = 1.2 * b[i][j] + c[i][j] * d[i][j];
+
+
+Here, the 3 loops have been fused into a single loop and there is no longer a
+need for a temporary array. This provides a significant speed improvement
+over the above example (write me and tell me what you get).
+
+So, converting NumPy expressions into C/C++ loops that fuse the loops and
+eliminate temporary arrays can provide big gains. The goal, then, is to
+convert NumPy expressions to C/C++ loops, compile them in an extension
+module, and then call the compiled extension function. The good news is that
+there is an obvious correspondence between the NumPy expression above and
+the C loop. The bad news is that NumPy is generally much more powerful than
+this simple example illustrates, and handling all possible indexing
+possibilities results in loops that are less than straightforward to write
+(take a peek at the NumPy source for confirmation). Luckily, there are
+several available tools that simplify the process.
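+
+Not in the original text, but for a concrete point of reference: the
+hand-fused loop above is essentially what you can already write yourself
+with ``weave.inline``. The sketch below assumes the blitz type converters
+(``converters.blitz``), which let the C++ snippet index the arrays as
+``a(i, j)``, and it passes the loop bounds in as plain Python integers::
+
+    import weave
+    from weave import converters
+    from numpy import ones, empty
+
+    b = ones((512, 512)); c = ones((512, 512)); d = ones((512, 512))
+    a = empty((512, 512))
+    m, n = a.shape
+
+    code = """
+           for (int i = 0; i < m; i++)
+               for (int j = 0; j < n; j++)
+                   a(i, j) = 1.2 * b(i, j) + c(i, j) * d(i, j);
+           """
+    weave.inline(code, ['a', 'b', 'c', 'd', 'm', 'n'],
+                 type_converters=converters.blitz)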
+ + +The Tools +========= + +``weave.blitz`` relies heavily on several remarkable tools. On the Python +side, the main facilitators are Jermey Hylton's parser module and Travis +Oliphant's NumPy module. On the compiled language side, Todd Veldhuizen's +blitz++ array library, written in C++ (shhhh. don't tell David Beazley), does +the heavy lifting. Don't assume that, because it's C++, it's much slower than +C or Fortran. Blitz++ uses a jaw dropping array of template techniques +(metaprogramming, template expression, etc) to convert innocent looking and +readable C++ expressions into to code that usually executes within a few +percentage points of Fortran code for the same problem. This is good. +Unfortunately all the template raz-ma-taz is very expensive to compile, so +the 200 line extension modules often take 2 or more minutes to compile. This +isn't so good. ``weave.blitz`` works to minimize this issue by remembering +where compiled modules live and reusing them instead of re-compiling every +time a program is re-run. + +Parser +------ + +Tearing NumPy expressions apart, examining the pieces, and then rebuilding +them as C++ (blitz) expressions requires a parser of some sort. I can imagine +someone attacking this problem with regular expressions, but it'd likely be +ugly and fragile. Amazingly, Python solves this problem for us. It actually +exposes its parsing engine to the world through the ``parser`` module. The +following fragment creates an Abstract Syntax Tree (AST) object for the +expression and then converts to a (rather unpleasant looking) deeply nested +list representation of the tree. + +:: + + >>> import parser + >>> import scipy.weave.misc + >>> ast = parser.suite("a = b * c + d") + >>> ast_list = ast.tolist() + >>> sym_list = scipy.weave.misc.translate_symbols(ast_list) + >>> pprint.pprint(sym_list) + ['file_input', + ['stmt', + ['simple_stmt', + ['small_stmt', + ['expr_stmt', + ['testlist', + ['test', + ['and_test', + ['not_test', + ['comparison', + ['expr', + ['xor_expr', + ['and_expr', + ['shift_expr', + ['arith_expr', + ['term', + ['factor', ['power', ['atom', ['NAME', 'a']]]]]]]]]]]]]]], + ['EQUAL', '='], + ['testlist', + ['test', + ['and_test', + ['not_test', + ['comparison', + ['expr', + ['xor_expr', + ['and_expr', + ['shift_expr', + ['arith_expr', + ['term', + ['factor', ['power', ['atom', ['NAME', 'b']]]], + ['STAR', '*'], + ['factor', ['power', ['atom', ['NAME', 'c']]]]], + ['PLUS', '+'], + ['term', + ['factor', ['power', ['atom', ['NAME', 'd']]]]]]]]]]]]]]]]], + ['NEWLINE', '']]], + ['ENDMARKER', '']] + + +Despite its looks, with some tools developed by Jermey H., its possible to +search these trees for specific patterns (sub-trees), extract the sub-tree, +manipulate them converting python specific code fragments to blitz code +fragments, and then re-insert it in the parse tree. The parser module +documentation has some details on how to do this. Traversing the new +blitzified tree, writing out the terminal symbols as you go, creates our new +blitz++ expression string. + +Blitz and NumPy +--------------- + +The other nice discovery in the project is that the data structure used for +NumPy arrays and blitz arrays is nearly identical. NumPy stores "strides" as +byte offsets and blitz stores them as element offsets, but other than that, +they are the same. Further, most of the concept and capabilities of the two +libraries are remarkably similar. It is satisfying that two completely +different implementations solved the problem with similar basic +architectures. 
It is also fortuitous. The work involved in converting NumPy
+expressions to blitz expressions was greatly diminished. As an example,
+consider the code for slicing an array in Python with a stride::
+
+    >>> b = arange(10)
+    >>> c = zeros(2, int)
+    >>> a = b[0:4:2] + c
+    >>> a
+    array([0, 2])
+
+
+In Blitz it is as follows::
+
+    Array<int,1> b(10);
+    Array<int,1> c(2);
+    // ...
+    Array<int,1> a = b(Range(0,3,2)) + c;
+
+
+Here the range object works exactly like Python slice objects with the
+exception that the top index (3) is inclusive whereas Python's (4) is
+exclusive. Other differences include the type declarations in C++ and
+parentheses instead of brackets for indexing arrays. Currently,
+``weave.blitz`` handles the inclusive/exclusive issue by subtracting one from
+upper indices during the translation. An alternative that is likely more
+robust/maintainable in the long run is to write a PyRange class that behaves
+like Python's range. This is likely very easy.
+
+The stock blitz also doesn't handle negative indices in ranges. The current
+implementation of ``blitz()`` has a partial solution to this problem. It
+calculates an index that starts with a '-' sign by subtracting it from the
+maximum index in the array so that::
+
+                            upper index limit
+                                 /-----\
+    b[:-1] -> b(Range(0,Nb[0]-1-1))
+
+
+This approach fails, however, when the top index is calculated from other
+values. In the following scenario, if ``i-j`` evaluates to a negative value,
+the compiled code will produce incorrect results and could even core-dump.
+Right now, all calculated indices are assumed to be positive.
+
+::
+
+    b[:i-j] -> b(Range(0,i-j-1))
+
+
+A solution is to calculate all indices up front using if/then to handle
+the +/- cases. This is a little work and results in more code, so it hasn't
+been done. I'm holding out to see if blitz++ can be modified to handle
+negative indexing, but haven't looked into how much effort is involved yet.
+While it needs fixin', I don't think there is a ton of code where this is an
+issue.
+
+The actual translation of the Python expressions to blitz expressions is
+currently a two part process. First, all x:y:z slicing expressions are
+removed from the AST, converted to slice(x,y,z) and re-inserted into the
+tree. Any math needed on these expressions (subtracting from the maximum
+index, etc.) is also performed here. _beg and _end are used as special
+variables that are defined as blitz::fromBegin and blitz::toEnd.
+
+::
+
+    a[i+j:i+j+1,:] = b[2:3,:]
+
+
+becomes the more verbose::
+
+    a[slice(i+j,i+j+1),slice(_beg,_end)] = b[slice(2,3),slice(_beg,_end)]
+
+
+The second part does a simple string search/replace to convert to a blitz
+expression with the following translations::
+
+    slice(_beg,_end) -> _all  # not strictly needed, but cuts down on code.
+    slice -> blitz::Range
+    [ -> (
+    ] -> )
+    _stp -> 1
+
+
+``_all`` is defined in the compiled function as ``blitz::Range::all()``. These
+translations could of course happen directly in the syntax tree. But the
+string replacement is slightly easier. Note that namespaces are maintained
+in the C++ code to lessen the likelihood of name clashes. Currently no effort
+is made to detect name clashes. A good rule of thumb is don't use values that
+start with '_' or 'py\_' in compiled expressions and you'll be fine.
+
+Type definitions and coercion
+=============================
+
+So far we've glossed over the dynamic vs. static typing issue between Python
+and C++. In Python, the type of value that a variable holds can change
+through the course of program execution.
C/C++, on the other hand, forces you +to declare the type of value a variables will hold prior at compile time. +``weave.blitz`` handles this issue by examining the types of the variables in +the expression being executed, and compiling a function for those explicit +types. For example:: + + a = ones((5,5),Float32) + b = ones((5,5),Float32) + weave.blitz("a = a + b") + + +When compiling this expression to C++, ``weave.blitz`` sees that the values +for a and b in the local scope have type ``Float32``, or 'float' on a 32 bit +architecture. As a result, it compiles the function using the float type (no +attempt has been made to deal with 64 bit issues). + +What happens if you call a compiled function with array types that are +different than the ones for which it was originally compiled? No biggie, +you'll just have to wait on it to compile a new version for your new types. +This doesn't overwrite the old functions, as they are still accessible. See +the catalog section in the inline() documentation to see how this is handled. +Suffice to say, the mechanism is transparent to the user and behaves like +dynamic typing with the occasional wait for compiling newly typed functions. + +When working with combined scalar/array operations, the type of the array is +*always* used. This is similar to the savespace flag that was recently added +to NumPy. This prevents issues with the following expression perhaps +unexpectedly being calculated at a higher (more expensive) precision that can +occur in Python:: + + >>> a = array((1,2,3),typecode = Float32) + >>> b = a * 2.1 # results in b being a Float64 array. + +In this example, + +:: + + >>> a = ones((5,5),Float32) + >>> b = ones((5,5),Float32) + >>> weave.blitz("b = a * 2.1") + + +the ``2.1`` is cast down to a ``float`` before carrying out the operation. If +you really want to force the calculation to be a ``double``, define ``a`` and +``b`` as ``double`` arrays. + +One other point of note. Currently, you must include both the right hand side +and left hand side (assignment side) of your equation in the compiled +expression. Also, the array being assigned to must be created prior to +calling ``weave.blitz``. I'm pretty sure this is easily changed so that a +compiled_eval expression can be defined, but no effort has been made to +allocate new arrays (and decern their type) on the fly. + + +Cataloging Compiled Functions +============================= + +See `The Catalog`_ section in the ``weave.inline()`` +documentation. + +Checking Array Sizes +==================== + +Surprisingly, one of the big initial problems with compiled code was making +sure all the arrays in an operation were of compatible type. The following +case is trivially easy:: + + a = b + c + + +It only requires that arrays ``a``, ``b``, and ``c`` have the same shape. +However, expressions like:: + + a[i+j:i+j+1,:] = b[2:3,:] + c + + +are not so trivial. Since slicing is involved, the size of the slices, not +the input arrays must be checked. Broadcasting complicates things further +because arrays and slices with different dimensions and shapes may be +compatible for math operations (broadcasting isn't yet supported by +``weave.blitz``). Reductions have a similar effect as their results are +different shapes than their input operand. The binary operators in NumPy +compare the shapes of their two operands just before they operate on them. +This is possible because NumPy treats each operation independently. 
The
+intermediate (temporary) arrays created during sub-operations in an
+expression are tested for the correct shape before they are combined by
+another operation. Because ``weave.blitz`` fuses all operations into a single
+loop, this isn't possible. The shape comparisons must be done and guaranteed
+compatible before evaluating the expression.
+
+The solution chosen converts input arrays to "dummy arrays" that only
+represent the dimensions of the arrays, not the data. Binary operations on
+dummy arrays check that input array sizes are compatible and return a dummy
+array with the correct size. Evaluating an expression of dummy arrays
+traces the changing array sizes through all operations and fails if
+incompatible array sizes are ever found.
+
+The machinery for this is housed in ``weave.size_check``. It basically
+involves writing a new class (dummy array) and overloading its math operators
+to calculate the new sizes correctly. All the code is in Python and there is
+a fair amount of logic (mainly to handle indexing and slicing), so the
+operation does impose some overhead. For large arrays (i.e. 50x50x50), the
+overhead is negligible compared to evaluating the actual expression. For
+small arrays (i.e. 16x16), the overhead imposed for checking the shapes with
+this method can cause ``weave.blitz`` to be slower than evaluating the
+expression in Python.
+
+What can be done to reduce the overhead? (1) The size checking code could be
+moved into C. This would likely remove most of the overhead penalty compared
+to NumPy (although there is also some calling overhead), but no effort has
+been made to do this. (2) You can also call ``weave.blitz`` with
+``check_size=0`` and the size checking isn't done. However, if the sizes
+aren't compatible, it can cause a core-dump. So, forgoing size checking
+isn't advisable until your code is well debugged.
+
+
+Creating the Extension Module
+=============================
+
+``weave.blitz`` uses the same machinery as ``weave.inline`` to build the
+extension module. The only difference is that the code included in the
+function is automatically generated from the NumPy array expression instead
+of supplied by the user.
+
+===================
+ Extension Modules
+===================
+
+``weave.inline`` and ``weave.blitz`` are high level tools that generate
+extension modules automatically. Under the covers, they use several classes
+from ``weave.ext_tools`` to help generate the extension module. The main two
+classes are ``ext_module`` and ``ext_function`` (I'd like to add
+``ext_class`` and ``ext_method`` also). These classes simplify the process of
+generating extension modules by handling most of the "boilerplate" code
+automatically.
+
+.. note::
+   ``inline`` actually sub-classes ``weave.ext_tools.ext_function`` to
+   generate slightly different code than the standard ``ext_function``.
+   The main difference is that the standard class converts function
+   arguments to C types, while inline always has two arguments, the
+   local and global dicts, and then grabs the variables that need to be
+   converted to C from these.
+
+A Simple Example
+================
+
+The following simple example demonstrates how to build an extension module
+within a Python function::
+
+    # examples/increment_example.py
+    from weave import ext_tools
+
+    def build_increment_ext():
+        """ Build a simple extension with functions that increment numbers.
+            The extension will be built in the local directory.
+ """ + mod = ext_tools.ext_module('increment_ext') + + a = 1 # effectively a type declaration for 'a' in the + # following functions. + + ext_code = "return_val = Py::new_reference_to(Py::Int(a+1));" + func = ext_tools.ext_function('increment',ext_code,['a']) + mod.add_function(func) + + ext_code = "return_val = Py::new_reference_to(Py::Int(a+2));" + func = ext_tools.ext_function('increment_by_2',ext_code,['a']) + mod.add_function(func) + + mod.compile() + +The function ``build_increment_ext()`` creates an extension module named +``increment_ext`` and compiles it to a shared library (.so or .pyd) that can +be loaded into Python.. ``increment_ext`` contains two functions, +``increment`` and ``increment_by_2``. The first line of +``build_increment_ext()``, + + mod = ext_tools.ext_module('increment_ext') + + +creates an ``ext_module`` instance that is ready to have ``ext_function`` +instances added to it. ``ext_function`` instances are created much with a +calling convention similar to ``weave.inline()``. The most common call +includes a C/C++ code snippet and a list of the arguments for the function. +The following + + ext_code = "return_val = Py::new_reference_to(Py::Int(a+1));" + func = ext_tools.ext_function('increment',ext_code,['a']) + + +creates a C/C++ extension function that is equivalent to the following Python +function:: + + def increment(a): + return a + 1 + + +A second method is also added to the module and then, + +:: + + mod.compile() + + +is called to build the extension module. By default, the module is created in +the current working directory. This example is available in the +``examples/increment_example.py`` file found in the ``weave`` directory. At +the bottom of the file in the module's "main" program, an attempt to import +``increment_ext`` without building it is made. If this fails (the module +doesn't exist in the PYTHONPATH), the module is built by calling +``build_increment_ext()``. This approach only takes the time consuming ( a +few seconds for this example) process of building the module if it hasn't +been built before. + +:: + + if __name__ == "__main__": + try: + import increment_ext + except ImportError: + build_increment_ext() + import increment_ext + a = 1 + print 'a, a+1:', a, increment_ext.increment(a) + print 'a, a+2:', a, increment_ext.increment_by_2(a) + +.. note:: + If we were willing to always pay the penalty of building the C++ + code for a module, we could store the md5 checksum of the C++ code + along with some information about the compiler, platform, etc. Then, + ``ext_module.compile()`` could try importing the module before it + actually compiles it, check the md5 checksum and other meta-data in + the imported module with the meta-data of the code it just produced + and only compile the code if the module didn't exist or the + meta-data didn't match. This would reduce the above code to:: + + if __name__ == "__main__": + build_increment_ext() + + a = 1 + print 'a, a+1:', a, increment_ext.increment(a) + print 'a, a+2:', a, increment_ext.increment_by_2(a) + +.. note:: + There would always be the overhead of building the C++ code, but it + would only actually compile the code once. You pay a little in overhead and + get cleaner "import" code. Needs some thought. + +If you run ``increment_example.py`` from the command line, you get the +following:: + + [eric@n0]$ python increment_example.py + a, a+1: 1 2 + a, a+2: 1 3 + + +If the module didn't exist before it was run, the module is created. If it +did exist, it is just imported and used. 
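+
+The md5 idea in the note above can be approximated on the user side as well.
+The following sketch is not part of weave: the ``_code_md5`` attribute is a
+made-up convention that the build step would somehow have to record in the
+generated module for the comparison to work::
+
+    import hashlib
+
+    def import_or_build(code_string):
+        digest = hashlib.md5(code_string).hexdigest()
+        try:
+            import increment_ext
+            if getattr(increment_ext, '_code_md5', None) == digest:
+                return increment_ext      # checksum matches, reuse as is
+        except ImportError:
+            pass
+        build_increment_ext()             # (re)compile the extension
+        import increment_ext
+        return reload(increment_ext)      # pick up the freshly built module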
+ +Fibonacci Example +================= + +``examples/fibonacci.py`` provides a little more complex example of how to +use ``ext_tools``. Fibonacci numbers are a series of numbers where each +number in the series is the sum of the previous two: 1, 1, 2, 3, 5, 8, etc. +Here, the first two numbers in the series are taken to be 1. One approach to +calculating Fibonacci numbers uses recursive function calls. In Python, it +might be written as:: + + def fib(a): + if a <= 2: + return 1 + else: + return fib(a-2) + fib(a-1) + + +In C, the same function would look something like this:: + + int fib(int a) + { + if(a <= 2) + return 1; + else + return fib(a-2) + fib(a-1); + } + + +Recursion is much faster in C than in Python, so it would be beneficial to +use the C version for fibonacci number calculations instead of the Python +version. We need an extension function that calls this C function to do this. +This is possible by including the above code snippet as "support code" and +then calling it from the extension function. Support code snippets (usually +structure definitions, helper functions and the like) are inserted into the +extension module C/C++ file before the extension function code. Here is how +to build the C version of the fibonacci number generator:: + + def build_fibonacci(): + """ Builds an extension module with fibonacci calculators. + """ + mod = ext_tools.ext_module('fibonacci_ext') + a = 1 # this is effectively a type declaration + + # recursive fibonacci in C + fib_code = """ + int fib1(int a) + { + if(a <= 2) + return 1; + else + return fib1(a-2) + fib1(a-1); + } + """ + ext_code = """ + int val = fib1(a); + return_val = Py::new_reference_to(Py::Int(val)); + """ + fib = ext_tools.ext_function('fib',ext_code,['a']) + fib.customize.add_support_code(fib_code) + mod.add_function(fib) + + mod.compile() + +XXX More about custom_info, and what xxx_info instances are good for. + +.. note:: + recursion is not the fastest way to calculate fibonacci numbers, but + this approach serves nicely for this example. + + +================================================ + Customizing Type Conversions -- Type Factories +================================================ + +not written + +============================= + Things I wish ``weave`` did +============================= + +It is possible to get name clashes if you uses a variable name that is +already defined in a header automatically included (such as ``stdio.h``) For +instance, if you try to pass in a variable named ``stdout``, you'll get a +cryptic error report due to the fact that ``stdio.h`` also defines the name. +``weave`` should probably try and handle this in some way. Other things... + +.. _PyInline: http://pyinline.sourceforge.net/ +.. _SciPy: http://www.scipy.org +.. _mingw32: http://www.mingw.org%3Ewww.mingw.org +.. _NumPy: http://numeric.scipy.org/ +.. _here: http://www.scipy.org/Weave +.. _Python Cookbook: http://aspn.activestate.com/ASPN/Cookbook/Python +.. _binary_search(): + http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/81188 +.. _website: http://cxx.sourceforge.net/ +.. _This submission: + http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52306 +.. _blitz++ home page: http://www.oonumerics.org/blitz/ + +.. 
+ Local Variables: + mode: rst + End: diff -Nru python-scipy-0.7.2+dfsg1/doc/source/weave.rst python-scipy-0.8.0+dfsg1/doc/source/weave.rst --- python-scipy-0.7.2+dfsg1/doc/source/weave.rst 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/doc/source/weave.rst 2010-07-26 15:48:29.000000000 +0100 @@ -8,3 +8,12 @@ .. automodule:: scipy.weave :members: + + +.. autosummary:: + :toctree: generated/ + + inline + blitz + ext_tools + accelerate diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/autosummary_generate.py python-scipy-0.8.0+dfsg1/doc/sphinxext/autosummary_generate.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/autosummary_generate.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/autosummary_generate.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,219 @@ +#!/usr/bin/env python +r""" +autosummary_generate.py OPTIONS FILES + +Generate automatic RST source files for items referred to in +autosummary:: directives. + +Each generated RST file contains a single auto*:: directive which +extracts the docstring of the referred item. + +Example Makefile rule:: + + generate: + ./ext/autosummary_generate.py -o source/generated source/*.rst + +""" +import glob, re, inspect, os, optparse, pydoc +from autosummary import import_by_name + +try: + from phantom_import import import_phantom_module +except ImportError: + import_phantom_module = lambda x: x + +def main(): + p = optparse.OptionParser(__doc__.strip()) + p.add_option("-p", "--phantom", action="store", type="string", + dest="phantom", default=None, + help="Phantom import modules from a file") + p.add_option("-o", "--output-dir", action="store", type="string", + dest="output_dir", default=None, + help=("Write all output files to the given directory (instead " + "of writing them as specified in the autosummary:: " + "directives)")) + options, args = p.parse_args() + + if len(args) == 0: + p.error("wrong number of arguments") + + if options.phantom and os.path.isfile(options.phantom): + import_phantom_module(options.phantom) + + # read + names = {} + for name, loc in get_documented(args).items(): + for (filename, sec_title, keyword, toctree) in loc: + if toctree is not None: + path = os.path.join(os.path.dirname(filename), toctree) + names[name] = os.path.abspath(path) + + # write + for name, path in sorted(names.items()): + if options.output_dir is not None: + path = options.output_dir + + if not os.path.isdir(path): + os.makedirs(path) + + try: + obj, name = import_by_name(name) + except ImportError, e: + print "Failed to import '%s': %s" % (name, e) + continue + + fn = os.path.join(path, '%s.rst' % name) + + if os.path.exists(fn): + # skip + continue + + f = open(fn, 'w') + + try: + f.write('%s\n%s\n\n' % (name, '='*len(name))) + + if inspect.isclass(obj): + if issubclass(obj, Exception): + f.write(format_modulemember(name, 'autoexception')) + else: + f.write(format_modulemember(name, 'autoclass')) + elif inspect.ismodule(obj): + f.write(format_modulemember(name, 'automodule')) + elif inspect.ismethod(obj) or inspect.ismethoddescriptor(obj): + f.write(format_classmember(name, 'automethod')) + elif callable(obj): + f.write(format_modulemember(name, 'autofunction')) + elif hasattr(obj, '__get__'): + f.write(format_classmember(name, 'autoattribute')) + else: + f.write(format_modulemember(name, 'autofunction')) + finally: + f.close() + +def format_modulemember(name, directive): + parts = name.split('.') + mod, name = '.'.join(parts[:-1]), parts[-1] + return ".. currentmodule:: %s\n\n.. 
%s:: %s\n" % (mod, directive, name) + +def format_classmember(name, directive): + parts = name.split('.') + mod, name = '.'.join(parts[:-2]), '.'.join(parts[-2:]) + return ".. currentmodule:: %s\n\n.. %s:: %s\n" % (mod, directive, name) + +def get_documented(filenames): + """ + Find out what items are documented in source/*.rst + See `get_documented_in_lines`. + + """ + documented = {} + for filename in filenames: + f = open(filename, 'r') + lines = f.read().splitlines() + documented.update(get_documented_in_lines(lines, filename=filename)) + f.close() + return documented + +def get_documented_in_docstring(name, module=None, filename=None): + """ + Find out what items are documented in the given object's docstring. + See `get_documented_in_lines`. + + """ + try: + obj, real_name = import_by_name(name) + lines = pydoc.getdoc(obj).splitlines() + return get_documented_in_lines(lines, module=name, filename=filename) + except AttributeError: + pass + except ImportError, e: + print "Failed to import '%s': %s" % (name, e) + return {} + +def get_documented_in_lines(lines, module=None, filename=None): + """ + Find out what items are documented in the given lines + + Returns + ------- + documented : dict of list of (filename, title, keyword, toctree) + Dictionary whose keys are documented names of objects. + The value is a list of locations where the object was documented. + Each location is a tuple of filename, the current section title, + the name of the directive, and the value of the :toctree: argument + (if present) of the directive. + + """ + title_underline_re = re.compile("^[-=*_^#]{3,}\s*$") + autodoc_re = re.compile(".. auto(function|method|attribute|class|exception|module)::\s*([A-Za-z0-9_.]+)\s*$") + autosummary_re = re.compile(r'^\.\.\s+autosummary::\s*') + module_re = re.compile(r'^\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$') + autosummary_item_re = re.compile(r'^\s+([_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?') + toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$') + + documented = {} + + current_title = [] + last_line = None + toctree = None + current_module = module + in_autosummary = False + + for line in lines: + try: + if in_autosummary: + m = toctree_arg_re.match(line) + if m: + toctree = m.group(1) + continue + + if line.strip().startswith(':'): + continue # skip options + + m = autosummary_item_re.match(line) + if m: + name = m.group(1).strip() + if current_module and not name.startswith(current_module + '.'): + name = "%s.%s" % (current_module, name) + documented.setdefault(name, []).append( + (filename, current_title, 'autosummary', toctree)) + continue + if line.strip() == '': + continue + in_autosummary = False + + m = autosummary_re.match(line) + if m: + in_autosummary = True + continue + + m = autodoc_re.search(line) + if m: + name = m.group(2).strip() + if m.group(1) == "module": + current_module = name + documented.update(get_documented_in_docstring( + name, filename=filename)) + elif current_module and not name.startswith(current_module+'.'): + name = "%s.%s" % (current_module, name) + documented.setdefault(name, []).append( + (filename, current_title, "auto" + m.group(1), None)) + continue + + m = title_underline_re.match(line) + if m and last_line: + current_title = last_line.strip() + continue + + m = module_re.match(line) + if m: + current_module = m.group(2) + continue + finally: + last_line = line + + return documented + +if __name__ == "__main__": + main() diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/autosummary.py 
python-scipy-0.8.0+dfsg1/doc/sphinxext/autosummary.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/autosummary.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/autosummary.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,349 @@ +""" +=========== +autosummary +=========== + +Sphinx extension that adds an autosummary:: directive, which can be +used to generate function/method/attribute/etc. summary lists, similar +to those output eg. by Epydoc and other API doc generation tools. + +An :autolink: role is also provided. + +autosummary directive +--------------------- + +The autosummary directive has the form:: + + .. autosummary:: + :nosignatures: + :toctree: generated/ + + module.function_1 + module.function_2 + ... + +and it generates an output table (containing signatures, optionally) + + ======================== ============================================= + module.function_1(args) Summary line from the docstring of function_1 + module.function_2(args) Summary line from the docstring + ... + ======================== ============================================= + +If the :toctree: option is specified, files matching the function names +are inserted to the toctree with the given prefix: + + generated/module.function_1 + generated/module.function_2 + ... + +Note: The file names contain the module:: or currentmodule:: prefixes. + +.. seealso:: autosummary_generate.py + + +autolink role +------------- + +The autolink role functions as ``:obj:`` when the name referred can be +resolved to a Python object, and otherwise it becomes simple emphasis. +This can be used as the default role to make links 'smart'. + +""" +import sys, os, posixpath, re + +from docutils.parsers.rst import directives +from docutils.statemachine import ViewList +from docutils import nodes + +import sphinx.addnodes, sphinx.roles +from sphinx.util import patfilter + +from docscrape_sphinx import get_doc_object + +import warnings +warnings.warn( + "The numpydoc.autosummary extension can also be found as " + "sphinx.ext.autosummary in Sphinx >= 0.6, and the version in " + "Sphinx >= 0.7 is superior to the one in numpydoc. This numpydoc " + "version of autosummary is no longer maintained.", + DeprecationWarning, stacklevel=2) + +def setup(app): + app.add_directive('autosummary', autosummary_directive, True, (0, 0, False), + toctree=directives.unchanged, + nosignatures=directives.flag) + app.add_role('autolink', autolink_role) + + app.add_node(autosummary_toc, + html=(autosummary_toc_visit_html, autosummary_toc_depart_noop), + latex=(autosummary_toc_visit_latex, autosummary_toc_depart_noop)) + app.connect('doctree-read', process_autosummary_toc) + +#------------------------------------------------------------------------------ +# autosummary_toc node +#------------------------------------------------------------------------------ + +class autosummary_toc(nodes.comment): + pass + +def process_autosummary_toc(app, doctree): + """ + Insert items described in autosummary:: to the TOC tree, but do + not generate the toctree:: list. 
+ + """ + env = app.builder.env + crawled = {} + def crawl_toc(node, depth=1): + crawled[node] = True + for j, subnode in enumerate(node): + try: + if (isinstance(subnode, autosummary_toc) + and isinstance(subnode[0], sphinx.addnodes.toctree)): + env.note_toctree(env.docname, subnode[0]) + continue + except IndexError: + continue + if not isinstance(subnode, nodes.section): + continue + if subnode not in crawled: + crawl_toc(subnode, depth+1) + crawl_toc(doctree) + +def autosummary_toc_visit_html(self, node): + """Hide autosummary toctree list in HTML output""" + raise nodes.SkipNode + +def autosummary_toc_visit_latex(self, node): + """Show autosummary toctree (= put the referenced pages here) in Latex""" + pass + +def autosummary_toc_depart_noop(self, node): + pass + +#------------------------------------------------------------------------------ +# .. autosummary:: +#------------------------------------------------------------------------------ + +def autosummary_directive(dirname, arguments, options, content, lineno, + content_offset, block_text, state, state_machine): + """ + Pretty table containing short signatures and summaries of functions etc. + + autosummary also generates a (hidden) toctree:: node. + + """ + + names = [] + names += [x.strip().split()[0] for x in content + if x.strip() and re.search(r'^[a-zA-Z_]', x.strip()[0])] + + table, warnings, real_names = get_autosummary(names, state, + 'nosignatures' in options) + node = table + + env = state.document.settings.env + suffix = env.config.source_suffix + all_docnames = env.found_docs.copy() + dirname = posixpath.dirname(env.docname) + + if 'toctree' in options: + tree_prefix = options['toctree'].strip() + docnames = [] + for name in names: + name = real_names.get(name, name) + + docname = tree_prefix + name + if docname.endswith(suffix): + docname = docname[:-len(suffix)] + docname = posixpath.normpath(posixpath.join(dirname, docname)) + if docname not in env.found_docs: + warnings.append(state.document.reporter.warning( + 'toctree references unknown document %r' % docname, + line=lineno)) + docnames.append(docname) + + tocnode = sphinx.addnodes.toctree() + tocnode['includefiles'] = docnames + tocnode['maxdepth'] = -1 + tocnode['glob'] = None + tocnode['entries'] = [(None, docname) for docname in docnames] + + tocnode = autosummary_toc('', '', tocnode) + return warnings + [node] + [tocnode] + else: + return warnings + [node] + +def get_autosummary(names, state, no_signatures=False): + """ + Generate a proper table node for autosummary:: directive. + + Parameters + ---------- + names : list of str + Names of Python objects to be imported and added to the table. 
+ document : document + Docutils document object + + """ + document = state.document + + real_names = {} + warnings = [] + + prefixes = [''] + prefixes.insert(0, document.settings.env.currmodule) + + table = nodes.table('') + group = nodes.tgroup('', cols=2) + table.append(group) + group.append(nodes.colspec('', colwidth=10)) + group.append(nodes.colspec('', colwidth=90)) + body = nodes.tbody('') + group.append(body) + + def append_row(*column_texts): + row = nodes.row('') + for text in column_texts: + node = nodes.paragraph('') + vl = ViewList() + vl.append(text, '') + state.nested_parse(vl, 0, node) + try: + if isinstance(node[0], nodes.paragraph): + node = node[0] + except IndexError: + pass + row.append(nodes.entry('', node)) + body.append(row) + + for name in names: + try: + obj, real_name = import_by_name(name, prefixes=prefixes) + except ImportError: + warnings.append(document.reporter.warning( + 'failed to import %s' % name)) + append_row(":obj:`%s`" % name, "") + continue + + real_names[name] = real_name + + doc = get_doc_object(obj) + + if doc['Summary']: + title = " ".join(doc['Summary']) + else: + title = "" + + col1 = u":obj:`%s <%s>`" % (name, real_name) + if doc['Signature']: + sig = re.sub('^[^(\[]*', '', doc['Signature'].strip()) + if '=' in sig: + # abbreviate optional arguments + sig = re.sub(r', ([a-zA-Z0-9_]+)=', r'[, \1=', sig, count=1) + sig = re.sub(r'\(([a-zA-Z0-9_]+)=', r'([\1=', sig, count=1) + sig = re.sub(r'=[^,)]+,', ',', sig) + sig = re.sub(r'=[^,)]+\)$', '])', sig) + # shorten long strings + sig = re.sub(r'(\[.{16,16}[^,]*?),.*?\]\)', r'\1, ...])', sig) + else: + sig = re.sub(r'(\(.{16,16}[^,]*?),.*?\)', r'\1, ...)', sig) + # make signature contain non-breaking spaces + col1 += u"\\ \u00a0" + unicode(sig).replace(u" ", u"\u00a0") + col2 = title + append_row(col1, col2) + + return table, warnings, real_names + +def import_by_name(name, prefixes=[None]): + """ + Import a Python object that has the given name, under one of the prefixes. + + Parameters + ---------- + name : str + Name of a Python object, eg. 'numpy.ndarray.view' + prefixes : list of (str or None), optional + Prefixes to prepend to the name (None implies no prefix). + The first prefixed name that results to successful import is used. + + Returns + ------- + obj + The imported object + name + Name of the imported object (useful if `prefixes` was used) + + """ + for prefix in prefixes: + try: + if prefix: + prefixed_name = '.'.join([prefix, name]) + else: + prefixed_name = name + return _import_by_name(prefixed_name), prefixed_name + except ImportError: + pass + raise ImportError + +def _import_by_name(name): + """Import a Python object given its full name""" + try: + # try first interpret `name` as MODNAME.OBJ + name_parts = name.split('.') + try: + modname = '.'.join(name_parts[:-1]) + __import__(modname) + return getattr(sys.modules[modname], name_parts[-1]) + except (ImportError, IndexError, AttributeError): + pass + + # ... then as MODNAME, MODNAME.OBJ1, MODNAME.OBJ1.OBJ2, ... 
+ last_j = 0 + modname = None + for j in reversed(range(1, len(name_parts)+1)): + last_j = j + modname = '.'.join(name_parts[:j]) + try: + __import__(modname) + except ImportError: + continue + if modname in sys.modules: + break + + if last_j < len(name_parts): + obj = sys.modules[modname] + for obj_name in name_parts[last_j:]: + obj = getattr(obj, obj_name) + return obj + else: + return sys.modules[modname] + except (ValueError, ImportError, AttributeError, KeyError), e: + raise ImportError(e) + +#------------------------------------------------------------------------------ +# :autolink: (smart default role) +#------------------------------------------------------------------------------ + +def autolink_role(typ, rawtext, etext, lineno, inliner, + options={}, content=[]): + """ + Smart linking role. + + Expands to ":obj:`text`" if `text` is an object that can be imported; + otherwise expands to "*text*". + """ + r = sphinx.roles.xfileref_role('obj', rawtext, etext, lineno, inliner, + options, content) + pnode = r[0][0] + + prefixes = [None] + #prefixes.insert(0, inliner.document.settings.env.currmodule) + try: + obj, name = import_by_name(pnode['reftarget'], prefixes) + except ImportError: + content = pnode[0] + r[0][0] = nodes.emphasis(rawtext, content[0].astext(), + classes=content['classes']) + return r diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/comment_eater.py python-scipy-0.8.0+dfsg1/doc/sphinxext/comment_eater.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/comment_eater.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/comment_eater.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,158 @@ +from cStringIO import StringIO +import compiler +import inspect +import textwrap +import tokenize + +from compiler_unparse import unparse + + +class Comment(object): + """ A comment block. + """ + is_comment = True + def __init__(self, start_lineno, end_lineno, text): + # int : The first line number in the block. 1-indexed. + self.start_lineno = start_lineno + # int : The last line number. Inclusive! + self.end_lineno = end_lineno + # str : The text block including '#' character but not any leading spaces. + self.text = text + + def add(self, string, start, end, line): + """ Add a new comment line. + """ + self.start_lineno = min(self.start_lineno, start[0]) + self.end_lineno = max(self.end_lineno, end[0]) + self.text += string + + def __repr__(self): + return '%s(%r, %r, %r)' % (self.__class__.__name__, self.start_lineno, + self.end_lineno, self.text) + + +class NonComment(object): + """ A non-comment block of code. + """ + is_comment = False + def __init__(self, start_lineno, end_lineno): + self.start_lineno = start_lineno + self.end_lineno = end_lineno + + def add(self, string, start, end, line): + """ Add lines to the block. + """ + if string.strip(): + # Only add if not entirely whitespace. + self.start_lineno = min(self.start_lineno, start[0]) + self.end_lineno = max(self.end_lineno, end[0]) + + def __repr__(self): + return '%s(%r, %r)' % (self.__class__.__name__, self.start_lineno, + self.end_lineno) + + +class CommentBlocker(object): + """ Pull out contiguous comment blocks. + """ + def __init__(self): + # Start with a dummy. + self.current_block = NonComment(0, 0) + + # All of the blocks seen so far. + self.blocks = [] + + # The index mapping lines of code to their associated comment blocks. + self.index = {} + + def process_file(self, file): + """ Process a file object. 
+ """ + for token in tokenize.generate_tokens(file.next): + self.process_token(*token) + self.make_index() + + def process_token(self, kind, string, start, end, line): + """ Process a single token. + """ + if self.current_block.is_comment: + if kind == tokenize.COMMENT: + self.current_block.add(string, start, end, line) + else: + self.new_noncomment(start[0], end[0]) + else: + if kind == tokenize.COMMENT: + self.new_comment(string, start, end, line) + else: + self.current_block.add(string, start, end, line) + + def new_noncomment(self, start_lineno, end_lineno): + """ We are transitioning from a noncomment to a comment. + """ + block = NonComment(start_lineno, end_lineno) + self.blocks.append(block) + self.current_block = block + + def new_comment(self, string, start, end, line): + """ Possibly add a new comment. + + Only adds a new comment if this comment is the only thing on the line. + Otherwise, it extends the noncomment block. + """ + prefix = line[:start[1]] + if prefix.strip(): + # Oops! Trailing comment, not a comment block. + self.current_block.add(string, start, end, line) + else: + # A comment block. + block = Comment(start[0], end[0], string) + self.blocks.append(block) + self.current_block = block + + def make_index(self): + """ Make the index mapping lines of actual code to their associated + prefix comments. + """ + for prev, block in zip(self.blocks[:-1], self.blocks[1:]): + if not block.is_comment: + self.index[block.start_lineno] = prev + + def search_for_comment(self, lineno, default=None): + """ Find the comment block just before the given line number. + + Returns None (or the specified default) if there is no such block. + """ + if not self.index: + self.make_index() + block = self.index.get(lineno, None) + text = getattr(block, 'text', default) + return text + + +def strip_comment_marker(text): + """ Strip # markers at the front of a block of comment text. + """ + lines = [] + for line in text.splitlines(): + lines.append(line.lstrip('#')) + text = textwrap.dedent('\n'.join(lines)) + return text + + +def get_class_traits(klass): + """ Yield all of the documentation for trait definitions on a class object. + """ + # FIXME: gracefully handle errors here or in the caller? + source = inspect.getsource(klass) + cb = CommentBlocker() + cb.process_file(StringIO(source)) + mod_ast = compiler.parse(source) + class_ast = mod_ast.node.nodes[0] + for node in class_ast.code.nodes: + # FIXME: handle other kinds of assignments? + if isinstance(node, compiler.ast.Assign): + name = node.nodes[0].name + rhs = unparse(node.expr).strip() + doc = strip_comment_marker(cb.search_for_comment(node.lineno, default='')) + yield name, rhs, doc + diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/compiler_unparse.py python-scipy-0.8.0+dfsg1/doc/sphinxext/compiler_unparse.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/compiler_unparse.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/compiler_unparse.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,860 @@ +""" Turn compiler.ast structures back into executable python code. + + The unparse method takes a compiler.ast tree and transforms it back into + valid python code. It is incomplete and currently only works for + import statements, function calls, function definitions, assignments, and + basic expressions. + + Inspired by python-2.5-svn/Demo/parser/unparse.py + + fixme: We may want to move to using _ast trees because the compiler for + them is about 6 times faster than compiler.compile. 
+""" + +import sys +import cStringIO +from compiler.ast import Const, Name, Tuple, Div, Mul, Sub, Add + +def unparse(ast, single_line_functions=False): + s = cStringIO.StringIO() + UnparseCompilerAst(ast, s, single_line_functions) + return s.getvalue().lstrip() + +op_precedence = { 'compiler.ast.Power':3, 'compiler.ast.Mul':2, 'compiler.ast.Div':2, + 'compiler.ast.Add':1, 'compiler.ast.Sub':1 } + +class UnparseCompilerAst: + """ Methods in this class recursively traverse an AST and + output source code for the abstract syntax; original formatting + is disregarged. + """ + + ######################################################################### + # object interface. + ######################################################################### + + def __init__(self, tree, file = sys.stdout, single_line_functions=False): + """ Unparser(tree, file=sys.stdout) -> None. + + Print the source for tree to file. + """ + self.f = file + self._single_func = single_line_functions + self._do_indent = True + self._indent = 0 + self._dispatch(tree) + self._write("\n") + self.f.flush() + + ######################################################################### + # Unparser private interface. + ######################################################################### + + ### format, output, and dispatch methods ################################ + + def _fill(self, text = ""): + "Indent a piece of text, according to the current indentation level" + if self._do_indent: + self._write("\n"+" "*self._indent + text) + else: + self._write(text) + + def _write(self, text): + "Append a piece of text to the current line." + self.f.write(text) + + def _enter(self): + "Print ':', and increase the indentation." + self._write(": ") + self._indent += 1 + + def _leave(self): + "Decrease the indentation level." + self._indent -= 1 + + def _dispatch(self, tree): + "_dispatcher function, _dispatching tree type T to method _T." + if isinstance(tree, list): + for t in tree: + self._dispatch(t) + return + meth = getattr(self, "_"+tree.__class__.__name__) + if tree.__class__.__name__ == 'NoneType' and not self._do_indent: + return + meth(tree) + + + ######################################################################### + # compiler.ast unparsing methods. + # + # There should be one method per concrete grammar type. They are + # organized in alphabetical order. + ######################################################################### + + def _Add(self, t): + self.__binary_op(t, '+') + + def _And(self, t): + self._write(" (") + for i, node in enumerate(t.nodes): + self._dispatch(node) + if i != len(t.nodes)-1: + self._write(") and (") + self._write(")") + + def _AssAttr(self, t): + """ Handle assigning an attribute of an object + """ + self._dispatch(t.expr) + self._write('.'+t.attrname) + + def _Assign(self, t): + """ Expression Assignment such as "a = 1". + + This only handles assignment in expressions. Keyword assignment + is handled separately. + """ + self._fill() + for target in t.nodes: + self._dispatch(target) + self._write(" = ") + self._dispatch(t.expr) + if not self._do_indent: + self._write('; ') + + def _AssName(self, t): + """ Name on left hand side of expression. + + Treat just like a name on the right side of an expression. + """ + self._Name(t) + + def _AssTuple(self, t): + """ Tuple on left hand side of an expression. + """ + + # _write each elements, separated by a comma. 
+ for element in t.nodes[:-1]: + self._dispatch(element) + self._write(", ") + + # Handle the last one without writing comma + last_element = t.nodes[-1] + self._dispatch(last_element) + + def _AugAssign(self, t): + """ +=,-=,*=,/=,**=, etc. operations + """ + + self._fill() + self._dispatch(t.node) + self._write(' '+t.op+' ') + self._dispatch(t.expr) + if not self._do_indent: + self._write(';') + + def _Bitand(self, t): + """ Bit and operation. + """ + + for i, node in enumerate(t.nodes): + self._write("(") + self._dispatch(node) + self._write(")") + if i != len(t.nodes)-1: + self._write(" & ") + + def _Bitor(self, t): + """ Bit or operation + """ + + for i, node in enumerate(t.nodes): + self._write("(") + self._dispatch(node) + self._write(")") + if i != len(t.nodes)-1: + self._write(" | ") + + def _CallFunc(self, t): + """ Function call. + """ + self._dispatch(t.node) + self._write("(") + comma = False + for e in t.args: + if comma: self._write(", ") + else: comma = True + self._dispatch(e) + if t.star_args: + if comma: self._write(", ") + else: comma = True + self._write("*") + self._dispatch(t.star_args) + if t.dstar_args: + if comma: self._write(", ") + else: comma = True + self._write("**") + self._dispatch(t.dstar_args) + self._write(")") + + def _Compare(self, t): + self._dispatch(t.expr) + for op, expr in t.ops: + self._write(" " + op + " ") + self._dispatch(expr) + + def _Const(self, t): + """ A constant value such as an integer value, 3, or a string, "hello". + """ + self._dispatch(t.value) + + def _Decorators(self, t): + """ Handle function decorators (eg. @has_units) + """ + for node in t.nodes: + self._dispatch(node) + + def _Dict(self, t): + self._write("{") + for i, (k, v) in enumerate(t.items): + self._dispatch(k) + self._write(": ") + self._dispatch(v) + if i < len(t.items)-1: + self._write(", ") + self._write("}") + + def _Discard(self, t): + """ Node for when return value is ignored such as in "foo(a)". + """ + self._fill() + self._dispatch(t.expr) + + def _Div(self, t): + self.__binary_op(t, '/') + + def _Ellipsis(self, t): + self._write("...") + + def _From(self, t): + """ Handle "from xyz import foo, bar as baz". + """ + # fixme: Are From and ImportFrom handled differently? 
+ self._fill("from ") + self._write(t.modname) + self._write(" import ") + for i, (name,asname) in enumerate(t.names): + if i != 0: + self._write(", ") + self._write(name) + if asname is not None: + self._write(" as "+asname) + + def _Function(self, t): + """ Handle function definitions + """ + if t.decorators is not None: + self._fill("@") + self._dispatch(t.decorators) + self._fill("def "+t.name + "(") + defaults = [None] * (len(t.argnames) - len(t.defaults)) + list(t.defaults) + for i, arg in enumerate(zip(t.argnames, defaults)): + self._write(arg[0]) + if arg[1] is not None: + self._write('=') + self._dispatch(arg[1]) + if i < len(t.argnames)-1: + self._write(', ') + self._write(")") + if self._single_func: + self._do_indent = False + self._enter() + self._dispatch(t.code) + self._leave() + self._do_indent = True + + def _Getattr(self, t): + """ Handle getting an attribute of an object + """ + if isinstance(t.expr, (Div, Mul, Sub, Add)): + self._write('(') + self._dispatch(t.expr) + self._write(')') + else: + self._dispatch(t.expr) + + self._write('.'+t.attrname) + + def _If(self, t): + self._fill() + + for i, (compare,code) in enumerate(t.tests): + if i == 0: + self._write("if ") + else: + self._write("elif ") + self._dispatch(compare) + self._enter() + self._fill() + self._dispatch(code) + self._leave() + self._write("\n") + + if t.else_ is not None: + self._write("else") + self._enter() + self._fill() + self._dispatch(t.else_) + self._leave() + self._write("\n") + + def _IfExp(self, t): + self._dispatch(t.then) + self._write(" if ") + self._dispatch(t.test) + + if t.else_ is not None: + self._write(" else (") + self._dispatch(t.else_) + self._write(")") + + def _Import(self, t): + """ Handle "import xyz.foo". + """ + self._fill("import ") + + for i, (name,asname) in enumerate(t.names): + if i != 0: + self._write(", ") + self._write(name) + if asname is not None: + self._write(" as "+asname) + + def _Keyword(self, t): + """ Keyword value assignment within function calls and definitions. 
+ """ + self._write(t.name) + self._write("=") + self._dispatch(t.expr) + + def _List(self, t): + self._write("[") + for i,node in enumerate(t.nodes): + self._dispatch(node) + if i < len(t.nodes)-1: + self._write(", ") + self._write("]") + + def _Module(self, t): + if t.doc is not None: + self._dispatch(t.doc) + self._dispatch(t.node) + + def _Mul(self, t): + self.__binary_op(t, '*') + + def _Name(self, t): + self._write(t.name) + + def _NoneType(self, t): + self._write("None") + + def _Not(self, t): + self._write('not (') + self._dispatch(t.expr) + self._write(')') + + def _Or(self, t): + self._write(" (") + for i, node in enumerate(t.nodes): + self._dispatch(node) + if i != len(t.nodes)-1: + self._write(") or (") + self._write(")") + + def _Pass(self, t): + self._write("pass\n") + + def _Printnl(self, t): + self._fill("print ") + if t.dest: + self._write(">> ") + self._dispatch(t.dest) + self._write(", ") + comma = False + for node in t.nodes: + if comma: self._write(', ') + else: comma = True + self._dispatch(node) + + def _Power(self, t): + self.__binary_op(t, '**') + + def _Return(self, t): + self._fill("return ") + if t.value: + if isinstance(t.value, Tuple): + text = ', '.join([ name.name for name in t.value.asList() ]) + self._write(text) + else: + self._dispatch(t.value) + if not self._do_indent: + self._write('; ') + + def _Slice(self, t): + self._dispatch(t.expr) + self._write("[") + if t.lower: + self._dispatch(t.lower) + self._write(":") + if t.upper: + self._dispatch(t.upper) + #if t.step: + # self._write(":") + # self._dispatch(t.step) + self._write("]") + + def _Sliceobj(self, t): + for i, node in enumerate(t.nodes): + if i != 0: + self._write(":") + if not (isinstance(node, Const) and node.value is None): + self._dispatch(node) + + def _Stmt(self, tree): + for node in tree.nodes: + self._dispatch(node) + + def _Sub(self, t): + self.__binary_op(t, '-') + + def _Subscript(self, t): + self._dispatch(t.expr) + self._write("[") + for i, value in enumerate(t.subs): + if i != 0: + self._write(",") + self._dispatch(value) + self._write("]") + + def _TryExcept(self, t): + self._fill("try") + self._enter() + self._dispatch(t.body) + self._leave() + + for handler in t.handlers: + self._fill('except ') + self._dispatch(handler[0]) + if handler[1] is not None: + self._write(', ') + self._dispatch(handler[1]) + self._enter() + self._dispatch(handler[2]) + self._leave() + + if t.else_: + self._fill("else") + self._enter() + self._dispatch(t.else_) + self._leave() + + def _Tuple(self, t): + + if not t.nodes: + # Empty tuple. + self._write("()") + else: + self._write("(") + + # _write each elements, separated by a comma. 
+ for element in t.nodes[:-1]: + self._dispatch(element) + self._write(", ") + + # Handle the last one without writing comma + last_element = t.nodes[-1] + self._dispatch(last_element) + + self._write(")") + + def _UnaryAdd(self, t): + self._write("+") + self._dispatch(t.expr) + + def _UnarySub(self, t): + self._write("-") + self._dispatch(t.expr) + + def _With(self, t): + self._fill('with ') + self._dispatch(t.expr) + if t.vars: + self._write(' as ') + self._dispatch(t.vars.name) + self._enter() + self._dispatch(t.body) + self._leave() + self._write('\n') + + def _int(self, t): + self._write(repr(t)) + + def __binary_op(self, t, symbol): + # Check if parenthesis are needed on left side and then dispatch + has_paren = False + left_class = str(t.left.__class__) + if (left_class in op_precedence.keys() and + op_precedence[left_class] < op_precedence[str(t.__class__)]): + has_paren = True + if has_paren: + self._write('(') + self._dispatch(t.left) + if has_paren: + self._write(')') + # Write the appropriate symbol for operator + self._write(symbol) + # Check if parenthesis are needed on the right side and then dispatch + has_paren = False + right_class = str(t.right.__class__) + if (right_class in op_precedence.keys() and + op_precedence[right_class] < op_precedence[str(t.__class__)]): + has_paren = True + if has_paren: + self._write('(') + self._dispatch(t.right) + if has_paren: + self._write(')') + + def _float(self, t): + # if t is 0.1, str(t)->'0.1' while repr(t)->'0.1000000000001' + # We prefer str here. + self._write(str(t)) + + def _str(self, t): + self._write(repr(t)) + + def _tuple(self, t): + self._write(str(t)) + + ######################################################################### + # These are the methods from the _ast modules unparse. + # + # As our needs to handle more advanced code increase, we may want to + # modify some of the methods below so that they work for compiler.ast. + ######################################################################### + +# # stmt +# def _Expr(self, tree): +# self._fill() +# self._dispatch(tree.value) +# +# def _Import(self, t): +# self._fill("import ") +# first = True +# for a in t.names: +# if first: +# first = False +# else: +# self._write(", ") +# self._write(a.name) +# if a.asname: +# self._write(" as "+a.asname) +# +## def _ImportFrom(self, t): +## self._fill("from ") +## self._write(t.module) +## self._write(" import ") +## for i, a in enumerate(t.names): +## if i == 0: +## self._write(", ") +## self._write(a.name) +## if a.asname: +## self._write(" as "+a.asname) +## # XXX(jpe) what is level for? 
+## +# +# def _Break(self, t): +# self._fill("break") +# +# def _Continue(self, t): +# self._fill("continue") +# +# def _Delete(self, t): +# self._fill("del ") +# self._dispatch(t.targets) +# +# def _Assert(self, t): +# self._fill("assert ") +# self._dispatch(t.test) +# if t.msg: +# self._write(", ") +# self._dispatch(t.msg) +# +# def _Exec(self, t): +# self._fill("exec ") +# self._dispatch(t.body) +# if t.globals: +# self._write(" in ") +# self._dispatch(t.globals) +# if t.locals: +# self._write(", ") +# self._dispatch(t.locals) +# +# def _Print(self, t): +# self._fill("print ") +# do_comma = False +# if t.dest: +# self._write(">>") +# self._dispatch(t.dest) +# do_comma = True +# for e in t.values: +# if do_comma:self._write(", ") +# else:do_comma=True +# self._dispatch(e) +# if not t.nl: +# self._write(",") +# +# def _Global(self, t): +# self._fill("global") +# for i, n in enumerate(t.names): +# if i != 0: +# self._write(",") +# self._write(" " + n) +# +# def _Yield(self, t): +# self._fill("yield") +# if t.value: +# self._write(" (") +# self._dispatch(t.value) +# self._write(")") +# +# def _Raise(self, t): +# self._fill('raise ') +# if t.type: +# self._dispatch(t.type) +# if t.inst: +# self._write(", ") +# self._dispatch(t.inst) +# if t.tback: +# self._write(", ") +# self._dispatch(t.tback) +# +# +# def _TryFinally(self, t): +# self._fill("try") +# self._enter() +# self._dispatch(t.body) +# self._leave() +# +# self._fill("finally") +# self._enter() +# self._dispatch(t.finalbody) +# self._leave() +# +# def _excepthandler(self, t): +# self._fill("except ") +# if t.type: +# self._dispatch(t.type) +# if t.name: +# self._write(", ") +# self._dispatch(t.name) +# self._enter() +# self._dispatch(t.body) +# self._leave() +# +# def _ClassDef(self, t): +# self._write("\n") +# self._fill("class "+t.name) +# if t.bases: +# self._write("(") +# for a in t.bases: +# self._dispatch(a) +# self._write(", ") +# self._write(")") +# self._enter() +# self._dispatch(t.body) +# self._leave() +# +# def _FunctionDef(self, t): +# self._write("\n") +# for deco in t.decorators: +# self._fill("@") +# self._dispatch(deco) +# self._fill("def "+t.name + "(") +# self._dispatch(t.args) +# self._write(")") +# self._enter() +# self._dispatch(t.body) +# self._leave() +# +# def _For(self, t): +# self._fill("for ") +# self._dispatch(t.target) +# self._write(" in ") +# self._dispatch(t.iter) +# self._enter() +# self._dispatch(t.body) +# self._leave() +# if t.orelse: +# self._fill("else") +# self._enter() +# self._dispatch(t.orelse) +# self._leave +# +# def _While(self, t): +# self._fill("while ") +# self._dispatch(t.test) +# self._enter() +# self._dispatch(t.body) +# self._leave() +# if t.orelse: +# self._fill("else") +# self._enter() +# self._dispatch(t.orelse) +# self._leave +# +# # expr +# def _Str(self, tree): +# self._write(repr(tree.s)) +## +# def _Repr(self, t): +# self._write("`") +# self._dispatch(t.value) +# self._write("`") +# +# def _Num(self, t): +# self._write(repr(t.n)) +# +# def _ListComp(self, t): +# self._write("[") +# self._dispatch(t.elt) +# for gen in t.generators: +# self._dispatch(gen) +# self._write("]") +# +# def _GeneratorExp(self, t): +# self._write("(") +# self._dispatch(t.elt) +# for gen in t.generators: +# self._dispatch(gen) +# self._write(")") +# +# def _comprehension(self, t): +# self._write(" for ") +# self._dispatch(t.target) +# self._write(" in ") +# self._dispatch(t.iter) +# for if_clause in t.ifs: +# self._write(" if ") +# self._dispatch(if_clause) +# +# def _IfExp(self, t): +# 
self._dispatch(t.body) +# self._write(" if ") +# self._dispatch(t.test) +# if t.orelse: +# self._write(" else ") +# self._dispatch(t.orelse) +# +# unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"} +# def _UnaryOp(self, t): +# self._write(self.unop[t.op.__class__.__name__]) +# self._write("(") +# self._dispatch(t.operand) +# self._write(")") +# +# binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%", +# "LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&", +# "FloorDiv":"//", "Pow": "**"} +# def _BinOp(self, t): +# self._write("(") +# self._dispatch(t.left) +# self._write(")" + self.binop[t.op.__class__.__name__] + "(") +# self._dispatch(t.right) +# self._write(")") +# +# boolops = {_ast.And: 'and', _ast.Or: 'or'} +# def _BoolOp(self, t): +# self._write("(") +# self._dispatch(t.values[0]) +# for v in t.values[1:]: +# self._write(" %s " % self.boolops[t.op.__class__]) +# self._dispatch(v) +# self._write(")") +# +# def _Attribute(self,t): +# self._dispatch(t.value) +# self._write(".") +# self._write(t.attr) +# +## def _Call(self, t): +## self._dispatch(t.func) +## self._write("(") +## comma = False +## for e in t.args: +## if comma: self._write(", ") +## else: comma = True +## self._dispatch(e) +## for e in t.keywords: +## if comma: self._write(", ") +## else: comma = True +## self._dispatch(e) +## if t.starargs: +## if comma: self._write(", ") +## else: comma = True +## self._write("*") +## self._dispatch(t.starargs) +## if t.kwargs: +## if comma: self._write(", ") +## else: comma = True +## self._write("**") +## self._dispatch(t.kwargs) +## self._write(")") +# +# # slice +# def _Index(self, t): +# self._dispatch(t.value) +# +# def _ExtSlice(self, t): +# for i, d in enumerate(t.dims): +# if i != 0: +# self._write(': ') +# self._dispatch(d) +# +# # others +# def _arguments(self, t): +# first = True +# nonDef = len(t.args)-len(t.defaults) +# for a in t.args[0:nonDef]: +# if first:first = False +# else: self._write(", ") +# self._dispatch(a) +# for a,d in zip(t.args[nonDef:], t.defaults): +# if first:first = False +# else: self._write(", ") +# self._dispatch(a), +# self._write("=") +# self._dispatch(d) +# if t.vararg: +# if first:first = False +# else: self._write(", ") +# self._write("*"+t.vararg) +# if t.kwarg: +# if first:first = False +# else: self._write(", ") +# self._write("**"+t.kwarg) +# +## def _keyword(self, t): +## self._write(t.arg) +## self._write("=") +## self._dispatch(t.value) +# +# def _Lambda(self, t): +# self._write("lambda ") +# self._dispatch(t.args) +# self._write(": ") +# self._dispatch(t.body) + + + diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/docscrape.py python-scipy-0.8.0+dfsg1/doc/sphinxext/docscrape.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/docscrape.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/docscrape.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,499 @@ +"""Extract reference documentation from the NumPy source tree. + +""" + +import inspect +import textwrap +import re +import pydoc +from StringIO import StringIO +from warnings import warn + +class Reader(object): + """A line-based string reader. + + """ + def __init__(self, data): + """ + Parameters + ---------- + data : str + String with lines separated by '\n'. 
+ + """ + if isinstance(data,list): + self._str = data + else: + self._str = data.split('\n') # store string as list of lines + + self.reset() + + def __getitem__(self, n): + return self._str[n] + + def reset(self): + self._l = 0 # current line nr + + def read(self): + if not self.eof(): + out = self[self._l] + self._l += 1 + return out + else: + return '' + + def seek_next_non_empty_line(self): + for l in self[self._l:]: + if l.strip(): + break + else: + self._l += 1 + + def eof(self): + return self._l >= len(self._str) + + def read_to_condition(self, condition_func): + start = self._l + for line in self[start:]: + if condition_func(line): + return self[start:self._l] + self._l += 1 + if self.eof(): + return self[start:self._l+1] + return [] + + def read_to_next_empty_line(self): + self.seek_next_non_empty_line() + def is_empty(line): + return not line.strip() + return self.read_to_condition(is_empty) + + def read_to_next_unindented_line(self): + def is_unindented(line): + return (line.strip() and (len(line.lstrip()) == len(line))) + return self.read_to_condition(is_unindented) + + def peek(self,n=0): + if self._l + n < len(self._str): + return self[self._l + n] + else: + return '' + + def is_empty(self): + return not ''.join(self._str).strip() + + +class NumpyDocString(object): + def __init__(self, docstring, config={}): + docstring = textwrap.dedent(docstring).split('\n') + + self._doc = Reader(docstring) + self._parsed_data = { + 'Signature': '', + 'Summary': [''], + 'Extended Summary': [], + 'Parameters': [], + 'Returns': [], + 'Raises': [], + 'Warns': [], + 'Other Parameters': [], + 'Attributes': [], + 'Methods': [], + 'See Also': [], + 'Notes': [], + 'Warnings': [], + 'References': '', + 'Examples': '', + 'index': {} + } + + self._parse() + + def __getitem__(self,key): + return self._parsed_data[key] + + def __setitem__(self,key,val): + if not self._parsed_data.has_key(key): + warn("Unknown section %s" % key) + else: + self._parsed_data[key] = val + + def _is_at_section(self): + self._doc.seek_next_non_empty_line() + + if self._doc.eof(): + return False + + l1 = self._doc.peek().strip() # e.g. Parameters + + if l1.startswith('.. 
index::'): + return True + + l2 = self._doc.peek(1).strip() # ---------- or ========== + return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1)) + + def _strip(self,doc): + i = 0 + j = 0 + for i,line in enumerate(doc): + if line.strip(): break + + for j,line in enumerate(doc[::-1]): + if line.strip(): break + + return doc[i:len(doc)-j] + + def _read_to_next_section(self): + section = self._doc.read_to_next_empty_line() + + while not self._is_at_section() and not self._doc.eof(): + if not self._doc.peek(-1).strip(): # previous line was empty + section += [''] + + section += self._doc.read_to_next_empty_line() + + return section + + def _read_sections(self): + while not self._doc.eof(): + data = self._read_to_next_section() + name = data[0].strip() + + if name.startswith('..'): # index section + yield name, data[1:] + elif len(data) < 2: + yield StopIteration + else: + yield name, self._strip(data[2:]) + + def _parse_param_list(self,content): + r = Reader(content) + params = [] + while not r.eof(): + header = r.read().strip() + if ' : ' in header: + arg_name, arg_type = header.split(' : ')[:2] + else: + arg_name, arg_type = header, '' + + desc = r.read_to_next_unindented_line() + desc = dedent_lines(desc) + + params.append((arg_name,arg_type,desc)) + + return params + + + _name_rgx = re.compile(r"^\s*(:(?P\w+):`(?P[a-zA-Z0-9_.-]+)`|" + r" (?P[a-zA-Z0-9_.-]+))\s*", re.X) + def _parse_see_also(self, content): + """ + func_name : Descriptive text + continued text + another_func_name : Descriptive text + func_name1, func_name2, :meth:`func_name`, func_name3 + + """ + items = [] + + def parse_item_name(text): + """Match ':role:`name`' or 'name'""" + m = self._name_rgx.match(text) + if m: + g = m.groups() + if g[1] is None: + return g[3], None + else: + return g[2], g[1] + raise ValueError("%s is not a item name" % text) + + def push_item(name, rest): + if not name: + return + name, role = parse_item_name(name) + items.append((name, list(rest), role)) + del rest[:] + + current_func = None + rest = [] + + for line in content: + if not line.strip(): continue + + m = self._name_rgx.match(line) + if m and line[m.end():].strip().startswith(':'): + push_item(current_func, rest) + current_func, line = line[:m.end()], line[m.end():] + rest = [line.split(':', 1)[1].strip()] + if not rest[0]: + rest = [] + elif not line.startswith(' '): + push_item(current_func, rest) + current_func = None + if ',' in line: + for func in line.split(','): + push_item(func, []) + elif line.strip(): + current_func = line + elif current_func is not None: + rest.append(line.strip()) + push_item(current_func, rest) + return items + + def _parse_index(self, section, content): + """ + .. 
index: default + :refguide: something, else, and more + + """ + def strip_each_in(lst): + return [s.strip() for s in lst] + + out = {} + section = section.split('::') + if len(section) > 1: + out['default'] = strip_each_in(section[1].split(','))[0] + for line in content: + line = line.split(':') + if len(line) > 2: + out[line[1]] = strip_each_in(line[2].split(',')) + return out + + def _parse_summary(self): + """Grab signature (if given) and summary""" + if self._is_at_section(): + return + + summary = self._doc.read_to_next_empty_line() + summary_str = " ".join([s.strip() for s in summary]).strip() + if re.compile('^([\w., ]+=)?\s*[\w\.]+\(.*\)$').match(summary_str): + self['Signature'] = summary_str + if not self._is_at_section(): + self['Summary'] = self._doc.read_to_next_empty_line() + else: + self['Summary'] = summary + + if not self._is_at_section(): + self['Extended Summary'] = self._read_to_next_section() + + def _parse(self): + self._doc.reset() + self._parse_summary() + + for (section,content) in self._read_sections(): + if not section.startswith('..'): + section = ' '.join([s.capitalize() for s in section.split(' ')]) + if section in ('Parameters', 'Attributes', 'Methods', + 'Returns', 'Raises', 'Warns'): + self[section] = self._parse_param_list(content) + elif section.startswith('.. index::'): + self['index'] = self._parse_index(section, content) + elif section == 'See Also': + self['See Also'] = self._parse_see_also(content) + else: + self[section] = content + + # string conversion routines + + def _str_header(self, name, symbol='-'): + return [name, len(name)*symbol] + + def _str_indent(self, doc, indent=4): + out = [] + for line in doc: + out += [' '*indent + line] + return out + + def _str_signature(self): + if self['Signature']: + return [self['Signature'].replace('*','\*')] + [''] + else: + return [''] + + def _str_summary(self): + if self['Summary']: + return self['Summary'] + [''] + else: + return [] + + def _str_extended_summary(self): + if self['Extended Summary']: + return self['Extended Summary'] + [''] + else: + return [] + + def _str_param_list(self, name): + out = [] + if self[name]: + out += self._str_header(name) + for param,param_type,desc in self[name]: + out += ['%s : %s' % (param, param_type)] + out += self._str_indent(desc) + out += [''] + return out + + def _str_section(self, name): + out = [] + if self[name]: + out += self._str_header(name) + out += self[name] + out += [''] + return out + + def _str_see_also(self, func_role): + if not self['See Also']: return [] + out = [] + out += self._str_header("See Also") + last_had_desc = True + for func, desc, role in self['See Also']: + if role: + link = ':%s:`%s`' % (role, func) + elif func_role: + link = ':%s:`%s`' % (func_role, func) + else: + link = "`%s`_" % func + if desc or last_had_desc: + out += [''] + out += [link] + else: + out[-1] += ", %s" % link + if desc: + out += self._str_indent([' '.join(desc)]) + last_had_desc = True + else: + last_had_desc = False + out += [''] + return out + + def _str_index(self): + idx = self['index'] + out = [] + out += ['.. 
index:: %s' % idx.get('default','')] + for section, references in idx.iteritems(): + if section == 'default': + continue + out += [' :%s: %s' % (section, ', '.join(references))] + return out + + def __str__(self, func_role=''): + out = [] + out += self._str_signature() + out += self._str_summary() + out += self._str_extended_summary() + for param_list in ('Parameters','Returns','Raises'): + out += self._str_param_list(param_list) + out += self._str_section('Warnings') + out += self._str_see_also(func_role) + for s in ('Notes','References','Examples'): + out += self._str_section(s) + for param_list in ('Attributes', 'Methods'): + out += self._str_param_list(param_list) + out += self._str_index() + return '\n'.join(out) + + +def indent(str,indent=4): + indent_str = ' '*indent + if str is None: + return indent_str + lines = str.split('\n') + return '\n'.join(indent_str + l for l in lines) + +def dedent_lines(lines): + """Deindent a list of lines maximally""" + return textwrap.dedent("\n".join(lines)).split("\n") + +def header(text, style='-'): + return text + '\n' + style*len(text) + '\n' + + +class FunctionDoc(NumpyDocString): + def __init__(self, func, role='func', doc=None, config={}): + self._f = func + self._role = role # e.g. "func" or "meth" + if doc is None: + doc = inspect.getdoc(func) or '' + try: + NumpyDocString.__init__(self, doc) + except ValueError, e: + print '*'*78 + print "ERROR: '%s' while parsing `%s`" % (e, self._f) + print '*'*78 + #print "Docstring follows:" + #print doclines + #print '='*78 + + if not self['Signature']: + func, func_name = self.get_func() + try: + # try to read signature + argspec = inspect.getargspec(func) + argspec = inspect.formatargspec(*argspec) + argspec = argspec.replace('*','\*') + signature = '%s%s' % (func_name, argspec) + except TypeError, e: + signature = '%s()' % func_name + self['Signature'] = signature + + def get_func(self): + func_name = getattr(self._f, '__name__', self.__class__.__name__) + if inspect.isclass(self._f): + func = getattr(self._f, '__call__', self._f.__init__) + else: + func = self._f + return func, func_name + + def __str__(self): + out = '' + + func, func_name = self.get_func() + signature = self['Signature'].replace('*', '\*') + + roles = {'func': 'function', + 'meth': 'method'} + + if self._role: + if not roles.has_key(self._role): + print "Warning: invalid role %s" % self._role + out += '.. %s:: %s\n \n\n' % (roles.get(self._role,''), + func_name) + + out += super(FunctionDoc, self).__str__(func_role=self._role) + return out + + +class ClassDoc(NumpyDocString): + def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc, + config={}): + if not inspect.isclass(cls): + raise ValueError("Initialise using a class. Got %r" % cls) + self._cls = cls + + if modulename and not modulename.endswith('.'): + modulename += '.' 
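Illustrative usage sketch, not part of the patch: with doc/sphinxext/ on sys.path, the NumpyDocString class defined above turns a NumPy-standard docstring into the section mapping initialised in its constructor ('Summary', 'Parameters', 'Returns', ...). For example:

    from docscrape import NumpyDocString

    example = NumpyDocString(
        "Add two numbers.\n"
        "\n"
        "Parameters\n"
        "----------\n"
        "a : int\n"
        "    First operand.\n"
        "b : int\n"
        "    Second operand.\n")

    print example['Summary']      # ['Add two numbers.']
    print example['Parameters']   # [('a', 'int', ['First operand.']),
                                  #  ('b', 'int', ['Second operand.'])]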
+ self._mod = modulename + self._name = cls.__name__ + self._func_doc = func_doc + + if doc is None: + doc = pydoc.getdoc(cls) + + NumpyDocString.__init__(self, doc) + + if config.get('show_class_members', True): + if not self['Methods']: + self['Methods'] = [(name, '', '') + for name in sorted(self.methods)] + if not self['Attributes']: + self['Attributes'] = [(name, '', '') + for name in sorted(self.properties)] + + @property + def methods(self): + return [name for name,func in inspect.getmembers(self._cls) + if not name.startswith('_') and callable(func)] + + @property + def properties(self): + return [name for name,func in inspect.getmembers(self._cls) + if not name.startswith('_') and func is None] diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/docscrape_sphinx.py python-scipy-0.8.0+dfsg1/doc/sphinxext/docscrape_sphinx.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/docscrape_sphinx.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/docscrape_sphinx.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,226 @@ +import re, inspect, textwrap, pydoc +import sphinx +from docscrape import NumpyDocString, FunctionDoc, ClassDoc + +class SphinxDocString(NumpyDocString): + def __init__(self, docstring, config={}): + self.use_plots = config.get('use_plots', False) + NumpyDocString.__init__(self, docstring, config=config) + + # string conversion routines + def _str_header(self, name, symbol='`'): + return ['.. rubric:: ' + name, ''] + + def _str_field_list(self, name): + return [':' + name + ':'] + + def _str_indent(self, doc, indent=4): + out = [] + for line in doc: + out += [' '*indent + line] + return out + + def _str_signature(self): + return [''] + if self['Signature']: + return ['``%s``' % self['Signature']] + [''] + else: + return [''] + + def _str_summary(self): + return self['Summary'] + [''] + + def _str_extended_summary(self): + return self['Extended Summary'] + [''] + + def _str_param_list(self, name): + out = [] + if self[name]: + out += self._str_field_list(name) + out += [''] + for param,param_type,desc in self[name]: + out += self._str_indent(['**%s** : %s' % (param.strip(), + param_type)]) + out += [''] + out += self._str_indent(desc,8) + out += [''] + return out + + @property + def _obj(self): + if hasattr(self, '_cls'): + return self._cls + elif hasattr(self, '_f'): + return self._f + return None + + def _str_member_list(self, name): + """ + Generate a member listing, autosummary:: table where possible, + and a table where not. + + """ + out = [] + if self[name]: + out += ['.. rubric:: %s' % name, ''] + prefix = getattr(self, '_name', '') + + if prefix: + prefix = '~%s.' % prefix + + autosum = [] + others = [] + for param, param_type, desc in self[name]: + param = param.strip() + if not self._obj or hasattr(self._obj, param): + autosum += [" %s%s" % (prefix, param)] + else: + others.append((param, param_type, desc)) + + if autosum: + out += ['.. 
autosummary::', ' :toctree:', ''] + out += autosum + + if others: + maxlen_0 = max([len(x[0]) for x in others]) + maxlen_1 = max([len(x[1]) for x in others]) + hdr = "="*maxlen_0 + " " + "="*maxlen_1 + " " + "="*10 + fmt = '%%%ds %%%ds ' % (maxlen_0, maxlen_1) + n_indent = maxlen_0 + maxlen_1 + 4 + out += [hdr] + for param, param_type, desc in others: + out += [fmt % (param.strip(), param_type)] + out += self._str_indent(desc, n_indent) + out += [hdr] + out += [''] + return out + + def _str_section(self, name): + out = [] + if self[name]: + out += self._str_header(name) + out += [''] + content = textwrap.dedent("\n".join(self[name])).split("\n") + out += content + out += [''] + return out + + def _str_see_also(self, func_role): + out = [] + if self['See Also']: + see_also = super(SphinxDocString, self)._str_see_also(func_role) + out = ['.. seealso::', ''] + out += self._str_indent(see_also[2:]) + return out + + def _str_warnings(self): + out = [] + if self['Warnings']: + out = ['.. warning::', ''] + out += self._str_indent(self['Warnings']) + return out + + def _str_index(self): + idx = self['index'] + out = [] + if len(idx) == 0: + return out + + out += ['.. index:: %s' % idx.get('default','')] + for section, references in idx.iteritems(): + if section == 'default': + continue + elif section == 'refguide': + out += [' single: %s' % (', '.join(references))] + else: + out += [' %s: %s' % (section, ','.join(references))] + return out + + def _str_references(self): + out = [] + if self['References']: + out += self._str_header('References') + if isinstance(self['References'], str): + self['References'] = [self['References']] + out.extend(self['References']) + out += [''] + # Latex collects all references to a separate bibliography, + # so we need to insert links to it + if sphinx.__version__ >= "0.6": + out += ['.. only:: latex',''] + else: + out += ['.. latexonly::',''] + items = [] + for line in self['References']: + m = re.match(r'.. \[([a-z0-9._-]+)\]', line, re.I) + if m: + items.append(m.group(1)) + out += [' ' + ", ".join(["[%s]_" % item for item in items]), ''] + return out + + def _str_examples(self): + examples_str = "\n".join(self['Examples']) + + if (self.use_plots and 'import matplotlib' in examples_str + and 'plot::' not in examples_str): + out = [] + out += self._str_header('Examples') + out += ['.. 
plot::', ''] + out += self._str_indent(self['Examples']) + out += [''] + return out + else: + return self._str_section('Examples') + + def __str__(self, indent=0, func_role="obj"): + out = [] + out += self._str_signature() + out += self._str_index() + [''] + out += self._str_summary() + out += self._str_extended_summary() + for param_list in ('Parameters', 'Returns', 'Raises'): + out += self._str_param_list(param_list) + out += self._str_warnings() + out += self._str_see_also(func_role) + out += self._str_section('Notes') + out += self._str_references() + out += self._str_examples() + for param_list in ('Attributes', 'Methods'): + out += self._str_member_list(param_list) + out = self._str_indent(out,indent) + return '\n'.join(out) + +class SphinxFunctionDoc(SphinxDocString, FunctionDoc): + def __init__(self, obj, doc=None, config={}): + self.use_plots = config.get('use_plots', False) + FunctionDoc.__init__(self, obj, doc=doc, config=config) + +class SphinxClassDoc(SphinxDocString, ClassDoc): + def __init__(self, obj, doc=None, func_doc=None, config={}): + self.use_plots = config.get('use_plots', False) + ClassDoc.__init__(self, obj, doc=doc, func_doc=None, config=config) + +class SphinxObjDoc(SphinxDocString): + def __init__(self, obj, doc=None, config={}): + self._f = obj + SphinxDocString.__init__(self, doc, config=config) + +def get_doc_object(obj, what=None, doc=None, config={}): + if what is None: + if inspect.isclass(obj): + what = 'class' + elif inspect.ismodule(obj): + what = 'module' + elif callable(obj): + what = 'function' + else: + what = 'object' + if what == 'class': + return SphinxClassDoc(obj, func_doc=SphinxFunctionDoc, doc=doc, + config=config) + elif what in ('function', 'method'): + return SphinxFunctionDoc(obj, doc=doc, config=config) + else: + if doc is None: + doc = pydoc.getdoc(obj) + return SphinxObjDoc(obj, doc, config=config) diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/__init__.py python-scipy-0.8.0+dfsg1/doc/sphinxext/__init__.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/__init__.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/__init__.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1 @@ +from numpydoc import setup diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/LICENSE.txt python-scipy-0.8.0+dfsg1/doc/sphinxext/LICENSE.txt --- python-scipy-0.7.2+dfsg1/doc/sphinxext/LICENSE.txt 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/LICENSE.txt 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,97 @@ +------------------------------------------------------------------------------- + The files + - numpydoc.py + - autosummary.py + - autosummary_generate.py + - docscrape.py + - docscrape_sphinx.py + - phantom_import.py + have the following license: + +Copyright (C) 2008 Stefan van der Walt , Pauli Virtanen + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + 1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. 
+ +THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR +IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, +INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, +STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING +IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +------------------------------------------------------------------------------- + The files + - compiler_unparse.py + - comment_eater.py + - traitsdoc.py + have the following license: + +This software is OSI Certified Open Source Software. +OSI Certified is a certification mark of the Open Source Initiative. + +Copyright (c) 2006, Enthought, Inc. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + * Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + * Neither the name of Enthought, Inc. nor the names of its contributors may + be used to endorse or promote products derived from this software without + specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +------------------------------------------------------------------------------- + The files + - only_directives.py + - plot_directive.py + originate from Matplotlib (http://matplotlib.sf.net/) which has + the following license: + +Copyright (c) 2002-2008 John D. Hunter; All Rights Reserved. + +1. This LICENSE AGREEMENT is between John D. Hunter (“JDH”), and the Individual or Organization (“Licensee”) accessing and otherwise using matplotlib software in source or binary form and its associated documentation. + +2. Subject to the terms and conditions of this License Agreement, JDH hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use matplotlib 0.98.3 alone or in any derivative version, provided, however, that JDH’s License Agreement and JDH’s notice of copyright, i.e., “Copyright (c) 2002-2008 John D. 
Hunter; All Rights Reserved” are retained in matplotlib 0.98.3 alone or in any derivative version prepared by Licensee. + +3. In the event Licensee prepares a derivative work that is based on or incorporates matplotlib 0.98.3 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to matplotlib 0.98.3. + +4. JDH is making matplotlib 0.98.3 available to Licensee on an “AS IS” basis. JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB 0.98.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. + +5. JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB 0.98.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING MATPLOTLIB 0.98.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. + +6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. + +7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between JDH and Licensee. This License Agreement does not grant permission to use JDH trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. + +8. By copying, installing or otherwise using matplotlib 0.98.3, Licensee agrees to be bound by the terms and conditions of this License Agreement. + diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/MANIFEST.in python-scipy-0.8.0+dfsg1/doc/sphinxext/MANIFEST.in --- python-scipy-0.7.2+dfsg1/doc/sphinxext/MANIFEST.in 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/MANIFEST.in 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,2 @@ +recursive-include tests *.py +include *.txt diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/numpydoc.py python-scipy-0.8.0+dfsg1/doc/sphinxext/numpydoc.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/numpydoc.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/numpydoc.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,196 @@ +""" +======== +numpydoc +======== + +Sphinx extension that handles docstrings in the Numpy standard format. [1] + +It will: + +- Convert Parameters etc. sections to field lists. +- Convert See Also section to a See also entry. +- Renumber references. +- Extract the signature from the docstring, if it can't be determined otherwise. + +.. 
[1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard + +""" + +import os, re, pydoc +from docscrape_sphinx import get_doc_object, SphinxDocString +from sphinx.util.compat import Directive +import inspect + +def mangle_docstrings(app, what, name, obj, options, lines, + reference_offset=[0]): + + cfg = dict(use_plots=app.config.numpydoc_use_plots, + show_class_members=app.config.numpydoc_show_class_members) + + if what == 'module': + # Strip top title + title_re = re.compile(ur'^\s*[#*=]{4,}\n[a-z0-9 -]+\n[#*=]{4,}\s*', + re.I|re.S) + lines[:] = title_re.sub(u'', u"\n".join(lines)).split(u"\n") + else: + doc = get_doc_object(obj, what, u"\n".join(lines), config=cfg) + lines[:] = unicode(doc).split(u"\n") + + if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \ + obj.__name__: + if hasattr(obj, '__module__'): + v = dict(full_name=u"%s.%s" % (obj.__module__, obj.__name__)) + else: + v = dict(full_name=obj.__name__) + lines += [u'', u'.. htmlonly::', ''] + lines += [u' %s' % x for x in + (app.config.numpydoc_edit_link % v).split("\n")] + + # replace reference numbers so that there are no duplicates + references = [] + for line in lines: + line = line.strip() + m = re.match(ur'^.. \[([a-z0-9_.-])\]', line, re.I) + if m: + references.append(m.group(1)) + + # start renaming from the longest string, to avoid overwriting parts + references.sort(key=lambda x: -len(x)) + if references: + for i, line in enumerate(lines): + for r in references: + if re.match(ur'^\d+$', r): + new_r = u"R%d" % (reference_offset[0] + int(r)) + else: + new_r = u"%s%d" % (r, reference_offset[0]) + lines[i] = lines[i].replace(u'[%s]_' % r, + u'[%s]_' % new_r) + lines[i] = lines[i].replace(u'.. [%s]' % r, + u'.. [%s]' % new_r) + + reference_offset[0] += len(references) + +def mangle_signature(app, what, name, obj, options, sig, retann): + # Do not try to inspect classes that don't define `__init__` + if (inspect.isclass(obj) and + (not hasattr(obj, '__init__') or + 'initializes x; see ' in pydoc.getdoc(obj.__init__))): + return '', '' + + if not (callable(obj) or hasattr(obj, '__argspec_is_invalid_')): return + if not hasattr(obj, '__doc__'): return + + doc = SphinxDocString(pydoc.getdoc(obj)) + if doc['Signature']: + sig = re.sub(u"^[^(]*", u"", doc['Signature']) + return sig, u'' + +def initialize(app): + try: + app.connect('autodoc-process-signature', mangle_signature) + except: + monkeypatch_sphinx_ext_autodoc() + +def setup(app, get_doc_object_=get_doc_object): + global get_doc_object + get_doc_object = get_doc_object_ + + app.connect('autodoc-process-docstring', mangle_docstrings) + app.connect('builder-inited', initialize) + app.add_config_value('numpydoc_edit_link', None, False) + app.add_config_value('numpydoc_use_plots', None, False) + app.add_config_value('numpydoc_show_class_members', True, True) + + # Extra mangling directives + name_type = { + 'cfunction': 'function', + 'cmember': 'attribute', + 'cmacro': 'function', + 'ctype': 'class', + 'cvar': 'object', + 'class': 'class', + 'function': 'function', + 'attribute': 'attribute', + 'method': 'function', + 'staticmethod': 'function', + 'classmethod': 'function', + } + + for name, objtype in name_type.items(): + app.add_directive('np-' + name, wrap_mangling_directive(name, objtype)) + +#------------------------------------------------------------------------------ +# Input-mangling directives +#------------------------------------------------------------------------------ +from docutils.statemachine import ViewList + +def 
get_directive(name): + from docutils.parsers.rst import directives + try: + return directives.directive(name, None, None)[0] + except AttributeError: + pass + try: + # docutils 0.4 + return directives._directives[name] + except (AttributeError, KeyError): + raise RuntimeError("No directive named '%s' found" % name) + +def wrap_mangling_directive(base_directive_name, objtype): + base_directive = get_directive(base_directive_name) + + if inspect.isfunction(base_directive): + base_func = base_directive + class base_directive(Directive): + required_arguments = base_func.arguments[0] + optional_arguments = base_func.arguments[1] + final_argument_whitespace = base_func.arguments[2] + option_spec = base_func.options + has_content = base_func.content + def run(self): + return base_func(self.name, self.arguments, self.options, + self.content, self.lineno, + self.content_offset, self.block_text, + self.state, self.state_machine) + + class directive(base_directive): + def run(self): + env = self.state.document.settings.env + + name = None + if self.arguments: + m = re.match(r'^(.*\s+)?(.*?)(\(.*)?', self.arguments[0]) + name = m.group(2).strip() + + if not name: + name = self.arguments[0] + + lines = list(self.content) + mangle_docstrings(env.app, objtype, name, None, None, lines) + self.content = ViewList(lines, self.content.parent) + + return base_directive.run(self) + + return directive + +#------------------------------------------------------------------------------ +# Monkeypatch sphinx.ext.autodoc to accept argspecless autodocs (Sphinx < 0.5) +#------------------------------------------------------------------------------ + +def monkeypatch_sphinx_ext_autodoc(): + global _original_format_signature + import sphinx.ext.autodoc + + if sphinx.ext.autodoc.format_signature is our_format_signature: + return + + print "[numpydoc] Monkeypatching sphinx.ext.autodoc ..." + _original_format_signature = sphinx.ext.autodoc.format_signature + sphinx.ext.autodoc.format_signature = our_format_signature + +def our_format_signature(what, obj): + r = mangle_signature(None, what, None, obj, None, None, None) + if r is not None: + return r[0] + else: + return _original_format_signature(what, obj) diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/only_directives.py python-scipy-0.8.0+dfsg1/doc/sphinxext/only_directives.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/only_directives.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/only_directives.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,96 @@ +# +# A pair of directives for inserting content that will only appear in +# either html or latex. 
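Illustrative sketch, not part of the patch: the numpydoc setup() hunk above is what a Sphinx project's conf.py activates. Assuming doc/sphinxext/ is importable (the path manipulation below is an assumption, not something this patch configures), enabling it would look roughly like:

    import os, sys
    sys.path.insert(0, os.path.abspath('sphinxext'))   # assumed location of numpydoc.py

    extensions = ['numpydoc']           # Sphinx then calls numpydoc.setup(app)
    numpydoc_edit_link = None           # config values registered by setup() above
    numpydoc_use_plots = True
    numpydoc_show_class_members = True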
+# + +from docutils.nodes import Body, Element +from docutils.writers.html4css1 import HTMLTranslator +try: + from sphinx.latexwriter import LaTeXTranslator +except ImportError: + from sphinx.writers.latex import LaTeXTranslator + + import warnings + warnings.warn("The numpydoc.only_directives module is deprecated;" + "please use the only:: directive available in Sphinx >= 0.6", + DeprecationWarning, stacklevel=2) + +from docutils.parsers.rst import directives + +class html_only(Body, Element): + pass + +class latex_only(Body, Element): + pass + +def run(content, node_class, state, content_offset): + text = '\n'.join(content) + node = node_class(text) + state.nested_parse(content, content_offset, node) + return [node] + +try: + from docutils.parsers.rst import Directive +except ImportError: + from docutils.parsers.rst.directives import _directives + + def html_only_directive(name, arguments, options, content, lineno, + content_offset, block_text, state, state_machine): + return run(content, html_only, state, content_offset) + + def latex_only_directive(name, arguments, options, content, lineno, + content_offset, block_text, state, state_machine): + return run(content, latex_only, state, content_offset) + + for func in (html_only_directive, latex_only_directive): + func.content = 1 + func.options = {} + func.arguments = None + + _directives['htmlonly'] = html_only_directive + _directives['latexonly'] = latex_only_directive +else: + class OnlyDirective(Directive): + has_content = True + required_arguments = 0 + optional_arguments = 0 + final_argument_whitespace = True + option_spec = {} + + def run(self): + self.assert_has_content() + return run(self.content, self.node_class, + self.state, self.content_offset) + + class HtmlOnlyDirective(OnlyDirective): + node_class = html_only + + class LatexOnlyDirective(OnlyDirective): + node_class = latex_only + + directives.register_directive('htmlonly', HtmlOnlyDirective) + directives.register_directive('latexonly', LatexOnlyDirective) + +def setup(app): + app.add_node(html_only) + app.add_node(latex_only) + + # Add visit/depart methods to HTML-Translator: + def visit_perform(self, node): + pass + def depart_perform(self, node): + pass + def visit_ignore(self, node): + node.children = [] + def depart_ignore(self, node): + node.children = [] + + HTMLTranslator.visit_html_only = visit_perform + HTMLTranslator.depart_html_only = depart_perform + HTMLTranslator.visit_latex_only = visit_ignore + HTMLTranslator.depart_latex_only = depart_ignore + + LaTeXTranslator.visit_html_only = visit_ignore + LaTeXTranslator.depart_html_only = depart_ignore + LaTeXTranslator.visit_latex_only = visit_perform + LaTeXTranslator.depart_latex_only = depart_perform diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/phantom_import.py python-scipy-0.8.0+dfsg1/doc/sphinxext/phantom_import.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/phantom_import.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/phantom_import.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,162 @@ +""" +============== +phantom_import +============== + +Sphinx extension to make directives from ``sphinx.ext.autodoc`` and similar +extensions to use docstrings loaded from an XML file. + +This extension loads an XML file in the Pydocweb format [1] and +creates a dummy module that contains the specified docstrings. This +can be used to get the current docstrings from a Pydocweb instance +without needing to rebuild the documented module. + +.. 
[1] http://code.google.com/p/pydocweb + +""" +import imp, sys, compiler, types, os, inspect, re + +def setup(app): + app.connect('builder-inited', initialize) + app.add_config_value('phantom_import_file', None, True) + +def initialize(app): + fn = app.config.phantom_import_file + if (fn and os.path.isfile(fn)): + print "[numpydoc] Phantom importing modules from", fn, "..." + import_phantom_module(fn) + +#------------------------------------------------------------------------------ +# Creating 'phantom' modules from an XML description +#------------------------------------------------------------------------------ +def import_phantom_module(xml_file): + """ + Insert a fake Python module to sys.modules, based on a XML file. + + The XML file is expected to conform to Pydocweb DTD. The fake + module will contain dummy objects, which guarantee the following: + + - Docstrings are correct. + - Class inheritance relationships are correct (if present in XML). + - Function argspec is *NOT* correct (even if present in XML). + Instead, the function signature is prepended to the function docstring. + - Class attributes are *NOT* correct; instead, they are dummy objects. + + Parameters + ---------- + xml_file : str + Name of an XML file to read + + """ + import lxml.etree as etree + + object_cache = {} + + tree = etree.parse(xml_file) + root = tree.getroot() + + # Sort items so that + # - Base classes come before classes inherited from them + # - Modules come before their contents + all_nodes = dict([(n.attrib['id'], n) for n in root]) + + def _get_bases(node, recurse=False): + bases = [x.attrib['ref'] for x in node.findall('base')] + if recurse: + j = 0 + while True: + try: + b = bases[j] + except IndexError: break + if b in all_nodes: + bases.extend(_get_bases(all_nodes[b])) + j += 1 + return bases + + type_index = ['module', 'class', 'callable', 'object'] + + def base_cmp(a, b): + x = cmp(type_index.index(a.tag), type_index.index(b.tag)) + if x != 0: return x + + if a.tag == 'class' and b.tag == 'class': + a_bases = _get_bases(a, recurse=True) + b_bases = _get_bases(b, recurse=True) + x = cmp(len(a_bases), len(b_bases)) + if x != 0: return x + if a.attrib['id'] in b_bases: return -1 + if b.attrib['id'] in a_bases: return 1 + + return cmp(a.attrib['id'].count('.'), b.attrib['id'].count('.')) + + nodes = root.getchildren() + nodes.sort(base_cmp) + + # Create phantom items + for node in nodes: + name = node.attrib['id'] + doc = (node.text or '').decode('string-escape') + "\n" + if doc == "\n": doc = "" + + # create parent, if missing + parent = name + while True: + parent = '.'.join(parent.split('.')[:-1]) + if not parent: break + if parent in object_cache: break + obj = imp.new_module(parent) + object_cache[parent] = obj + sys.modules[parent] = obj + + # create object + if node.tag == 'module': + obj = imp.new_module(name) + obj.__doc__ = doc + sys.modules[name] = obj + elif node.tag == 'class': + bases = [object_cache[b] for b in _get_bases(node) + if b in object_cache] + bases.append(object) + init = lambda self: None + init.__doc__ = doc + obj = type(name, tuple(bases), {'__doc__': doc, '__init__': init}) + obj.__name__ = name.split('.')[-1] + elif node.tag == 'callable': + funcname = node.attrib['id'].split('.')[-1] + argspec = node.attrib.get('argspec') + if argspec: + argspec = re.sub('^[^(]*', '', argspec) + doc = "%s%s\n\n%s" % (funcname, argspec, doc) + obj = lambda: 0 + obj.__argspec_is_invalid_ = True + obj.func_name = funcname + obj.__name__ = name + obj.__doc__ = doc + if 
inspect.isclass(object_cache[parent]): + obj.__objclass__ = object_cache[parent] + else: + class Dummy(object): pass + obj = Dummy() + obj.__name__ = name + obj.__doc__ = doc + if inspect.isclass(object_cache[parent]): + obj.__get__ = lambda: None + object_cache[name] = obj + + if parent: + if inspect.ismodule(object_cache[parent]): + obj.__module__ = parent + setattr(object_cache[parent], name.split('.')[-1], obj) + + # Populate items + for node in root: + obj = object_cache.get(node.attrib['id']) + if obj is None: continue + for ref in node.findall('ref'): + if node.tag == 'class': + if ref.attrib['ref'].startswith(node.attrib['id'] + '.'): + setattr(obj, ref.attrib['name'], + object_cache.get(ref.attrib['ref'])) + else: + setattr(obj, ref.attrib['name'], + object_cache.get(ref.attrib['ref'])) diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/plot_directive.py python-scipy-0.8.0+dfsg1/doc/sphinxext/plot_directive.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/plot_directive.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/plot_directive.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,563 @@ +""" +A special directive for generating a matplotlib plot. + +.. warning:: + + This is a hacked version of plot_directive.py from Matplotlib. + It's very much subject to change! + + +Usage +----- + +Can be used like this:: + + .. plot:: examples/example.py + + .. plot:: + + import matplotlib.pyplot as plt + plt.plot([1,2,3], [4,5,6]) + + .. plot:: + + A plotting example: + + >>> import matplotlib.pyplot as plt + >>> plt.plot([1,2,3], [4,5,6]) + +The content is interpreted as doctest formatted if it has a line starting +with ``>>>``. + +The ``plot`` directive supports the options + + format : {'python', 'doctest'} + Specify the format of the input + + include-source : bool + Whether to display the source code. Default can be changed in conf.py + +and the ``image`` directive options ``alt``, ``height``, ``width``, +``scale``, ``align``, ``class``. + +Configuration options +--------------------- + +The plot directive has the following configuration options: + + plot_include_source + Default value for the include-source option + + plot_pre_code + Code that should be executed before each plot. + + plot_basedir + Base directory, to which plot:: file names are relative to. + (If None or empty, file names are relative to the directoly where + the file containing the directive is.) + + plot_formats + File formats to generate. List of tuples or strings:: + + [(suffix, dpi), suffix, ...] + + that determine the file format and the DPI. For entries whose + DPI was omitted, sensible defaults are chosen. + +TODO +---- + +* Refactor Latex output; now it's plain images, but it would be nice + to make them appear side-by-side, or in floats. 
+ +""" + +import sys, os, glob, shutil, imp, warnings, cStringIO, re, textwrap, traceback +import sphinx + +import warnings +warnings.warn("A plot_directive module is also available under " + "matplotlib.sphinxext; expect this numpydoc.plot_directive " + "module to be deprecated after relevant features have been " + "integrated there.", + FutureWarning, stacklevel=2) + + +#------------------------------------------------------------------------------ +# Registration hook +#------------------------------------------------------------------------------ + +def setup(app): + setup.app = app + setup.config = app.config + setup.confdir = app.confdir + + app.add_config_value('plot_pre_code', '', True) + app.add_config_value('plot_include_source', False, True) + app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True) + app.add_config_value('plot_basedir', None, True) + + app.add_directive('plot', plot_directive, True, (0, 1, False), + **plot_directive_options) + +#------------------------------------------------------------------------------ +# plot:: directive +#------------------------------------------------------------------------------ +from docutils.parsers.rst import directives +from docutils import nodes + +def plot_directive(name, arguments, options, content, lineno, + content_offset, block_text, state, state_machine): + return run(arguments, content, options, state_machine, state, lineno) +plot_directive.__doc__ = __doc__ + +def _option_boolean(arg): + if not arg or not arg.strip(): + # no argument given, assume used as a flag + return True + elif arg.strip().lower() in ('no', '0', 'false'): + return False + elif arg.strip().lower() in ('yes', '1', 'true'): + return True + else: + raise ValueError('"%s" unknown boolean' % arg) + +def _option_format(arg): + return directives.choice(arg, ('python', 'lisp')) + +def _option_align(arg): + return directives.choice(arg, ("top", "middle", "bottom", "left", "center", + "right")) + +plot_directive_options = {'alt': directives.unchanged, + 'height': directives.length_or_unitless, + 'width': directives.length_or_percentage_or_unitless, + 'scale': directives.nonnegative_int, + 'align': _option_align, + 'class': directives.class_option, + 'include-source': _option_boolean, + 'format': _option_format, + } + +#------------------------------------------------------------------------------ +# Generating output +#------------------------------------------------------------------------------ + +from docutils import nodes, utils + +try: + # Sphinx depends on either Jinja or Jinja2 + import jinja2 + def format_template(template, **kw): + return jinja2.Template(template).render(**kw) +except ImportError: + import jinja + def format_template(template, **kw): + return jinja.from_string(template, **kw) + +TEMPLATE = """ +{{ source_code }} + +{{ only_html }} + + {% if source_code %} + (`Source code <{{ source_link }}>`__) + + .. admonition:: Output + :class: plot-output + + {% endif %} + + {% for img in images %} + .. 
figure:: {{ build_dir }}/{{ img.basename }}.png + {%- for option in options %} + {{ option }} + {% endfor %} + + ( + {%- if not source_code -%} + `Source code <{{source_link}}>`__ + {%- for fmt in img.formats -%} + , `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__ + {%- endfor -%} + {%- else -%} + {%- for fmt in img.formats -%} + {%- if not loop.first -%}, {% endif -%} + `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__ + {%- endfor -%} + {%- endif -%} + ) + {% endfor %} + +{{ only_latex }} + + {% for img in images %} + .. image:: {{ build_dir }}/{{ img.basename }}.pdf + {% endfor %} + +""" + +class ImageFile(object): + def __init__(self, basename, dirname): + self.basename = basename + self.dirname = dirname + self.formats = [] + + def filename(self, format): + return os.path.join(self.dirname, "%s.%s" % (self.basename, format)) + + def filenames(self): + return [self.filename(fmt) for fmt in self.formats] + +def run(arguments, content, options, state_machine, state, lineno): + if arguments and content: + raise RuntimeError("plot:: directive can't have both args and content") + + document = state_machine.document + config = document.settings.env.config + + options.setdefault('include-source', config.plot_include_source) + + # determine input + rst_file = document.attributes['source'] + rst_dir = os.path.dirname(rst_file) + + if arguments: + if not config.plot_basedir: + source_file_name = os.path.join(rst_dir, + directives.uri(arguments[0])) + else: + source_file_name = os.path.join(setup.confdir, config.plot_basedir, + directives.uri(arguments[0])) + code = open(source_file_name, 'r').read() + output_base = os.path.basename(source_file_name) + else: + source_file_name = rst_file + code = textwrap.dedent("\n".join(map(str, content))) + counter = document.attributes.get('_plot_counter', 0) + 1 + document.attributes['_plot_counter'] = counter + base, ext = os.path.splitext(os.path.basename(source_file_name)) + output_base = '%s-%d.py' % (base, counter) + + base, source_ext = os.path.splitext(output_base) + if source_ext in ('.py', '.rst', '.txt'): + output_base = base + else: + source_ext = '' + + # ensure that LaTeX includegraphics doesn't choke in foo.bar.pdf filenames + output_base = output_base.replace('.', '-') + + # is it in doctest format? 
+ is_doctest = contains_doctest(code) + if options.has_key('format'): + if options['format'] == 'python': + is_doctest = False + else: + is_doctest = True + + # determine output directory name fragment + source_rel_name = relpath(source_file_name, setup.confdir) + source_rel_dir = os.path.dirname(source_rel_name) + while source_rel_dir.startswith(os.path.sep): + source_rel_dir = source_rel_dir[1:] + + # build_dir: where to place output files (temporarily) + build_dir = os.path.join(os.path.dirname(setup.app.doctreedir), + 'plot_directive', + source_rel_dir) + if not os.path.exists(build_dir): + os.makedirs(build_dir) + + # output_dir: final location in the builder's directory + dest_dir = os.path.abspath(os.path.join(setup.app.builder.outdir, + source_rel_dir)) + + # how to link to files from the RST file + dest_dir_link = os.path.join(relpath(setup.confdir, rst_dir), + source_rel_dir).replace(os.path.sep, '/') + build_dir_link = relpath(build_dir, rst_dir).replace(os.path.sep, '/') + source_link = dest_dir_link + '/' + output_base + source_ext + + # make figures + try: + images = makefig(code, source_file_name, build_dir, output_base, + config) + except PlotError, err: + reporter = state.memo.reporter + sm = reporter.system_message( + 3, "Exception occurred in plotting %s: %s" % (output_base, err), + line=lineno) + return [sm] + + # generate output restructuredtext + if options['include-source']: + if is_doctest: + lines = [''] + lines += [row.rstrip() for row in code.split('\n')] + else: + lines = ['.. code-block:: python', ''] + lines += [' %s' % row.rstrip() for row in code.split('\n')] + source_code = "\n".join(lines) + else: + source_code = "" + + opts = [':%s: %s' % (key, val) for key, val in options.items() + if key in ('alt', 'height', 'width', 'scale', 'align', 'class')] + + if sphinx.__version__ >= "0.6": + only_html = ".. only:: html" + only_latex = ".. only:: latex" + else: + only_html = ".. htmlonly::" + only_latex = ".. latexonly::" + + result = format_template( + TEMPLATE, + dest_dir=dest_dir_link, + build_dir=build_dir_link, + source_link=source_link, + only_html=only_html, + only_latex=only_latex, + options=opts, + images=images, + source_code=source_code) + + lines = result.split("\n") + if len(lines): + state_machine.insert_input( + lines, state_machine.input_lines.source(0)) + + # copy image files to builder's output directory + if not os.path.exists(dest_dir): + os.makedirs(dest_dir) + + for img in images: + for fn in img.filenames(): + shutil.copyfile(fn, os.path.join(dest_dir, os.path.basename(fn))) + + # copy script (if necessary) + if source_file_name == rst_file: + target_name = os.path.join(dest_dir, output_base + source_ext) + f = open(target_name, 'w') + f.write(unescape_doctest(code)) + f.close() + + return [] + + +#------------------------------------------------------------------------------ +# Run code and capture figures +#------------------------------------------------------------------------------ + +import matplotlib +matplotlib.use('Agg') +import matplotlib.pyplot as plt +import matplotlib.image as image +from matplotlib import _pylab_helpers + +import exceptions + +def contains_doctest(text): + try: + # check if it's valid Python as-is + compile(text, '', 'exec') + return False + except SyntaxError: + pass + r = re.compile(r'^\s*>>>', re.M) + m = r.search(text) + return bool(m) + +def unescape_doctest(text): + """ + Extract code from a piece of text, which contains either Python code + or doctests. 
+ + """ + if not contains_doctest(text): + return text + + code = "" + for line in text.split("\n"): + m = re.match(r'^\s*(>>>|\.\.\.) (.*)$', line) + if m: + code += m.group(2) + "\n" + elif line.strip(): + code += "# " + line.strip() + "\n" + else: + code += "\n" + return code + +class PlotError(RuntimeError): + pass + +def run_code(code, code_path): + # Change the working directory to the directory of the example, so + # it can get at its data files, if any. + pwd = os.getcwd() + old_sys_path = list(sys.path) + if code_path is not None: + dirname = os.path.abspath(os.path.dirname(code_path)) + os.chdir(dirname) + sys.path.insert(0, dirname) + + # Redirect stdout + stdout = sys.stdout + sys.stdout = cStringIO.StringIO() + + # Reset sys.argv + old_sys_argv = sys.argv + sys.argv = [code_path] + + try: + try: + code = unescape_doctest(code) + ns = {} + exec setup.config.plot_pre_code in ns + exec code in ns + except (Exception, SystemExit), err: + raise PlotError(traceback.format_exc()) + finally: + os.chdir(pwd) + sys.argv = old_sys_argv + sys.path[:] = old_sys_path + sys.stdout = stdout + return ns + + +#------------------------------------------------------------------------------ +# Generating figures +#------------------------------------------------------------------------------ + +def out_of_date(original, derived): + """ + Returns True if derivative is out-of-date wrt original, + both of which are full file paths. + """ + return (not os.path.exists(derived) + or os.stat(derived).st_mtime < os.stat(original).st_mtime) + + +def makefig(code, code_path, output_dir, output_base, config): + """ + Run a pyplot script *code* and save the images under *output_dir* + with file names derived from *output_base* + + """ + + # -- Parse format list + default_dpi = {'png': 80, 'hires.png': 200, 'pdf': 50} + formats = [] + for fmt in config.plot_formats: + if isinstance(fmt, str): + formats.append((fmt, default_dpi.get(fmt, 80))) + elif type(fmt) in (tuple, list) and len(fmt)==2: + formats.append((str(fmt[0]), int(fmt[1]))) + else: + raise PlotError('invalid image format "%r" in plot_formats' % fmt) + + # -- Try to determine if all images already exist + + # Look for single-figure output files first + all_exists = True + img = ImageFile(output_base, output_dir) + for format, dpi in formats: + if out_of_date(code_path, img.filename(format)): + all_exists = False + break + img.formats.append(format) + + if all_exists: + return [img] + + # Then look for multi-figure output files + images = [] + all_exists = True + for i in xrange(1000): + img = ImageFile('%s_%02d' % (output_base, i), output_dir) + for format, dpi in formats: + if out_of_date(code_path, img.filename(format)): + all_exists = False + break + img.formats.append(format) + + # assume that if we have one, we have them all + if not all_exists: + all_exists = (i > 0) + break + images.append(img) + + if all_exists: + return images + + # -- We didn't find the files, so build them + + # Clear between runs + plt.close('all') + + # Run code + run_code(code, code_path) + + # Collect images + images = [] + + fig_managers = _pylab_helpers.Gcf.get_all_fig_managers() + for i, figman in enumerate(fig_managers): + if len(fig_managers) == 1: + img = ImageFile(output_base, output_dir) + else: + img = ImageFile("%s_%02d" % (output_base, i), output_dir) + images.append(img) + for format, dpi in formats: + try: + figman.canvas.figure.savefig(img.filename(format), dpi=dpi) + except exceptions.BaseException, err: + raise PlotError(traceback.format_exc()) + 
img.formats.append(format) + + return images + + +#------------------------------------------------------------------------------ +# Relative pathnames +#------------------------------------------------------------------------------ + +try: + from os.path import relpath +except ImportError: + def relpath(target, base=os.curdir): + """ + Return a relative path to the target from either the current + dir or an optional base dir. Base can be a directory + specified either as absolute or relative to current dir. + """ + + if not os.path.exists(target): + raise OSError, 'Target does not exist: '+target + + if not os.path.isdir(base): + raise OSError, 'Base is not a directory or does not exist: '+base + + base_list = (os.path.abspath(base)).split(os.sep) + target_list = (os.path.abspath(target)).split(os.sep) + + # On the windows platform the target may be on a completely + # different drive from the base. + if os.name in ['nt','dos','os2'] and base_list[0] <> target_list[0]: + raise OSError, 'Target is on a different drive to base. Target: '+target_list[0].upper()+', base: '+base_list[0].upper() + + # Starting from the filepath root, work out how much of the + # filepath is shared by base and target. + for i in range(min(len(base_list), len(target_list))): + if base_list[i] <> target_list[i]: break + else: + # If we broke out of the loop, i is pointing to the first + # differing path elements. If we didn't break out of the + # loop, i is pointing to identical path elements. + # Increment i so that in all cases it points to the first + # differing path elements. + i+=1 + + rel_list = [os.pardir] * (len(base_list)-i) + target_list[i:] + return os.path.join(*rel_list) diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/README.txt python-scipy-0.8.0+dfsg1/doc/sphinxext/README.txt --- python-scipy-0.7.2+dfsg1/doc/sphinxext/README.txt 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/README.txt 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,52 @@ +===================================== +numpydoc -- Numpy's Sphinx extensions +===================================== + +Numpy's documentation uses several custom extensions to Sphinx. These +are shipped in this ``numpydoc`` package, in case you want to make use +of them in third-party projects. + +The following extensions are available: + + - ``numpydoc``: support for the Numpy docstring format in Sphinx, and add + the code description directives ``np-function``, ``np-cfunction``, etc. + that support the Numpy docstring syntax. + + - ``numpydoc.traitsdoc``: For gathering documentation about Traits attributes. + + - ``numpydoc.plot_directives``: Adaptation of Matplotlib's ``plot::`` + directive. Note that this implementation may still undergo severe + changes or eventually be deprecated. + + - ``numpydoc.only_directives``: (DEPRECATED) + + - ``numpydoc.autosummary``: (DEPRECATED) An ``autosummary::`` directive. + Available in Sphinx 0.6.2 and (to-be) 1.0 as ``sphinx.ext.autosummary``, + and it the Sphinx 1.0 version is recommended over that included in + Numpydoc. + + +numpydoc +======== + +Numpydoc inserts a hook into Sphinx's autodoc that converts docstrings +following the Numpy/Scipy format to a form palatable to Sphinx. + +Options +------- + +The following options can be set in conf.py: + +- numpydoc_use_plots: bool + + Whether to produce ``plot::`` directives for Examples sections that + contain ``import matplotlib``. 
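
A hedged sketch, not part of the patch: the configuration values registered by ``setup()`` in plot_directive.py and the numpydoc options in this README might be set in a project's Sphinx ``conf.py`` roughly as follows; the extension list and the concrete values here are assumptions::

    # conf.py -- illustrative only; the option names come from the
    # add_config_value() calls in plot_directive.py and from this README,
    # the values and extension list are assumptions
    extensions = ['numpydoc', 'numpydoc.plot_directive']

    plot_pre_code = "import numpy as np\nimport matplotlib.pyplot as plt\n"
    plot_include_source = True    # also show the plotted source in the output
    plot_formats = ['png', ('hires.png', 200), 'pdf']   # strings or (format, dpi) pairs
    plot_basedir = None           # file arguments resolve relative to the RST file

    numpydoc_use_plots = True     # wrap matplotlib Examples in plot:: directives
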
+ +- numpydoc_show_class_members: bool + + Whether to show all members of a class in the Methods and Attributes + sections automatically. + +- numpydoc_edit_link: bool (DEPRECATED -- edit your HTML template instead) + + Whether to insert an edit link after docstrings. diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/setup.py python-scipy-0.8.0+dfsg1/doc/sphinxext/setup.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/setup.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/setup.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,31 @@ +from distutils.core import setup +import setuptools +import sys, os + +version = "0.3.dev" + +setup( + name="numpydoc", + packages=["numpydoc"], + package_dir={"numpydoc": ""}, + version=version, + description="Sphinx extension to support docstrings in Numpy format", + # classifiers from http://pypi.python.org/pypi?%3Aaction=list_classifiers + classifiers=["Development Status :: 3 - Alpha", + "Environment :: Plugins", + "License :: OSI Approved :: BSD License", + "Topic :: Documentation"], + keywords="sphinx numpy", + author="Pauli Virtanen and others", + author_email="pav@iki.fi", + url="http://projects.scipy.org/numpy/browser/trunk/doc/sphinxext", + license="BSD", + zip_safe=False, + install_requires=["Sphinx >= 0.5"], + package_data={'numpydoc': 'tests', '': ''}, + entry_points={ + "console_scripts": [ + "autosummary_generate = numpydoc.autosummary_generate:main", + ], + }, +) diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/tests/test_docscrape.py python-scipy-0.8.0+dfsg1/doc/sphinxext/tests/test_docscrape.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/tests/test_docscrape.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/tests/test_docscrape.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,545 @@ +# -*- encoding:utf-8 -*- + +import sys, os +sys.path.append(os.path.join(os.path.dirname(__file__), '..')) + +from docscrape import NumpyDocString, FunctionDoc, ClassDoc +from docscrape_sphinx import SphinxDocString, SphinxClassDoc +from nose.tools import * + +doc_txt = '''\ + numpy.multivariate_normal(mean, cov, shape=None) + + Draw values from a multivariate normal distribution with specified + mean and covariance. + + The multivariate normal or Gaussian distribution is a generalisation + of the one-dimensional normal distribution to higher dimensions. + + Parameters + ---------- + mean : (N,) ndarray + Mean of the N-dimensional distribution. + + .. math:: + + (1+2+3)/3 + + cov : (N,N) ndarray + Covariance matrix of the distribution. + shape : tuple of ints + Given a shape of, for example, (m,n,k), m*n*k samples are + generated, and packed in an m-by-n-by-k arrangement. Because + each sample is N-dimensional, the output shape is (m,n,k,N). + + Returns + ------- + out : ndarray + The drawn samples, arranged according to `shape`. If the + shape given is (m,n,...), then the shape of `out` is is + (m,n,...,N). + + In other words, each entry ``out[i,j,...,:]`` is an N-dimensional + value drawn from the distribution. + + Warnings + -------- + Certain warnings apply. 
+ + Notes + ----- + + Instead of specifying the full covariance matrix, popular + approximations include: + + - Spherical covariance (`cov` is a multiple of the identity matrix) + - Diagonal covariance (`cov` has non-negative elements only on the diagonal) + + This geometrical property can be seen in two dimensions by plotting + generated data-points: + + >>> mean = [0,0] + >>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis + + >>> x,y = multivariate_normal(mean,cov,5000).T + >>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() + + Note that the covariance matrix must be symmetric and non-negative + definite. + + References + ---------- + .. [1] A. Papoulis, "Probability, Random Variables, and Stochastic + Processes," 3rd ed., McGraw-Hill Companies, 1991 + .. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," + 2nd ed., Wiley, 2001. + + See Also + -------- + some, other, funcs + otherfunc : relationship + + Examples + -------- + >>> mean = (1,2) + >>> cov = [[1,0],[1,0]] + >>> x = multivariate_normal(mean,cov,(3,3)) + >>> print x.shape + (3, 3, 2) + + The following is probably true, given that 0.6 is roughly twice the + standard deviation: + + >>> print list( (x[0,0,:] - mean) < 0.6 ) + [True, True] + + .. index:: random + :refguide: random;distributions, random;gauss + + ''' +doc = NumpyDocString(doc_txt) + + +def test_signature(): + assert doc['Signature'].startswith('numpy.multivariate_normal(') + assert doc['Signature'].endswith('shape=None)') + +def test_summary(): + assert doc['Summary'][0].startswith('Draw values') + assert doc['Summary'][-1].endswith('covariance.') + +def test_extended_summary(): + assert doc['Extended Summary'][0].startswith('The multivariate normal') + +def test_parameters(): + assert_equal(len(doc['Parameters']), 3) + assert_equal([n for n,_,_ in doc['Parameters']], ['mean','cov','shape']) + + arg, arg_type, desc = doc['Parameters'][1] + assert_equal(arg_type, '(N,N) ndarray') + assert desc[0].startswith('Covariance matrix') + assert doc['Parameters'][0][-1][-2] == ' (1+2+3)/3' + +def test_returns(): + assert_equal(len(doc['Returns']), 1) + arg, arg_type, desc = doc['Returns'][0] + assert_equal(arg, 'out') + assert_equal(arg_type, 'ndarray') + assert desc[0].startswith('The drawn samples') + assert desc[-1].endswith('distribution.') + +def test_notes(): + assert doc['Notes'][0].startswith('Instead') + assert doc['Notes'][-1].endswith('definite.') + assert_equal(len(doc['Notes']), 17) + +def test_references(): + assert doc['References'][0].startswith('..') + assert doc['References'][-1].endswith('2001.') + +def test_examples(): + assert doc['Examples'][0].startswith('>>>') + assert doc['Examples'][-1].endswith('True]') + +def test_index(): + assert_equal(doc['index']['default'], 'random') + print doc['index'] + assert_equal(len(doc['index']), 2) + assert_equal(len(doc['index']['refguide']), 2) + +def non_blank_line_by_line_compare(a,b): + a = [l for l in a.split('\n') if l.strip()] + b = [l for l in b.split('\n') if l.strip()] + for n,line in enumerate(a): + if not line == b[n]: + raise AssertionError("Lines %s of a and b differ: " + "\n>>> %s\n<<< %s\n" % + (n,line,b[n])) +def test_str(): + non_blank_line_by_line_compare(str(doc), +"""numpy.multivariate_normal(mean, cov, shape=None) + +Draw values from a multivariate normal distribution with specified +mean and covariance. + +The multivariate normal or Gaussian distribution is a generalisation +of the one-dimensional normal distribution to higher dimensions. 
+ +Parameters +---------- +mean : (N,) ndarray + Mean of the N-dimensional distribution. + + .. math:: + + (1+2+3)/3 + +cov : (N,N) ndarray + Covariance matrix of the distribution. +shape : tuple of ints + Given a shape of, for example, (m,n,k), m*n*k samples are + generated, and packed in an m-by-n-by-k arrangement. Because + each sample is N-dimensional, the output shape is (m,n,k,N). + +Returns +------- +out : ndarray + The drawn samples, arranged according to `shape`. If the + shape given is (m,n,...), then the shape of `out` is is + (m,n,...,N). + + In other words, each entry ``out[i,j,...,:]`` is an N-dimensional + value drawn from the distribution. + +Warnings +-------- +Certain warnings apply. + +See Also +-------- +`some`_, `other`_, `funcs`_ + +`otherfunc`_ + relationship + +Notes +----- +Instead of specifying the full covariance matrix, popular +approximations include: + + - Spherical covariance (`cov` is a multiple of the identity matrix) + - Diagonal covariance (`cov` has non-negative elements only on the diagonal) + +This geometrical property can be seen in two dimensions by plotting +generated data-points: + +>>> mean = [0,0] +>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis + +>>> x,y = multivariate_normal(mean,cov,5000).T +>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() + +Note that the covariance matrix must be symmetric and non-negative +definite. + +References +---------- +.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic + Processes," 3rd ed., McGraw-Hill Companies, 1991 +.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," + 2nd ed., Wiley, 2001. + +Examples +-------- +>>> mean = (1,2) +>>> cov = [[1,0],[1,0]] +>>> x = multivariate_normal(mean,cov,(3,3)) +>>> print x.shape +(3, 3, 2) + +The following is probably true, given that 0.6 is roughly twice the +standard deviation: + +>>> print list( (x[0,0,:] - mean) < 0.6 ) +[True, True] + +.. index:: random + :refguide: random;distributions, random;gauss""") + + +def test_sphinx_str(): + sphinx_doc = SphinxDocString(doc_txt) + non_blank_line_by_line_compare(str(sphinx_doc), +""" +.. index:: random + single: random;distributions, random;gauss + +Draw values from a multivariate normal distribution with specified +mean and covariance. + +The multivariate normal or Gaussian distribution is a generalisation +of the one-dimensional normal distribution to higher dimensions. + +:Parameters: + + **mean** : (N,) ndarray + + Mean of the N-dimensional distribution. + + .. math:: + + (1+2+3)/3 + + **cov** : (N,N) ndarray + + Covariance matrix of the distribution. + + **shape** : tuple of ints + + Given a shape of, for example, (m,n,k), m*n*k samples are + generated, and packed in an m-by-n-by-k arrangement. Because + each sample is N-dimensional, the output shape is (m,n,k,N). + +:Returns: + + **out** : ndarray + + The drawn samples, arranged according to `shape`. If the + shape given is (m,n,...), then the shape of `out` is is + (m,n,...,N). + + In other words, each entry ``out[i,j,...,:]`` is an N-dimensional + value drawn from the distribution. + +.. warning:: + + Certain warnings apply. + +.. seealso:: + + :obj:`some`, :obj:`other`, :obj:`funcs` + + :obj:`otherfunc` + relationship + +.. 
rubric:: Notes + +Instead of specifying the full covariance matrix, popular +approximations include: + + - Spherical covariance (`cov` is a multiple of the identity matrix) + - Diagonal covariance (`cov` has non-negative elements only on the diagonal) + +This geometrical property can be seen in two dimensions by plotting +generated data-points: + +>>> mean = [0,0] +>>> cov = [[1,0],[0,100]] # diagonal covariance, points lie on x or y-axis + +>>> x,y = multivariate_normal(mean,cov,5000).T +>>> plt.plot(x,y,'x'); plt.axis('equal'); plt.show() + +Note that the covariance matrix must be symmetric and non-negative +definite. + +.. rubric:: References + +.. [1] A. Papoulis, "Probability, Random Variables, and Stochastic + Processes," 3rd ed., McGraw-Hill Companies, 1991 +.. [2] R.O. Duda, P.E. Hart, and D.G. Stork, "Pattern Classification," + 2nd ed., Wiley, 2001. + +.. only:: latex + + [1]_, [2]_ + +.. rubric:: Examples + +>>> mean = (1,2) +>>> cov = [[1,0],[1,0]] +>>> x = multivariate_normal(mean,cov,(3,3)) +>>> print x.shape +(3, 3, 2) + +The following is probably true, given that 0.6 is roughly twice the +standard deviation: + +>>> print list( (x[0,0,:] - mean) < 0.6 ) +[True, True] +""") + + +doc2 = NumpyDocString(""" + Returns array of indices of the maximum values of along the given axis. + + Parameters + ---------- + a : {array_like} + Array to look in. + axis : {None, integer} + If None, the index is into the flattened array, otherwise along + the specified axis""") + +def test_parameters_without_extended_description(): + assert_equal(len(doc2['Parameters']), 2) + +doc3 = NumpyDocString(""" + my_signature(*params, **kwds) + + Return this and that. + """) + +def test_escape_stars(): + signature = str(doc3).split('\n')[0] + assert_equal(signature, 'my_signature(\*params, \*\*kwds)') + +doc4 = NumpyDocString( + """a.conj() + + Return an array with all complex-valued elements conjugated.""") + +def test_empty_extended_summary(): + assert_equal(doc4['Extended Summary'], []) + +doc5 = NumpyDocString( + """ + a.something() + + Raises + ------ + LinAlgException + If array is singular. 
+ + """) + +def test_raises(): + assert_equal(len(doc5['Raises']), 1) + name,_,desc = doc5['Raises'][0] + assert_equal(name,'LinAlgException') + assert_equal(desc,['If array is singular.']) + +def test_see_also(): + doc6 = NumpyDocString( + """ + z(x,theta) + + See Also + -------- + func_a, func_b, func_c + func_d : some equivalent func + foo.func_e : some other func over + multiple lines + func_f, func_g, :meth:`func_h`, func_j, + func_k + :obj:`baz.obj_q` + :class:`class_j`: fubar + foobar + """) + + assert len(doc6['See Also']) == 12 + for func, desc, role in doc6['See Also']: + if func in ('func_a', 'func_b', 'func_c', 'func_f', + 'func_g', 'func_h', 'func_j', 'func_k', 'baz.obj_q'): + assert(not desc) + else: + assert(desc) + + if func == 'func_h': + assert role == 'meth' + elif func == 'baz.obj_q': + assert role == 'obj' + elif func == 'class_j': + assert role == 'class' + else: + assert role is None + + if func == 'func_d': + assert desc == ['some equivalent func'] + elif func == 'foo.func_e': + assert desc == ['some other func over', 'multiple lines'] + elif func == 'class_j': + assert desc == ['fubar', 'foobar'] + +def test_see_also_print(): + class Dummy(object): + """ + See Also + -------- + func_a, func_b + func_c : some relationship + goes here + func_d + """ + pass + + obj = Dummy() + s = str(FunctionDoc(obj, role='func')) + assert(':func:`func_a`, :func:`func_b`' in s) + assert(' some relationship' in s) + assert(':func:`func_d`' in s) + +doc7 = NumpyDocString(""" + + Doc starts on second line. + + """) + +def test_empty_first_line(): + assert doc7['Summary'][0].startswith('Doc starts') + + +def test_no_summary(): + str(SphinxDocString(""" + Parameters + ----------""")) + + +def test_unicode(): + doc = SphinxDocString(""" + öäöäöäöäöåååå + + öäöäöäööäååå + + Parameters + ---------- + ååå : äää + ööö + + Returns + ------- + ååå : ööö + äää + + """) + assert doc['Summary'][0] == u'öäöäöäöäöåååå'.encode('utf-8') + +def test_plot_examples(): + cfg = dict(use_plots=True) + + doc = SphinxDocString(""" + Examples + -------- + >>> import matplotlib.pyplot as plt + >>> plt.plot([1,2,3],[4,5,6]) + >>> plt.show() + """, config=cfg) + assert 'plot::' in str(doc), str(doc) + + doc = SphinxDocString(""" + Examples + -------- + .. plot:: + + import matplotlib.pyplot as plt + plt.plot([1,2,3],[4,5,6]) + plt.show() + """, config=cfg) + assert str(doc).count('plot::') == 1, str(doc) + +def test_class_members(): + + class Dummy(object): + """ + Dummy class. + + """ + def spam(self, a, b): + """Spam\n\nSpam spam.""" + pass + def ham(self, c, d): + """Cheese\n\nNo cheese.""" + pass + + for cls in (ClassDoc, SphinxClassDoc): + doc = cls(Dummy, config=dict(show_class_members=False)) + assert 'Methods' not in str(doc), (cls, str(doc)) + assert 'spam' not in str(doc), (cls, str(doc)) + assert 'ham' not in str(doc), (cls, str(doc)) + + doc = cls(Dummy, config=dict(show_class_members=True)) + assert 'Methods' in str(doc), (cls, str(doc)) + assert 'spam' in str(doc), (cls, str(doc)) + assert 'ham' in str(doc), (cls, str(doc)) + + if cls is SphinxClassDoc: + assert '.. 
autosummary::' in str(doc), str(doc) diff -Nru python-scipy-0.7.2+dfsg1/doc/sphinxext/traitsdoc.py python-scipy-0.8.0+dfsg1/doc/sphinxext/traitsdoc.py --- python-scipy-0.7.2+dfsg1/doc/sphinxext/traitsdoc.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/doc/sphinxext/traitsdoc.py 2010-07-26 16:59:44.000000000 +0100 @@ -0,0 +1,140 @@ +""" +========= +traitsdoc +========= + +Sphinx extension that handles docstrings in the Numpy standard format, [1] +and support Traits [2]. + +This extension can be used as a replacement for ``numpydoc`` when support +for Traits is required. + +.. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard +.. [2] http://code.enthought.com/projects/traits/ + +""" + +import inspect +import os +import pydoc + +import docscrape +import docscrape_sphinx +from docscrape_sphinx import SphinxClassDoc, SphinxFunctionDoc, SphinxDocString + +import numpydoc + +import comment_eater + +class SphinxTraitsDoc(SphinxClassDoc): + def __init__(self, cls, modulename='', func_doc=SphinxFunctionDoc): + if not inspect.isclass(cls): + raise ValueError("Initialise using a class. Got %r" % cls) + self._cls = cls + + if modulename and not modulename.endswith('.'): + modulename += '.' + self._mod = modulename + self._name = cls.__name__ + self._func_doc = func_doc + + docstring = pydoc.getdoc(cls) + docstring = docstring.split('\n') + + # De-indent paragraph + try: + indent = min(len(s) - len(s.lstrip()) for s in docstring + if s.strip()) + except ValueError: + indent = 0 + + for n,line in enumerate(docstring): + docstring[n] = docstring[n][indent:] + + self._doc = docscrape.Reader(docstring) + self._parsed_data = { + 'Signature': '', + 'Summary': '', + 'Description': [], + 'Extended Summary': [], + 'Parameters': [], + 'Returns': [], + 'Raises': [], + 'Warns': [], + 'Other Parameters': [], + 'Traits': [], + 'Methods': [], + 'See Also': [], + 'Notes': [], + 'References': '', + 'Example': '', + 'Examples': '', + 'index': {} + } + + self._parse() + + def _str_summary(self): + return self['Summary'] + [''] + + def _str_extended_summary(self): + return self['Description'] + self['Extended Summary'] + [''] + + def __str__(self, indent=0, func_role="func"): + out = [] + out += self._str_signature() + out += self._str_index() + [''] + out += self._str_summary() + out += self._str_extended_summary() + for param_list in ('Parameters', 'Traits', 'Methods', + 'Returns','Raises'): + out += self._str_param_list(param_list) + out += self._str_see_also("obj") + out += self._str_section('Notes') + out += self._str_references() + out += self._str_section('Example') + out += self._str_section('Examples') + out = self._str_indent(out,indent) + return '\n'.join(out) + +def looks_like_issubclass(obj, classname): + """ Return True if the object has a class or superclass with the given class + name. + + Ignores old-style classes. + """ + t = obj + if t.__name__ == classname: + return True + for klass in t.__mro__: + if klass.__name__ == classname: + return True + return False + +def get_doc_object(obj, what=None, config=None): + if what is None: + if inspect.isclass(obj): + what = 'class' + elif inspect.ismodule(obj): + what = 'module' + elif callable(obj): + what = 'function' + else: + what = 'object' + if what == 'class': + doc = SphinxTraitsDoc(obj, '', func_doc=SphinxFunctionDoc, config=config) + if looks_like_issubclass(obj, 'HasTraits'): + for name, trait, comment in comment_eater.get_class_traits(obj): + # Exclude private traits. 
+ if not name.startswith('_'): + doc['Traits'].append((name, trait, comment.splitlines())) + return doc + elif what in ('function', 'method'): + return SphinxFunctionDoc(obj, '', config=config) + else: + return SphinxDocString(pydoc.getdoc(obj), config=config) + +def setup(app): + # init numpydoc + numpydoc.setup(app, get_doc_object) + diff -Nru python-scipy-0.7.2+dfsg1/INSTALL.txt python-scipy-0.8.0+dfsg1/INSTALL.txt --- python-scipy-0.7.2+dfsg1/INSTALL.txt 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/INSTALL.txt 2010-07-26 15:48:29.000000000 +0100 @@ -6,174 +6,102 @@ .. Contents:: +INTRODUCTION +============ + +It is *strongly* recommended that you use the binary packages on your platform +if they are available, in particular on Windows and Mac OS X. You should not +attempt to build SciPy if you are not familiar with compiling softwares from +sources. + PREREQUISITES ============= -SciPy requires the following software installed: +SciPy requires the following software installed for your platform: 1) Python__ 2.4.x or newer - Debian packages: python python-dev - - Make sure that the Python package distutils is installed before - continuing. For example, in Debian GNU/Linux, distutils is included - in the python-dev package. - - Python must also be compiled with the zlib module enabled. - __ http://www.python.org -2) NumPy__ 1.2.0 or newer - - Debian package: python-numpy +2) NumPy__ 1.4.1 or newer (note: SciPy trunk at times requires latest NumPy + trunk). __ http://www.numpy.org/ -3) Complete LAPACK__ library (see NOTES 1, 2, 3) - - Debian/Ubuntu packages (g77): atlas3-base atlas3-base-dev - - Various SciPy packages do linear algebra computations using the LAPACK - routines. SciPy's setup.py scripts can use number of different LAPACK - library setups, including optimized LAPACK libraries such as ATLAS__ or - the Accelerate/vecLib framework on OS X. The notes below give - more information on how to prepare the build environment so that - SciPy's setup.py scripts can use whatever LAPACK library setup one has. - -__ http://www.netlib.org/lapack/ -__ http://math-atlas.sourceforge.net/ - +Windows +------- +Compilers +~~~~~~~~~ -OPTIONAL PACKAGES -================= +It is recommended to use the mingw__ compilers on Windows: you will need gcc +(C), g++ (C++) and g77 (Fortran) compilers. -The following software is optional, but SciPy can use these if present -for extra functionality: +__ http://www.mingw.org -1) C, C++, Fortran 77 compilers (see COMPILER NOTES) +Blas/Lapack +~~~~~~~~~~~ - To build SciPy or any other extension modules for Python, you'll need - a C compiler. Scipy also requires a C++ compiler. - - Various SciPy modules use Fortran 77 libraries, so you'll need also - at least a Fortran 77 compiler installed. - - gcc__ 3.x compilers are recommended. gcc 2.95 and 4.0.x also work on - some platforms, but may be more problematic (see COMPILER NOTES). +Blas/Lapack are core routines for linear algebra (vector/matrix operations). +You should use ATLAS__ with a full LAPACK, or simple BLAS/LAPACK built with g77 +from netlib__ sources. Building those libraries on windows may be difficult, as +they assume a unix-style environment. Please use the binaries if you don't feel +comfortable with cygwin, make and similar tools. - Debian packages: gcc g++ g77 +__ http://math-atlas.sourceforge.net/ +__ http://www.netlib.org/lapack/ -__ http://gcc.gnu.org/ +Mac OS X +-------- +Compilers +~~~~~~~~~ -2) FFTW__ x (see Lib/fftpack/NOTES.txt) +It is recommended to use gcc. 
gcc is available for free when installing +Xcode__, the developer toolsuite on Mac OS X. You also need a fortran compiler, +which is not included with Xcode: you should use gfortran from this page: - FFTW 2.1.x and 3.x work. +__ http://r.research.att.com/tools/ - Debian packages: fftw2 fftw-dev fftw3 fftw3-dev +Please do NOT use gfortran from hpc.sourceforge.net, it is known to generate +buggy scipy binaries. -__ http://www.fftw.org/ +__Xcode: http://developer.apple.com/TOOLS/xcode +Blas/Lapack +~~~~~~~~~~~ +Mac OS X includes the Accelerate framework: it should be detected without any +intervention when building SciPy. -NOTES +Linux ----- -1) To use ATLAS, version 3.2.1 or newer and a *complete* LAPACK library - are required. See - - http://math-atlas.sourceforge.net/errata.html#completelp - - for instructions. Please be aware than building your own atlas is - error-prone, and should be avoided as much as possible if you don't want to - spend time on build issues. Use the blas/lapack packaged by your - distribution on Linux; on Mac Os X, you should use the vecLib/Accelerate - framework, which are available when installing the apple development tools. - - Below follows basic steps for building ATLAS+LAPACK from scratch. - In case of trouble, consult the documentation of the corresponding - software. - - * Get and unpack http://www.netlib.org/lapack/lapack.tgz - to ``/path/to/src/``. +Most common distributions include all the dependencies. Here are some +instructions for the most common ones: - * Copy proper ``/path/to/src/LAPACK/INSTALL/make.inc.?????`` - to ``/path/to/src/LAPACK/make.inc``. +Ubuntu >= 8.10 +~~~~~~~~~~~~~~ - * Build LAPACK:: +You can get all the dependencies as follows:: - cd /path/to/src/LAPACK - make lapacklib # On 400MHz PII it takes about 15min. + sudo apt-get install python python-dev libatlas3-base-dev gcc gfortran g++ - that will create lapack_LINUX.a when using - INSTALL/make.inc.LINUX, for example. - If using Intel Fortran Compiler, see additional notes below. +Ubuntu < 8.10, Debian +~~~~~~~~~~~~~~~~~~~~~ - * Get the latest stable ATLAS sources from - http://math-atlas.sourceforge.net/ - and unpack to ``/path/to/src/``. +You can get all the dependencies as follows:: - * Build ATLAS:: + sudo apt-get install python python-dev atlas3-base-dev gcc g77 g++ - cd /path/to/src/ATLAS - make # Number of questions will be asked - make install arch=Linux_PII # This takes about 45min. +OpenSuse >= 10 +~~~~~~~~~~~~~~ - where arch may vary (see the output of the previous command). +RHEL +~~~~ - * Make optimized LAPACK library:: - - cd /path/to/src/ATLAS/lib/Linux_PII/ - mkdir tmp; cd tmp - ar x ../liblapack.a - cp /path/to/src/LAPACK/lapack_LINUX.a ../liblapack.a - ar r ../liblapack.a *.o - cd ..; rm -rf tmp - - * Move all ``lib*.a`` files from ``/path/to/src/ATLAS/lib/Linux_PII/``, - say, to ``/usr/local/lib/atlas/``. - Also copying ``/path/to/src/ATLAS/include/{cblas.h,clapack.h}`` to - ``/usr/local/lib/atlas/`` might be a good idea. - - * Define environment variable ATLAS that contains path to the directory - where you moved the atlas libraries. For example, in bash run:: - - export ATLAS=/usr/local/lib/atlas - -2) If you are willing to sacrifice the performance (by factor of 5 to 15 - for large problems) of the linalg module then it is possible to build - SciPy without ATLAS. 
For that you'll need either Fortran LAPACK/BLAS - libraries installed in your system or Fortran LAPACK/BLAS sources to be - accessible by SciPy setup scripts (use ``LAPACK_SRC``/``BLAS_SRC`` - environment variables to indicate the location of the corresponding - source directories). More details of how to do this are on the SciPy - Wiki, at: - - http://www.scipy.org/Installing_SciPy/BuildingGeneral - -3) Users of Debian (and derivatives like Ubuntu) can use the following - deb packages:: - - atlas2-headers - - and - - atlas2-base atlas2-base-dev - or - atlas2-sse atlas2-sse-dev - or - atlas2-sse2 atlas2-sse2-dev - or - atlas2-3dnow atlas2-3dnow-dev - - It is not necessary to install blas or lapack libraries in addition. - - 4) Compiler flags customization (FFLAGS, CFLAGS, etc...). If you customize - CFLAGS and other related flags from the command line or the shell environment, - beware that is does not have the standard behavior of appending options. - Instead, it overrides the options. As such, you have to give all options in the - flag for the build to be successful. +Fedora Core +~~~~~~~~~~~ GETTING SCIPY ============= @@ -191,7 +119,7 @@ Before building and installing from SVN, remove the old installation (e.g. in /usr/lib/python2.4/site-packages/scipy or -$HOME/lib/python2.4/site-packages/scipy). Then type: +$HOME/lib/python2.4/site-packages/scipy). Then type:: cd scipy rm -rf build @@ -205,7 +133,10 @@ First make sure that all SciPy prerequisites are installed and working properly. Then be sure to remove any old SciPy installations (e.g. /usr/lib/python2.4/site-packages/scipy or $HOME/lib/python2.4/ -site-packages/scipy). +site-packages/scipy). On windows, if you installed scipy previously from a +binary, use the remove facility from the add/remove softwares panel, or remote +the scipy directory by hand if you installed from sources (e.g. +C:\Python24\Lib\site-packages\scipy for python 2.4). From tarballs ------------- @@ -216,14 +147,15 @@ python setup.py install This may take several minutes to an hour depending on the speed of your -computer. This may require root privileges. To install to a -user-specific location instead, run +computer. To install to a user-specific location instead, run:: python setup.py install --prefix=$MYDIR where $MYDIR is, for example, $HOME or $HOME/usr. - + ** Note 1: On Unix, you should avoid installing in /usr, but rather in + /usr/local or somewhere else. /usr is generally 'owned' by your package + manager, and you may overwrite a packaged scipy this way. TESTING ======= @@ -239,9 +171,9 @@ Please note that you must have version 0.10 or later of the 'nose' test framework installed in order to run the tests. More information about nose is -available here: +available on the website__. -http://somethingaboutorange.com/mrl/projects/nose/ +__ http://somethingaboutorange.com/mrl/projects/nose/ COMPILER NOTES ============== @@ -353,21 +285,6 @@ to create a complete liblapack.a. Then copy liblapack.a to the same location where libatlas.a is installed and retry with scipy build. -Using ATLAS 3.2.1 ------------------ -If import clapack fails with the following error -:: - - ImportError: .../clapack.so : undefined symbol: clapack_sgetri - -then you most probably have ATLAS 3.2.1 but linalg module was built -for newer versions of ATLAS. -Fix: - - 1) Remove Lib/linalg/clapack.pyf - - 2) Rebuild/reinstall scipy. 
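
A hedged sketch of the install-and-test flow described above, not part of the patch; it assumes the package built and installed cleanly and that nose >= 0.10 is available::

    # quick sanity check after "python setup.py install"
    import scipy
    print scipy.__version__   # 0.8.0 for the release diffed here
    scipy.test('fast')        # run the quick subset of the test suite via nose
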
- Using non-GNU Fortran Compiler ------------------------------ If import scipy shows a message diff -Nru python-scipy-0.7.2+dfsg1/MANIFEST.in python-scipy-0.8.0+dfsg1/MANIFEST.in --- python-scipy-0.7.2+dfsg1/MANIFEST.in 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/MANIFEST.in 2010-07-26 15:48:29.000000000 +0100 @@ -16,3 +16,4 @@ recursive-include doc/release * recursive-include doc/source * recursive-include doc/sphinxext * +prune scipy/special/tests/data/boost diff -Nru python-scipy-0.7.2+dfsg1/PKG-INFO python-scipy-0.8.0+dfsg1/PKG-INFO --- python-scipy-0.7.2+dfsg1/PKG-INFO 2010-04-22 14:35:40.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/PKG-INFO 2010-07-26 17:08:47.000000000 +0100 @@ -1,6 +1,6 @@ Metadata-Version: 1.0 Name: scipy -Version: 0.7.2 +Version: 0.8.0 Summary: SciPy: Scientific Library for Python Home-page: http://www.scipy.org Author: SciPy Developers diff -Nru python-scipy-0.7.2+dfsg1/README.txt python-scipy-0.8.0+dfsg1/README.txt --- python-scipy-0.7.2+dfsg1/README.txt 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/README.txt 2010-07-26 15:48:29.000000000 +0100 @@ -90,7 +90,7 @@ For details, read: - http://projects.scipy.org/scipy/numpy/wiki/DistutilsDoc + http://projects.scipy.org/numpy/wiki/DistutilsDoc Documentation @@ -106,7 +106,7 @@ http://www.scipy.org/ The developer's site is here - http://projects.scipy.org/scipy/scipy/wiki + http://projects.scipy.org/scipy/wiki Mailing Lists @@ -120,10 +120,10 @@ ----------- To search for bugs, please use the NIPY Bug Tracker at - http://projects.scipy.org/scipy/scipy/query + http://projects.scipy.org/scipy/query To report a bug, please use the NIPY Bug Tracker at - http://projects.scipy.org/scipy/scipy/newticket + http://projects.scipy.org/scipy/newticket License information diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/hierarchy.py python-scipy-0.8.0+dfsg1/scipy/cluster/hierarchy.py --- python-scipy-0.7.2+dfsg1/scipy/cluster/hierarchy.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/hierarchy.py 2010-07-26 15:48:29.000000000 +0100 @@ -193,9 +193,10 @@ # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +import types import numpy as np -import _hierarchy_wrap, types +import _hierarchy_wrap import scipy.spatial.distance as distance _cpy_non_euclid_methods = {'single': 0, 'complete': 1, 'average': 2, @@ -219,7 +220,7 @@ if a.base is not None: return a.copy() elif np.issubsctype(a, np.float32): - return array(a, dtype=np.double) + return np.array(a, dtype=np.double) else: return a @@ -440,151 +441,152 @@ def linkage(y, method='single', metric='euclidean'): - r""" + """ Performs hierarchical/agglomerative clustering on the - condensed distance matrix y. y must be a :math:`{n \choose 2}` sized - vector where n is the number of original observations paired - in the distance matrix. The behavior of this function is very - similar to the MATLAB(TM) linkage function. - - A 4 by :math:`(n-1)` matrix ``Z`` is returned. At the - :math:`i`-th iteration, clusters with indices ``Z[i, 0]`` and - ``Z[i, 1]`` are combined to form cluster :math:`n + i`. A - cluster with an index less than :math:`n` corresponds to one of - the :math:`n` original observations. The distance between - clusters ``Z[i, 0]`` and ``Z[i, 1]`` is given by ``Z[i, 2]``. 
The - fourth value ``Z[i, 3]`` represents the number of original - observations in the newly formed cluster. - - The following linkage methods are used to compute the distance - :math:`d(s, t)` between two clusters :math:`s` and - :math:`t`. The algorithm begins with a forest of clusters that - have yet to be used in the hierarchy being formed. When two - clusters :math:`s` and :math:`t` from this forest are combined - into a single cluster :math:`u`, :math:`s` and :math:`t` are - removed from the forest, and :math:`u` is added to the - forest. When only one cluster remains in the forest, the algorithm - stops, and this cluster becomes the root. - - A distance matrix is maintained at each iteration. The ``d[i,j]`` - entry corresponds to the distance between cluster :math:`i` and - :math:`j` in the original forest. - - At each iteration, the algorithm must update the distance matrix - to reflect the distance of the newly formed cluster u with the - remaining clusters in the forest. - - Suppose there are :math:`|u|` original observations - :math:`u[0], \ldots, u[|u|-1]` in cluster :math:`u` and - :math:`|v|` original objects :math:`v[0], \ldots, v[|v|-1]` in - cluster :math:`v`. Recall :math:`s` and :math:`t` are - combined to form cluster :math:`u`. Let :math:`v` be any - remaining cluster in the forest that is not :math:`u`. - - The following are methods for calculating the distance between the - newly formed cluster :math:`u` and each :math:`v`. - - * method='single' assigns - - .. math:: - d(u,v) = \min(dist(u[i],v[j])) - - for all points :math:`i` in cluster :math:`u` and - :math:`j` in cluster :math:`v`. This is also known as the - Nearest Point Algorithm. - - * method='complete' assigns - - .. math:: - d(u, v) = \max(dist(u[i],v[j])) - - for all points :math:`i` in cluster u and :math:`j` in - cluster :math:`v`. This is also known by the Farthest Point - Algorithm or Voor Hees Algorithm. - - * method='average' assigns - - .. math:: - d(u,v) = \sum_{ij} \frac{d(u[i], v[j])} - {(|u|*|v|)} - - for all points :math:`i` and :math:`j` where :math:`|u|` - and :math:`|v|` are the cardinalities of clusters :math:`u` - and :math:`v`, respectively. This is also called the UPGMA - algorithm. This is called UPGMA. - - * method='weighted' assigns - - .. math:: - d(u,v) = (dist(s,v) + dist(t,v))/2 - - where cluster u was formed with cluster s and t and v - is a remaining cluster in the forest. (also called WPGMA) - - * method='centroid' assigns - - .. math:: - dist(s,t) = ||c_s-c_t||_2 - - where :math:`c_s` and :math:`c_t` are the centroids of - clusters :math:`s` and :math:`t`, respectively. When two - clusters :math:`s` and :math:`t` are combined into a new - cluster :math:`u`, the new centroid is computed over all the - original objects in clusters :math:`s` and :math:`t`. The - distance then becomes the Euclidean distance between the - centroid of :math:`u` and the centroid of a remaining cluster - :math:`v` in the forest. This is also known as the UPGMC - algorithm. + condensed distance matrix y. y must be a :math:`{n \\choose 2}` sized + vector where n is the number of original observations paired + in the distance matrix. The behavior of this function is very + similar to the MATLAB(TM) linkage function. + + A 4 by :math:`(n-1)` matrix ``Z`` is returned. At the + :math:`i`-th iteration, clusters with indices ``Z[i, 0]`` and + ``Z[i, 1]`` are combined to form cluster :math:`n + i`. A + cluster with an index less than :math:`n` corresponds to one of + the :math:`n` original observations. 
The distance between + clusters ``Z[i, 0]`` and ``Z[i, 1]`` is given by ``Z[i, 2]``. The + fourth value ``Z[i, 3]`` represents the number of original + observations in the newly formed cluster. + + The following linkage methods are used to compute the distance + :math:`d(s, t)` between two clusters :math:`s` and + :math:`t`. The algorithm begins with a forest of clusters that + have yet to be used in the hierarchy being formed. When two + clusters :math:`s` and :math:`t` from this forest are combined + into a single cluster :math:`u`, :math:`s` and :math:`t` are + removed from the forest, and :math:`u` is added to the + forest. When only one cluster remains in the forest, the algorithm + stops, and this cluster becomes the root. + + A distance matrix is maintained at each iteration. The ``d[i,j]`` + entry corresponds to the distance between cluster :math:`i` and + :math:`j` in the original forest. + + At each iteration, the algorithm must update the distance matrix + to reflect the distance of the newly formed cluster u with the + remaining clusters in the forest. + + Suppose there are :math:`|u|` original observations + :math:`u[0], \\ldots, u[|u|-1]` in cluster :math:`u` and + :math:`|v|` original objects :math:`v[0], \\ldots, v[|v|-1]` in + cluster :math:`v`. Recall :math:`s` and :math:`t` are + combined to form cluster :math:`u`. Let :math:`v` be any + remaining cluster in the forest that is not :math:`u`. + + The following are methods for calculating the distance between the + newly formed cluster :math:`u` and each :math:`v`. + + * method='single' assigns + + .. math:: + d(u,v) = \\min(dist(u[i],v[j])) + + for all points :math:`i` in cluster :math:`u` and + :math:`j` in cluster :math:`v`. This is also known as the + Nearest Point Algorithm. + + * method='complete' assigns + + .. math:: + d(u, v) = \\max(dist(u[i],v[j])) + + for all points :math:`i` in cluster u and :math:`j` in + cluster :math:`v`. This is also known by the Farthest Point + Algorithm or Voor Hees Algorithm. + + * method='average' assigns + + .. math:: + d(u,v) = \\sum_{ij} \\frac{d(u[i], v[j])} + {(|u|*|v|)} + + for all points :math:`i` and :math:`j` where :math:`|u|` + and :math:`|v|` are the cardinalities of clusters :math:`u` + and :math:`v`, respectively. This is also called the UPGMA + algorithm. This is called UPGMA. + + * method='weighted' assigns + + .. math:: + d(u,v) = (dist(s,v) + dist(t,v))/2 + + where cluster u was formed with cluster s and t and v + is a remaining cluster in the forest. (also called WPGMA) + + * method='centroid' assigns + + .. math:: + dist(s,t) = ||c_s-c_t||_2 + + where :math:`c_s` and :math:`c_t` are the centroids of + clusters :math:`s` and :math:`t`, respectively. When two + clusters :math:`s` and :math:`t` are combined into a new + cluster :math:`u`, the new centroid is computed over all the + original objects in clusters :math:`s` and :math:`t`. The + distance then becomes the Euclidean distance between the + centroid of :math:`u` and the centroid of a remaining cluster + :math:`v` in the forest. This is also known as the UPGMC + algorithm. + + * method='median' assigns math:`d(s,t)` like the ``centroid`` + method. When two clusters :math:`s` and :math:`t` are combined + into a new cluster :math:`u`, the average of centroids s and t + give the new centroid :math:`u`. This is also known as the + WPGMC algorithm. + + * method='ward' uses the Ward variance minimization algorithm. + The new entry :math:`d(u,v)` is computed as follows, + + .. 
math:: + + d(u,v) = \\sqrt{\\frac{|v|+|s|} + {T}d(v,s)^2 + + \\frac{|v|+|t|} + {T}d(v,t)^2 + + \\frac{|v|} + {T}d(s,t)^2} + + where :math:`u` is the newly joined cluster consisting of + clusters :math:`s` and :math:`t`, :math:`v` is an unused + cluster in the forest, :math:`T=|v|+|s|+|t|`, and + :math:`|*|` is the cardinality of its argument. This is also + known as the incremental algorithm. + + Warning: When the minimum distance pair in the forest is chosen, there may + be two or more pairs with the same minimum distance. This + implementation may chose a different minimum than the MATLAB(TM) + version. - * method='median' assigns math:`d(s,t)` like the ``centroid`` - method. When two clusters :math:`s` and :math:`t` are combined - into a new cluster :math:`u`, the average of centroids s and t - give the new centroid :math:`u`. This is also known as the - WPGMC algorithm. - - * method='ward' uses the Ward variance minimization algorithm. - The new entry :math:`d(u,v)` is computed as follows, - - .. math:: - - d(u,v) = \sqrt{\frac{|v|+|s|} - {T}d(v,s)^2 - + \frac{|v|+|t|} - {T}d(v,t)^2 - + \frac{|v|} - {T}d(s,t)^2} - - where :math:`u` is the newly joined cluster consisting of - clusters :math:`s` and :math:`t`, :math:`v` is an unused - cluster in the forest, :math:`T=|v|+|s|+|t|`, and - :math:`|*|` is the cardinality of its argument. This is also - known as the incremental algorithm. - - Warning: When the minimum distance pair in the forest is chosen, there may - be two or more pairs with the same minimum distance. This - implementation may chose a different minimum than the MATLAB(TM) - version. + :Parameters: + - y : ndarray + A condensed or redundant distance matrix. A condensed + distance matrix is a flat array containing the upper + triangular of the distance matrix. This is the form that + ``pdist`` returns. Alternatively, a collection of + :math:`m` observation vectors in n dimensions may be passed as + an :math:`m` by :math:`n` array. + - method : string + The linkage algorithm to use. See the ``Linkage Methods`` + section below for full descriptions. + - metric : string + The distance metric to use. See the ``distance.pdist`` + function for a list of valid distance metrics. - :Parameters: - - Q : ndarray - A condensed or redundant distance matrix. A condensed - distance matrix is a flat array containing the upper - triangular of the distance matrix. This is the form that - ``pdist`` returns. Alternatively, a collection of - :math:`m` observation vectors in n dimensions may be passed as - a :math:`m` by :math:`n` array. - - method : string - The linkage algorithm to use. See the ``Linkage Methods`` - section below for full descriptions. - - metric : string - The distance metric to use. See the ``distance.pdist`` - function for a list of valid distance metrics. + :Returns: - :Returns: + - Z : ndarray + The hierarchical clustering encoded as a linkage matrix. - - Z : ndarray - The hierarchical clustering encoded as a linkage matrix. - """ + """ if not isinstance(method, str): raise TypeError("Argument 'method' must be a string.") @@ -1457,9 +1459,9 @@ :Arguments: - - Z : ndarray - The hierarchical clustering encoded with the matrix returned - by the ``linkage`` function. + - X : ndarray + ``n`` by ``m`` data matrix with ``n`` observations in ``m`` + dimensions. - t : double The threshold to apply when forming flat clusters. @@ -1502,6 +1504,7 @@ ----- This function is similar to MATLAB(TM) clusterdata function. 
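
A hedged sketch of the workflow the rewritten ``linkage`` docstring describes, not part of the patch; the data are invented, ``pdist`` comes from ``scipy.spatial.distance`` as the docstring notes, and ``fcluster`` from the same ``hierarchy`` module::

    # illustrative only
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    X = np.random.rand(10, 3)          # 10 observations in 3 dimensions
    y = pdist(X, metric='euclidean')   # condensed distance matrix, n choose 2 entries
    Z = linkage(y, method='average')   # linkage matrix as described above
    labels = fcluster(Z, t=1.0, criterion='distance')   # flat cluster labels
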
+ """ X = np.asarray(X, order='c', dtype=np.double) diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/src/hierarchy.c python-scipy-0.8.0+dfsg1/scipy/cluster/src/hierarchy.c --- python-scipy-0.7.2+dfsg1/scipy/cluster/src/hierarchy.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/src/hierarchy.c 2010-07-26 15:48:29.000000000 +0100 @@ -33,6 +33,8 @@ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ +#include +#include #include "common.h" @@ -67,7 +69,7 @@ #include "hierarchy.h" -static inline double euclidean_distance(const double *u, const double *v, int n) { +static NPY_INLINE double euclidean_distance(const double *u, const double *v, int n) { int i = 0; double s = 0.0, d; for (i = 0; i < n; i++) { @@ -154,6 +156,7 @@ int i, xi, xn; cnode *rn = info->nodes + inds[mini]; cnode *sn = info->nodes + inds[minj]; + cnode *xnd; bit = buf; rc = (double)rn->n; sc = (double)sn->n; @@ -164,7 +167,7 @@ drx = *(rows[i] + mini - i - 1); dsx = *(rows[i] + minj - i - 1); xi = inds[i]; - cnode *xnd = info->nodes + xi; + xnd = info->nodes + xi; xn = xnd->n; mply = (double)1.0 / (((double)xn) * rscnt); *bit = mply * ((drx * (rc * xn)) + (dsx * (sc * xn))); @@ -173,7 +176,7 @@ drx = *(rows[mini] + i - mini - 1); dsx = *(rows[i] + minj - i - 1); xi = inds[i]; - cnode *xnd = info->nodes + xi; + xnd = info->nodes + xi; xn = xnd->n; mply = (double)1.0 / (((double)xn) * rscnt); *bit = mply * ((drx * (rc * xn)) + (dsx * (sc * xn))); @@ -182,7 +185,7 @@ drx = *(rows[mini] + i - mini - 1); dsx = *(rows[minj] + i - minj - 1); xi = inds[i]; - cnode *xnd = info->nodes + xi; + xnd = info->nodes + xi; xn = xnd->n; mply = (double)1.0 / (((double)xn) * rscnt); *bit = mply * ((drx * (rc * xn)) + (dsx * (sc * xn))); @@ -253,6 +256,7 @@ int i, m, xi, rind, sind; double drx, dsx, rf, sf, xf, xn, rn, sn, drsSq; cnode *newNode; + cnode *xnd; rind = inds[mini]; sind = inds[minj]; @@ -270,7 +274,7 @@ drx = *(rows[i] + mini - i - 1); dsx = *(rows[i] + minj - i - 1); xi = inds[i]; - cnode *xnd = info->nodes + xi; + xnd = info->nodes + xi; xn = xnd->n; rf = (rn + xn) / (rn + sn + xn); sf = (sn + xn) / (rn + sn + xn); @@ -284,7 +288,7 @@ drx = *(rows[mini] + i - mini - 1); dsx = *(rows[i] + minj - i - 1); xi = inds[i]; - cnode *xnd = info->nodes + xi; + xnd = info->nodes + xi; xn = xnd->n; rf = (rn + xn) / (rn + sn + xn); sf = (sn + xn) / (rn + sn + xn); @@ -297,7 +301,7 @@ drx = *(rows[mini] + i - mini - 1); dsx = *(rows[minj] + i - minj - 1); xi = inds[i]; - cnode *xnd = info->nodes + xi; + xnd = info->nodes + xi; xn = xnd->n; rf = (rn + xn) / (rn + sn + xn); sf = (sn + xn) / (rn + sn + xn); @@ -897,7 +901,7 @@ } } -inline void set_dist_entry(double *d, double val, int i, int j, int n) { +NPY_INLINE void set_dist_entry(double *d, double val, int i, int j, int n) { if (i < j) { *(d + (NCHOOSE2(n)-NCHOOSE2(n - i)) + j) = val; } @@ -1065,7 +1069,7 @@ free(rvisited); } -void calculate_cluster_sizes(const double *Z, double *CS, int n) { +void calculate_cluster_sizes(const double *Z, double *cs, int n) { int i, j, k, q; const double *row; for (k = 0; k < n - 1; k++) { @@ -1075,22 +1079,22 @@ /** If the left node is a non-singleton, add its count. */ if (i >= n) { q = i - n; - CS[k] += CS[q]; + cs[k] += cs[q]; } /** Otherwise just add 1 for the leaf. */ else { - CS[k] += 1.0; + cs[k] += 1.0; } /** If the right node is a non-singleton, add its count. 
*/ if (j >= n) { q = j - n; - CS[k] += CS[q]; + cs[k] += cs[q]; } /** Otherwise just add 1 for the leaf. */ else { - CS[k] += 1.0; + cs[k] += 1.0; } - CPY_DEBUG_MSG("i=%d, j=%d, CS[%d]=%d\n", i, j, k, (int)CS[k]); + CPY_DEBUG_MSG("i=%d, j=%d, cs[%d]=%d\n", i, j, k, (int)cs[k]); } } diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/src/hierarchy.h python-scipy-0.8.0+dfsg1/scipy/cluster/src/hierarchy.h --- python-scipy-0.7.2+dfsg1/scipy/cluster/src/hierarchy.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/src/hierarchy.h 2010-07-26 15:48:29.000000000 +0100 @@ -107,7 +107,7 @@ void cophenetic_distances(const double *Z, double *d, int n); void cpy_to_tree(const double *Z, cnode **tnodes, int n); -void calculate_cluster_sizes(const double *Z, double *CS, int n); +void calculate_cluster_sizes(const double *Z, double *cs, int n); void form_member_list(const double *Z, int *members, int n); void form_flat_clusters_from_in(const double *Z, const double *R, int *T, diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/src/hierarchy_wrap.c python-scipy-0.8.0+dfsg1/scipy/cluster/src/hierarchy_wrap.c --- python-scipy-0.7.2+dfsg1/scipy/cluster/src/hierarchy_wrap.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/src/hierarchy_wrap.c 2010-07-26 15:48:29.000000000 +0100 @@ -114,14 +114,14 @@ extern PyObject *calculate_cluster_sizes_wrap(PyObject *self, PyObject *args) { int n; - PyArrayObject *Z, *CS_; + PyArrayObject *Z, *cs_; if (!PyArg_ParseTuple(args, "O!O!i", &PyArray_Type, &Z, - &PyArray_Type, &CS_, + &PyArray_Type, &cs_, &n)) { return 0; } - calculate_cluster_sizes((const double*)Z->data, (double*)CS_->data, n); + calculate_cluster_sizes((const double*)Z->data, (double*)cs_->data, n); return Py_BuildValue(""); } @@ -373,7 +373,7 @@ {NULL, NULL} /* Sentinel - marks the end of this structure */ }; -void init_hierarchy_wrap(void) { +PyMODINIT_FUNC init_hierarchy_wrap(void) { (void) Py_InitModule("_hierarchy_wrap", _hierarchyWrapMethods); import_array(); // Must be present for NumPy. Called first after above line. } diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/src/vq.c python-scipy-0.8.0+dfsg1/scipy/cluster/src/vq.c --- python-scipy-0.7.2+dfsg1/scipy/cluster/src/vq.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/src/vq.c 2010-07-26 15:48:29.000000000 +0100 @@ -1,10 +1,15 @@ /* - * vim:syntax=c - * * This file implements vq for float and double in C. 
It is a direct * translation from the swig interface which could not be generated anymore * with recent swig */ + +/* + * Including python.h is necessary because python header redefines some macros + * in standart C header + */ +#include + #include #include diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/src/vq_module.c python-scipy-0.8.0+dfsg1/scipy/cluster/src/vq_module.c --- python-scipy-0.7.2+dfsg1/scipy/cluster/src/vq_module.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/src/vq_module.c 2010-07-26 15:48:29.000000000 +0100 @@ -27,7 +27,7 @@ PyArrayObject *obs_a, *code_a; PyArrayObject *index_a, *dist_a; int typenum1, typenum2; - int nc, nd; + npy_intp nc, nd; npy_intp n, d; if ( !PyArg_ParseTuple(args, "OO", &obs, &code) ) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/tests/test_vq.py python-scipy-0.8.0+dfsg1/scipy/cluster/tests/test_vq.py --- python-scipy-0.7.2+dfsg1/scipy/cluster/tests/test_vq.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/tests/test_vq.py 2010-07-26 15:48:29.000000000 +0100 @@ -83,6 +83,22 @@ print "== not testing C imp of vq (rank 1) ==" class TestKMean(TestCase): + def test_large_features(self): + # Generate a data set with large values, and run kmeans on it to + # (regression for 1077). + d = 300 + n = 1e2 + + m1 = np.random.randn(d) + m2 = np.random.randn(d) + x = 10000 * np.random.randn(n, d) - 20000 * m1 + y = 10000 * np.random.randn(n, d) + 20000 * m2 + + data = np.empty((x.shape[0] + y.shape[0], d), np.double) + data[:x.shape[0]] = x + data[x.shape[0]:] = y + + res = kmeans(data, 2) def test_kmeans_simple(self): initc = np.concatenate(([[X[0]], [X[1]], [X[2]]])) code = initc.copy() diff -Nru python-scipy-0.7.2+dfsg1/scipy/cluster/vq.py python-scipy-0.8.0+dfsg1/scipy/cluster/vq.py --- python-scipy-0.7.2+dfsg1/scipy/cluster/vq.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/cluster/vq.py 2010-07-26 15:48:29.000000000 +0100 @@ -485,7 +485,7 @@ result = _kmeans(obs, guess, thresh = thresh) else: #initialize best distance value to a large value - best_dist = 100000 + best_dist = np.inf No = obs.shape[0] k = k_or_guess if k < 1: diff -Nru python-scipy-0.7.2+dfsg1/scipy/constants/codata.py python-scipy-0.8.0+dfsg1/scipy/constants/codata.py --- python-scipy-0.7.2+dfsg1/scipy/constants/codata.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/constants/codata.py 2010-07-26 15:48:29.000000000 +0100 @@ -20,6 +20,7 @@ find(sub) prints out a list of keys containing the string sub. """ +import warnings import string from math import pi, sqrt __all__ = ['physical_constants', 'value', 'unit', 'precision', 'find'] @@ -359,19 +360,117 @@ physical_constants[name] = (val, units, uncert) def value(key) : - """value indexed by key""" + """ + Value in physical_constants indexed by key + + Parameters + ---------- + key : Python string or unicode + Key in dictionary `physical_constants` + + Returns + ------- + value : float + Value in `physical_constants` corresponding to `key` + + See Also + -------- + codata : Contains the description of `physical_constants`, which, as a + dictionary literal object, does not itself possess a docstring. 
+ + Examples + -------- + >>> from scipy.constants import codata + >>> codata.value('elementary charge') + 1.60217653e-019 + + """ return physical_constants[key][0] def unit(key) : - """unit indexed by key""" + """ + Unit in physical_constants indexed by key + + Parameters + ---------- + key : Python string or unicode + Key in dictionary `physical_constants` + + Returns + ------- + unit : Python string + Unit in `physical_constants` corresponding to `key` + + See Also + -------- + codata : Contains the description of `physical_constants`, which, as a + dictionary literal object, does not itself possess a docstring. + + Examples + -------- + >>> from scipy.constants import codata + >>> codata.unit(u'proton mass') + 'kg' + + """ return physical_constants[key][1] def precision(key) : - """relative precision indexed by key""" + """ + Relative precision in physical_constants indexed by key + + Parameters + ---------- + key : Python string or unicode + Key in dictionary `physical_constants` + + Returns + ------- + prec : float + Relative precision in `physical_constants` corresponding to `key` + + See Also + -------- + codata : Contains the description of `physical_constants`, which, as a + dictionary literal object, does not itself possess a docstring. + + Examples + -------- + >>> from scipy.constants import codata + >>> codata.precision(u'proton mass') + 1.7338050694080732e-007 + + """ return physical_constants[key][2] / physical_constants[key][0] -def find(sub) : - """list all keys containing the string sub""" + +def find(sub, disp=True) : + """ + Find the codata.physical_constant keys containing a given string. + + Parameters + ---------- + sub : str or unicode + Sub-string to search keys for + disp : bool + If True, print the keys that are found, and return None. + Otherwise, return the list of keys without printing anything. + + Returns + ------- + keys : None or list + If `disp` is False, the list of keys is returned. Otherwise, None + is returned. + + See Also + -------- + codata : Contains the description of `physical_constants`, which, as a + dictionary literal object, does not itself possess a docstring. + + """ + warnings.warn("In Scipy version 0.8.0, the keyword argument 'disp' was added to " + "find(), with the default value True. In 0.9.0, the default will be False.", + DeprecationWarning) l_sub = string.lower(sub) result = [] for key in physical_constants : @@ -379,8 +478,13 @@ if l_sub in l_key: result.append(key) result.sort() - for key in result : - print key + if disp: + for key in result: + print key + return + else: + return result + #table is lacking some digits for exact values: calculate from definition diff -Nru python-scipy-0.7.2+dfsg1/scipy/constants/constants.py python-scipy-0.8.0+dfsg1/scipy/constants/constants.py --- python-scipy-0.7.2+dfsg1/scipy/constants/constants.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/constants/constants.py 2010-07-26 15:48:29.000000000 +0100 @@ -171,35 +171,227 @@ #functions for conversions that are not linear def C2K(C): - """Convert Celcius to Kelvin""" + """ + Convert Celsius to Kelvin + + Parameters + ---------- + C : float-like scalar or array-like + Celsius temperature(s) to be converted + + Returns + ------- + K : float or a numpy array of floats, corresponding to type of Parameters + Equivalent Kelvin temperature(s) + + Notes + ----- + Computes `K = C +` `zero_Celsius` where `zero_Celsius` = 273.15, i.e., + (the absolute value of) temperature "absolute zero" as measured in Celsius. 
+ + Examples + -------- + >>> from scipy.constants.constants import C2K + >>> C2K(np.array([-40, 40.0])) + array([ 233.15, 313.15]) + + """ return C + zero_Celsius def K2C(K): - """Convert Kelvin to Celcius""" + """ + Convert Kelvin to Celsius + + Parameters + ---------- + K : float-like scalar or array-like + Kelvin temperature(s) to be converted + + Returns + ------- + C : float or a numpy array of floats, corresponding to type of Parameters + Equivalent Celsius temperature(s) + + Notes + ----- + Computes `C = K -` `zero_Celsius` where `zero_Celsius` = 273.15, i.e., + (the absolute value of) temperature "absolute zero" as measured in Celsius. + + Examples + -------- + >>> from scipy.constants.constants import K2C + >>> K2C(np.array([233.15, 313.15])) + array([-40., 40.]) + + """ return K - zero_Celsius def F2C(F): - """Convert Fahrenheit to Celcius""" + """ + Convert Fahrenheit to Celsius + + Parameters + ---------- + F : float-like scalar or array-like + Fahrenheit temperature(s) to be converted + + Returns + ------- + C : float or a numpy array of floats, corresponding to type of Parameters + Equivalent Celsius temperature(s) + + Notes + ----- + Computes `C = (F - 32) / 1.8` + + Examples + -------- + >>> from scipy.constants.constants import F2C + >>> F2C(np.array([-40, 40.0])) + array([-40. , 4.44444444]) + + """ return (F - 32) / 1.8 def C2F(C): - """Convert Celcius to Fahrenheit""" + """ + Convert Celsius to Fahrenheit + + Parameters + ---------- + C : float-like scalar or array-like + Celsius temperature(s) to be converted + + Returns + ------- + F : float or a numpy array of floats, corresponding to type of Parameters + Equivalent Fahrenheit temperature(s) + + Notes + ----- + Computes `F = 1.8 * C + 32` + + Examples + -------- + >>> from scipy.constants.constants import C2F + >>> C2F(np.array([-40, 40.0])) + array([ -40., 104.]) + + """ return 1.8 * C + 32 def F2K(F): - """Convert Fahrenheit to Kelvin""" + """ + Convert Fahrenheit to Kelvin + + Parameters + ---------- + F : float-like scalar or array-like + Fahrenheit temperature(s) to be converted + + Returns + ------- + K : float or a numpy array of floats, corresponding to type of Parameters + Equivalent Kelvin temperature(s) + + Notes + ----- + Computes `K = (F - 32)/1.8 +` `zero_Celsius` where `zero_Celsius` = + 273.15, i.e., (the absolute value of) temperature "absolute zero" as + measured in Celsius. + + Examples + -------- + >>> from scipy.constants.constants import F2K + >>> F2K(np.array([-40, 104])) + array([ 233.15, 313.15]) + + """ return C2K(F2C(F)) def K2F(K): - """Convert Kelvin to Fahrenheit""" + """ + Convert Kelvin to Fahrenheit + + Parameters + ---------- + K : float-like scalar or array-like + Kelvin temperature(s) to be converted + + Returns + ------- + F : float or a numpy array of floats, corresponding to type of Parameters + Equivalent Fahrenheit temperature(s) + + Notes + ----- + Computes `F = 1.8 * (K -` `zero_Celsius` `) + 32` where `zero_Celsius` = + 273.15, i.e., (the absolute value of) temperature "absolute zero" as + measured in Celsius. 
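The Notes sections above spell out the linear formulas behind the temperature helpers; a short round-trip check under those documented names (the tolerance test itself is illustrative, not part of the module):

import numpy as np
from scipy.constants.constants import C2K, K2C, F2K, K2F

temps_c = np.array([-40.0, 0.0, 100.0])
temps_f = np.array([-40.0, 32.0, 212.0])
assert np.allclose(K2C(C2K(temps_c)), temps_c)   # Celsius -> Kelvin -> Celsius
assert np.allclose(K2F(F2K(temps_f)), temps_f)   # Fahrenheit -> Kelvin -> Fahrenheit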
+ + Examples + -------- + >>> from scipy.constants.constants import K2F + >>> K2F(np.array([233.15, 313.15])) + array([ -40., 104.]) + + """ return C2F(K2C(K)) #optics def lambda2nu(lambda_): - """Convert wavelength to optical frequency""" + """ + Convert wavelength to optical frequency + + Parameters + ---------- + lambda : float-like scalar or array-like + Wavelength(s) to be converted + + Returns + ------- + nu : float or a numpy array of floats, corresponding to type of Parameters + Equivalent optical frequency(ies) + + Notes + ----- + Computes :math:`\\nu = c / \\lambda` where `c` = 299792458.0, i.e., the + (vacuum) speed of light in meters/second. + + Examples + -------- + >>> from scipy.constants.constants import lambda2nu + >>> lambda2nu(np.array((1, speed_of_light))) + array([ 2.99792458e+08, 1.00000000e+00]) + + """ return c / lambda_ def nu2lambda(nu): - """Convert optical frequency to wavelength""" + """ + Convert optical frequency to wavelength. + + Parameters + ---------- + nu : float-like scalar or array-like + Optical frequency(ies) to be converted + + Returns + ------- + lambda : float or a numpy array of floats, corresp. to type of Parameters + Equivalent wavelength(s) + + Notes + ----- + Computes :math:`\\lambda = c / \\nu` where `c` = 299792458.0, i.e., the + (vacuum) speed of light in meters/second. + + Examples + -------- + >>> from scipy.constants.constants import nu2lambda + >>> nu2lambda(np.array((1, speed_of_light))) + array([ 2.99792458e+08, 1.00000000e+00]) + + """ return c / nu diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/basic.py python-scipy-0.8.0+dfsg1/scipy/fftpack/basic.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/basic.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/basic.py 2010-07-26 15:48:29.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 21, 2005 by - """ Discrete Fourier Transforms - basic.py """ @@ -8,19 +6,106 @@ __all__ = ['fft','ifft','fftn','ifftn','rfft','irfft', 'fft2','ifft2', 'rfftfreq'] -from numpy import asarray, zeros, swapaxes, integer, array +from numpy import zeros, swapaxes, integer, array import numpy -import _fftpack as fftpack +import _fftpack import atexit -atexit.register(fftpack.destroy_zfft_cache) -atexit.register(fftpack.destroy_zfftnd_cache) -atexit.register(fftpack.destroy_drfft_cache) +atexit.register(_fftpack.destroy_zfft_cache) +atexit.register(_fftpack.destroy_zfftnd_cache) +atexit.register(_fftpack.destroy_drfft_cache) +atexit.register(_fftpack.destroy_cfft_cache) +atexit.register(_fftpack.destroy_cfftnd_cache) +atexit.register(_fftpack.destroy_rfft_cache) del atexit def istype(arr, typeclass): return issubclass(arr.dtype.type, typeclass) +# XXX: single precision FFTs partially disabled due to accuracy issues +# for large prime-sized inputs. +# +# See http://permalink.gmane.org/gmane.comp.python.scientific.devel/13834 +# ("fftpack test failures for 0.8.0b1", Ralf Gommers, 17 Jun 2010, +# @ scipy-dev) +# +# These should be re-enabled once the problems are resolved + +def _is_safe_size(n): + """ + Is the size of FFT such that FFTPACK can handle it in single precision + with sufficient accuracy? 
+ + Composite numbers of 2, 3, and 5 are accepted, as FFTPACK has those + """ + n = int(n) + for c in (2, 3, 5): + while n % c == 0: + n /= c + return (n <= 1) + +def _fake_crfft(x, n, *a, **kw): + if _is_safe_size(n): + return _fftpack.crfft(x, n, *a, **kw) + else: + return _fftpack.zrfft(x, n, *a, **kw).astype(numpy.complex64) + +def _fake_cfft(x, n, *a, **kw): + if _is_safe_size(n): + return _fftpack.cfft(x, n, *a, **kw) + else: + return _fftpack.zfft(x, n, *a, **kw).astype(numpy.complex64) + +def _fake_rfft(x, n, *a, **kw): + if _is_safe_size(n): + return _fftpack.rfft(x, n, *a, **kw) + else: + return _fftpack.drfft(x, n, *a, **kw).astype(numpy.float32) + +def _fake_cfftnd(x, shape, *a, **kw): + if numpy.all(map(_is_safe_size, shape)): + return _fftpack.cfftnd(x, shape, *a, **kw) + else: + return _fftpack.zfftnd(x, shape, *a, **kw).astype(numpy.complex64) + +_DTYPE_TO_FFT = { +# numpy.dtype(numpy.float32): _fftpack.crfft, + numpy.dtype(numpy.float32): _fake_crfft, + numpy.dtype(numpy.float64): _fftpack.zrfft, +# numpy.dtype(numpy.complex64): _fftpack.cfft, + numpy.dtype(numpy.complex64): _fake_cfft, + numpy.dtype(numpy.complex128): _fftpack.zfft, +} + +_DTYPE_TO_RFFT = { +# numpy.dtype(numpy.float32): _fftpack.rfft, + numpy.dtype(numpy.float32): _fake_rfft, + numpy.dtype(numpy.float64): _fftpack.drfft, +} + +_DTYPE_TO_FFTN = { +# numpy.dtype(numpy.complex64): _fftpack.cfftnd, + numpy.dtype(numpy.complex64): _fake_cfftnd, + numpy.dtype(numpy.complex128): _fftpack.zfftnd, +# numpy.dtype(numpy.float32): _fftpack.cfftnd, + numpy.dtype(numpy.float32): _fake_cfftnd, + numpy.dtype(numpy.float64): _fftpack.zfftnd, +} + +def _asfarray(x): + """Like numpy asfarray, except that it does not modify x dtype if x is + already an array with a float dtype, and do not cast complex types to + real.""" + if hasattr(x, "dtype") and x.dtype.char in numpy.typecodes["AllFloat"]: + return x + else: + # We cannot use asfarray directly because it converts sequences of + # complex to sequence of real + ret = numpy.asarray(x) + if not ret.dtype.char in numpy.typecodes["AllFloat"]: + return numpy.asfarray(x) + return ret + def _fix_shape(x, n, axis): """ Internal auxiliary function for _raw_fft, _raw_fftnd.""" s = list(x.shape) @@ -79,7 +164,7 @@ [y(0),y(1),..,y((n-1)/2),y(-(n-1)/2),...,y(-1)] if n is odd where y(j) = sum[k=0..n-1] x[k] * exp(-sqrt(-1)*j*k* 2*pi/n), j = 0..n-1 - Note that y(-j) = y(n-j). + Note that y(-j) = y(n-j).conjugate(). See Also -------- @@ -96,6 +181,11 @@ This is most efficient for n a power of two. + .. note:: In scipy 0.8.0 `fft` in single precision is available, but *only* + for input array sizes which can be factorized into (combinations of) 2, + 3 and 5. For other sizes the computation will be done in double + precision. 
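The note above restricts single-precision FFTs to lengths whose prime factors are 2, 3 and 5 (the rule implemented by _is_safe_size earlier in this hunk); a standalone sketch of that check, written without the private _fftpack module:

def is_safe_fft_size(n):
    # Single precision is only used when n factors completely into 2, 3 and 5;
    # anything else falls back to the double-precision work functions.
    n = int(n)
    for c in (2, 3, 5):
        while n % c == 0:
            n //= c
    return n <= 1

print(is_safe_fft_size(480))   # True: 480 = 2**5 * 3 * 5
print(is_safe_fft_size(97))    # False: 97 is prime, so double precision is used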
+ Examples -------- >>> x = np.arange(5) @@ -103,16 +193,21 @@ True """ - tmp = asarray(x) + tmp = _asfarray(x) + + try: + work_function = _DTYPE_TO_FFT[tmp.dtype] + except KeyError: + raise ValueError("type %s is not supported" % tmp.dtype) + if istype(tmp, numpy.complex128): overwrite_x = overwrite_x or (tmp is not x and not \ hasattr(x,'__array__')) - work_function = fftpack.zfft elif istype(tmp, numpy.complex64): - raise NotImplementedError + overwrite_x = overwrite_x or (tmp is not x and not \ + hasattr(x,'__array__')) else: overwrite_x = 1 - work_function = fftpack.zrfft #return _raw_fft(tmp,n,axis,1,overwrite_x,work_function) if n is None: @@ -141,16 +236,21 @@ Optional input: see fft.__doc__ """ - tmp = asarray(x) + tmp = _asfarray(x) + + try: + work_function = _DTYPE_TO_FFT[tmp.dtype] + except KeyError: + raise ValueError("type %s is not supported" % tmp.dtype) + if istype(tmp, numpy.complex128): overwrite_x = overwrite_x or (tmp is not x and not \ hasattr(x,'__array__')) - work_function = fftpack.zfft elif istype(tmp, numpy.complex64): - raise NotImplementedError + overwrite_x = overwrite_x or (tmp is not x and not \ + hasattr(x,'__array__')) else: overwrite_x = 1 - work_function = fftpack.zrfft #return _raw_fft(tmp,n,axis,-1,overwrite_x,work_function) if n is None: @@ -178,7 +278,7 @@ where y(j) = sum[k=0..n-1] x[k] * exp(-sqrt(-1)*j*k* 2*pi/n) j = 0..n-1 - Note that y(-j) = y(n-j). + Note that y(-j) = y(n-j).conjugate(). Optional input: n @@ -194,10 +294,16 @@ Notes: y == rfft(irfft(y)) within numerical accuracy. """ - tmp = asarray(x) + tmp = _asfarray(x) + if not numpy.isrealobj(tmp): raise TypeError,"1st argument must be real sequence" - work_function = fftpack.drfft + + try: + work_function = _DTYPE_TO_RFFT[tmp.dtype] + except KeyError: + raise ValueError("type %s is not supported" % tmp.dtype) + return _raw_fft(tmp,n,axis,1,overwrite_x,work_function) @@ -238,12 +344,16 @@ Optional input: see rfft.__doc__ """ - tmp = asarray(x) + tmp = _asfarray(x) if not numpy.isrealobj(tmp): raise TypeError,"1st argument must be real sequence" - work_function = fftpack.drfft - return _raw_fft(tmp,n,axis,-1,overwrite_x,work_function) + try: + work_function = _DTYPE_TO_RFFT[tmp.dtype] + except KeyError: + raise ValueError("type %s is not supported" % tmp.dtype) + + return _raw_fft(tmp,n,axis,-1,overwrite_x,work_function) def _raw_fftnd(x, s, axes, direction, overwrite_x, work_function): """ Internal auxiliary function for fftnd, ifftnd.""" @@ -314,7 +424,7 @@ x[k_1,..,k_d] * prod[i=1..d] exp(-sqrt(-1)*2*pi/n_i * j_i * k_i) where d = len(x.shape) and n = x.shape. - Note that y[..., -j_i, ...] = y[..., n_i-j_i, ...]. + Note that y[..., -j_i, ...] = y[..., n_i-j_i, ...].conjugate(). Optional input: shape @@ -333,54 +443,58 @@ Notes: y == fftn(ifftn(y)) within numerical accuracy. 
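The Notes line above states the fftn/ifftn round-trip property; a quick numerical check of it (the random complex input is illustrative):

import numpy as np
from scipy.fftpack import fftn, ifftn

y = np.random.randn(4, 6) + 1j * np.random.randn(4, 6)
assert np.allclose(fftn(ifftn(y)), y)   # y == fftn(ifftn(y)) within numerical accuracy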
""" - tmp = asarray(x) + return _raw_fftn_dispatch(x, shape, axes, overwrite_x, 1) + +def _raw_fftn_dispatch(x, shape, axes, overwrite_x, direction): + tmp = _asfarray(x) + + try: + work_function = _DTYPE_TO_FFTN[tmp.dtype] + except KeyError: + raise ValueError("type %s is not supported" % tmp.dtype) + if istype(tmp, numpy.complex128): overwrite_x = overwrite_x or (tmp is not x and not \ hasattr(x,'__array__')) - work_function = fftpack.zfftnd elif istype(tmp, numpy.complex64): - raise NotImplementedError + pass else: overwrite_x = 1 - work_function = fftpack.zfftnd - return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) + return _raw_fftnd(tmp,shape,axes,direction,overwrite_x,work_function) def ifftn(x, shape=None, axes=None, overwrite_x=0): - """ ifftn(x, s=None, axes=None, overwrite_x=0) -> y - + """ Return inverse multi-dimensional discrete Fourier transform of arbitrary type sequence x. - The returned array contains + The returned array contains:: y[j_1,..,j_d] = 1/p * sum[k_1=0..n_1-1, ..., k_d=0..n_d-1] x[k_1,..,k_d] * prod[i=1..d] exp(sqrt(-1)*2*pi/n_i * j_i * k_i) - where d = len(x.shape), n = x.shape, and p = prod[i=1..d] n_i. + where ``d = len(x.shape)``, ``n = x.shape``, and ``p = prod[i=1..d] n_i``. - Optional input: see fftn.__doc__ - """ - tmp = asarray(x) - if istype(tmp, numpy.complex128): - overwrite_x = overwrite_x or (tmp is not x and not \ - hasattr(x,'__array__')) - work_function = fftpack.zfftnd - elif istype(tmp, numpy.complex64): - raise NotImplementedError - else: - overwrite_x = 1 - work_function = fftpack.zfftnd - return _raw_fftnd(tmp,shape,axes,-1,overwrite_x,work_function) + For description of parameters see `fftn`. + See Also + -------- + fftn : for detailed information. + + """ + return _raw_fftn_dispatch(x, shape, axes, overwrite_x, -1) def fft2(x, shape=None, axes=(-2,-1), overwrite_x=0): - """ fft2(x, shape=None, axes=(-2,-1), overwrite_x=0) -> y + """ + 2-D discrete Fourier transform. - Return two-dimensional discrete Fourier transform of - arbitrary type sequence x. + Return the two-dimensional discrete Fourier transform of the 2-D argument + `x`. + + See Also + -------- + fftn : for detailed information. - See fftn.__doc__ for more information. """ return fftn(x,shape,axes,overwrite_x) diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/fftpack.pyf python-scipy-0.8.0+dfsg1/scipy/fftpack/fftpack.pyf --- python-scipy-0.7.2+dfsg1/scipy/fftpack/fftpack.pyf 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/fftpack.pyf 2010-07-26 15:48:29.000000000 +0100 @@ -83,6 +83,168 @@ intent(c) destroy_drfft_cache end subroutine destroy_drfft_cache + /* Single precision version */ + subroutine cfft(x,n,direction,howmany,normalize) + ! y = fft(x[,n,direction,normalize,overwrite_x]) + intent(c) cfft + complex*8 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: direction = 1 + integer optional,intent(c,in),depend(direction) & + :: normalize = (direction<0) + end subroutine cfft + + subroutine rfft(x,n,direction,howmany,normalize) + ! 
y = rfft(x[,n,direction,normalize,overwrite_x]) + intent(c) rfft + real*4 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: direction = 1 + integer optional,intent(c,in),depend(direction) & + :: normalize = (direction<0) + end subroutine rfft + + subroutine crfft(x,n,direction,howmany,normalize) + ! y = crfft(x[,n,direction,normalize,overwrite_x]) + intent(c) crfft + complex*8 intent(c,in,out,overwrite,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: direction = 1 + integer optional,intent(c,in),depend(direction) & + :: normalize = (direction<0) + end subroutine crfft + + subroutine cfftnd(x,r,s,direction,howmany,normalize,j) + ! y = cfftnd(x[,s,direction,normalize,overwrite_x]) + intent(c) cfftnd + complex*8 intent(c,in,out,copy,out=y) :: x(*) + integer intent(c,hide),depend(x) :: r=old_rank(x) + integer intent(c,hide) :: j=0 + integer optional,depend(r),dimension(r),intent(c,in) & + :: s=old_shape(x,j++) + check(r>=len(s)) s + integer intent(c,hide) :: howmany = 1 + integer optional,intent(c,in) :: direction = 1 + integer optional,intent(c,in),depend(direction) :: & + normalize = (direction<0) + callprotoargument complex_float*,int,int*,int,int,int + callstatement {& + int i,sz=1,xsz=size(x); & + for (i=0;i0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: normalize = 0 + end subroutine ddct1 + + subroutine ddct2(x,n,howmany,normalize) + ! y = ddct2(x[,n,normalize,overwrite_x]) + intent(c) ddct2 + real*8 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: normalize = 0 + end subroutine ddct2 + + subroutine ddct3(x,n,howmany,normalize) + ! y = ddct3(x[,n,normalize,overwrite_x]) + intent(c) ddct3 + real*8 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: normalize = 0 + end subroutine ddct3 + + subroutine dct1(x,n,howmany,normalize) + ! y = dct1(x[,n,normalize,overwrite_x]) + intent(c) dct1 + real*4 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: normalize = 0 + end subroutine dct1 + + subroutine dct2(x,n,howmany,normalize) + ! y = dct2(x[,n,normalize,overwrite_x]) + intent(c) dct2 + real*4 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: normalize = 0 + end subroutine dct2 + + subroutine dct3(x,n,howmany,normalize) + ! 
y = dct3(x[,n,normalize,overwrite_x]) + intent(c) dct3 + real*4 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: normalize = 0 + end subroutine dct3 + + subroutine destroy_ddct2_cache() + intent(c) destroy_ddct2_cache + end subroutine destroy_ddct2_cache + + subroutine destroy_ddct1_cache() + intent(c) destroy_ddct1_cache + end subroutine destroy_ddct1_cache + + subroutine destroy_dct2_cache() + intent(c) destroy_dct2_cache + end subroutine destroy_dct2_cache + + subroutine destroy_dct1_cache() + intent(c) destroy_dct1_cache + end subroutine destroy_dct1_cache + end interface end python module _fftpack diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/__init__.py python-scipy-0.8.0+dfsg1/scipy/fftpack/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/__init__.py 2010-07-26 15:48:29.000000000 +0100 @@ -16,6 +16,8 @@ register_func(k, eval(k)) del k, register_func +from realtransforms import * +__all__.extend(['dct', 'idct']) from numpy.testing import Tester test = Tester().test diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/pseudo_diffs.py python-scipy-0.8.0+dfsg1/scipy/fftpack/pseudo_diffs.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/pseudo_diffs.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/pseudo_diffs.py 2010-07-26 15:48:29.000000000 +0100 @@ -252,26 +252,30 @@ _cache = {} def sc_diff(x, a, b, period=None, _cache = _cache): - """ sc_diff(x, a, b, period=2*pi) -> y - + """ Return (a,b)-sinh/cosh pseudo-derivative of a periodic sequence x. If x_j and y_j are Fourier coefficients of periodic functions x - and y, respectively, then + and y, respectively, then:: y_j = sqrt(-1)*sinh(j*a*2*pi/period)/cosh(j*b*2*pi/period) * x_j y_0 = 0 - Input: - a,b + Parameters + ---------- + x : array_like + Input array. + a,b : float Defines the parameters of the sinh/cosh pseudo-differential operator. - period + period : float, optional The period of the sequence x. Default is 2*pi. - Notes: - sc_diff(cs_diff(x,a,b),b,a) == x - For even len(x), the Nyquist mode of x is taken zero. + Notes + ----- + ``sc_diff(cs_diff(x,a,b),b,a) == x`` + For even ``len(x)``, the Nyquist mode of x is taken as zero. + """ tmp = asarray(x) if iscomplexobj(tmp): diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/realtransforms.py python-scipy-0.8.0+dfsg1/scipy/fftpack/realtransforms.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/realtransforms.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/realtransforms.py 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,229 @@ +""" +Real spectrum tranforms (DCT, DST, MDCT) +""" + +__all__ = ['dct', 'idct'] + +import numpy as np +from scipy.fftpack import _fftpack + +import atexit +atexit.register(_fftpack.destroy_ddct1_cache) +atexit.register(_fftpack.destroy_ddct2_cache) +atexit.register(_fftpack.destroy_dct1_cache) +atexit.register(_fftpack.destroy_dct2_cache) + +def dct(x, type=2, n=None, axis=-1, norm=None): + """ + Return the Discrete Cosine Transform of arbitrary type sequence x. + + Parameters + ---------- + x : array-like + The input array. + type : {1, 2, 3}, optional + Type of the DCT (see Notes). Default type is 2. + n : int, optional + Length of the transform. 
+ axis : int, optional + Axis over which to compute the transform. + norm : {None, 'ortho'}, optional + Normalization mode (see Notes). Default is None. + + Returns + ------- + y : ndarray of real + The transformed input array. + + See Also + -------- + idct + + Notes + ----- + For a single dimension array ``x``, ``dct(x, norm='ortho')`` is equal to + matlab ``dct(x)``. + + There are theoretically 8 types of the DCT, only the first 3 types are + implemented in scipy. 'The' DCT generally refers to DCT type 2, and 'the' + Inverse DCT generally refers to DCT type 3. + + type I + ~~~~~~ + There are several definitions of the DCT-I; we use the following + (for ``norm=None``):: + + N-2 + y[k] = x[0] + (-1)**k x[N-1] + 2 * sum x[n]*cos(pi*k*n/(N-1)) + n=1 + + Only None is supported as normalization mode for DCT-I. Note also that the + DCT-I is only supported for input size > 1 + + type II + ~~~~~~~ + There are several definitions of the DCT-II; we use the following + (for ``norm=None``):: + + + N-1 + y[k] = 2* sum x[n]*cos(pi*k*(2n+1)/(2*N)), 0 <= k < N. + n=0 + + If ``norm='ortho'``, ``y[k]`` is multiplied by a scaling factor `f`:: + + f = sqrt(1/(4*N)) if k = 0, + f = sqrt(1/(2*N)) otherwise. + + Which makes the corresponding matrix of coefficients orthonormal + (``OO' = Id``). + + type III + ~~~~~~~~ + + There are several definitions, we use the following + (for ``norm=None``):: + + N-1 + y[k] = x[0] + 2 * sum x[n]*cos(pi*(k+0.5)*n/N), 0 <= k < N. + n=1 + + or, for ``norm='ortho'`` and 0 <= k < N:: + + N-1 + y[k] = x[0] / sqrt(N) + sqrt(1/N) * sum x[n]*cos(pi*(k+0.5)*n/N) + n=1 + + The (unnormalized) DCT-III is the inverse of the (unnormalized) DCT-II, up + to a factor `2N`. The orthonormalized DCT-III is exactly the inverse of + the orthonormalized DCT-II. + + References + ---------- + + http://en.wikipedia.org/wiki/Discrete_cosine_transform + + 'A Fast Cosine Transform in One and Two Dimensions', by J. Makhoul, `IEEE + Transactions on acoustics, speech and signal processing` vol. 28(1), + pp. 27-34, http://dx.doi.org/10.1109/TASSP.1980.1163351 (1980). + + """ + if type == 1 and norm is not None: + raise NotImplementedError( + "Orthonormalization not yet supported for DCT-I") + return _dct(x, type, n, axis, normalize=norm) + +def idct(x, type=2, n=None, axis=-1, norm=None): + """ + Return the Inverse Discrete Cosine Transform of arbitrary type sequence x. + + Parameters + ---------- + x : array-like + The input array. + type : {1, 2, 3}, optional + Type of the DCT (see Notes). Default type is 2. + n : int, optional + Length of the transform. + axis : int, optional + Axis over which to compute the transform. + norm : {None, 'ortho'}, optional + Normalization mode (see Notes). Default is None. + + Returns + ------- + y : ndarray of real + The transformed input array. + + See Also + -------- + dct + + Notes + ----- + For a single dimension array `x`, ``idct(x, norm='ortho')`` is equal to + matlab ``idct(x)``. + + 'The' IDCT is the IDCT of type 2, which is the same as DCT of type 3. + + IDCT of type 1 is the DCT of type 1, IDCT of type 2 is the DCT of type 3, + and IDCT of type 3 is the DCT of type 2. For the definition of these types, + see `dct`. 
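The notes above describe the type-2/type-3 inverse pairing used by idct; a small round-trip sketch with the dct/idct functions added in this file (input values are illustrative):

import numpy as np
from scipy.fftpack import dct, idct

x = np.array([4.0, 3.0, 5.0, 10.0])
X = dct(x, type=2, norm='ortho')
assert np.allclose(idct(X, type=2, norm='ortho'), x)   # orthonormal pair is exactly inverse

N = x.shape[0]
# Unnormalized: DCT-III inverts DCT-II only up to the documented factor 2*N.
assert np.allclose(dct(dct(x, type=2), type=3) / (2 * N), x)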
+ + """ + if type == 1 and norm is not None: + raise NotImplementedError( + "Orthonormalization not yet supported for IDCT-I") + # Inverse/forward type table + _TP = {1:1, 2:3, 3:2} + return _dct(x, _TP[type], n, axis, normalize=norm) + +def _dct(x, type, n=None, axis=-1, overwrite_x=0, normalize=None): + """ + Return Discrete Cosine Transform of arbitrary type sequence x. + + Parameters + ---------- + x : array-like + input array. + n : int, optional + Length of the transform. + axis : int, optional + Axis along which the dct is computed. (default=-1) + overwrite_x : bool, optional + If True the contents of x can be destroyed. (default=False) + + Returns + ------- + z : real ndarray + + """ + tmp = np.asarray(x) + if not np.isrealobj(tmp): + raise TypeError,"1st argument must be real sequence" + + if n is None: + n = tmp.shape[axis] + else: + raise NotImplemented("Padding/truncating not yet implemented") + + if tmp.dtype == np.double: + if type == 1: + f = _fftpack.ddct1 + elif type == 2: + f = _fftpack.ddct2 + elif type == 3: + f = _fftpack.ddct3 + else: + raise ValueError("Type %d not understood" % type) + elif tmp.dtype == np.float32: + if type == 1: + f = _fftpack.dct1 + elif type == 2: + f = _fftpack.dct2 + elif type == 3: + f = _fftpack.dct3 + else: + raise ValueError("Type %d not understood" % type) + else: + raise ValueError("dtype %s not supported" % tmp.dtype) + + if normalize: + if normalize == "ortho": + nm = 1 + else: + raise ValueError("Unknown normalize mode %s" % normalize) + else: + nm = 0 + + if type == 1 and n < 2: + raise ValueError("DCT-I is not defined for size < 2") + + if axis == -1 or axis == len(tmp.shape) - 1: + return f(tmp, n, nm, overwrite_x) + #else: + # raise NotImplementedError("Axis arg not yet implemented") + + tmp = np.swapaxes(tmp, axis, -1) + tmp = f(tmp, n, nm, overwrite_x) + return np.swapaxes(tmp, axis, -1) diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/SConscript python-scipy-0.8.0+dfsg1/scipy/fftpack/SConscript --- python-scipy-0.7.2+dfsg1/scipy/fftpack/SConscript 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/SConscript 2010-07-26 15:48:29.000000000 +0100 @@ -1,4 +1,4 @@ -# Last Change: Sat Nov 01 10:00 PM 2008 J +# Last Change: Sat Jan 24 04:00 PM 2009 J # vim:syntax=python from os.path import join as pjoin @@ -7,6 +7,10 @@ env = GetNumpyEnvironment(ARGUMENTS) env.Tool('f2py') +config = env.NumpyConfigure() +config.CheckF77Mangling() +config.Finish() + # Build dfftpack src = [pjoin("src/dfftpack", i) for i in [ "dcosqb.f", "dcosqf.f", "dcosqi.f", "dcost.f", "dcosti.f", "dfftb.f", "dfftb1.f", "dfftf.f", "dfftf1.f", "dffti.f", @@ -14,11 +18,20 @@ "dsinti.f", "zfftb.f", "zfftb1.f", "zfftf.f", "zfftf1.f", "zffti.f", "zffti1.f"]] dfftpack = env.DistutilsStaticExtLibrary('dfftpack', source = [str(s) for s in src]) -env.PrependUnique(LIBS = ['dfftpack']) + +# Build fftpack (single prec) +src = [pjoin("src/fftpack", i) for i in [ 'cfftb.f', 'cfftb1.f', 'cfftf.f', +'cfftf1.f', 'cffti.f', 'cffti1.f', 'cosqb.f', 'cosqf.f', 'cosqi.f', 'cost.f', +'costi.f', 'rfftb.f', 'rfftb1.f', 'rfftf.f', 'rfftf1.f', 'rffti.f', +'rffti1.f', 'sinqb.f', 'sinqf.f', 'sinqi.f', 'sint.f', 'sint1.f', 'sinti.f']] +fftpack = env.DistutilsStaticExtLibrary('fftpack', source = [str(s) for s in src]) + +env.PrependUnique(LIBS = ['fftpack', 'dfftpack']) env.PrependUnique(LIBPATH = ['.']) # Build _fftpack src = ['src/zfft.c','src/drfft.c','src/zrfft.c', 'src/zfftnd.c', 'fftpack.pyf'] +src += env.FromCTemplate('src/dct.c.src') 
env.NumpyPythonExtension('_fftpack', src) # Build convolve diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/setup.py python-scipy-0.8.0+dfsg1/scipy/fftpack/setup.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/setup.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/setup.py 2010-07-26 15:48:29.000000000 +0100 @@ -14,14 +14,16 @@ config.add_library('dfftpack', sources=[join('src/dfftpack','*.f')]) + config.add_library('fftpack', + sources=[join('src/fftpack','*.f')]) + sources = ['fftpack.pyf','src/zfft.c','src/drfft.c','src/zrfft.c', - 'src/zfftnd.c'] + 'src/zfftnd.c', 'src/dct.c.src'] config.add_extension('_fftpack', sources=sources, - libraries=['dfftpack'], - depends=['src/zfft_fftpack.c', 'src/drfft_fftpack.c', - 'src/zfftnd_fftpack.c']) + libraries=['dfftpack', 'fftpack'], + include_dirs=['src']) config.add_extension('convolve', sources=['convolve.pyf','src/convolve.c'], diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/convolve.c python-scipy-0.8.0+dfsg1/scipy/fftpack/src/convolve.c --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/convolve.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/convolve.c 2010-07-26 15:48:29.000000000 +0100 @@ -24,7 +24,7 @@ extern void destroy_convolve_cache(void) { - destroy_dfftpack_caches(); + destroy_dfftpack_cache(); } /**************** convolve **********************/ diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/dct.c.src python-scipy-0.8.0+dfsg1/scipy/fftpack/src/dct.c.src --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/dct.c.src 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/dct.c.src 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,158 @@ +/* vim:syntax=c + * vim:sw=4 + * + * Interfaces to the DCT transforms of fftpack + */ +#include + +#include "fftpack.h" + +enum normalize { + DCT_NORMALIZE_NO = 0, + DCT_NORMALIZE_ORTHONORMAL = 1 +}; + +/**begin repeat + +#type=float,double# +#pref=,d# +#PREF=,D# +*/ +extern void F_FUNC(@pref@costi, @PREF@COSTI)(int*, @type@*); +extern void F_FUNC(@pref@cost, @PREF@COST)(int*, @type@*, @type@*); +extern void F_FUNC(@pref@cosqi, @PREF@COSQI)(int*, @type@*); +extern void F_FUNC(@pref@cosqb, @PREF@COSQB)(int*, @type@*, @type@*); +extern void F_FUNC(@pref@cosqf, @PREF@COSQF)(int*, @type@*, @type@*); + +GEN_CACHE(@pref@dct1,(int n) + ,@type@* wsave; + ,(caches_@pref@dct1[i].n==n) + ,caches_@pref@dct1[id].wsave = malloc(sizeof(@type@)*(3*n+15)); + F_FUNC(@pref@costi, @PREF@COSTI)(&n, caches_@pref@dct1[id].wsave); + ,free(caches_@pref@dct1[id].wsave); + ,10) + +GEN_CACHE(@pref@dct2,(int n) + ,@type@* wsave; + ,(caches_@pref@dct2[i].n==n) + ,caches_@pref@dct2[id].wsave = malloc(sizeof(@type@)*(3*n+15)); + F_FUNC(@pref@cosqi,@PREF@COSQI)(&n,caches_@pref@dct2[id].wsave); + ,free(caches_@pref@dct2[id].wsave); + ,10) + +void @pref@dct1(@type@ * inout, int n, int howmany, int normalize) +{ + int i, j; + @type@ *ptr = inout, n1, n2; + @type@ *wsave = NULL; + + wsave = caches_@pref@dct1[get_cache_id_@pref@dct1(n)].wsave; + + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(@pref@cost, @PREF@COST)(&n, ptr, wsave); + } + + switch (normalize) { + case DCT_NORMALIZE_NO: + break; +#if 0 + case DCT_NORMALIZE_ORTHONORMAL: + ptr = inout; + n1 = sqrt(0.5 / (n-1)); + n2 = sqrt(1. 
/ (n-1)); + for (i = 0; i < howmany; ++i, ptr+=n) { + ptr[0] *= n1; + for (j = 1; j < n-1; ++j) { + ptr[j] *= n2; + } + } + break; +#endif + default: + fprintf(stderr, "dct1: normalize not yet supported=%d\n", + normalize); + break; + } +} + +void @pref@dct2(@type@ * inout, int n, int howmany, int normalize) +{ + int i, j; + @type@ *ptr = inout; + @type@ *wsave = NULL; + @type@ n1, n2; + + wsave = caches_@pref@dct2[get_cache_id_@pref@dct2(n)].wsave; + + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(@pref@cosqb, @PREF@COSQB)(&n, ptr, wsave); + + } + + switch (normalize) { + case DCT_NORMALIZE_NO: + ptr = inout; + /* 0.5 coeff comes from fftpack defining DCT as + * 4 * sum(cos(something)), whereas most definition + * use 2 */ + for (i = 0; i < n * howmany; ++i) { + ptr[i] *= 0.5; + } + break; + case DCT_NORMALIZE_ORTHONORMAL: + ptr = inout; + /* 0.5 coeff comes from fftpack defining DCT as + * 4 * sum(cos(something)), whereas most definition + * use 2 */ + n1 = 0.25 * sqrt(1./n); + n2 = 0.25 * sqrt(2./n); + for (i = 0; i < howmany; ++i, ptr+=n) { + ptr[0] *= n1; + for (j = 1; j < n; ++j) { + ptr[j] *= n2; + } + } + break; + default: + fprintf(stderr, "dct2: normalize not yet supported=%d\n", + normalize); + break; + } +} + +void @pref@dct3(@type@ * inout, int n, int howmany, int normalize) +{ + int i, j; + @type@ *ptr = inout; + @type@ *wsave = NULL; + @type@ n1, n2; + + wsave = caches_@pref@dct2[get_cache_id_@pref@dct2(n)].wsave; + + switch (normalize) { + case DCT_NORMALIZE_NO: + break; + case DCT_NORMALIZE_ORTHONORMAL: + n1 = sqrt(1./n); + n2 = sqrt(0.5/n); + for (i = 0; i < howmany; ++i, ptr+=n) { + ptr[0] *= n1; + for (j = 1; j < n; ++j) { + ptr[j] *= n2; + } + } + break; + default: + fprintf(stderr, "dct3: normalize not yet supported=%d\n", + normalize); + break; + } + + ptr = inout; + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(@pref@cosqf, @PREF@COSQF)(&n, ptr, wsave); + + } + +} +/**end repeat**/ diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/drfft.c python-scipy-0.8.0+dfsg1/scipy/fftpack/src/drfft.c --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/drfft.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/drfft.c 2010-07-26 15:48:29.000000000 +0100 @@ -6,19 +6,98 @@ #include "fftpack.h" -/* The following macro convert private backend specific function to the public - * functions exported by the module */ -#define GEN_PUBLIC_API(name) \ -void destroy_drfft_cache(void)\ -{\ - destroy_dr##name##_caches();\ -}\ -\ -void drfft(double *inout, int n, \ - int direction, int howmany, int normalize)\ -{\ - drfft_##name(inout, n, direction, howmany, normalize);\ +extern void F_FUNC(dfftf, DFFTF) (int *, double *, double *); +extern void F_FUNC(dfftb, DFFTB) (int *, double *, double *); +extern void F_FUNC(dffti, DFFTI) (int *, double *); +extern void F_FUNC(rfftf, RFFTF) (int *, float *, float *); +extern void F_FUNC(rfftb, RFFTB) (int *, float *, float *); +extern void F_FUNC(rffti, RFFTI) (int *, float *); + + +GEN_CACHE(drfft, (int n) + , double *wsave; + , (caches_drfft[i].n == n) + , caches_drfft[id].wsave = + (double *) malloc(sizeof(double) * (2 * n + 15)); + F_FUNC(dffti, DFFTI) (&n, caches_drfft[id].wsave); + , free(caches_drfft[id].wsave); + , 10) + +GEN_CACHE(rfft, (int n) + , float *wsave; + , (caches_rfft[i].n == n) + , caches_rfft[id].wsave = + (float *) malloc(sizeof(float) * (2 * n + 15)); + F_FUNC(rffti, RFFTI) (&n, caches_rfft[id].wsave); + , free(caches_rfft[id].wsave); + , 10) + +void drfft(double *inout, int n, int 
direction, int howmany, + int normalize) +{ + int i; + double *ptr = inout; + double *wsave = NULL; + wsave = caches_drfft[get_cache_id_drfft(n)].wsave; + + + switch (direction) { + case 1: + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(dfftf,DFFTF)(&n, ptr, wsave); + } + break; + + case -1: + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(dfftb,DFFTB)(&n, ptr, wsave); + } + break; + + default: + fprintf(stderr, "drfft: invalid direction=%d\n", direction); + } + + if (normalize) { + double d = 1.0 / n; + ptr = inout; + for (i = n * howmany - 1; i >= 0; --i) { + (*(ptr++)) *= d; + } + } } -#include "drfft_fftpack.c" -GEN_PUBLIC_API(fftpack) +void rfft(float *inout, int n, int direction, int howmany, + int normalize) +{ + int i; + float *ptr = inout; + float *wsave = NULL; + wsave = caches_rfft[get_cache_id_rfft(n)].wsave; + + + switch (direction) { + case 1: + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(rfftf,RFFTF)(&n, ptr, wsave); + } + break; + + case -1: + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(rfftb,RFFTB)(&n, ptr, wsave); + } + break; + + default: + fprintf(stderr, "rfft: invalid direction=%d\n", direction); + } + + if (normalize) { + float d = 1.0 / n; + ptr = inout; + for (i = n * howmany - 1; i >= 0; --i) { + (*(ptr++)) *= d; + } + } +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/drfft_fftpack.c python-scipy-0.8.0+dfsg1/scipy/fftpack/src/drfft_fftpack.c --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/drfft_fftpack.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/drfft_fftpack.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,54 +0,0 @@ -/* - * Last Change: Wed Aug 01 07:00 PM 2007 J - * - * FFTPACK implementation - * - * Original code by Pearu Peterson. - */ - -extern void F_FUNC(dfftf, DFFTF) (int *, double *, double *); -extern void F_FUNC(dfftb, DFFTB) (int *, double *, double *); -extern void F_FUNC(dffti, DFFTI) (int *, double *); -GEN_CACHE(drfftpack, (int n) - , double *wsave; - , (caches_drfftpack[i].n == n) - , caches_drfftpack[id].wsave = - (double *) malloc(sizeof(double) * (2 * n + 15)); - F_FUNC(dffti, DFFTI) (&n, caches_drfftpack[id].wsave); - , free(caches_drfftpack[id].wsave); - , 10) - -static void drfft_fftpack(double *inout, int n, int direction, int howmany, - int normalize) -{ - int i; - double *ptr = inout; - double *wsave = NULL; - wsave = caches_drfftpack[get_cache_id_drfftpack(n)].wsave; - - - switch (direction) { - case 1: - for (i = 0; i < howmany; ++i, ptr += n) { - dfftf_(&n, ptr, wsave); - } - break; - - case -1: - for (i = 0; i < howmany; ++i, ptr += n) { - dfftb_(&n, ptr, wsave); - } - break; - - default: - fprintf(stderr, "drfft: invalid direction=%d\n", direction); - } - - if (normalize) { - double d = 1.0 / n; - ptr = inout; - for (i = n * howmany - 1; i >= 0; --i) { - (*(ptr++)) *= d; - } - } -} diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftb1.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftb1.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftb1.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftb1.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,368 @@ + SUBROUTINE CFFTB1 (N,C,CH,WA,IFAC) + DIMENSION CH(*) ,C(*) ,WA(*) ,IFAC(*) + NF = IFAC(2) + NA = 0 + L1 = 1 + IW = 1 + DO 116 K1=1,NF + IP = IFAC(K1+2) + L2 = IP*L1 + IDO = N/L2 + IDOT = IDO+IDO + IDL1 = IDOT*L1 + IF (IP .NE. 4) GO TO 103 + IX2 = IW+IDOT + IX3 = IX2+IDOT + IF (NA .NE. 
0) GO TO 101 + CALL PASSB4 (IDOT,L1,C,CH,WA(IW),WA(IX2),WA(IX3)) + GO TO 102 + 101 CALL PASSB4 (IDOT,L1,CH,C,WA(IW),WA(IX2),WA(IX3)) + 102 NA = 1-NA + GO TO 115 + 103 IF (IP .NE. 2) GO TO 106 + IF (NA .NE. 0) GO TO 104 + CALL PASSB2 (IDOT,L1,C,CH,WA(IW)) + GO TO 105 + 104 CALL PASSB2 (IDOT,L1,CH,C,WA(IW)) + 105 NA = 1-NA + GO TO 115 + 106 IF (IP .NE. 3) GO TO 109 + IX2 = IW+IDOT + IF (NA .NE. 0) GO TO 107 + CALL PASSB3 (IDOT,L1,C,CH,WA(IW),WA(IX2)) + GO TO 108 + 107 CALL PASSB3 (IDOT,L1,CH,C,WA(IW),WA(IX2)) + 108 NA = 1-NA + GO TO 115 + 109 IF (IP .NE. 5) GO TO 112 + IX2 = IW+IDOT + IX3 = IX2+IDOT + IX4 = IX3+IDOT + IF (NA .NE. 0) GO TO 110 + CALL PASSB5 (IDOT,L1,C,CH,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + GO TO 111 + 110 CALL PASSB5 (IDOT,L1,CH,C,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + 111 NA = 1-NA + GO TO 115 + 112 IF (NA .NE. 0) GO TO 113 + CALL PASSB (NAC,IDOT,IP,L1,IDL1,C,C,C,CH,CH,WA(IW)) + GO TO 114 + 113 CALL PASSB (NAC,IDOT,IP,L1,IDL1,CH,CH,CH,C,C,WA(IW)) + 114 IF (NAC .NE. 0) NA = 1-NA + 115 L1 = L2 + IW = IW+(IP-1)*IDOT + 116 CONTINUE + IF (NA .EQ. 0) RETURN + N2 = N+N + DO 117 I=1,N2 + C(I) = CH(I) + 117 CONTINUE + RETURN + END + SUBROUTINE PASSB2 (IDO,L1,CC,CH,WA1) + DIMENSION CC(IDO,2,L1) ,CH(IDO,L1,2) , + 1 WA1(*) + IF (IDO .GT. 2) GO TO 102 + DO 101 K=1,L1 + CH(1,K,1) = CC(1,1,K)+CC(1,2,K) + CH(1,K,2) = CC(1,1,K)-CC(1,2,K) + CH(2,K,1) = CC(2,1,K)+CC(2,2,K) + CH(2,K,2) = CC(2,1,K)-CC(2,2,K) + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + CH(I-1,K,1) = CC(I-1,1,K)+CC(I-1,2,K) + TR2 = CC(I-1,1,K)-CC(I-1,2,K) + CH(I,K,1) = CC(I,1,K)+CC(I,2,K) + TI2 = CC(I,1,K)-CC(I,2,K) + CH(I,K,2) = WA1(I-1)*TI2+WA1(I)*TR2 + CH(I-1,K,2) = WA1(I-1)*TR2-WA1(I)*TI2 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSB3 (IDO,L1,CC,CH,WA1,WA2) + DIMENSION CC(IDO,3,L1) ,CH(IDO,L1,3) , + 1 WA1(*) ,WA2(*) + DATA TAUR,TAUI /-.5,.866025403784439/ + IF (IDO .NE. 2) GO TO 102 + DO 101 K=1,L1 + TR2 = CC(1,2,K)+CC(1,3,K) + CR2 = CC(1,1,K)+TAUR*TR2 + CH(1,K,1) = CC(1,1,K)+TR2 + TI2 = CC(2,2,K)+CC(2,3,K) + CI2 = CC(2,1,K)+TAUR*TI2 + CH(2,K,1) = CC(2,1,K)+TI2 + CR3 = TAUI*(CC(1,2,K)-CC(1,3,K)) + CI3 = TAUI*(CC(2,2,K)-CC(2,3,K)) + CH(1,K,2) = CR2-CI3 + CH(1,K,3) = CR2+CI3 + CH(2,K,2) = CI2+CR3 + CH(2,K,3) = CI2-CR3 + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + TR2 = CC(I-1,2,K)+CC(I-1,3,K) + CR2 = CC(I-1,1,K)+TAUR*TR2 + CH(I-1,K,1) = CC(I-1,1,K)+TR2 + TI2 = CC(I,2,K)+CC(I,3,K) + CI2 = CC(I,1,K)+TAUR*TI2 + CH(I,K,1) = CC(I,1,K)+TI2 + CR3 = TAUI*(CC(I-1,2,K)-CC(I-1,3,K)) + CI3 = TAUI*(CC(I,2,K)-CC(I,3,K)) + DR2 = CR2-CI3 + DR3 = CR2+CI3 + DI2 = CI2+CR3 + DI3 = CI2-CR3 + CH(I,K,2) = WA1(I-1)*DI2+WA1(I)*DR2 + CH(I-1,K,2) = WA1(I-1)*DR2-WA1(I)*DI2 + CH(I,K,3) = WA2(I-1)*DI3+WA2(I)*DR3 + CH(I-1,K,3) = WA2(I-1)*DR3-WA2(I)*DI3 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSB4 (IDO,L1,CC,CH,WA1,WA2,WA3) + DIMENSION CC(IDO,4,L1) ,CH(IDO,L1,4) , + 1 WA1(*) ,WA2(*) ,WA3(*) + IF (IDO .NE. 
2) GO TO 102 + DO 101 K=1,L1 + TI1 = CC(2,1,K)-CC(2,3,K) + TI2 = CC(2,1,K)+CC(2,3,K) + TR4 = CC(2,4,K)-CC(2,2,K) + TI3 = CC(2,2,K)+CC(2,4,K) + TR1 = CC(1,1,K)-CC(1,3,K) + TR2 = CC(1,1,K)+CC(1,3,K) + TI4 = CC(1,2,K)-CC(1,4,K) + TR3 = CC(1,2,K)+CC(1,4,K) + CH(1,K,1) = TR2+TR3 + CH(1,K,3) = TR2-TR3 + CH(2,K,1) = TI2+TI3 + CH(2,K,3) = TI2-TI3 + CH(1,K,2) = TR1+TR4 + CH(1,K,4) = TR1-TR4 + CH(2,K,2) = TI1+TI4 + CH(2,K,4) = TI1-TI4 + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + TI1 = CC(I,1,K)-CC(I,3,K) + TI2 = CC(I,1,K)+CC(I,3,K) + TI3 = CC(I,2,K)+CC(I,4,K) + TR4 = CC(I,4,K)-CC(I,2,K) + TR1 = CC(I-1,1,K)-CC(I-1,3,K) + TR2 = CC(I-1,1,K)+CC(I-1,3,K) + TI4 = CC(I-1,2,K)-CC(I-1,4,K) + TR3 = CC(I-1,2,K)+CC(I-1,4,K) + CH(I-1,K,1) = TR2+TR3 + CR3 = TR2-TR3 + CH(I,K,1) = TI2+TI3 + CI3 = TI2-TI3 + CR2 = TR1+TR4 + CR4 = TR1-TR4 + CI2 = TI1+TI4 + CI4 = TI1-TI4 + CH(I-1,K,2) = WA1(I-1)*CR2-WA1(I)*CI2 + CH(I,K,2) = WA1(I-1)*CI2+WA1(I)*CR2 + CH(I-1,K,3) = WA2(I-1)*CR3-WA2(I)*CI3 + CH(I,K,3) = WA2(I-1)*CI3+WA2(I)*CR3 + CH(I-1,K,4) = WA3(I-1)*CR4-WA3(I)*CI4 + CH(I,K,4) = WA3(I-1)*CI4+WA3(I)*CR4 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSB5 (IDO,L1,CC,CH,WA1,WA2,WA3,WA4) + DIMENSION CC(IDO,5,L1) ,CH(IDO,L1,5) , + 1 WA1(*) ,WA2(*) ,WA3(*) ,WA4(*) + DATA TR11,TI11,TR12,TI12 /.309016994374947,.951056516295154, + 1-.809016994374947,.587785252292473/ + IF (IDO .NE. 2) GO TO 102 + DO 101 K=1,L1 + TI5 = CC(2,2,K)-CC(2,5,K) + TI2 = CC(2,2,K)+CC(2,5,K) + TI4 = CC(2,3,K)-CC(2,4,K) + TI3 = CC(2,3,K)+CC(2,4,K) + TR5 = CC(1,2,K)-CC(1,5,K) + TR2 = CC(1,2,K)+CC(1,5,K) + TR4 = CC(1,3,K)-CC(1,4,K) + TR3 = CC(1,3,K)+CC(1,4,K) + CH(1,K,1) = CC(1,1,K)+TR2+TR3 + CH(2,K,1) = CC(2,1,K)+TI2+TI3 + CR2 = CC(1,1,K)+TR11*TR2+TR12*TR3 + CI2 = CC(2,1,K)+TR11*TI2+TR12*TI3 + CR3 = CC(1,1,K)+TR12*TR2+TR11*TR3 + CI3 = CC(2,1,K)+TR12*TI2+TR11*TI3 + CR5 = TI11*TR5+TI12*TR4 + CI5 = TI11*TI5+TI12*TI4 + CR4 = TI12*TR5-TI11*TR4 + CI4 = TI12*TI5-TI11*TI4 + CH(1,K,2) = CR2-CI5 + CH(1,K,5) = CR2+CI5 + CH(2,K,2) = CI2+CR5 + CH(2,K,3) = CI3+CR4 + CH(1,K,3) = CR3-CI4 + CH(1,K,4) = CR3+CI4 + CH(2,K,4) = CI3-CR4 + CH(2,K,5) = CI2-CR5 + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + TI5 = CC(I,2,K)-CC(I,5,K) + TI2 = CC(I,2,K)+CC(I,5,K) + TI4 = CC(I,3,K)-CC(I,4,K) + TI3 = CC(I,3,K)+CC(I,4,K) + TR5 = CC(I-1,2,K)-CC(I-1,5,K) + TR2 = CC(I-1,2,K)+CC(I-1,5,K) + TR4 = CC(I-1,3,K)-CC(I-1,4,K) + TR3 = CC(I-1,3,K)+CC(I-1,4,K) + CH(I-1,K,1) = CC(I-1,1,K)+TR2+TR3 + CH(I,K,1) = CC(I,1,K)+TI2+TI3 + CR2 = CC(I-1,1,K)+TR11*TR2+TR12*TR3 + CI2 = CC(I,1,K)+TR11*TI2+TR12*TI3 + CR3 = CC(I-1,1,K)+TR12*TR2+TR11*TR3 + CI3 = CC(I,1,K)+TR12*TI2+TR11*TI3 + CR5 = TI11*TR5+TI12*TR4 + CI5 = TI11*TI5+TI12*TI4 + CR4 = TI12*TR5-TI11*TR4 + CI4 = TI12*TI5-TI11*TI4 + DR3 = CR3-CI4 + DR4 = CR3+CI4 + DI3 = CI3+CR4 + DI4 = CI3-CR4 + DR5 = CR2+CI5 + DR2 = CR2-CI5 + DI5 = CI2-CR5 + DI2 = CI2+CR5 + CH(I-1,K,2) = WA1(I-1)*DR2-WA1(I)*DI2 + CH(I,K,2) = WA1(I-1)*DI2+WA1(I)*DR2 + CH(I-1,K,3) = WA2(I-1)*DR3-WA2(I)*DI3 + CH(I,K,3) = WA2(I-1)*DI3+WA2(I)*DR3 + CH(I-1,K,4) = WA3(I-1)*DR4-WA3(I)*DI4 + CH(I,K,4) = WA3(I-1)*DI4+WA3(I)*DR4 + CH(I-1,K,5) = WA4(I-1)*DR5-WA4(I)*DI5 + CH(I,K,5) = WA4(I-1)*DI5+WA4(I)*DR5 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSB (NAC,IDO,IP,L1,IDL1,CC,C1,C2,CH,CH2,WA) + DIMENSION CH(IDO,L1,IP) ,CC(IDO,IP,L1) , + 1 C1(IDO,L1,IP) ,WA(*) ,C2(IDL1,IP), + 2 CH2(IDL1,IP) + IDOT = IDO/2 + NT = IP*IDL1 + IPP2 = IP+2 + IPPH = (IP+1)/2 + IDP = IP*IDO +C + IF (IDO .LT. 
L1) GO TO 106 + DO 103 J=2,IPPH + JC = IPP2-J + DO 102 K=1,L1 + DO 101 I=1,IDO + CH(I,K,J) = CC(I,J,K)+CC(I,JC,K) + CH(I,K,JC) = CC(I,J,K)-CC(I,JC,K) + 101 CONTINUE + 102 CONTINUE + 103 CONTINUE + DO 105 K=1,L1 + DO 104 I=1,IDO + CH(I,K,1) = CC(I,1,K) + 104 CONTINUE + 105 CONTINUE + GO TO 112 + 106 DO 109 J=2,IPPH + JC = IPP2-J + DO 108 I=1,IDO + DO 107 K=1,L1 + CH(I,K,J) = CC(I,J,K)+CC(I,JC,K) + CH(I,K,JC) = CC(I,J,K)-CC(I,JC,K) + 107 CONTINUE + 108 CONTINUE + 109 CONTINUE + DO 111 I=1,IDO + DO 110 K=1,L1 + CH(I,K,1) = CC(I,1,K) + 110 CONTINUE + 111 CONTINUE + 112 IDL = 2-IDO + INC = 0 + DO 116 L=2,IPPH + LC = IPP2-L + IDL = IDL+IDO + DO 113 IK=1,IDL1 + C2(IK,L) = CH2(IK,1)+WA(IDL-1)*CH2(IK,2) + C2(IK,LC) = WA(IDL)*CH2(IK,IP) + 113 CONTINUE + IDLJ = IDL + INC = INC+IDO + DO 115 J=3,IPPH + JC = IPP2-J + IDLJ = IDLJ+INC + IF (IDLJ .GT. IDP) IDLJ = IDLJ-IDP + WAR = WA(IDLJ-1) + WAI = WA(IDLJ) + DO 114 IK=1,IDL1 + C2(IK,L) = C2(IK,L)+WAR*CH2(IK,J) + C2(IK,LC) = C2(IK,LC)+WAI*CH2(IK,JC) + 114 CONTINUE + 115 CONTINUE + 116 CONTINUE + DO 118 J=2,IPPH + DO 117 IK=1,IDL1 + CH2(IK,1) = CH2(IK,1)+CH2(IK,J) + 117 CONTINUE + 118 CONTINUE + DO 120 J=2,IPPH + JC = IPP2-J + DO 119 IK=2,IDL1,2 + CH2(IK-1,J) = C2(IK-1,J)-C2(IK,JC) + CH2(IK-1,JC) = C2(IK-1,J)+C2(IK,JC) + CH2(IK,J) = C2(IK,J)+C2(IK-1,JC) + CH2(IK,JC) = C2(IK,J)-C2(IK-1,JC) + 119 CONTINUE + 120 CONTINUE + NAC = 1 + IF (IDO .EQ. 2) RETURN + NAC = 0 + DO 121 IK=1,IDL1 + C2(IK,1) = CH2(IK,1) + 121 CONTINUE + DO 123 J=2,IP + DO 122 K=1,L1 + C1(1,K,J) = CH(1,K,J) + C1(2,K,J) = CH(2,K,J) + 122 CONTINUE + 123 CONTINUE + IF (IDOT .GT. L1) GO TO 127 + IDIJ = 0 + DO 126 J=2,IP + IDIJ = IDIJ+2 + DO 125 I=4,IDO,2 + IDIJ = IDIJ+2 + DO 124 K=1,L1 + C1(I-1,K,J) = WA(IDIJ-1)*CH(I-1,K,J)-WA(IDIJ)*CH(I,K,J) + C1(I,K,J) = WA(IDIJ-1)*CH(I,K,J)+WA(IDIJ)*CH(I-1,K,J) + 124 CONTINUE + 125 CONTINUE + 126 CONTINUE + RETURN + 127 IDJ = 2-IDO + DO 130 J=2,IP + IDJ = IDJ+IDO + DO 129 K=1,L1 + IDIJ = IDJ + DO 128 I=4,IDO,2 + IDIJ = IDIJ+2 + C1(I-1,K,J) = WA(IDIJ-1)*CH(I-1,K,J)-WA(IDIJ)*CH(I,K,J) + C1(I,K,J) = WA(IDIJ-1)*CH(I,K,J)+WA(IDIJ)*CH(I-1,K,J) + 128 CONTINUE + 129 CONTINUE + 130 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftb.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftb.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftb.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftb.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,8 @@ + SUBROUTINE CFFTB (N,C,WSAVE) + DIMENSION C(*) ,WSAVE(*) + IF (N .EQ. 1) RETURN + IW1 = N+N+1 + IW2 = IW1+N+N + CALL CFFTB1 (N,C,WSAVE,WSAVE(IW1),WSAVE(IW2)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftf1.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftf1.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftf1.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftf1.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,368 @@ + SUBROUTINE CFFTF1 (N,C,CH,WA,IFAC) + DIMENSION CH(*) ,C(*) ,WA(*) ,IFAC(*) + NF = IFAC(2) + NA = 0 + L1 = 1 + IW = 1 + DO 116 K1=1,NF + IP = IFAC(K1+2) + L2 = IP*L1 + IDO = N/L2 + IDOT = IDO+IDO + IDL1 = IDOT*L1 + IF (IP .NE. 4) GO TO 103 + IX2 = IW+IDOT + IX3 = IX2+IDOT + IF (NA .NE. 0) GO TO 101 + CALL PASSF4 (IDOT,L1,C,CH,WA(IW),WA(IX2),WA(IX3)) + GO TO 102 + 101 CALL PASSF4 (IDOT,L1,CH,C,WA(IW),WA(IX2),WA(IX3)) + 102 NA = 1-NA + GO TO 115 + 103 IF (IP .NE. 2) GO TO 106 + IF (NA .NE. 
0) GO TO 104 + CALL PASSF2 (IDOT,L1,C,CH,WA(IW)) + GO TO 105 + 104 CALL PASSF2 (IDOT,L1,CH,C,WA(IW)) + 105 NA = 1-NA + GO TO 115 + 106 IF (IP .NE. 3) GO TO 109 + IX2 = IW+IDOT + IF (NA .NE. 0) GO TO 107 + CALL PASSF3 (IDOT,L1,C,CH,WA(IW),WA(IX2)) + GO TO 108 + 107 CALL PASSF3 (IDOT,L1,CH,C,WA(IW),WA(IX2)) + 108 NA = 1-NA + GO TO 115 + 109 IF (IP .NE. 5) GO TO 112 + IX2 = IW+IDOT + IX3 = IX2+IDOT + IX4 = IX3+IDOT + IF (NA .NE. 0) GO TO 110 + CALL PASSF5 (IDOT,L1,C,CH,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + GO TO 111 + 110 CALL PASSF5 (IDOT,L1,CH,C,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + 111 NA = 1-NA + GO TO 115 + 112 IF (NA .NE. 0) GO TO 113 + CALL PASSF (NAC,IDOT,IP,L1,IDL1,C,C,C,CH,CH,WA(IW)) + GO TO 114 + 113 CALL PASSF (NAC,IDOT,IP,L1,IDL1,CH,CH,CH,C,C,WA(IW)) + 114 IF (NAC .NE. 0) NA = 1-NA + 115 L1 = L2 + IW = IW+(IP-1)*IDOT + 116 CONTINUE + IF (NA .EQ. 0) RETURN + N2 = N+N + DO 117 I=1,N2 + C(I) = CH(I) + 117 CONTINUE + RETURN + END + SUBROUTINE PASSF2 (IDO,L1,CC,CH,WA1) + DIMENSION CC(IDO,2,L1) ,CH(IDO,L1,2) , + 1 WA1(*) + IF (IDO .GT. 2) GO TO 102 + DO 101 K=1,L1 + CH(1,K,1) = CC(1,1,K)+CC(1,2,K) + CH(1,K,2) = CC(1,1,K)-CC(1,2,K) + CH(2,K,1) = CC(2,1,K)+CC(2,2,K) + CH(2,K,2) = CC(2,1,K)-CC(2,2,K) + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + CH(I-1,K,1) = CC(I-1,1,K)+CC(I-1,2,K) + TR2 = CC(I-1,1,K)-CC(I-1,2,K) + CH(I,K,1) = CC(I,1,K)+CC(I,2,K) + TI2 = CC(I,1,K)-CC(I,2,K) + CH(I,K,2) = WA1(I-1)*TI2-WA1(I)*TR2 + CH(I-1,K,2) = WA1(I-1)*TR2+WA1(I)*TI2 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSF3 (IDO,L1,CC,CH,WA1,WA2) + DIMENSION CC(IDO,3,L1) ,CH(IDO,L1,3) , + 1 WA1(*) ,WA2(*) + DATA TAUR,TAUI /-.5,-.866025403784439/ + IF (IDO .NE. 2) GO TO 102 + DO 101 K=1,L1 + TR2 = CC(1,2,K)+CC(1,3,K) + CR2 = CC(1,1,K)+TAUR*TR2 + CH(1,K,1) = CC(1,1,K)+TR2 + TI2 = CC(2,2,K)+CC(2,3,K) + CI2 = CC(2,1,K)+TAUR*TI2 + CH(2,K,1) = CC(2,1,K)+TI2 + CR3 = TAUI*(CC(1,2,K)-CC(1,3,K)) + CI3 = TAUI*(CC(2,2,K)-CC(2,3,K)) + CH(1,K,2) = CR2-CI3 + CH(1,K,3) = CR2+CI3 + CH(2,K,2) = CI2+CR3 + CH(2,K,3) = CI2-CR3 + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + TR2 = CC(I-1,2,K)+CC(I-1,3,K) + CR2 = CC(I-1,1,K)+TAUR*TR2 + CH(I-1,K,1) = CC(I-1,1,K)+TR2 + TI2 = CC(I,2,K)+CC(I,3,K) + CI2 = CC(I,1,K)+TAUR*TI2 + CH(I,K,1) = CC(I,1,K)+TI2 + CR3 = TAUI*(CC(I-1,2,K)-CC(I-1,3,K)) + CI3 = TAUI*(CC(I,2,K)-CC(I,3,K)) + DR2 = CR2-CI3 + DR3 = CR2+CI3 + DI2 = CI2+CR3 + DI3 = CI2-CR3 + CH(I,K,2) = WA1(I-1)*DI2-WA1(I)*DR2 + CH(I-1,K,2) = WA1(I-1)*DR2+WA1(I)*DI2 + CH(I,K,3) = WA2(I-1)*DI3-WA2(I)*DR3 + CH(I-1,K,3) = WA2(I-1)*DR3+WA2(I)*DI3 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSF4 (IDO,L1,CC,CH,WA1,WA2,WA3) + DIMENSION CC(IDO,4,L1) ,CH(IDO,L1,4) , + 1 WA1(*) ,WA2(*) ,WA3(*) + IF (IDO .NE. 
2) GO TO 102 + DO 101 K=1,L1 + TI1 = CC(2,1,K)-CC(2,3,K) + TI2 = CC(2,1,K)+CC(2,3,K) + TR4 = CC(2,2,K)-CC(2,4,K) + TI3 = CC(2,2,K)+CC(2,4,K) + TR1 = CC(1,1,K)-CC(1,3,K) + TR2 = CC(1,1,K)+CC(1,3,K) + TI4 = CC(1,4,K)-CC(1,2,K) + TR3 = CC(1,2,K)+CC(1,4,K) + CH(1,K,1) = TR2+TR3 + CH(1,K,3) = TR2-TR3 + CH(2,K,1) = TI2+TI3 + CH(2,K,3) = TI2-TI3 + CH(1,K,2) = TR1+TR4 + CH(1,K,4) = TR1-TR4 + CH(2,K,2) = TI1+TI4 + CH(2,K,4) = TI1-TI4 + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + TI1 = CC(I,1,K)-CC(I,3,K) + TI2 = CC(I,1,K)+CC(I,3,K) + TI3 = CC(I,2,K)+CC(I,4,K) + TR4 = CC(I,2,K)-CC(I,4,K) + TR1 = CC(I-1,1,K)-CC(I-1,3,K) + TR2 = CC(I-1,1,K)+CC(I-1,3,K) + TI4 = CC(I-1,4,K)-CC(I-1,2,K) + TR3 = CC(I-1,2,K)+CC(I-1,4,K) + CH(I-1,K,1) = TR2+TR3 + CR3 = TR2-TR3 + CH(I,K,1) = TI2+TI3 + CI3 = TI2-TI3 + CR2 = TR1+TR4 + CR4 = TR1-TR4 + CI2 = TI1+TI4 + CI4 = TI1-TI4 + CH(I-1,K,2) = WA1(I-1)*CR2+WA1(I)*CI2 + CH(I,K,2) = WA1(I-1)*CI2-WA1(I)*CR2 + CH(I-1,K,3) = WA2(I-1)*CR3+WA2(I)*CI3 + CH(I,K,3) = WA2(I-1)*CI3-WA2(I)*CR3 + CH(I-1,K,4) = WA3(I-1)*CR4+WA3(I)*CI4 + CH(I,K,4) = WA3(I-1)*CI4-WA3(I)*CR4 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSF5 (IDO,L1,CC,CH,WA1,WA2,WA3,WA4) + DIMENSION CC(IDO,5,L1) ,CH(IDO,L1,5) , + 1 WA1(*) ,WA2(*) ,WA3(*) ,WA4(*) + DATA TR11,TI11,TR12,TI12 /.309016994374947,-.951056516295154, + 1-.809016994374947,-.587785252292473/ + IF (IDO .NE. 2) GO TO 102 + DO 101 K=1,L1 + TI5 = CC(2,2,K)-CC(2,5,K) + TI2 = CC(2,2,K)+CC(2,5,K) + TI4 = CC(2,3,K)-CC(2,4,K) + TI3 = CC(2,3,K)+CC(2,4,K) + TR5 = CC(1,2,K)-CC(1,5,K) + TR2 = CC(1,2,K)+CC(1,5,K) + TR4 = CC(1,3,K)-CC(1,4,K) + TR3 = CC(1,3,K)+CC(1,4,K) + CH(1,K,1) = CC(1,1,K)+TR2+TR3 + CH(2,K,1) = CC(2,1,K)+TI2+TI3 + CR2 = CC(1,1,K)+TR11*TR2+TR12*TR3 + CI2 = CC(2,1,K)+TR11*TI2+TR12*TI3 + CR3 = CC(1,1,K)+TR12*TR2+TR11*TR3 + CI3 = CC(2,1,K)+TR12*TI2+TR11*TI3 + CR5 = TI11*TR5+TI12*TR4 + CI5 = TI11*TI5+TI12*TI4 + CR4 = TI12*TR5-TI11*TR4 + CI4 = TI12*TI5-TI11*TI4 + CH(1,K,2) = CR2-CI5 + CH(1,K,5) = CR2+CI5 + CH(2,K,2) = CI2+CR5 + CH(2,K,3) = CI3+CR4 + CH(1,K,3) = CR3-CI4 + CH(1,K,4) = CR3+CI4 + CH(2,K,4) = CI3-CR4 + CH(2,K,5) = CI2-CR5 + 101 CONTINUE + RETURN + 102 DO 104 K=1,L1 + DO 103 I=2,IDO,2 + TI5 = CC(I,2,K)-CC(I,5,K) + TI2 = CC(I,2,K)+CC(I,5,K) + TI4 = CC(I,3,K)-CC(I,4,K) + TI3 = CC(I,3,K)+CC(I,4,K) + TR5 = CC(I-1,2,K)-CC(I-1,5,K) + TR2 = CC(I-1,2,K)+CC(I-1,5,K) + TR4 = CC(I-1,3,K)-CC(I-1,4,K) + TR3 = CC(I-1,3,K)+CC(I-1,4,K) + CH(I-1,K,1) = CC(I-1,1,K)+TR2+TR3 + CH(I,K,1) = CC(I,1,K)+TI2+TI3 + CR2 = CC(I-1,1,K)+TR11*TR2+TR12*TR3 + CI2 = CC(I,1,K)+TR11*TI2+TR12*TI3 + CR3 = CC(I-1,1,K)+TR12*TR2+TR11*TR3 + CI3 = CC(I,1,K)+TR12*TI2+TR11*TI3 + CR5 = TI11*TR5+TI12*TR4 + CI5 = TI11*TI5+TI12*TI4 + CR4 = TI12*TR5-TI11*TR4 + CI4 = TI12*TI5-TI11*TI4 + DR3 = CR3-CI4 + DR4 = CR3+CI4 + DI3 = CI3+CR4 + DI4 = CI3-CR4 + DR5 = CR2+CI5 + DR2 = CR2-CI5 + DI5 = CI2-CR5 + DI2 = CI2+CR5 + CH(I-1,K,2) = WA1(I-1)*DR2+WA1(I)*DI2 + CH(I,K,2) = WA1(I-1)*DI2-WA1(I)*DR2 + CH(I-1,K,3) = WA2(I-1)*DR3+WA2(I)*DI3 + CH(I,K,3) = WA2(I-1)*DI3-WA2(I)*DR3 + CH(I-1,K,4) = WA3(I-1)*DR4+WA3(I)*DI4 + CH(I,K,4) = WA3(I-1)*DI4-WA3(I)*DR4 + CH(I-1,K,5) = WA4(I-1)*DR5+WA4(I)*DI5 + CH(I,K,5) = WA4(I-1)*DI5-WA4(I)*DR5 + 103 CONTINUE + 104 CONTINUE + RETURN + END + SUBROUTINE PASSF (NAC,IDO,IP,L1,IDL1,CC,C1,C2,CH,CH2,WA) + DIMENSION CH(IDO,L1,IP) ,CC(IDO,IP,L1) , + 1 C1(IDO,L1,IP) ,WA(*) ,C2(IDL1,IP), + 2 CH2(IDL1,IP) + IDOT = IDO/2 + NT = IP*IDL1 + IPP2 = IP+2 + IPPH = (IP+1)/2 + IDP = IP*IDO +C + IF (IDO .LT. 
L1) GO TO 106 + DO 103 J=2,IPPH + JC = IPP2-J + DO 102 K=1,L1 + DO 101 I=1,IDO + CH(I,K,J) = CC(I,J,K)+CC(I,JC,K) + CH(I,K,JC) = CC(I,J,K)-CC(I,JC,K) + 101 CONTINUE + 102 CONTINUE + 103 CONTINUE + DO 105 K=1,L1 + DO 104 I=1,IDO + CH(I,K,1) = CC(I,1,K) + 104 CONTINUE + 105 CONTINUE + GO TO 112 + 106 DO 109 J=2,IPPH + JC = IPP2-J + DO 108 I=1,IDO + DO 107 K=1,L1 + CH(I,K,J) = CC(I,J,K)+CC(I,JC,K) + CH(I,K,JC) = CC(I,J,K)-CC(I,JC,K) + 107 CONTINUE + 108 CONTINUE + 109 CONTINUE + DO 111 I=1,IDO + DO 110 K=1,L1 + CH(I,K,1) = CC(I,1,K) + 110 CONTINUE + 111 CONTINUE + 112 IDL = 2-IDO + INC = 0 + DO 116 L=2,IPPH + LC = IPP2-L + IDL = IDL+IDO + DO 113 IK=1,IDL1 + C2(IK,L) = CH2(IK,1)+WA(IDL-1)*CH2(IK,2) + C2(IK,LC) = -WA(IDL)*CH2(IK,IP) + 113 CONTINUE + IDLJ = IDL + INC = INC+IDO + DO 115 J=3,IPPH + JC = IPP2-J + IDLJ = IDLJ+INC + IF (IDLJ .GT. IDP) IDLJ = IDLJ-IDP + WAR = WA(IDLJ-1) + WAI = WA(IDLJ) + DO 114 IK=1,IDL1 + C2(IK,L) = C2(IK,L)+WAR*CH2(IK,J) + C2(IK,LC) = C2(IK,LC)-WAI*CH2(IK,JC) + 114 CONTINUE + 115 CONTINUE + 116 CONTINUE + DO 118 J=2,IPPH + DO 117 IK=1,IDL1 + CH2(IK,1) = CH2(IK,1)+CH2(IK,J) + 117 CONTINUE + 118 CONTINUE + DO 120 J=2,IPPH + JC = IPP2-J + DO 119 IK=2,IDL1,2 + CH2(IK-1,J) = C2(IK-1,J)-C2(IK,JC) + CH2(IK-1,JC) = C2(IK-1,J)+C2(IK,JC) + CH2(IK,J) = C2(IK,J)+C2(IK-1,JC) + CH2(IK,JC) = C2(IK,J)-C2(IK-1,JC) + 119 CONTINUE + 120 CONTINUE + NAC = 1 + IF (IDO .EQ. 2) RETURN + NAC = 0 + DO 121 IK=1,IDL1 + C2(IK,1) = CH2(IK,1) + 121 CONTINUE + DO 123 J=2,IP + DO 122 K=1,L1 + C1(1,K,J) = CH(1,K,J) + C1(2,K,J) = CH(2,K,J) + 122 CONTINUE + 123 CONTINUE + IF (IDOT .GT. L1) GO TO 127 + IDIJ = 0 + DO 126 J=2,IP + IDIJ = IDIJ+2 + DO 125 I=4,IDO,2 + IDIJ = IDIJ+2 + DO 124 K=1,L1 + C1(I-1,K,J) = WA(IDIJ-1)*CH(I-1,K,J)+WA(IDIJ)*CH(I,K,J) + C1(I,K,J) = WA(IDIJ-1)*CH(I,K,J)-WA(IDIJ)*CH(I-1,K,J) + 124 CONTINUE + 125 CONTINUE + 126 CONTINUE + RETURN + 127 IDJ = 2-IDO + DO 130 J=2,IP + IDJ = IDJ+IDO + DO 129 K=1,L1 + IDIJ = IDJ + DO 128 I=4,IDO,2 + IDIJ = IDIJ+2 + C1(I-1,K,J) = WA(IDIJ-1)*CH(I-1,K,J)+WA(IDIJ)*CH(I,K,J) + C1(I,K,J) = WA(IDIJ-1)*CH(I,K,J)-WA(IDIJ)*CH(I-1,K,J) + 128 CONTINUE + 129 CONTINUE + 130 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftf.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftf.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cfftf.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cfftf.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,8 @@ + SUBROUTINE CFFTF (N,C,WSAVE) + DIMENSION C(*) ,WSAVE(*) + IF (N .EQ. 1) RETURN + IW1 = N+N+1 + IW2 = IW1+N+N + CALL CFFTF1 (N,C,WSAVE,WSAVE(IW1),WSAVE(IW2)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cffti1.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cffti1.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cffti1.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cffti1.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,62 @@ + SUBROUTINE CFFTI1 (N,WA,IFAC) + DIMENSION WA(*) ,IFAC(*) ,NTRYH(4) + DATA NTRYH(1),NTRYH(2),NTRYH(3),NTRYH(4)/3,4,2,5/ + NL = N + NF = 0 + J = 0 + 101 J = J+1 + IF (J.le.4) GO TO 102 + GO TO 103 + 102 NTRY = NTRYH(J) + GO TO 104 + 103 NTRY = NTRY+2 + 104 NQ = NL/NTRY + NR = NL-NTRY*NQ + IF (NR.eq.0) GO TO 105 + GO TO 101 + 105 NF = NF+1 + IFAC(NF+2) = NTRY + NL = NQ + IF (NTRY .NE. 2) GO TO 107 + IF (NF .EQ. 
1) GO TO 107 + DO 106 I=2,NF + IB = NF-I+2 + IFAC(IB+2) = IFAC(IB+1) + 106 CONTINUE + IFAC(3) = 2 + 107 IF (NL .NE. 1) GO TO 104 + IFAC(1) = N + IFAC(2) = NF + TPI = 6.28318530717959 + ARGH = TPI/FLOAT(N) + I = 2 + L1 = 1 + DO 110 K1=1,NF + IP = IFAC(K1+2) + LD = 0 + L2 = L1*IP + IDO = N/L2 + IDOT = IDO+IDO+2 + IPM = IP-1 + DO 109 J=1,IPM + I1 = I + WA(I-1) = 1. + WA(I) = 0. + LD = LD+L1 + FI = 0. + ARGLD = FLOAT(LD)*ARGH + DO 108 II=4,IDOT,2 + I = I+2 + FI = FI+1. + ARG = FI*ARGLD + WA(I-1) = COS(ARG) + WA(I) = SIN(ARG) + 108 CONTINUE + IF (IP .LE. 5) GO TO 109 + WA(I1-1) = WA(I-1) + WA(I1) = WA(I) + 109 CONTINUE + L1 = L2 + 110 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cffti.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cffti.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cffti.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cffti.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,8 @@ + SUBROUTINE CFFTI (N,WSAVE) + DIMENSION WSAVE(*) + IF (N .EQ. 1) RETURN + IW1 = N+N+1 + IW2 = IW1+N+N + CALL CFFTI1 (N,WSAVE(IW1),WSAVE(IW2)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cosqb.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cosqb.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cosqb.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cosqb.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,42 @@ + SUBROUTINE COSQB (N,X,WSAVE) + DIMENSION X(*) ,WSAVE(*) + DATA TSQRT2 /2.82842712474619/ + IF (N.lt.2) GO TO 101 + IF (N.eq.2) GO TO 102 + GO TO 103 + 101 X(1) = 4.*X(1) + RETURN + 102 X1 = 4.*(X(1)+X(2)) + X(2) = TSQRT2*(X(1)-X(2)) + X(1) = X1 + RETURN + 103 CALL COSQB1 (N,X,WSAVE,WSAVE(N+1)) + RETURN + END + SUBROUTINE COSQB1 (N,X,W,XH) + DIMENSION X(1) ,W(1) ,XH(1) + NS2 = (N+1)/2 + NP2 = N+2 + DO 101 I=3,N,2 + XIM1 = X(I-1)+X(I) + X(I) = X(I)-X(I-1) + X(I-1) = XIM1 + 101 CONTINUE + X(1) = X(1)+X(1) + MODN = MOD(N,2) + IF (MODN .EQ. 0) X(N) = X(N)+X(N) + CALL RFFTB (N,X,XH) + DO 102 K=2,NS2 + KC = NP2-K + XH(K) = W(K-1)*X(KC)+W(KC-1)*X(K) + XH(KC) = W(K-1)*X(K)-W(KC-1)*X(KC) + 102 CONTINUE + IF (MODN .EQ. 0) X(NS2+1) = W(NS2)*(X(NS2+1)+X(NS2+1)) + DO 103 K=2,NS2 + KC = NP2-K + X(K) = XH(K)+XH(KC) + X(KC) = XH(K)-XH(KC) + 103 CONTINUE + X(1) = X(1)+X(1) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cosqf.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cosqf.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cosqf.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cosqf.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,38 @@ + SUBROUTINE COSQF (N,X,WSAVE) + DIMENSION X(*) ,WSAVE(*) + DATA SQRT2 /1.4142135623731/ + IF (N.lt.2) GO TO 102 + IF (N.eq.2) GO TO 101 + GO TO 103 + 101 TSQX = SQRT2*X(2) + X(2) = X(1)-TSQX + X(1) = X(1)+TSQX + 102 RETURN + 103 CALL COSQF1 (N,X,WSAVE,WSAVE(N+1)) + RETURN + END + SUBROUTINE COSQF1 (N,X,W,XH) + DIMENSION X(1) ,W(1) ,XH(1) + NS2 = (N+1)/2 + NP2 = N+2 + DO 101 K=2,NS2 + KC = NP2-K + XH(K) = X(K)+X(KC) + XH(KC) = X(K)-X(KC) + 101 CONTINUE + MODN = MOD(N,2) + IF (MODN .EQ. 0) XH(NS2+1) = X(NS2+1)+X(NS2+1) + DO 102 K=2,NS2 + KC = NP2-K + X(K) = W(K-1)*XH(KC)+W(KC-1)*XH(K) + X(KC) = W(K-1)*XH(K)-W(KC-1)*XH(KC) + 102 CONTINUE + IF (MODN .EQ. 
0) X(NS2+1) = W(NS2)*XH(NS2+1) + CALL RFFTF (N,X,XH) + DO 103 I=3,N,2 + XIM1 = X(I-1)-X(I) + X(I) = X(I-1)+X(I) + X(I-1) = XIM1 + 103 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cosqi.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cosqi.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cosqi.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cosqi.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,12 @@ + SUBROUTINE COSQI (N,WSAVE) + DIMENSION WSAVE(*) + DATA PIH /1.57079632679491/ + DT = PIH/FLOAT(N) + FK = 0. + DO 101 K=1,N + FK = FK+1. + WSAVE(K) = COS(FK*DT) + 101 CONTINUE + CALL RFFTI (N,WSAVE(N+1)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cost.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cost.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/cost.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/cost.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,44 @@ + SUBROUTINE COST (N,X,WSAVE) + DIMENSION X(*) ,WSAVE(*) + NM1 = N-1 + NP1 = N+1 + NS2 = N/2 + IF (N.lt.2) GO TO 106 + IF (N.eq.2) GO TO 101 + GO TO 102 + 101 X1H = X(1)+X(2) + X(2) = X(1)-X(2) + X(1) = X1H + RETURN + 102 IF (N .GT. 3) GO TO 103 + X1P3 = X(1)+X(3) + TX2 = X(2)+X(2) + X(2) = X(1)-X(3) + X(1) = X1P3+TX2 + X(3) = X1P3-TX2 + RETURN + 103 C1 = X(1)-X(N) + X(1) = X(1)+X(N) + DO 104 K=2,NS2 + KC = NP1-K + T1 = X(K)+X(KC) + T2 = X(K)-X(KC) + C1 = C1+WSAVE(KC)*T2 + T2 = WSAVE(K)*T2 + X(K) = T1-T2 + X(KC) = T1+T2 + 104 CONTINUE + MODN = MOD(N,2) + IF (MODN .NE. 0) X(NS2+1) = X(NS2+1)+X(NS2+1) + CALL RFFTF (NM1,X,WSAVE(N+1)) + XIM2 = X(2) + X(2) = C1 + DO 105 I=4,N,2 + XI = X(I) + X(I) = X(I-2)-X(I-1) + X(I-1) = XIM2 + XIM2 = XI + 105 CONTINUE + IF (MODN .NE. 0) X(N) = XIM2 + 106 RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/costi.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/costi.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/costi.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/costi.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,18 @@ + SUBROUTINE COSTI (N,WSAVE) + DIMENSION WSAVE(*) + DATA PI /3.14159265358979/ + IF (N .LE. 3) RETURN + NM1 = N-1 + NP1 = N+1 + NS2 = N/2 + DT = PI/FLOAT(NM1) + FK = 0. + DO 101 K=2,NS2 + KC = NP1-K + FK = FK+1. + WSAVE(K) = 2.*SIN(FK*DT) + WSAVE(KC) = 2.*COS(FK*DT) + 101 CONTINUE + CALL RFFTI (NM1,WSAVE(N+1)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftb1.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftb1.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftb1.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftb1.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,407 @@ + SUBROUTINE RFFTB1 (N,C,CH,WA,IFAC) + DIMENSION CH(*) ,C(*) ,WA(*) ,IFAC(*) + NF = IFAC(2) + NA = 0 + L1 = 1 + IW = 1 + DO 116 K1=1,NF + IP = IFAC(K1+2) + L2 = IP*L1 + IDO = N/L2 + IDL1 = IDO*L1 + IF (IP .NE. 4) GO TO 103 + IX2 = IW+IDO + IX3 = IX2+IDO + IF (NA .NE. 0) GO TO 101 + CALL RADB4 (IDO,L1,C,CH,WA(IW),WA(IX2),WA(IX3)) + GO TO 102 + 101 CALL RADB4 (IDO,L1,CH,C,WA(IW),WA(IX2),WA(IX3)) + 102 NA = 1-NA + GO TO 115 + 103 IF (IP .NE. 2) GO TO 106 + IF (NA .NE. 0) GO TO 104 + CALL RADB2 (IDO,L1,C,CH,WA(IW)) + GO TO 105 + 104 CALL RADB2 (IDO,L1,CH,C,WA(IW)) + 105 NA = 1-NA + GO TO 115 + 106 IF (IP .NE. 3) GO TO 109 + IX2 = IW+IDO + IF (NA .NE. 
0) GO TO 107 + CALL RADB3 (IDO,L1,C,CH,WA(IW),WA(IX2)) + GO TO 108 + 107 CALL RADB3 (IDO,L1,CH,C,WA(IW),WA(IX2)) + 108 NA = 1-NA + GO TO 115 + 109 IF (IP .NE. 5) GO TO 112 + IX2 = IW+IDO + IX3 = IX2+IDO + IX4 = IX3+IDO + IF (NA .NE. 0) GO TO 110 + CALL RADB5 (IDO,L1,C,CH,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + GO TO 111 + 110 CALL RADB5 (IDO,L1,CH,C,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + 111 NA = 1-NA + GO TO 115 + 112 IF (NA .NE. 0) GO TO 113 + CALL RADBG (IDO,IP,L1,IDL1,C,C,C,CH,CH,WA(IW)) + GO TO 114 + 113 CALL RADBG (IDO,IP,L1,IDL1,CH,CH,CH,C,C,WA(IW)) + 114 IF (IDO .EQ. 1) NA = 1-NA + 115 L1 = L2 + IW = IW+(IP-1)*IDO + 116 CONTINUE + IF (NA .EQ. 0) RETURN + DO 117 I=1,N + C(I) = CH(I) + 117 CONTINUE + RETURN + END + SUBROUTINE RADB2 (IDO,L1,CC,CH,WA1) + DIMENSION CC(IDO,2,L1) ,CH(IDO,L1,2) , + 1 WA1(*) + DO 101 K=1,L1 + CH(1,K,1) = CC(1,1,K)+CC(IDO,2,K) + CH(1,K,2) = CC(1,1,K)-CC(IDO,2,K) + 101 CONTINUE + IF (IDO.lt.2) GO TO 107 + IF (IDO.eq.2) GO TO 105 + GO TO 102 + 102 IDP2 = IDO+2 + DO 104 K=1,L1 + DO 103 I=3,IDO,2 + IC = IDP2-I + CH(I-1,K,1) = CC(I-1,1,K)+CC(IC-1,2,K) + TR2 = CC(I-1,1,K)-CC(IC-1,2,K) + CH(I,K,1) = CC(I,1,K)-CC(IC,2,K) + TI2 = CC(I,1,K)+CC(IC,2,K) + CH(I-1,K,2) = WA1(I-2)*TR2-WA1(I-1)*TI2 + CH(I,K,2) = WA1(I-2)*TI2+WA1(I-1)*TR2 + 103 CONTINUE + 104 CONTINUE + IF (MOD(IDO,2) .EQ. 1) RETURN + 105 DO 106 K=1,L1 + CH(IDO,K,1) = CC(IDO,1,K)+CC(IDO,1,K) + CH(IDO,K,2) = -(CC(1,2,K)+CC(1,2,K)) + 106 CONTINUE + 107 RETURN + END + SUBROUTINE RADB3 (IDO,L1,CC,CH,WA1,WA2) + DIMENSION CC(IDO,3,L1) ,CH(IDO,L1,3) , + 1 WA1(*) ,WA2(*) + DATA TAUR,TAUI /-.5,.866025403784439/ + DO 101 K=1,L1 + TR2 = CC(IDO,2,K)+CC(IDO,2,K) + CR2 = CC(1,1,K)+TAUR*TR2 + CH(1,K,1) = CC(1,1,K)+TR2 + CI3 = TAUI*(CC(1,3,K)+CC(1,3,K)) + CH(1,K,2) = CR2-CI3 + CH(1,K,3) = CR2+CI3 + 101 CONTINUE + IF (IDO .EQ. 
1) RETURN + IDP2 = IDO+2 + DO 103 K=1,L1 + DO 102 I=3,IDO,2 + IC = IDP2-I + TR2 = CC(I-1,3,K)+CC(IC-1,2,K) + CR2 = CC(I-1,1,K)+TAUR*TR2 + CH(I-1,K,1) = CC(I-1,1,K)+TR2 + TI2 = CC(I,3,K)-CC(IC,2,K) + CI2 = CC(I,1,K)+TAUR*TI2 + CH(I,K,1) = CC(I,1,K)+TI2 + CR3 = TAUI*(CC(I-1,3,K)-CC(IC-1,2,K)) + CI3 = TAUI*(CC(I,3,K)+CC(IC,2,K)) + DR2 = CR2-CI3 + DR3 = CR2+CI3 + DI2 = CI2+CR3 + DI3 = CI2-CR3 + CH(I-1,K,2) = WA1(I-2)*DR2-WA1(I-1)*DI2 + CH(I,K,2) = WA1(I-2)*DI2+WA1(I-1)*DR2 + CH(I-1,K,3) = WA2(I-2)*DR3-WA2(I-1)*DI3 + CH(I,K,3) = WA2(I-2)*DI3+WA2(I-1)*DR3 + 102 CONTINUE + 103 CONTINUE + RETURN + END + SUBROUTINE RADB4 (IDO,L1,CC,CH,WA1,WA2,WA3) + DIMENSION CC(IDO,4,L1) ,CH(IDO,L1,4) , + 1 WA1(*) ,WA2(*) ,WA3(*) + DATA SQRT2 /1.414213562373095/ + DO 101 K=1,L1 + TR1 = CC(1,1,K)-CC(IDO,4,K) + TR2 = CC(1,1,K)+CC(IDO,4,K) + TR3 = CC(IDO,2,K)+CC(IDO,2,K) + TR4 = CC(1,3,K)+CC(1,3,K) + CH(1,K,1) = TR2+TR3 + CH(1,K,2) = TR1-TR4 + CH(1,K,3) = TR2-TR3 + CH(1,K,4) = TR1+TR4 + 101 CONTINUE + IF (IDO.lt.2) GO TO 107 + IF (IDO.eq.2) GO TO 105 + GO TO 102 + 102 IDP2 = IDO+2 + DO 104 K=1,L1 + DO 103 I=3,IDO,2 + IC = IDP2-I + TI1 = CC(I,1,K)+CC(IC,4,K) + TI2 = CC(I,1,K)-CC(IC,4,K) + TI3 = CC(I,3,K)-CC(IC,2,K) + TR4 = CC(I,3,K)+CC(IC,2,K) + TR1 = CC(I-1,1,K)-CC(IC-1,4,K) + TR2 = CC(I-1,1,K)+CC(IC-1,4,K) + TI4 = CC(I-1,3,K)-CC(IC-1,2,K) + TR3 = CC(I-1,3,K)+CC(IC-1,2,K) + CH(I-1,K,1) = TR2+TR3 + CR3 = TR2-TR3 + CH(I,K,1) = TI2+TI3 + CI3 = TI2-TI3 + CR2 = TR1-TR4 + CR4 = TR1+TR4 + CI2 = TI1+TI4 + CI4 = TI1-TI4 + CH(I-1,K,2) = WA1(I-2)*CR2-WA1(I-1)*CI2 + CH(I,K,2) = WA1(I-2)*CI2+WA1(I-1)*CR2 + CH(I-1,K,3) = WA2(I-2)*CR3-WA2(I-1)*CI3 + CH(I,K,3) = WA2(I-2)*CI3+WA2(I-1)*CR3 + CH(I-1,K,4) = WA3(I-2)*CR4-WA3(I-1)*CI4 + CH(I,K,4) = WA3(I-2)*CI4+WA3(I-1)*CR4 + 103 CONTINUE + 104 CONTINUE + IF (MOD(IDO,2) .EQ. 1) RETURN + 105 CONTINUE + DO 106 K=1,L1 + TI1 = CC(1,2,K)+CC(1,4,K) + TI2 = CC(1,4,K)-CC(1,2,K) + TR1 = CC(IDO,1,K)-CC(IDO,3,K) + TR2 = CC(IDO,1,K)+CC(IDO,3,K) + CH(IDO,K,1) = TR2+TR2 + CH(IDO,K,2) = SQRT2*(TR1-TI1) + CH(IDO,K,3) = TI2+TI2 + CH(IDO,K,4) = -SQRT2*(TR1+TI1) + 106 CONTINUE + 107 RETURN + END + SUBROUTINE RADB5 (IDO,L1,CC,CH,WA1,WA2,WA3,WA4) + DIMENSION CC(IDO,5,L1) ,CH(IDO,L1,5) , + 1 WA1(*) ,WA2(*) ,WA3(*) ,WA4(*) + DATA TR11,TI11,TR12,TI12 /.309016994374947,.951056516295154, + 1-.809016994374947,.587785252292473/ + DO 101 K=1,L1 + TI5 = CC(1,3,K)+CC(1,3,K) + TI4 = CC(1,5,K)+CC(1,5,K) + TR2 = CC(IDO,2,K)+CC(IDO,2,K) + TR3 = CC(IDO,4,K)+CC(IDO,4,K) + CH(1,K,1) = CC(1,1,K)+TR2+TR3 + CR2 = CC(1,1,K)+TR11*TR2+TR12*TR3 + CR3 = CC(1,1,K)+TR12*TR2+TR11*TR3 + CI5 = TI11*TI5+TI12*TI4 + CI4 = TI12*TI5-TI11*TI4 + CH(1,K,2) = CR2-CI5 + CH(1,K,3) = CR3-CI4 + CH(1,K,4) = CR3+CI4 + CH(1,K,5) = CR2+CI5 + 101 CONTINUE + IF (IDO .EQ. 
1) RETURN + IDP2 = IDO+2 + DO 103 K=1,L1 + DO 102 I=3,IDO,2 + IC = IDP2-I + TI5 = CC(I,3,K)+CC(IC,2,K) + TI2 = CC(I,3,K)-CC(IC,2,K) + TI4 = CC(I,5,K)+CC(IC,4,K) + TI3 = CC(I,5,K)-CC(IC,4,K) + TR5 = CC(I-1,3,K)-CC(IC-1,2,K) + TR2 = CC(I-1,3,K)+CC(IC-1,2,K) + TR4 = CC(I-1,5,K)-CC(IC-1,4,K) + TR3 = CC(I-1,5,K)+CC(IC-1,4,K) + CH(I-1,K,1) = CC(I-1,1,K)+TR2+TR3 + CH(I,K,1) = CC(I,1,K)+TI2+TI3 + CR2 = CC(I-1,1,K)+TR11*TR2+TR12*TR3 + CI2 = CC(I,1,K)+TR11*TI2+TR12*TI3 + CR3 = CC(I-1,1,K)+TR12*TR2+TR11*TR3 + CI3 = CC(I,1,K)+TR12*TI2+TR11*TI3 + CR5 = TI11*TR5+TI12*TR4 + CI5 = TI11*TI5+TI12*TI4 + CR4 = TI12*TR5-TI11*TR4 + CI4 = TI12*TI5-TI11*TI4 + DR3 = CR3-CI4 + DR4 = CR3+CI4 + DI3 = CI3+CR4 + DI4 = CI3-CR4 + DR5 = CR2+CI5 + DR2 = CR2-CI5 + DI5 = CI2-CR5 + DI2 = CI2+CR5 + CH(I-1,K,2) = WA1(I-2)*DR2-WA1(I-1)*DI2 + CH(I,K,2) = WA1(I-2)*DI2+WA1(I-1)*DR2 + CH(I-1,K,3) = WA2(I-2)*DR3-WA2(I-1)*DI3 + CH(I,K,3) = WA2(I-2)*DI3+WA2(I-1)*DR3 + CH(I-1,K,4) = WA3(I-2)*DR4-WA3(I-1)*DI4 + CH(I,K,4) = WA3(I-2)*DI4+WA3(I-1)*DR4 + CH(I-1,K,5) = WA4(I-2)*DR5-WA4(I-1)*DI5 + CH(I,K,5) = WA4(I-2)*DI5+WA4(I-1)*DR5 + 102 CONTINUE + 103 CONTINUE + RETURN + END + SUBROUTINE RADBG (IDO,IP,L1,IDL1,CC,C1,C2,CH,CH2,WA) + DIMENSION CH(IDO,L1,IP) ,CC(IDO,IP,L1) , + 1 C1(IDO,L1,IP) ,C2(IDL1,IP), + 2 CH2(IDL1,IP) ,WA(*) + DATA TPI/6.28318530717959/ + ARG = TPI/FLOAT(IP) + DCP = COS(ARG) + DSP = SIN(ARG) + IDP2 = IDO+2 + NBD = (IDO-1)/2 + IPP2 = IP+2 + IPPH = (IP+1)/2 + IF (IDO .LT. L1) GO TO 103 + DO 102 K=1,L1 + DO 101 I=1,IDO + CH(I,K,1) = CC(I,1,K) + 101 CONTINUE + 102 CONTINUE + GO TO 106 + 103 DO 105 I=1,IDO + DO 104 K=1,L1 + CH(I,K,1) = CC(I,1,K) + 104 CONTINUE + 105 CONTINUE + 106 DO 108 J=2,IPPH + JC = IPP2-J + J2 = J+J + DO 107 K=1,L1 + CH(1,K,J) = CC(IDO,J2-2,K)+CC(IDO,J2-2,K) + CH(1,K,JC) = CC(1,J2-1,K)+CC(1,J2-1,K) + 107 CONTINUE + 108 CONTINUE + IF (IDO .EQ. 1) GO TO 116 + IF (NBD .LT. L1) GO TO 112 + DO 111 J=2,IPPH + JC = IPP2-J + DO 110 K=1,L1 + DO 109 I=3,IDO,2 + IC = IDP2-I + CH(I-1,K,J) = CC(I-1,2*J-1,K)+CC(IC-1,2*J-2,K) + CH(I-1,K,JC) = CC(I-1,2*J-1,K)-CC(IC-1,2*J-2,K) + CH(I,K,J) = CC(I,2*J-1,K)-CC(IC,2*J-2,K) + CH(I,K,JC) = CC(I,2*J-1,K)+CC(IC,2*J-2,K) + 109 CONTINUE + 110 CONTINUE + 111 CONTINUE + GO TO 116 + 112 DO 115 J=2,IPPH + JC = IPP2-J + DO 114 I=3,IDO,2 + IC = IDP2-I + DO 113 K=1,L1 + CH(I-1,K,J) = CC(I-1,2*J-1,K)+CC(IC-1,2*J-2,K) + CH(I-1,K,JC) = CC(I-1,2*J-1,K)-CC(IC-1,2*J-2,K) + CH(I,K,J) = CC(I,2*J-1,K)-CC(IC,2*J-2,K) + CH(I,K,JC) = CC(I,2*J-1,K)+CC(IC,2*J-2,K) + 113 CONTINUE + 114 CONTINUE + 115 CONTINUE + 116 AR1 = 1. + AI1 = 0. + DO 120 L=2,IPPH + LC = IPP2-L + AR1H = DCP*AR1-DSP*AI1 + AI1 = DCP*AI1+DSP*AR1 + AR1 = AR1H + DO 117 IK=1,IDL1 + C2(IK,L) = CH2(IK,1)+AR1*CH2(IK,2) + C2(IK,LC) = AI1*CH2(IK,IP) + 117 CONTINUE + DC2 = AR1 + DS2 = AI1 + AR2 = AR1 + AI2 = AI1 + DO 119 J=3,IPPH + JC = IPP2-J + AR2H = DC2*AR2-DS2*AI2 + AI2 = DC2*AI2+DS2*AR2 + AR2 = AR2H + DO 118 IK=1,IDL1 + C2(IK,L) = C2(IK,L)+AR2*CH2(IK,J) + C2(IK,LC) = C2(IK,LC)+AI2*CH2(IK,JC) + 118 CONTINUE + 119 CONTINUE + 120 CONTINUE + DO 122 J=2,IPPH + DO 121 IK=1,IDL1 + CH2(IK,1) = CH2(IK,1)+CH2(IK,J) + 121 CONTINUE + 122 CONTINUE + DO 124 J=2,IPPH + JC = IPP2-J + DO 123 K=1,L1 + CH(1,K,J) = C1(1,K,J)-C1(1,K,JC) + CH(1,K,JC) = C1(1,K,J)+C1(1,K,JC) + 123 CONTINUE + 124 CONTINUE + IF (IDO .EQ. 1) GO TO 132 + IF (NBD .LT. 
L1) GO TO 128 + DO 127 J=2,IPPH + JC = IPP2-J + DO 126 K=1,L1 + DO 125 I=3,IDO,2 + CH(I-1,K,J) = C1(I-1,K,J)-C1(I,K,JC) + CH(I-1,K,JC) = C1(I-1,K,J)+C1(I,K,JC) + CH(I,K,J) = C1(I,K,J)+C1(I-1,K,JC) + CH(I,K,JC) = C1(I,K,J)-C1(I-1,K,JC) + 125 CONTINUE + 126 CONTINUE + 127 CONTINUE + GO TO 132 + 128 DO 131 J=2,IPPH + JC = IPP2-J + DO 130 I=3,IDO,2 + DO 129 K=1,L1 + CH(I-1,K,J) = C1(I-1,K,J)-C1(I,K,JC) + CH(I-1,K,JC) = C1(I-1,K,J)+C1(I,K,JC) + CH(I,K,J) = C1(I,K,J)+C1(I-1,K,JC) + CH(I,K,JC) = C1(I,K,J)-C1(I-1,K,JC) + 129 CONTINUE + 130 CONTINUE + 131 CONTINUE + 132 CONTINUE + IF (IDO .EQ. 1) RETURN + DO 133 IK=1,IDL1 + C2(IK,1) = CH2(IK,1) + 133 CONTINUE + DO 135 J=2,IP + DO 134 K=1,L1 + C1(1,K,J) = CH(1,K,J) + 134 CONTINUE + 135 CONTINUE + IF (NBD .GT. L1) GO TO 139 + IS = -IDO + DO 138 J=2,IP + IS = IS+IDO + IDIJ = IS + DO 137 I=3,IDO,2 + IDIJ = IDIJ+2 + DO 136 K=1,L1 + C1(I-1,K,J) = WA(IDIJ-1)*CH(I-1,K,J)-WA(IDIJ)*CH(I,K,J) + C1(I,K,J) = WA(IDIJ-1)*CH(I,K,J)+WA(IDIJ)*CH(I-1,K,J) + 136 CONTINUE + 137 CONTINUE + 138 CONTINUE + GO TO 143 + 139 IS = -IDO + DO 142 J=2,IP + IS = IS+IDO + DO 141 K=1,L1 + IDIJ = IS + DO 140 I=3,IDO,2 + IDIJ = IDIJ+2 + C1(I-1,K,J) = WA(IDIJ-1)*CH(I-1,K,J)-WA(IDIJ)*CH(I,K,J) + C1(I,K,J) = WA(IDIJ-1)*CH(I,K,J)+WA(IDIJ)*CH(I-1,K,J) + 140 CONTINUE + 141 CONTINUE + 142 CONTINUE + 143 RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftb.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftb.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftb.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftb.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,6 @@ + SUBROUTINE RFFTB (N,R,WSAVE) + DIMENSION R(*) ,WSAVE(*) + IF (N .EQ. 1) RETURN + CALL RFFTB1 (N,R,WSAVE,WSAVE(N+1),WSAVE(2*N+1)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftf1.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftf1.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftf1.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftf1.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,403 @@ + SUBROUTINE RFFTF1 (N,C,CH,WA,IFAC) + DIMENSION CH(*) ,C(*) ,WA(*) ,IFAC(*) + NF = IFAC(2) + NA = 1 + L2 = N + IW = N + DO 111 K1=1,NF + KH = NF-K1 + IP = IFAC(KH+3) + L1 = L2/IP + IDO = N/L2 + IDL1 = IDO*L1 + IW = IW-(IP-1)*IDO + NA = 1-NA + IF (IP .NE. 4) GO TO 102 + IX2 = IW+IDO + IX3 = IX2+IDO + IF (NA .NE. 0) GO TO 101 + CALL RADF4 (IDO,L1,C,CH,WA(IW),WA(IX2),WA(IX3)) + GO TO 110 + 101 CALL RADF4 (IDO,L1,CH,C,WA(IW),WA(IX2),WA(IX3)) + GO TO 110 + 102 IF (IP .NE. 2) GO TO 104 + IF (NA .NE. 0) GO TO 103 + CALL RADF2 (IDO,L1,C,CH,WA(IW)) + GO TO 110 + 103 CALL RADF2 (IDO,L1,CH,C,WA(IW)) + GO TO 110 + 104 IF (IP .NE. 3) GO TO 106 + IX2 = IW+IDO + IF (NA .NE. 0) GO TO 105 + CALL RADF3 (IDO,L1,C,CH,WA(IW),WA(IX2)) + GO TO 110 + 105 CALL RADF3 (IDO,L1,CH,C,WA(IW),WA(IX2)) + GO TO 110 + 106 IF (IP .NE. 5) GO TO 108 + IX2 = IW+IDO + IX3 = IX2+IDO + IX4 = IX3+IDO + IF (NA .NE. 0) GO TO 107 + CALL RADF5 (IDO,L1,C,CH,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + GO TO 110 + 107 CALL RADF5 (IDO,L1,CH,C,WA(IW),WA(IX2),WA(IX3),WA(IX4)) + GO TO 110 + 108 IF (IDO .EQ. 1) NA = 1-NA + IF (NA .NE. 0) GO TO 109 + CALL RADFG (IDO,IP,L1,IDL1,C,C,C,CH,CH,WA(IW)) + NA = 1 + GO TO 110 + 109 CALL RADFG (IDO,IP,L1,IDL1,CH,CH,CH,C,C,WA(IW)) + NA = 0 + 110 L2 = L1 + 111 CONTINUE + IF (NA .EQ. 
1) RETURN + DO 112 I=1,N + C(I) = CH(I) + 112 CONTINUE + RETURN + END + SUBROUTINE RADF2 (IDO,L1,CC,CH,WA1) + DIMENSION CH(IDO,2,L1) ,CC(IDO,L1,2) , + 1 WA1(*) + DO 101 K=1,L1 + CH(1,1,K) = CC(1,K,1)+CC(1,K,2) + CH(IDO,2,K) = CC(1,K,1)-CC(1,K,2) + 101 CONTINUE + IF (IDO.lt.2) GO TO 107 + IF (IDO.eq.2) GO TO 105 + GO TO 102 + 102 IDP2 = IDO+2 + DO 104 K=1,L1 + DO 103 I=3,IDO,2 + IC = IDP2-I + TR2 = WA1(I-2)*CC(I-1,K,2)+WA1(I-1)*CC(I,K,2) + TI2 = WA1(I-2)*CC(I,K,2)-WA1(I-1)*CC(I-1,K,2) + CH(I,1,K) = CC(I,K,1)+TI2 + CH(IC,2,K) = TI2-CC(I,K,1) + CH(I-1,1,K) = CC(I-1,K,1)+TR2 + CH(IC-1,2,K) = CC(I-1,K,1)-TR2 + 103 CONTINUE + 104 CONTINUE + IF (MOD(IDO,2) .EQ. 1) RETURN + 105 DO 106 K=1,L1 + CH(1,2,K) = -CC(IDO,K,2) + CH(IDO,1,K) = CC(IDO,K,1) + 106 CONTINUE + 107 RETURN + END + SUBROUTINE RADF3 (IDO,L1,CC,CH,WA1,WA2) + DIMENSION CH(IDO,3,L1) ,CC(IDO,L1,3) , + 1 WA1(*) ,WA2(*) + DATA TAUR,TAUI /-.5,.866025403784439/ + DO 101 K=1,L1 + CR2 = CC(1,K,2)+CC(1,K,3) + CH(1,1,K) = CC(1,K,1)+CR2 + CH(1,3,K) = TAUI*(CC(1,K,3)-CC(1,K,2)) + CH(IDO,2,K) = CC(1,K,1)+TAUR*CR2 + 101 CONTINUE + IF (IDO .EQ. 1) RETURN + IDP2 = IDO+2 + DO 103 K=1,L1 + DO 102 I=3,IDO,2 + IC = IDP2-I + DR2 = WA1(I-2)*CC(I-1,K,2)+WA1(I-1)*CC(I,K,2) + DI2 = WA1(I-2)*CC(I,K,2)-WA1(I-1)*CC(I-1,K,2) + DR3 = WA2(I-2)*CC(I-1,K,3)+WA2(I-1)*CC(I,K,3) + DI3 = WA2(I-2)*CC(I,K,3)-WA2(I-1)*CC(I-1,K,3) + CR2 = DR2+DR3 + CI2 = DI2+DI3 + CH(I-1,1,K) = CC(I-1,K,1)+CR2 + CH(I,1,K) = CC(I,K,1)+CI2 + TR2 = CC(I-1,K,1)+TAUR*CR2 + TI2 = CC(I,K,1)+TAUR*CI2 + TR3 = TAUI*(DI2-DI3) + TI3 = TAUI*(DR3-DR2) + CH(I-1,3,K) = TR2+TR3 + CH(IC-1,2,K) = TR2-TR3 + CH(I,3,K) = TI2+TI3 + CH(IC,2,K) = TI3-TI2 + 102 CONTINUE + 103 CONTINUE + RETURN + END + SUBROUTINE RADF4 (IDO,L1,CC,CH,WA1,WA2,WA3) + DIMENSION CC(IDO,L1,4) ,CH(IDO,4,L1) , + 1 WA1(*) ,WA2(*) ,WA3(*) + DATA HSQT2 /.7071067811865475/ + DO 101 K=1,L1 + TR1 = CC(1,K,2)+CC(1,K,4) + TR2 = CC(1,K,1)+CC(1,K,3) + CH(1,1,K) = TR1+TR2 + CH(IDO,4,K) = TR2-TR1 + CH(IDO,2,K) = CC(1,K,1)-CC(1,K,3) + CH(1,3,K) = CC(1,K,4)-CC(1,K,2) + 101 CONTINUE + IF (IDO.lt.2) GO TO 107 + IF (IDO.eq.2) GO TO 105 + GO TO 102 + 102 IDP2 = IDO+2 + DO 104 K=1,L1 + DO 103 I=3,IDO,2 + IC = IDP2-I + CR2 = WA1(I-2)*CC(I-1,K,2)+WA1(I-1)*CC(I,K,2) + CI2 = WA1(I-2)*CC(I,K,2)-WA1(I-1)*CC(I-1,K,2) + CR3 = WA2(I-2)*CC(I-1,K,3)+WA2(I-1)*CC(I,K,3) + CI3 = WA2(I-2)*CC(I,K,3)-WA2(I-1)*CC(I-1,K,3) + CR4 = WA3(I-2)*CC(I-1,K,4)+WA3(I-1)*CC(I,K,4) + CI4 = WA3(I-2)*CC(I,K,4)-WA3(I-1)*CC(I-1,K,4) + TR1 = CR2+CR4 + TR4 = CR4-CR2 + TI1 = CI2+CI4 + TI4 = CI2-CI4 + TI2 = CC(I,K,1)+CI3 + TI3 = CC(I,K,1)-CI3 + TR2 = CC(I-1,K,1)+CR3 + TR3 = CC(I-1,K,1)-CR3 + CH(I-1,1,K) = TR1+TR2 + CH(IC-1,4,K) = TR2-TR1 + CH(I,1,K) = TI1+TI2 + CH(IC,4,K) = TI1-TI2 + CH(I-1,3,K) = TI4+TR3 + CH(IC-1,2,K) = TR3-TI4 + CH(I,3,K) = TR4+TI3 + CH(IC,2,K) = TR4-TI3 + 103 CONTINUE + 104 CONTINUE + IF (MOD(IDO,2) .EQ. 
1) RETURN + 105 CONTINUE + DO 106 K=1,L1 + TI1 = -HSQT2*(CC(IDO,K,2)+CC(IDO,K,4)) + TR1 = HSQT2*(CC(IDO,K,2)-CC(IDO,K,4)) + CH(IDO,1,K) = TR1+CC(IDO,K,1) + CH(IDO,3,K) = CC(IDO,K,1)-TR1 + CH(1,2,K) = TI1-CC(IDO,K,3) + CH(1,4,K) = TI1+CC(IDO,K,3) + 106 CONTINUE + 107 RETURN + END + SUBROUTINE RADF5 (IDO,L1,CC,CH,WA1,WA2,WA3,WA4) + DIMENSION CC(IDO,L1,5) ,CH(IDO,5,L1) , + 1 WA1(*) ,WA2(*) ,WA3(*) ,WA4(*) + DATA TR11,TI11,TR12,TI12 /.309016994374947,.951056516295154, + 1-.809016994374947,.587785252292473/ + DO 101 K=1,L1 + CR2 = CC(1,K,5)+CC(1,K,2) + CI5 = CC(1,K,5)-CC(1,K,2) + CR3 = CC(1,K,4)+CC(1,K,3) + CI4 = CC(1,K,4)-CC(1,K,3) + CH(1,1,K) = CC(1,K,1)+CR2+CR3 + CH(IDO,2,K) = CC(1,K,1)+TR11*CR2+TR12*CR3 + CH(1,3,K) = TI11*CI5+TI12*CI4 + CH(IDO,4,K) = CC(1,K,1)+TR12*CR2+TR11*CR3 + CH(1,5,K) = TI12*CI5-TI11*CI4 + 101 CONTINUE + IF (IDO .EQ. 1) RETURN + IDP2 = IDO+2 + DO 103 K=1,L1 + DO 102 I=3,IDO,2 + IC = IDP2-I + DR2 = WA1(I-2)*CC(I-1,K,2)+WA1(I-1)*CC(I,K,2) + DI2 = WA1(I-2)*CC(I,K,2)-WA1(I-1)*CC(I-1,K,2) + DR3 = WA2(I-2)*CC(I-1,K,3)+WA2(I-1)*CC(I,K,3) + DI3 = WA2(I-2)*CC(I,K,3)-WA2(I-1)*CC(I-1,K,3) + DR4 = WA3(I-2)*CC(I-1,K,4)+WA3(I-1)*CC(I,K,4) + DI4 = WA3(I-2)*CC(I,K,4)-WA3(I-1)*CC(I-1,K,4) + DR5 = WA4(I-2)*CC(I-1,K,5)+WA4(I-1)*CC(I,K,5) + DI5 = WA4(I-2)*CC(I,K,5)-WA4(I-1)*CC(I-1,K,5) + CR2 = DR2+DR5 + CI5 = DR5-DR2 + CR5 = DI2-DI5 + CI2 = DI2+DI5 + CR3 = DR3+DR4 + CI4 = DR4-DR3 + CR4 = DI3-DI4 + CI3 = DI3+DI4 + CH(I-1,1,K) = CC(I-1,K,1)+CR2+CR3 + CH(I,1,K) = CC(I,K,1)+CI2+CI3 + TR2 = CC(I-1,K,1)+TR11*CR2+TR12*CR3 + TI2 = CC(I,K,1)+TR11*CI2+TR12*CI3 + TR3 = CC(I-1,K,1)+TR12*CR2+TR11*CR3 + TI3 = CC(I,K,1)+TR12*CI2+TR11*CI3 + TR5 = TI11*CR5+TI12*CR4 + TI5 = TI11*CI5+TI12*CI4 + TR4 = TI12*CR5-TI11*CR4 + TI4 = TI12*CI5-TI11*CI4 + CH(I-1,3,K) = TR2+TR5 + CH(IC-1,2,K) = TR2-TR5 + CH(I,3,K) = TI2+TI5 + CH(IC,2,K) = TI5-TI2 + CH(I-1,5,K) = TR3+TR4 + CH(IC-1,4,K) = TR3-TR4 + CH(I,5,K) = TI3+TI4 + CH(IC,4,K) = TI4-TI3 + 102 CONTINUE + 103 CONTINUE + RETURN + END + SUBROUTINE RADFG (IDO,IP,L1,IDL1,CC,C1,C2,CH,CH2,WA) + DIMENSION CH(IDO,L1,IP) ,CC(IDO,IP,L1) , + 1 C1(IDO,L1,IP) ,C2(IDL1,IP), + 2 CH2(IDL1,IP) ,WA(*) + DATA TPI/6.28318530717959/ + ARG = TPI/FLOAT(IP) + DCP = COS(ARG) + DSP = SIN(ARG) + IPPH = (IP+1)/2 + IPP2 = IP+2 + IDP2 = IDO+2 + NBD = (IDO-1)/2 + IF (IDO .EQ. 1) GO TO 119 + DO 101 IK=1,IDL1 + CH2(IK,1) = C2(IK,1) + 101 CONTINUE + DO 103 J=2,IP + DO 102 K=1,L1 + CH(1,K,J) = C1(1,K,J) + 102 CONTINUE + 103 CONTINUE + IF (NBD .GT. L1) GO TO 107 + IS = -IDO + DO 106 J=2,IP + IS = IS+IDO + IDIJ = IS + DO 105 I=3,IDO,2 + IDIJ = IDIJ+2 + DO 104 K=1,L1 + CH(I-1,K,J) = WA(IDIJ-1)*C1(I-1,K,J)+WA(IDIJ)*C1(I,K,J) + CH(I,K,J) = WA(IDIJ-1)*C1(I,K,J)-WA(IDIJ)*C1(I-1,K,J) + 104 CONTINUE + 105 CONTINUE + 106 CONTINUE + GO TO 111 + 107 IS = -IDO + DO 110 J=2,IP + IS = IS+IDO + DO 109 K=1,L1 + IDIJ = IS + DO 108 I=3,IDO,2 + IDIJ = IDIJ+2 + CH(I-1,K,J) = WA(IDIJ-1)*C1(I-1,K,J)+WA(IDIJ)*C1(I,K,J) + CH(I,K,J) = WA(IDIJ-1)*C1(I,K,J)-WA(IDIJ)*C1(I-1,K,J) + 108 CONTINUE + 109 CONTINUE + 110 CONTINUE + 111 IF (NBD .LT. 
L1) GO TO 115 + DO 114 J=2,IPPH + JC = IPP2-J + DO 113 K=1,L1 + DO 112 I=3,IDO,2 + C1(I-1,K,J) = CH(I-1,K,J)+CH(I-1,K,JC) + C1(I-1,K,JC) = CH(I,K,J)-CH(I,K,JC) + C1(I,K,J) = CH(I,K,J)+CH(I,K,JC) + C1(I,K,JC) = CH(I-1,K,JC)-CH(I-1,K,J) + 112 CONTINUE + 113 CONTINUE + 114 CONTINUE + GO TO 121 + 115 DO 118 J=2,IPPH + JC = IPP2-J + DO 117 I=3,IDO,2 + DO 116 K=1,L1 + C1(I-1,K,J) = CH(I-1,K,J)+CH(I-1,K,JC) + C1(I-1,K,JC) = CH(I,K,J)-CH(I,K,JC) + C1(I,K,J) = CH(I,K,J)+CH(I,K,JC) + C1(I,K,JC) = CH(I-1,K,JC)-CH(I-1,K,J) + 116 CONTINUE + 117 CONTINUE + 118 CONTINUE + GO TO 121 + 119 DO 120 IK=1,IDL1 + C2(IK,1) = CH2(IK,1) + 120 CONTINUE + 121 DO 123 J=2,IPPH + JC = IPP2-J + DO 122 K=1,L1 + C1(1,K,J) = CH(1,K,J)+CH(1,K,JC) + C1(1,K,JC) = CH(1,K,JC)-CH(1,K,J) + 122 CONTINUE + 123 CONTINUE +C + AR1 = 1. + AI1 = 0. + DO 127 L=2,IPPH + LC = IPP2-L + AR1H = DCP*AR1-DSP*AI1 + AI1 = DCP*AI1+DSP*AR1 + AR1 = AR1H + DO 124 IK=1,IDL1 + CH2(IK,L) = C2(IK,1)+AR1*C2(IK,2) + CH2(IK,LC) = AI1*C2(IK,IP) + 124 CONTINUE + DC2 = AR1 + DS2 = AI1 + AR2 = AR1 + AI2 = AI1 + DO 126 J=3,IPPH + JC = IPP2-J + AR2H = DC2*AR2-DS2*AI2 + AI2 = DC2*AI2+DS2*AR2 + AR2 = AR2H + DO 125 IK=1,IDL1 + CH2(IK,L) = CH2(IK,L)+AR2*C2(IK,J) + CH2(IK,LC) = CH2(IK,LC)+AI2*C2(IK,JC) + 125 CONTINUE + 126 CONTINUE + 127 CONTINUE + DO 129 J=2,IPPH + DO 128 IK=1,IDL1 + CH2(IK,1) = CH2(IK,1)+C2(IK,J) + 128 CONTINUE + 129 CONTINUE +C + IF (IDO .LT. L1) GO TO 132 + DO 131 K=1,L1 + DO 130 I=1,IDO + CC(I,1,K) = CH(I,K,1) + 130 CONTINUE + 131 CONTINUE + GO TO 135 + 132 DO 134 I=1,IDO + DO 133 K=1,L1 + CC(I,1,K) = CH(I,K,1) + 133 CONTINUE + 134 CONTINUE + 135 DO 137 J=2,IPPH + JC = IPP2-J + J2 = J+J + DO 136 K=1,L1 + CC(IDO,J2-2,K) = CH(1,K,J) + CC(1,J2-1,K) = CH(1,K,JC) + 136 CONTINUE + 137 CONTINUE + IF (IDO .EQ. 1) RETURN + IF (NBD .LT. L1) GO TO 141 + DO 140 J=2,IPPH + JC = IPP2-J + J2 = J+J + DO 139 K=1,L1 + DO 138 I=3,IDO,2 + IC = IDP2-I + CC(I-1,J2-1,K) = CH(I-1,K,J)+CH(I-1,K,JC) + CC(IC-1,J2-2,K) = CH(I-1,K,J)-CH(I-1,K,JC) + CC(I,J2-1,K) = CH(I,K,J)+CH(I,K,JC) + CC(IC,J2-2,K) = CH(I,K,JC)-CH(I,K,J) + 138 CONTINUE + 139 CONTINUE + 140 CONTINUE + RETURN + 141 DO 144 J=2,IPPH + JC = IPP2-J + J2 = J+J + DO 143 I=3,IDO,2 + IC = IDP2-I + DO 142 K=1,L1 + CC(I-1,J2-1,K) = CH(I-1,K,J)+CH(I-1,K,JC) + CC(IC-1,J2-2,K) = CH(I-1,K,J)-CH(I-1,K,JC) + CC(I,J2-1,K) = CH(I,K,J)+CH(I,K,JC) + CC(IC,J2-2,K) = CH(I,K,JC)-CH(I,K,J) + 142 CONTINUE + 143 CONTINUE + 144 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftf.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftf.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rfftf.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rfftf.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,6 @@ + SUBROUTINE RFFTF (N,R,WSAVE) + DIMENSION R(*) ,WSAVE(*) + IF (N .EQ. 
1) RETURN + CALL RFFTF1 (N,R,WSAVE,WSAVE(N+1),WSAVE(2*N+1)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rffti1.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rffti1.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rffti1.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rffti1.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,59 @@ + SUBROUTINE RFFTI1 (N,WA,IFAC) + DIMENSION WA(*) ,IFAC(*) ,NTRYH(4) + DATA NTRYH(1),NTRYH(2),NTRYH(3),NTRYH(4)/4,2,3,5/ + NL = N + NF = 0 + J = 0 + 101 J = J+1 + IF (J.le.4) GO TO 102 + GO TO 103 + 102 NTRY = NTRYH(J) + GO TO 104 + 103 NTRY = NTRY+2 + 104 NQ = NL/NTRY + NR = NL-NTRY*NQ + IF (NR.eq.0) GO TO 105 + GO TO 101 + 105 NF = NF+1 + IFAC(NF+2) = NTRY + NL = NQ + IF (NTRY .NE. 2) GO TO 107 + IF (NF .EQ. 1) GO TO 107 + DO 106 I=2,NF + IB = NF-I+2 + IFAC(IB+2) = IFAC(IB+1) + 106 CONTINUE + IFAC(3) = 2 + 107 IF (NL .NE. 1) GO TO 104 + IFAC(1) = N + IFAC(2) = NF + TPI = 6.28318530717959 + ARGH = TPI/FLOAT(N) + IS = 0 + NFM1 = NF-1 + L1 = 1 + IF (NFM1 .EQ. 0) RETURN + DO 110 K1=1,NFM1 + IP = IFAC(K1+2) + LD = 0 + L2 = L1*IP + IDO = N/L2 + IPM = IP-1 + DO 109 J=1,IPM + LD = LD+L1 + I = IS + ARGLD = FLOAT(LD)*ARGH + FI = 0. + DO 108 II=3,IDO,2 + I = I+2 + FI = FI+1. + ARG = FI*ARGLD + WA(I-1) = COS(ARG) + WA(I) = SIN(ARG) + 108 CONTINUE + IS = IS+IDO + 109 CONTINUE + L1 = L2 + 110 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rffti.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rffti.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/rffti.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/rffti.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,6 @@ + SUBROUTINE RFFTI (N,WSAVE) + DIMENSION WSAVE(*) + IF (N .EQ. 1) RETURN + CALL RFFTI1 (N,WSAVE(N+1),WSAVE(2*N+1)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinqb.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinqb.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinqb.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinqb.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,18 @@ + SUBROUTINE SINQB (N,X,WSAVE) + DIMENSION X(*) ,WSAVE(*) + IF (N .GT. 1) GO TO 101 + X(1) = 4.*X(1) + RETURN + 101 NS2 = N/2 + DO 102 K=2,N,2 + X(K) = -X(K) + 102 CONTINUE + CALL COSQB (N,X,WSAVE) + DO 103 K=1,NS2 + KC = N-K + XHOLD = X(K) + X(K) = X(KC+1) + X(KC+1) = XHOLD + 103 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinqf.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinqf.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinqf.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinqf.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,16 @@ + SUBROUTINE SINQF (N,X,WSAVE) + DIMENSION X(*) ,WSAVE(*) + IF (N .EQ. 
1) RETURN + NS2 = N/2 + DO 101 K=1,NS2 + KC = N-K + XHOLD = X(K) + X(K) = X(KC+1) + X(KC+1) = XHOLD + 101 CONTINUE + CALL COSQF (N,X,WSAVE) + DO 102 K=2,N,2 + X(K) = -X(K) + 102 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinqi.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinqi.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinqi.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinqi.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,5 @@ + SUBROUTINE SINQI (N,WSAVE) + DIMENSION WSAVE(*) + CALL COSQI (N,WSAVE) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sint1.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sint1.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sint1.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sint1.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,42 @@ + SUBROUTINE SINT1(N,WAR,WAS,XH,X,IFAC) + DIMENSION WAR(*),WAS(*),X(*),XH(*),IFAC(*) + DATA SQRT3 /1.73205080756888/ + DO 100 I=1,N + XH(I) = WAR(I) + WAR(I) = X(I) + 100 CONTINUE + IF (N.lt.2) GO TO 101 + IF (N.eq.2) GO TO 102 + GO TO 103 + 101 XH(1) = XH(1)+XH(1) + GO TO 106 + 102 XHOLD = SQRT3*(XH(1)+XH(2)) + XH(2) = SQRT3*(XH(1)-XH(2)) + XH(1) = XHOLD + GO TO 106 + 103 NP1 = N+1 + NS2 = N/2 + X(1) = 0. + DO 104 K=1,NS2 + KC = NP1-K + T1 = XH(K)-XH(KC) + T2 = WAS(K)*(XH(K)+XH(KC)) + X(K+1) = T1+T2 + X(KC+1) = T2-T1 + 104 CONTINUE + MODN = MOD(N,2) + IF (MODN .NE. 0) X(NS2+2) = 4.*XH(NS2+1) + CALL RFFTF1 (NP1,X,XH,WAR,IFAC) + XH(1) = .5*X(1) + DO 105 I=3,N,2 + XH(I-1) = -X(I) + XH(I) = XH(I-2)+X(I-1) + 105 CONTINUE + IF (MODN .NE. 0) GO TO 106 + XH(N) = -X(N+1) + 106 DO 107 I=1,N + X(I) = WAR(I) + WAR(I) = XH(I) + 107 CONTINUE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sint.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sint.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sint.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sint.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,9 @@ + SUBROUTINE SINT (N,X,WSAVE) + DIMENSION X(*) ,WSAVE(*) + NP1 = N+1 + IW1 = N/2+1 + IW2 = IW1+NP1 + IW3 = IW2+NP1 + CALL SINT1(N,X,WSAVE,WSAVE(IW1),WSAVE(IW2),WSAVE(IW3)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinti.f python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinti.f --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack/sinti.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack/sinti.f 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,13 @@ + SUBROUTINE SINTI (N,WSAVE) + DIMENSION WSAVE(*) + DATA PI /3.14159265358979/ + IF (N .LE. 
1) RETURN + NS2 = N/2 + NP1 = N+1 + DT = PI/FLOAT(NP1) + DO 101 K=1,NS2 + WSAVE(K) = 2.*SIN(K*DT) + 101 CONTINUE + CALL RFFTI (NP1,WSAVE(NS2+1)) + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack.h python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack.h --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/fftpack.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/fftpack.h 2010-07-26 15:48:29.000000000 +0100 @@ -73,7 +73,7 @@ last_cache_id_##name = id;\ return id;\ }\ -static void destroy_##name##_caches(void) {\ +void destroy_##name##_cache(void) {\ int id;\ for (id=0;id= 0; --i) { + *((double *) (ptr)) /= n; + *((double *) (ptr++) + 1) /= n; + } + } } -#include "zfft_fftpack.c" -GEN_PUBLIC_API(fftpack) +void cfft(complex_float * inout, int n, int direction, int howmany, + int normalize) +{ + int i; + complex_float *ptr = inout; + float *wsave = NULL; + + wsave = caches_cfft[get_cache_id_cfft(n)].wsave; + + switch (direction) { + case 1: + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(cfftf, CFFTF)(&n, (float *) (ptr), wsave); + + } + break; + + case -1: + for (i = 0; i < howmany; ++i, ptr += n) { + F_FUNC(cfftb, CFFTB)(&n, (float *) (ptr), wsave); + } + break; + default: + fprintf(stderr, "cfft: invalid direction=%d\n", direction); + } + + if (normalize) { + ptr = inout; + for (i = n * howmany - 1; i >= 0; --i) { + *((float *) (ptr)) /= n; + *((float *) (ptr++) + 1) /= n; + } + } +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zfft_fftpack.c python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zfft_fftpack.c --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zfft_fftpack.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zfft_fftpack.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,45 +0,0 @@ -extern void F_FUNC(zfftf,ZFFTF)(int*,double*,double*); -extern void F_FUNC(zfftb,ZFFTB)(int*,double*,double*); -extern void F_FUNC(zffti,ZFFTI)(int*,double*); -GEN_CACHE(zfftpack,(int n) - ,double* wsave; - ,(caches_zfftpack[i].n==n) - ,caches_zfftpack[id].wsave = (double*)malloc(sizeof(double)*(4*n+15)); - F_FUNC(zffti,ZFFTI)(&n,caches_zfftpack[id].wsave); - ,free(caches_zfftpack[id].wsave); - ,10) - -static void zfft_fftpack(complex_double * inout, - int n, int direction, int howmany, int normalize) -{ - int i; - complex_double *ptr = inout; - double *wsave = NULL; - - wsave = caches_zfftpack[get_cache_id_zfftpack(n)].wsave; - - switch (direction) { - case 1: - for (i = 0; i < howmany; ++i, ptr += n) { - zfftf_(&n, (double *) (ptr), wsave); - - } - break; - - case -1: - for (i = 0; i < howmany; ++i, ptr += n) { - zfftb_(&n, (double *) (ptr), wsave); - } - break; - default: - fprintf(stderr, "zfft: invalid direction=%d\n", direction); - } - - if (normalize) { - ptr = inout; - for (i = n * howmany - 1; i >= 0; --i) { - *((double *) (ptr)) /= n; - *((double *) (ptr++) + 1) /= n; - } - } -} diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zfftnd.c python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zfftnd.c --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zfftnd.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zfftnd.c 2010-07-26 15:48:29.000000000 +0100 @@ -5,19 +5,210 @@ */ #include "fftpack.h" -/* The following macro convert private backend specific function to the public - * functions exported by the module */ -#define GEN_PUBLIC_API(name) \ -void destroy_zfftnd_cache(void)\ -{\ - destroy_zfftnd_##name##_caches();\ -}\ -\ -void zfftnd(complex_double * inout, int rank,\ - int *dims, int 
direction, int howmany, int normalize)\ -{\ - zfftnd_##name(inout, rank, dims, direction, howmany, normalize);\ +GEN_CACHE(zfftnd, (int n, int rank) + , complex_double * ptr; int *iptr; int rank; + , ((caches_zfftnd[i].n == n) + && (caches_zfftnd[i].rank == rank)) + , caches_zfftnd[id].n = n; + caches_zfftnd[id].ptr = + (complex_double *) malloc(2 * sizeof(double) * n); + caches_zfftnd[id].iptr = + (int *) malloc(4 * rank * sizeof(int)); + , + free(caches_zfftnd[id].ptr); + free(caches_zfftnd[id].iptr); + , 10) + +GEN_CACHE(cfftnd, (int n, int rank) + , complex_float * ptr; int *iptr; int rank; + , ((caches_cfftnd[i].n == n) + && (caches_cfftnd[i].rank == rank)) + , caches_cfftnd[id].n = n; + caches_cfftnd[id].ptr = + (complex_float *) malloc(2 * sizeof(float) * n); + caches_cfftnd[id].iptr = + (int *) malloc(4 * rank * sizeof(int)); + , + free(caches_cfftnd[id].ptr); + free(caches_cfftnd[id].iptr); + , 10) + +static +/*inline : disabled because MSVC6.0 fails to compile it. */ +int next_comb(int *ia, int *da, int m) +{ + while (m >= 0 && ia[m] == da[m]) { + ia[m--] = 0; + } + if (m < 0) { + return 0; + } + ia[m]++; + return 1; +} + +static +void flatten(complex_double * dest, complex_double * src, + int rank, int strides_axis, int dims_axis, int unflat, + int *tmp) +{ + int *new_strides = tmp + rank; + int *new_dims = tmp + 2 * rank; + int *ia = tmp + 3 * rank; + int rm1 = rank - 1, rm2 = rank - 2; + int i, j, k; + for (i = 0; i < rm2; ++i) + ia[i] = 0; + ia[rm2] = -1; + j = 0; + if (unflat) { + while (next_comb(ia, new_dims, rm2)) { + k = 0; + for (i = 0; i < rm1; ++i) { + k += ia[i] * new_strides[i]; + } + for (i = 0; i < dims_axis; ++i) { + *(dest + k + i * strides_axis) = *(src + j++); + } + } + } else { + while (next_comb(ia, new_dims, rm2)) { + k = 0; + for (i = 0; i < rm1; ++i) { + k += ia[i] * new_strides[i]; + } + for (i = 0; i < dims_axis; ++i) { + *(dest + j++) = *(src + k + i * strides_axis); + } + } + } +} + +static +void sflatten(complex_float * dest, complex_float * src, + int rank, int strides_axis, int dims_axis, int unflat, + int *tmp) +{ + int *new_strides = tmp + rank; + int *new_dims = tmp + 2 * rank; + int *ia = tmp + 3 * rank; + int rm1 = rank - 1, rm2 = rank - 2; + int i, j, k; + for (i = 0; i < rm2; ++i) + ia[i] = 0; + ia[rm2] = -1; + j = 0; + if (unflat) { + while (next_comb(ia, new_dims, rm2)) { + k = 0; + for (i = 0; i < rm1; ++i) { + k += ia[i] * new_strides[i]; + } + for (i = 0; i < dims_axis; ++i) { + *(dest + k + i * strides_axis) = *(src + j++); + } + } + } else { + while (next_comb(ia, new_dims, rm2)) { + k = 0; + for (i = 0; i < rm1; ++i) { + k += ia[i] * new_strides[i]; + } + for (i = 0; i < dims_axis; ++i) { + *(dest + j++) = *(src + k + i * strides_axis); + } + } + } } -#include "zfftnd_fftpack.c" -GEN_PUBLIC_API(fftpack) +extern void cfft(complex_float * inout, + int n, int direction, int howmany, int normalize); + +extern void zfft(complex_double * inout, + int n, int direction, int howmany, int normalize); + +extern void zfftnd(complex_double * inout, int rank, + int *dims, int direction, int howmany, + int normalize) +{ + int i, sz; + complex_double *ptr = inout; + int axis; + complex_double *tmp; + int *itmp; + int k, j; + + sz = 1; + for (i = 0; i < rank; ++i) { + sz *= dims[i]; + } + zfft(ptr, dims[rank - 1], direction, howmany * sz / dims[rank - 1], + normalize); + + i = get_cache_id_zfftnd(sz, rank); + tmp = caches_zfftnd[i].ptr; + itmp = caches_zfftnd[i].iptr; + + itmp[rank - 1] = 1; + for (i = 2; i <= rank; ++i) { + itmp[rank - i] = 
itmp[rank - i + 1] * dims[rank - i + 1]; + } + + for (i = 0; i < howmany; ++i, ptr += sz) { + for (axis = 0; axis < rank - 1; ++axis) { + for (k = j = 0; k < rank; ++k) { + if (k != axis) { + *(itmp + rank + j) = itmp[k]; + *(itmp + 2 * rank + j++) = dims[k] - 1; + } + } + flatten(tmp, ptr, rank, itmp[axis], dims[axis], 0, itmp); + zfft(tmp, dims[axis], direction, sz / dims[axis], normalize); + flatten(ptr, tmp, rank, itmp[axis], dims[axis], 1, itmp); + } + } + +} + +extern void cfftnd(complex_float * inout, int rank, + int *dims, int direction, int howmany, + int normalize) +{ + int i, sz; + complex_float *ptr = inout; + int axis; + complex_float *tmp; + int *itmp; + int k, j; + + sz = 1; + for (i = 0; i < rank; ++i) { + sz *= dims[i]; + } + cfft(ptr, dims[rank - 1], direction, howmany * sz / dims[rank - 1], + normalize); + + i = get_cache_id_cfftnd(sz, rank); + tmp = caches_cfftnd[i].ptr; + itmp = caches_cfftnd[i].iptr; + + itmp[rank - 1] = 1; + for (i = 2; i <= rank; ++i) { + itmp[rank - i] = itmp[rank - i + 1] * dims[rank - i + 1]; + } + + for (i = 0; i < howmany; ++i, ptr += sz) { + for (axis = 0; axis < rank - 1; ++axis) { + for (k = j = 0; k < rank; ++k) { + if (k != axis) { + *(itmp + rank + j) = itmp[k]; + *(itmp + 2 * rank + j++) = dims[k] - 1; + } + } + sflatten(tmp, ptr, rank, itmp[axis], dims[axis], 0, itmp); + cfft(tmp, dims[axis], direction, sz / dims[axis], normalize); + sflatten(ptr, tmp, rank, itmp[axis], dims[axis], 1, itmp); + } + } + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zfftnd_fftpack.c python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zfftnd_fftpack.c --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zfftnd_fftpack.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zfftnd_fftpack.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,118 +0,0 @@ -/* - * fftpack backend for multi dimensional fft - * - * Original code by Pearu Peaterson - * - * Last Change: Wed Aug 08 02:00 PM 2007 J - */ - -GEN_CACHE(zfftnd_fftpack, (int n, int rank) - , complex_double * ptr; int *iptr; int rank; - , ((caches_zfftnd_fftpack[i].n == n) - && (caches_zfftnd_fftpack[i].rank == rank)) - , caches_zfftnd_fftpack[id].n = n; - caches_zfftnd_fftpack[id].ptr = - (complex_double *) malloc(2 * sizeof(double) * n); - caches_zfftnd_fftpack[id].iptr = - (int *) malloc(4 * rank * sizeof(int)); - , - free(caches_zfftnd_fftpack[id].ptr); - free(caches_zfftnd_fftpack[id].iptr); - , 10) - -static -/*inline : disabled because MSVC6.0 fails to compile it. 
*/ -int next_comb(int *ia, int *da, int m) -{ - while (m >= 0 && ia[m] == da[m]) { - ia[m--] = 0; - } - if (m < 0) { - return 0; - } - ia[m]++; - return 1; -} - -static -void flatten(complex_double * dest, complex_double * src, - int rank, int strides_axis, int dims_axis, int unflat, - int *tmp) -{ - int *new_strides = tmp + rank; - int *new_dims = tmp + 2 * rank; - int *ia = tmp + 3 * rank; - int rm1 = rank - 1, rm2 = rank - 2; - int i, j, k; - for (i = 0; i < rm2; ++i) - ia[i] = 0; - ia[rm2] = -1; - j = 0; - if (unflat) { - while (next_comb(ia, new_dims, rm2)) { - k = 0; - for (i = 0; i < rm1; ++i) { - k += ia[i] * new_strides[i]; - } - for (i = 0; i < dims_axis; ++i) { - *(dest + k + i * strides_axis) = *(src + j++); - } - } - } else { - while (next_comb(ia, new_dims, rm2)) { - k = 0; - for (i = 0; i < rm1; ++i) { - k += ia[i] * new_strides[i]; - } - for (i = 0; i < dims_axis; ++i) { - *(dest + j++) = *(src + k + i * strides_axis); - } - } - } -} - -extern void zfft(complex_double * inout, - int n, int direction, int howmany, int normalize); - -extern void zfftnd_fftpack(complex_double * inout, int rank, - int *dims, int direction, int howmany, - int normalize) -{ - int i, sz; - complex_double *ptr = inout; - int axis; - complex_double *tmp; - int *itmp; - int k, j; - - sz = 1; - for (i = 0; i < rank; ++i) { - sz *= dims[i]; - } - zfft(ptr, dims[rank - 1], direction, howmany * sz / dims[rank - 1], - normalize); - - i = get_cache_id_zfftnd_fftpack(sz, rank); - tmp = caches_zfftnd_fftpack[i].ptr; - itmp = caches_zfftnd_fftpack[i].iptr; - - itmp[rank - 1] = 1; - for (i = 2; i <= rank; ++i) { - itmp[rank - i] = itmp[rank - i + 1] * dims[rank - i + 1]; - } - - for (i = 0; i < howmany; ++i, ptr += sz) { - for (axis = 0; axis < rank - 1; ++axis) { - for (k = j = 0; k < rank; ++k) { - if (k != axis) { - *(itmp + rank + j) = itmp[k]; - *(itmp + 2 * rank + j++) = dims[k] - 1; - } - } - flatten(tmp, ptr, rank, itmp[axis], dims[axis], 0, itmp); - zfft(tmp, dims[axis], direction, sz / dims[axis], normalize); - flatten(ptr, tmp, rank, itmp[axis], dims[axis], 1, itmp); - } - } - -} diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zrfft.c python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zrfft.c --- python-scipy-0.7.2+dfsg1/scipy/fftpack/src/zrfft.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/src/zrfft.c 2010-07-26 15:48:29.000000000 +0100 @@ -7,6 +7,7 @@ #include "fftpack.h" extern void drfft(double *inout,int n,int direction,int howmany,int normalize); +extern void rfft(float *inout,int n,int direction,int howmany,int normalize); extern void zrfft(complex_double *inout, int n,int direction,int howmany,int normalize) { @@ -52,3 +53,48 @@ fprintf(stderr,"zrfft: invalid direction=%d\n",direction); } } + +extern void crfft(complex_float *inout, + int n,int direction,int howmany,int normalize) { + int i,j,k; + float* ptr = (float *)inout; + switch (direction) { + case 1: + for (i=0;i +#include + +#include + +#ifdef DCT_TEST_USE_SINGLE +typedef float float_prec; +#define PF "%.7f" +#define FFTW_PLAN fftwf_plan +#define FFTW_MALLOC fftwf_malloc +#define FFTW_FREE fftwf_free +#define FFTW_PLAN_CREATE fftwf_plan_r2r_1d +#define FFTW_EXECUTE fftwf_execute +#define FFTW_DESTROY_PLAN fftwf_destroy_plan +#define FFTW_CLEANUP fftwf_cleanup +#else +typedef double float_prec; +#define PF "%.18f" +#define FFTW_PLAN fftw_plan +#define FFTW_MALLOC fftw_malloc +#define FFTW_FREE fftw_free +#define FFTW_PLAN_CREATE fftw_plan_r2r_1d +#define FFTW_EXECUTE fftw_execute +#define 
FFTW_DESTROY_PLAN fftw_destroy_plan +#define FFTW_CLEANUP fftw_cleanup +#endif + + +enum type { + DCT_I = 1, + DCT_II = 2, + DCT_III = 3, + DCT_IV = 4, +}; + +int gen(int type, int sz) +{ + float_prec *a, *b; + FFTW_PLAN p; + int i, tp; + + a = FFTW_MALLOC(sizeof(*a) * sz); + if (a == NULL) { + fprintf(stderr, "failure\n"); + exit(EXIT_FAILURE); + } + b = FFTW_MALLOC(sizeof(*b) * sz); + if (b == NULL) { + fprintf(stderr, "failure\n"); + exit(EXIT_FAILURE); + } + + for(i=0; i < sz; ++i) { + a[i] = i; + } + + switch(type) { + case DCT_I: + tp = FFTW_REDFT00; + break; + case DCT_II: + tp = FFTW_REDFT10; + break; + case DCT_III: + tp = FFTW_REDFT01; + break; + case DCT_IV: + tp = FFTW_REDFT11; + break; + default: + fprintf(stderr, "unknown type\n"); + exit(EXIT_FAILURE); + } + + p = FFTW_PLAN_CREATE(sz, a, b, tp, FFTW_ESTIMATE); + FFTW_EXECUTE(p); + FFTW_DESTROY_PLAN(p); + + for(i=0; i < sz; ++i) { + printf(PF"\n", b[i]); + } + FFTW_FREE(b); + FFTW_FREE(a); + + return 0; +} + +int main(int argc, char* argv[]) +{ + int n, tp; + + if (argc < 3) { + fprintf(stderr, "missing argument: program type n\n"); + exit(EXIT_FAILURE); + } + tp = atoi(argv[1]); + n = atoi(argv[2]); + + gen(tp, n); + FFTW_CLEANUP(); + + return 0; +} Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/fftw_double_ref.npz and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/fftw_double_ref.npz differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/fftw_single_ref.npz and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/fftw_single_ref.npz differ diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/gendata.m python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/gendata.m --- python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/gendata.m 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/gendata.m 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,21 @@ +x0 = linspace(0, 10, 11); +x1 = linspace(0, 10, 15); +x2 = linspace(0, 10, 16); +x3 = linspace(0, 10, 17); + +x4 = randn(32, 1); +x5 = randn(64, 1); +x6 = randn(128, 1); +x7 = randn(256, 1); + +y0 = dct(x0); +y1 = dct(x1); +y2 = dct(x2); +y3 = dct(x3); +y4 = dct(x4); +y5 = dct(x5); +y6 = dct(x6); +y7 = dct(x7); + +save('test.mat', 'x0', 'x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', ... 
+ 'y0', 'y1', 'y2', 'y3', 'y4', 'y5', 'y6', 'y7'); diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/gendata.py python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/gendata.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/gendata.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/gendata.py 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,6 @@ +import numpy as np +from scipy.io import loadmat + +m = loadmat('test.mat', squeeze_me=True, struct_as_record=True, + mat_dtype=True) +np.savez('test.npz', **m) diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/gen_fftw_ref.py python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/gen_fftw_ref.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/gen_fftw_ref.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/gen_fftw_ref.py 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,35 @@ +from subprocess import Popen, PIPE, STDOUT + +import numpy as np + +SZ = [2, 3, 4, 8, 12, 15, 16, 17, 32, 64, 128, 256, 512, 1024] + +def gen_data(dt): + arrays = {} + + if dt == np.double: + pg = './fftw_double' + elif dt == np.float32: + pg = './fftw_single' + else: + raise ValueError("unknown: %s" % dt) + # Generate test data using FFTW for reference + for type in [1, 2, 3, 4]: + arrays[type] = {} + for sz in SZ: + a = Popen([pg, str(type), str(sz)], stdout=PIPE, stderr=STDOUT) + st = [i.strip() for i in a.stdout.readlines()] + arrays[type][sz] = np.fromstring(",".join(st), sep=',', dtype=dt) + + return arrays + +data = gen_data(np.float32) +filename = 'fftw_single_ref' + +# Save ref data into npz format +d = {} +d['sizes'] = SZ +for type in [1, 2, 3, 4]: + for sz in SZ: + d['dct_%d_%d' % (type, sz)] = data[type][sz] +np.savez(filename, **d) diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/Makefile python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/Makefile --- python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/Makefile 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/Makefile 2010-07-26 15:48:29.000000000 +0100 @@ -0,0 +1,13 @@ +CC = gcc +LD = gcc + +fftw_single: fftw_dct.c + $(CC) -W -Wall -DDCT_TEST_USE_SINGLE $< -o $@ -lfftw3f + +fftw_double: fftw_dct.c + $(CC) -W -Wall $< -o $@ -lfftw3 + +clean: + rm -f fftw_single + rm -f fftw_double + rm -f *.o diff -Nru python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/test_basic.py python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/test_basic.py --- python-scipy-0.7.2+dfsg1/scipy/fftpack/tests/test_basic.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/fftpack/tests/test_basic.py 2010-07-26 15:48:30.000000000 +0100 @@ -17,8 +17,28 @@ from numpy import arange, add, array, asarray, zeros, dot, exp, pi,\ swapaxes, double, cdouble +import numpy as np import numpy.fft +# "large" composite numbers supported by FFTPACK +LARGE_COMPOSITE_SIZES = [ + 2**13, + 2**5 * 3**5, + 2**3 * 3**3 * 5**2, +] +SMALL_COMPOSITE_SIZES = [ + 2, + 2*3*5, + 2*2*3*3, +] +# prime +LARGE_PRIME_SIZES = [ + 2011 +] +SMALL_PRIME_SIZES = [ + 29 +] + from numpy.random import rand def random(size): return rand(*size) @@ -63,7 +83,7 @@ n = len(x) w = -arange(n)*(2j*pi/n) r = zeros(n,dtype=double) - for i in range(n/2+1): + for i in range(int(n/2+1)): y = dot(exp(i*w),x) if i: r[2*i-1] = y.real @@ -77,7 +97,7 @@ x = asarray(x) n = len(x) x1 = zeros(n,dtype=cdouble) - for i in range(n/2+1): + for i in range(int(n/2+1)): if i: if 2*i info, factorial, factorial2, factorialk, + comb, who, lena, central_diff_weights, + derivative, pade, source + fftpack --> fft, fftn, 
fft2, ifft, ifft2, ifftn, + fftshift, ifftshift, fftfreq + stats --> find_repeats + linalg.dsolve.umfpack --> UmfpackContext + +Utility tools +------------- +:: + + test --- Run scipy unittests + show_config --- Show scipy build configuration + show_numpy_config --- Show numpy build configuration + __version__ --- Scipy version string + __numpy_version__ --- Numpy version string + """ __all__ = ['pkgload','test'] @@ -29,23 +96,10 @@ "scipy (detected version %s)" % _num.version.version, UserWarning) -# Suppress warnings due to a known harmless change in numpy 1.4.1 -if majver == 1 and minver >= 4: - import warnings - warnings.filterwarnings(action='ignore', message='.*numpy.dtype size changed.*') - warnings.filterwarnings(action='ignore', message='.*numpy.flatiter size changed.*') - __all__ += ['oldnumeric']+_num.__all__ __all__ += ['randn', 'rand', 'fft', 'ifft'] -if __doc__: - __doc__ += """ -Contents --------- -SciPy imports all the functions from the NumPy namespace, and in -addition provides:""" - del _num # Remove the linalg imported from numpy so that the scipy.linalg package can be # imported. @@ -53,15 +107,15 @@ __all__.remove('linalg') try: - from __config__ import show as show_config -except ImportError, e: + from scipy.__config__ import show as show_config +except ImportError: msg = """Error importing scipy: you cannot import scipy while being in scipy source directory; please exit the scipy source tree first, and relaunch your python intepreter.""" raise ImportError(msg) -from version import version as __version__ +from scipy.version import version as __version__ -# Load scipy packages, their global_symbols, set up __doc__ string. +# Load scipy packages and their global_symbols from numpy._import_tools import PackageLoader import os as _os SCIPY_IMPORT_VERBOSE = int(_os.environ.get('SCIPY_IMPORT_VERBOSE','-1')) @@ -69,33 +123,6 @@ pkgload = PackageLoader() pkgload(verbose=SCIPY_IMPORT_VERBOSE,postpone=True) -if __doc__: - __doc__ += """ - -Available subpackages ---------------------- -""" -if __doc__: - __doc__ += pkgload.get_pkgdocs() - from numpy.testing import Tester test = Tester().test bench = Tester().bench -if __doc__: - __doc__ += """ - -Utility tools -------------- - - test --- Run scipy unittests - pkgload --- Load scipy packages - show_config --- Show scipy build configuration - show_numpy_config --- Show numpy build configuration - __version__ --- Scipy version string - __numpy_version__ --- Numpy version string - -Environment variables ---------------------- - - SCIPY_IMPORT_VERBOSE --- pkgload verbose flag, default is 0. -""" diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/dop/dop853.f python-scipy-0.8.0+dfsg1/scipy/integrate/dop/dop853.f --- python-scipy-0.7.2+dfsg1/scipy/integrate/dop/dop853.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/dop/dop853.f 2010-07-26 15:48:30.000000000 +0100 @@ -0,0 +1,879 @@ + SUBROUTINE DOP853(N,FCN,X,Y,XEND, + & RTOL,ATOL,ITOL, + & SOLOUT,IOUT, + & WORK,LWORK,IWORK,LIWORK,RPAR,IPAR,IDID) +C ---------------------------------------------------------- +C NUMERICAL SOLUTION OF A SYSTEM OF FIRST 0RDER +C ORDINARY DIFFERENTIAL EQUATIONS Y'=F(X,Y). +C THIS IS AN EXPLICIT RUNGE-KUTTA METHOD OF ORDER 8(5,3) +C DUE TO DORMAND & PRINCE (WITH STEPSIZE CONTROL AND +C DENSE OUTPUT) +C +C AUTHORS: E. HAIRER AND G. WANNER +C UNIVERSITE DE GENEVE, DEPT. 
DE MATHEMATIQUES +C CH-1211 GENEVE 24, SWITZERLAND +C E-MAIL: Ernst.Hairer@math.unige.ch +C Gerhard.Wanner@math.unige.ch +C +C THIS CODE IS DESCRIBED IN: +C E. HAIRER, S.P. NORSETT AND G. WANNER, SOLVING ORDINARY +C DIFFERENTIAL EQUATIONS I. NONSTIFF PROBLEMS. 2ND EDITION. +C SPRINGER SERIES IN COMPUTATIONAL MATHEMATICS, +C SPRINGER-VERLAG (1993) +C +C VERSION OF APRIL 25, 1996 +C (latest correction of a small bug: August 8, 2005) +C +C Edited (22 Feb 2009) by J.C. Travers: +C renamed HINIT->HINIT853 to avoid name collision with dopri5 +C +C INPUT PARAMETERS +C ---------------- +C N DIMENSION OF THE SYSTEM +C +C FCN NAME (EXTERNAL) OF SUBROUTINE COMPUTING THE +C VALUE OF F(X,Y): +C SUBROUTINE FCN(N,X,Y,F,RPAR,IPAR) +C DOUBLE PRECISION X,Y(N),F(N) +C F(1)=... ETC. +C +C X INITIAL X-VALUE +C +C Y(N) INITIAL VALUES FOR Y +C +C XEND FINAL X-VALUE (XEND-X MAY BE POSITIVE OR NEGATIVE) +C +C RTOL,ATOL RELATIVE AND ABSOLUTE ERROR TOLERANCES. THEY +C CAN BE BOTH SCALARS OR ELSE BOTH VECTORS OF LENGTH N. +C ATOL SHOULD BE STRICTLY POSITIVE (POSSIBLY VERY SMALL) +C +C ITOL SWITCH FOR RTOL AND ATOL: +C ITOL=0: BOTH RTOL AND ATOL ARE SCALARS. +C THE CODE KEEPS, ROUGHLY, THE LOCAL ERROR OF +C Y(I) BELOW RTOL*ABS(Y(I))+ATOL +C ITOL=1: BOTH RTOL AND ATOL ARE VECTORS. +C THE CODE KEEPS THE LOCAL ERROR OF Y(I) BELOW +C RTOL(I)*ABS(Y(I))+ATOL(I). +C +C SOLOUT NAME (EXTERNAL) OF SUBROUTINE PROVIDING THE +C NUMERICAL SOLUTION DURING INTEGRATION. +C IF IOUT.GE.1, IT IS CALLED AFTER EVERY SUCCESSFUL STEP. +C SUPPLY A DUMMY SUBROUTINE IF IOUT=0. +C IT MUST HAVE THE FORM +C SUBROUTINE SOLOUT (NR,XOLD,X,Y,N,CON,ICOMP,ND, +C RPAR,IPAR,IRTRN) +C DIMENSION Y(N),CON(8*ND),ICOMP(ND) +C .... +C SOLOUT FURNISHES THE SOLUTION "Y" AT THE NR-TH +C GRID-POINT "X" (THEREBY THE INITIAL VALUE IS +C THE FIRST GRID-POINT). +C "XOLD" IS THE PRECEEDING GRID-POINT. +C "IRTRN" SERVES TO INTERRUPT THE INTEGRATION. IF IRTRN +C IS SET <0, DOP853 WILL RETURN TO THE CALLING PROGRAM. +C IF THE NUMERICAL SOLUTION IS ALTERED IN SOLOUT, +C SET IRTRN = 2 +C +C ----- CONTINUOUS OUTPUT: ----- +C DURING CALLS TO "SOLOUT", A CONTINUOUS SOLUTION +C FOR THE INTERVAL [XOLD,X] IS AVAILABLE THROUGH +C THE FUNCTION +C >>> CONTD8(I,S,CON,ICOMP,ND) <<< +C WHICH PROVIDES AN APPROXIMATION TO THE I-TH +C COMPONENT OF THE SOLUTION AT THE POINT S. THE VALUE +C S SHOULD LIE IN THE INTERVAL [XOLD,X]. +C +C IOUT SWITCH FOR CALLING THE SUBROUTINE SOLOUT: +C IOUT=0: SUBROUTINE IS NEVER CALLED +C IOUT=1: SUBROUTINE IS USED FOR OUTPUT +C IOUT=2: DENSE OUTPUT IS PERFORMED IN SOLOUT +C (IN THIS CASE WORK(5) MUST BE SPECIFIED) +C +C WORK ARRAY OF WORKING SPACE OF LENGTH "LWORK". +C WORK(1),...,WORK(20) SERVE AS PARAMETERS FOR THE CODE. +C FOR STANDARD USE, SET THEM TO ZERO BEFORE CALLING. +C "LWORK" MUST BE AT LEAST 11*N+8*NRDENS+21 +C WHERE NRDENS = IWORK(5) +C +C LWORK DECLARED LENGHT OF ARRAY "WORK". +C +C IWORK INTEGER WORKING SPACE OF LENGHT "LIWORK". +C IWORK(1),...,IWORK(20) SERVE AS PARAMETERS FOR THE CODE. +C FOR STANDARD USE, SET THEM TO ZERO BEFORE CALLING. +C "LIWORK" MUST BE AT LEAST NRDENS+21 . +C +C LIWORK DECLARED LENGHT OF ARRAY "IWORK". +C +C RPAR, IPAR REAL AND INTEGER PARAMETERS (OR PARAMETER ARRAYS) WHICH +C CAN BE USED FOR COMMUNICATION BETWEEN YOUR CALLING +C PROGRAM AND THE FCN, JAC, MAS, SOLOUT SUBROUTINES. +C +C----------------------------------------------------------------------- +C +C SOPHISTICATED SETTING OF PARAMETERS +C ----------------------------------- +C SEVERAL PARAMETERS (WORK(1),...,IWORK(1),...) 
ALLOW +C TO ADAPT THE CODE TO THE PROBLEM AND TO THE NEEDS OF +C THE USER. FOR ZERO INPUT, THE CODE CHOOSES DEFAULT VALUES. +C +C WORK(1) UROUND, THE ROUNDING UNIT, DEFAULT 2.3D-16. +C +C WORK(2) THE SAFETY FACTOR IN STEP SIZE PREDICTION, +C DEFAULT 0.9D0. +C +C WORK(3), WORK(4) PARAMETERS FOR STEP SIZE SELECTION +C THE NEW STEP SIZE IS CHOSEN SUBJECT TO THE RESTRICTION +C WORK(3) <= HNEW/HOLD <= WORK(4) +C DEFAULT VALUES: WORK(3)=0.333D0, WORK(4)=6.D0 +C +C WORK(5) IS THE "BETA" FOR STABILIZED STEP SIZE CONTROL +C (SEE SECTION IV.2). POSITIVE VALUES OF BETA ( <= 0.04 ) +C MAKE THE STEP SIZE CONTROL MORE STABLE. +C NEGATIVE WORK(5) PROVOKE BETA=0. +C DEFAULT 0.0D0. +C +C WORK(6) MAXIMAL STEP SIZE, DEFAULT XEND-X. +C +C WORK(7) INITIAL STEP SIZE, FOR WORK(7)=0.D0 AN INITIAL GUESS +C IS COMPUTED WITH HELP OF THE FUNCTION HINIT +C +C IWORK(1) THIS IS THE MAXIMAL NUMBER OF ALLOWED STEPS. +C THE DEFAULT VALUE (FOR IWORK(1)=0) IS 100000. +C +C IWORK(2) SWITCH FOR THE CHOICE OF THE COEFFICIENTS +C IF IWORK(2).EQ.1 METHOD DOP853 OF DORMAND AND PRINCE +C (SECTION II.6). +C THE DEFAULT VALUE (FOR IWORK(2)=0) IS IWORK(2)=1. +C +C IWORK(3) SWITCH FOR PRINTING ERROR MESSAGES +C IF IWORK(3).LT.0 NO MESSAGES ARE BEING PRINTED +C IF IWORK(3).GT.0 MESSAGES ARE PRINTED WITH +C WRITE (IWORK(3),*) ... +C DEFAULT VALUE (FOR IWORK(3)=0) IS IWORK(3)=6 +C +C IWORK(4) TEST FOR STIFFNESS IS ACTIVATED AFTER STEP NUMBER +C J*IWORK(4) (J INTEGER), PROVIDED IWORK(4).GT.0. +C FOR NEGATIVE IWORK(4) THE STIFFNESS TEST IS +C NEVER ACTIVATED; DEFAULT VALUE IS IWORK(4)=1000 +C +C IWORK(5) = NRDENS = NUMBER OF COMPONENTS, FOR WHICH DENSE OUTPUT +C IS REQUIRED; DEFAULT VALUE IS IWORK(5)=0; +C FOR 0 < NRDENS < N THE COMPONENTS (FOR WHICH DENSE +C OUTPUT IS REQUIRED) HAVE TO BE SPECIFIED IN +C IWORK(21),...,IWORK(NRDENS+20); +C FOR NRDENS=N THIS IS DONE BY THE CODE. +C +C---------------------------------------------------------------------- +C +C OUTPUT PARAMETERS +C ----------------- +C X X-VALUE FOR WHICH THE SOLUTION HAS BEEN COMPUTED +C (AFTER SUCCESSFUL RETURN X=XEND). +C +C Y(N) NUMERICAL SOLUTION AT X +C +C H PREDICTED STEP SIZE OF THE LAST ACCEPTED STEP +C +C IDID REPORTS ON SUCCESSFULNESS UPON RETURN: +C IDID= 1 COMPUTATION SUCCESSFUL, +C IDID= 2 COMPUT. SUCCESSFUL (INTERRUPTED BY SOLOUT) +C IDID=-1 INPUT IS NOT CONSISTENT, +C IDID=-2 LARGER NMAX IS NEEDED, +C IDID=-3 STEP SIZE BECOMES TOO SMALL. +C IDID=-4 PROBLEM IS PROBABLY STIFF (INTERRUPTED). +C +C IWORK(17) NFCN NUMBER OF FUNCTION EVALUATIONS +C IWORK(18) NSTEP NUMBER OF COMPUTED STEPS +C IWORK(19) NACCPT NUMBER OF ACCEPTED STEPS +C IWORK(20) NREJCT NUMBER OF REJECTED STEPS (DUE TO ERROR TEST), +C (STEP REJECTIONS IN THE FIRST STEP ARE NOT COUNTED) +C----------------------------------------------------------------------- +C *** *** *** *** *** *** *** *** *** *** *** *** *** +C DECLARATIONS +C *** *** *** *** *** *** *** *** *** *** *** *** *** + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + DIMENSION Y(N),ATOL(*),RTOL(*),WORK(LWORK),IWORK(LIWORK) + DIMENSION RPAR(*),IPAR(*) + LOGICAL ARRET + EXTERNAL FCN,SOLOUT +C *** *** *** *** *** *** *** +C SETTING THE PARAMETERS +C *** *** *** *** *** *** *** + NFCN=0 + NSTEP=0 + NACCPT=0 + NREJCT=0 + ARRET=.FALSE. 
+C -------- IPRINT FOR MONITORING THE PRINTING + IF(IWORK(3).EQ.0)THEN + IPRINT=6 + ELSE + IPRINT=IWORK(3) + END IF +C -------- NMAX , THE MAXIMAL NUMBER OF STEPS ----- + IF(IWORK(1).EQ.0)THEN + NMAX=100000 + ELSE + NMAX=IWORK(1) + IF(NMAX.LE.0)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' WRONG INPUT IWORK(1)=',IWORK(1) + ARRET=.TRUE. + END IF + END IF +C -------- METH COEFFICIENTS OF THE METHOD + IF(IWORK(2).EQ.0)THEN + METH=1 + ELSE + METH=IWORK(2) + IF(METH.LE.0.OR.METH.GE.4)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT IWORK(2)=',IWORK(2) + ARRET=.TRUE. + END IF + END IF +C -------- NSTIFF PARAMETER FOR STIFFNESS DETECTION + NSTIFF=IWORK(4) + IF (NSTIFF.EQ.0) NSTIFF=1000 + IF (NSTIFF.LT.0) NSTIFF=NMAX+10 +C -------- NRDENS NUMBER OF DENSE OUTPUT COMPONENTS + NRDENS=IWORK(5) + IF(NRDENS.LT.0.OR.NRDENS.GT.N)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT IWORK(5)=',IWORK(5) + ARRET=.TRUE. + ELSE + IF(NRDENS.GT.0.AND.IOUT.LT.2)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' WARNING: PUT IOUT=2 FOR DENSE OUTPUT ' + END IF + IF (NRDENS.EQ.N) THEN + DO I=1,NRDENS + IWORK(I+20)=I + END DO + END IF + END IF +C -------- UROUND SMALLEST NUMBER SATISFYING 1.D0+UROUND>1.D0 + IF(WORK(1).EQ.0.D0)THEN + UROUND=2.3D-16 + ELSE + UROUND=WORK(1) + IF(UROUND.LE.1.D-35.OR.UROUND.GE.1.D0)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' WHICH MACHINE DO YOU HAVE? YOUR UROUND WAS:',WORK(1) + ARRET=.TRUE. + END IF + END IF +C ------- SAFETY FACTOR ------------- + IF(WORK(2).EQ.0.D0)THEN + SAFE=0.9D0 + ELSE + SAFE=WORK(2) + IF(SAFE.GE.1.D0.OR.SAFE.LE.1.D-4)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT FOR SAFETY FACTOR WORK(2)=',WORK(2) + ARRET=.TRUE. + END IF + END IF +C ------- FAC1,FAC2 PARAMETERS FOR STEP SIZE SELECTION + IF(WORK(3).EQ.0.D0)THEN + FAC1=0.333D0 + ELSE + FAC1=WORK(3) + END IF + IF(WORK(4).EQ.0.D0)THEN + FAC2=6.D0 + ELSE + FAC2=WORK(4) + END IF +C --------- BETA FOR STEP CONTROL STABILIZATION ----------- + IF(WORK(5).EQ.0.D0)THEN + BETA=0.0D0 + ELSE + IF(WORK(5).LT.0.D0)THEN + BETA=0.D0 + ELSE + BETA=WORK(5) + IF(BETA.GT.0.2D0)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT FOR BETA: WORK(5)=',WORK(5) + ARRET=.TRUE. + END IF + END IF + END IF +C -------- MAXIMAL STEP SIZE + IF(WORK(6).EQ.0.D0)THEN + HMAX=XEND-X + ELSE + HMAX=WORK(6) + END IF +C -------- INITIAL STEP SIZE + H=WORK(7) +C ------- PREPARE THE ENTRY-POINTS FOR THE ARRAYS IN WORK ----- + IEK1=21 + IEK2=IEK1+N + IEK3=IEK2+N + IEK4=IEK3+N + IEK5=IEK4+N + IEK6=IEK5+N + IEK7=IEK6+N + IEK8=IEK7+N + IEK9=IEK8+N + IEK10=IEK9+N + IEY1=IEK10+N + IECO=IEY1+N +C ------ TOTAL STORAGE REQUIREMENT ----------- + ISTORE=IECO+8*NRDENS-1 + IF(ISTORE.GT.LWORK)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' INSUFFICIENT STORAGE FOR WORK, MIN. LWORK=',ISTORE + ARRET=.TRUE. + END IF + ICOMP=21 + ISTORE=ICOMP+NRDENS-1 + IF(ISTORE.GT.LIWORK)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' INSUFFICIENT STORAGE FOR IWORK, MIN. LIWORK=',ISTORE + ARRET=.TRUE. 
+ END IF +C -------- WHEN A FAIL HAS OCCURED, WE RETURN WITH IDID=-1 + IF (ARRET) THEN + IDID=-1 + RETURN + END IF +C -------- CALL TO CORE INTEGRATOR ------------ + CALL DP86CO(N,FCN,X,Y,XEND,HMAX,H,RTOL,ATOL,ITOL,IPRINT, + & SOLOUT,IOUT,IDID,NMAX,UROUND,METH,NSTIFF,SAFE,BETA,FAC1,FAC2, + & WORK(IEK1),WORK(IEK2),WORK(IEK3),WORK(IEK4),WORK(IEK5), + & WORK(IEK6),WORK(IEK7),WORK(IEK8),WORK(IEK9),WORK(IEK10), + & WORK(IEY1),WORK(IECO),IWORK(ICOMP),NRDENS,RPAR,IPAR, + & NFCN,NSTEP,NACCPT,NREJCT) + WORK(7)=H + IWORK(17)=NFCN + IWORK(18)=NSTEP + IWORK(19)=NACCPT + IWORK(20)=NREJCT +C ----------- RETURN ----------- + RETURN + END +C +C +C +C ----- ... AND HERE IS THE CORE INTEGRATOR ---------- +C + SUBROUTINE DP86CO(N,FCN,X,Y,XEND,HMAX,H,RTOL,ATOL,ITOL,IPRINT, + & SOLOUT,IOUT,IDID,NMAX,UROUND,METH,NSTIFF,SAFE,BETA,FAC1,FAC2, + & K1,K2,K3,K4,K5,K6,K7,K8,K9,K10,Y1,CONT,ICOMP,NRD,RPAR,IPAR, + & NFCN,NSTEP,NACCPT,NREJCT) +C ---------------------------------------------------------- +C CORE INTEGRATOR FOR DOP853 +C PARAMETERS SAME AS IN DOP853 WITH WORKSPACE ADDED +C ---------------------------------------------------------- +C DECLARATIONS +C ---------------------------------------------------------- + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + parameter ( + & c2 = 0.526001519587677318785587544488D-01, + & c3 = 0.789002279381515978178381316732D-01, + & c4 = 0.118350341907227396726757197510D+00, + & c5 = 0.281649658092772603273242802490D+00, + & c6 = 0.333333333333333333333333333333D+00, + & c7 = 0.25D+00, + & c8 = 0.307692307692307692307692307692D+00, + & c9 = 0.651282051282051282051282051282D+00, + & c10 = 0.6D+00, + & c11 = 0.857142857142857142857142857142D+00, + & c14 = 0.1D+00, + & c15 = 0.2D+00, + & c16 = 0.777777777777777777777777777778D+00) + parameter ( + & b1 = 5.42937341165687622380535766363D-2, + & b6 = 4.45031289275240888144113950566D0, + & b7 = 1.89151789931450038304281599044D0, + & b8 = -5.8012039600105847814672114227D0, + & b9 = 3.1116436695781989440891606237D-1, + & b10 = -1.52160949662516078556178806805D-1, + & b11 = 2.01365400804030348374776537501D-1, + & b12 = 4.47106157277725905176885569043D-2) + parameter ( + & bhh1 = 0.244094488188976377952755905512D+00, + & bhh2 = 0.733846688281611857341361741547D+00, + & bhh3 = 0.220588235294117647058823529412D-01) + parameter ( + & er 1 = 0.1312004499419488073250102996D-01, + & er 6 = -0.1225156446376204440720569753D+01, + & er 7 = -0.4957589496572501915214079952D+00, + & er 8 = 0.1664377182454986536961530415D+01, + & er 9 = -0.3503288487499736816886487290D+00, + & er10 = 0.3341791187130174790297318841D+00, + & er11 = 0.8192320648511571246570742613D-01, + & er12 = -0.2235530786388629525884427845D-01) + parameter ( + & a21 = 5.26001519587677318785587544488D-2, + & a31 = 1.97250569845378994544595329183D-2, + & a32 = 5.91751709536136983633785987549D-2, + & a41 = 2.95875854768068491816892993775D-2, + & a43 = 8.87627564304205475450678981324D-2, + & a51 = 2.41365134159266685502369798665D-1, + & a53 = -8.84549479328286085344864962717D-1, + & a54 = 9.24834003261792003115737966543D-1, + & a61 = 3.7037037037037037037037037037D-2, + & a64 = 1.70828608729473871279604482173D-1, + & a65 = 1.25467687566822425016691814123D-1, + & a71 = 3.7109375D-2, + & a74 = 1.70252211019544039314978060272D-1, + & a75 = 6.02165389804559606850219397283D-2, + & a76 = -1.7578125D-2) + parameter ( + & a81 = 3.70920001185047927108779319836D-2, + & a84 = 1.70383925712239993810214054705D-1, + & a85 = 1.07262030446373284651809199168D-1, + & a86 = -1.53194377486244017527936158236D-2, + & 
a87 = 8.27378916381402288758473766002D-3, + & a91 = 6.24110958716075717114429577812D-1, + & a94 = -3.36089262944694129406857109825D0, + & a95 = -8.68219346841726006818189891453D-1, + & a96 = 2.75920996994467083049415600797D1, + & a97 = 2.01540675504778934086186788979D1, + & a98 = -4.34898841810699588477366255144D1, + & a101 = 4.77662536438264365890433908527D-1, + & a104 = -2.48811461997166764192642586468D0, + & a105 = -5.90290826836842996371446475743D-1, + & a106 = 2.12300514481811942347288949897D1, + & a107 = 1.52792336328824235832596922938D1, + & a108 = -3.32882109689848629194453265587D1, + & a109 = -2.03312017085086261358222928593D-2) + parameter ( + & a111 = -9.3714243008598732571704021658D-1, + & a114 = 5.18637242884406370830023853209D0, + & a115 = 1.09143734899672957818500254654D0, + & a116 = -8.14978701074692612513997267357D0, + & a117 = -1.85200656599969598641566180701D1, + & a118 = 2.27394870993505042818970056734D1, + & a119 = 2.49360555267965238987089396762D0, + & a1110 = -3.0467644718982195003823669022D0, + & a121 = 2.27331014751653820792359768449D0, + & a124 = -1.05344954667372501984066689879D1, + & a125 = -2.00087205822486249909675718444D0, + & a126 = -1.79589318631187989172765950534D1, + & a127 = 2.79488845294199600508499808837D1, + & a128 = -2.85899827713502369474065508674D0, + & a129 = -8.87285693353062954433549289258D0, + & a1210 = 1.23605671757943030647266201528D1, + & a1211 = 6.43392746015763530355970484046D-1) + parameter ( + & a141 = 5.61675022830479523392909219681D-2, + & a147 = 2.53500210216624811088794765333D-1, + & a148 = -2.46239037470802489917441475441D-1, + & a149 = -1.24191423263816360469010140626D-1, + & a1410 = 1.5329179827876569731206322685D-1, + & a1411 = 8.20105229563468988491666602057D-3, + & a1412 = 7.56789766054569976138603589584D-3, + & a1413 = -8.298D-3) + parameter ( + & a151 = 3.18346481635021405060768473261D-2, + & a156 = 2.83009096723667755288322961402D-2, + & a157 = 5.35419883074385676223797384372D-2, + & a158 = -5.49237485713909884646569340306D-2, + & a1511 = -1.08347328697249322858509316994D-4, + & a1512 = 3.82571090835658412954920192323D-4, + & a1513 = -3.40465008687404560802977114492D-4, + & a1514 = 1.41312443674632500278074618366D-1, + & a161 = -4.28896301583791923408573538692D-1, + & a166 = -4.69762141536116384314449447206D0, + & a167 = 7.68342119606259904184240953878D0, + & a168 = 4.06898981839711007970213554331D0, + & a169 = 3.56727187455281109270669543021D-1, + & a1613 = -1.39902416515901462129418009734D-3, + & a1614 = 2.9475147891527723389556272149D0, + & a1615 = -9.15095847217987001081870187138D0) + parameter ( + & d41 = -0.84289382761090128651353491142D+01, + & d46 = 0.56671495351937776962531783590D+00, + & d47 = -0.30689499459498916912797304727D+01, + & d48 = 0.23846676565120698287728149680D+01, + & d49 = 0.21170345824450282767155149946D+01, + & d410 = -0.87139158377797299206789907490D+00, + & d411 = 0.22404374302607882758541771650D+01, + & d412 = 0.63157877876946881815570249290D+00, + & d413 = -0.88990336451333310820698117400D-01, + & d414 = 0.18148505520854727256656404962D+02, + & d415 = -0.91946323924783554000451984436D+01, + & d416 = -0.44360363875948939664310572000D+01) + parameter ( + & d51 = 0.10427508642579134603413151009D+02, + & d56 = 0.24228349177525818288430175319D+03, + & d57 = 0.16520045171727028198505394887D+03, + & d58 = -0.37454675472269020279518312152D+03, + & d59 = -0.22113666853125306036270938578D+02, + & d510 = 0.77334326684722638389603898808D+01, + & d511 = -0.30674084731089398182061213626D+02, + & d512 = 
-0.93321305264302278729567221706D+01, + & d513 = 0.15697238121770843886131091075D+02, + & d514 = -0.31139403219565177677282850411D+02, + & d515 = -0.93529243588444783865713862664D+01, + & d516 = 0.35816841486394083752465898540D+02) + parameter ( + & d61 = 0.19985053242002433820987653617D+02, + & d66 = -0.38703730874935176555105901742D+03, + & d67 = -0.18917813819516756882830838328D+03, + & d68 = 0.52780815920542364900561016686D+03, + & d69 = -0.11573902539959630126141871134D+02, + & d610 = 0.68812326946963000169666922661D+01, + & d611 = -0.10006050966910838403183860980D+01, + & d612 = 0.77771377980534432092869265740D+00, + & d613 = -0.27782057523535084065932004339D+01, + & d614 = -0.60196695231264120758267380846D+02, + & d615 = 0.84320405506677161018159903784D+02, + & d616 = 0.11992291136182789328035130030D+02) + parameter ( + & d71 = -0.25693933462703749003312586129D+02, + & d76 = -0.15418974869023643374053993627D+03, + & d77 = -0.23152937917604549567536039109D+03, + & d78 = 0.35763911791061412378285349910D+03, + & d79 = 0.93405324183624310003907691704D+02, + & d710 = -0.37458323136451633156875139351D+02, + & d711 = 0.10409964950896230045147246184D+03, + & d712 = 0.29840293426660503123344363579D+02, + & d713 = -0.43533456590011143754432175058D+02, + & d714 = 0.96324553959188282948394950600D+02, + & d715 = -0.39177261675615439165231486172D+02, + & d716 = -0.14972683625798562581422125276D+03) + DOUBLE PRECISION Y(N),Y1(N),K1(N),K2(N),K3(N),K4(N),K5(N),K6(N) + DOUBLE PRECISION K7(N),K8(N),K9(N),K10(N),ATOL(*),RTOL(*) + DIMENSION CONT(8*NRD),ICOMP(NRD),RPAR(*),IPAR(*) + LOGICAL REJECT,LAST + EXTERNAL FCN + COMMON /CONDO8/XOLD,HOUT +C *** *** *** *** *** *** *** +C INITIALISATIONS +C *** *** *** *** *** *** *** + FACOLD=1.D-4 + EXPO1=1.d0/8.d0-BETA*0.2D0 + FACC1=1.D0/FAC1 + FACC2=1.D0/FAC2 + POSNEG=SIGN(1.D0,XEND-X) +C --- INITIAL PREPARATIONS + ATOLI=ATOL(1) + RTOLI=RTOL(1) + LAST=.FALSE. + HLAMB=0.D0 + IASTI=0 + CALL FCN(N,X,Y,K1,RPAR,IPAR) + HMAX=ABS(HMAX) + IORD=8 + IF (H.EQ.0.D0) H=HINIT853(N,FCN,X,Y,XEND,POSNEG,K1,K2,K3,IORD, + & HMAX,ATOL,RTOL,ITOL,RPAR,IPAR) + NFCN=NFCN+2 + REJECT=.FALSE. + XOLD=X + IF (IOUT.GE.1) THEN + IRTRN=1 + HOUT=1.D0 + CALL SOLOUT(NACCPT+1,XOLD,X,Y,N,CONT,ICOMP,NRD, + & RPAR,IPAR,IRTRN) + IF (IRTRN.LT.0) GOTO 79 + END IF +C --- BASIC INTEGRATION STEP + 1 CONTINUE + IF (NSTEP.GT.NMAX) GOTO 78 + IF (0.1D0*ABS(H).LE.ABS(X)*UROUND)GOTO 77 + IF ((X+1.01D0*H-XEND)*POSNEG.GT.0.D0) THEN + H=XEND-X + LAST=.TRUE. 
+ END IF + NSTEP=NSTEP+1 +C --- THE TWELVE STAGES + IF (IRTRN.GE.2) THEN + CALL FCN(N,X,Y,K1,RPAR,IPAR) + END IF + DO 22 I=1,N + 22 Y1(I)=Y(I)+H*A21*K1(I) + CALL FCN(N,X+C2*H,Y1,K2,RPAR,IPAR) + DO 23 I=1,N + 23 Y1(I)=Y(I)+H*(A31*K1(I)+A32*K2(I)) + CALL FCN(N,X+C3*H,Y1,K3,RPAR,IPAR) + DO 24 I=1,N + 24 Y1(I)=Y(I)+H*(A41*K1(I)+A43*K3(I)) + CALL FCN(N,X+C4*H,Y1,K4,RPAR,IPAR) + DO 25 I=1,N + 25 Y1(I)=Y(I)+H*(A51*K1(I)+A53*K3(I)+A54*K4(I)) + CALL FCN(N,X+C5*H,Y1,K5,RPAR,IPAR) + DO 26 I=1,N + 26 Y1(I)=Y(I)+H*(A61*K1(I)+A64*K4(I)+A65*K5(I)) + CALL FCN(N,X+C6*H,Y1,K6,RPAR,IPAR) + DO 27 I=1,N + 27 Y1(I)=Y(I)+H*(A71*K1(I)+A74*K4(I)+A75*K5(I)+A76*K6(I)) + CALL FCN(N,X+C7*H,Y1,K7,RPAR,IPAR) + DO 28 I=1,N + 28 Y1(I)=Y(I)+H*(A81*K1(I)+A84*K4(I)+A85*K5(I)+A86*K6(I)+A87*K7(I)) + CALL FCN(N,X+C8*H,Y1,K8,RPAR,IPAR) + DO 29 I=1,N + 29 Y1(I)=Y(I)+H*(A91*K1(I)+A94*K4(I)+A95*K5(I)+A96*K6(I)+A97*K7(I) + & +A98*K8(I)) + CALL FCN(N,X+C9*H,Y1,K9,RPAR,IPAR) + DO 30 I=1,N + 30 Y1(I)=Y(I)+H*(A101*K1(I)+A104*K4(I)+A105*K5(I)+A106*K6(I) + & +A107*K7(I)+A108*K8(I)+A109*K9(I)) + CALL FCN(N,X+C10*H,Y1,K10,RPAR,IPAR) + DO 31 I=1,N + 31 Y1(I)=Y(I)+H*(A111*K1(I)+A114*K4(I)+A115*K5(I)+A116*K6(I) + & +A117*K7(I)+A118*K8(I)+A119*K9(I)+A1110*K10(I)) + CALL FCN(N,X+C11*H,Y1,K2,RPAR,IPAR) + XPH=X+H + DO 32 I=1,N + 32 Y1(I)=Y(I)+H*(A121*K1(I)+A124*K4(I)+A125*K5(I)+A126*K6(I) + & +A127*K7(I)+A128*K8(I)+A129*K9(I)+A1210*K10(I)+A1211*K2(I)) + CALL FCN(N,XPH,Y1,K3,RPAR,IPAR) + NFCN=NFCN+11 + DO 35 I=1,N + K4(I)=B1*K1(I)+B6*K6(I)+B7*K7(I)+B8*K8(I)+B9*K9(I) + & +B10*K10(I)+B11*K2(I)+B12*K3(I) + 35 K5(I)=Y(I)+H*K4(I) +C --- ERROR ESTIMATION + ERR=0.D0 + ERR2=0.D0 + IF (ITOL.EQ.0) THEN + DO 41 I=1,N + SK=ATOLI+RTOLI*MAX(ABS(Y(I)),ABS(K5(I))) + ERRI=K4(I)-BHH1*K1(I)-BHH2*K9(I)-BHH3*K3(I) + ERR2=ERR2+(ERRI/SK)**2 + ERRI=ER1*K1(I)+ER6*K6(I)+ER7*K7(I)+ER8*K8(I)+ER9*K9(I) + & +ER10*K10(I)+ER11*K2(I)+ER12*K3(I) + 41 ERR=ERR+(ERRI/SK)**2 + ELSE + DO 42 I=1,N + SK=ATOL(I)+RTOL(I)*MAX(ABS(Y(I)),ABS(K5(I))) + ERRI=K4(I)-BHH1*K1(I)-BHH2*K9(I)-BHH3*K3(I) + ERR2=ERR2+(ERRI/SK)**2 + ERRI=ER1*K1(I)+ER6*K6(I)+ER7*K7(I)+ER8*K8(I)+ER9*K9(I) + & +ER10*K10(I)+ER11*K2(I)+ER12*K3(I) + 42 ERR=ERR+(ERRI/SK)**2 + END IF + DENO=ERR+0.01D0*ERR2 + IF (DENO.LE.0.D0) DENO=1.D0 + ERR=ABS(H)*ERR*SQRT(1.D0/(N*DENO)) +C --- COMPUTATION OF HNEW + FAC11=ERR**EXPO1 +C --- LUND-STABILIZATION + FAC=FAC11/FACOLD**BETA +C --- WE REQUIRE FAC1 <= HNEW/H <= FAC2 + FAC=MAX(FACC2,MIN(FACC1,FAC/SAFE)) + HNEW=H/FAC + IF(ERR.LE.1.D0)THEN +C --- STEP IS ACCEPTED + FACOLD=MAX(ERR,1.0D-4) + NACCPT=NACCPT+1 + CALL FCN(N,XPH,K5,K4,RPAR,IPAR) + NFCN=NFCN+1 +C ------- STIFFNESS DETECTION + IF (MOD(NACCPT,NSTIFF).EQ.0.OR.IASTI.GT.0) THEN + STNUM=0.D0 + STDEN=0.D0 + DO 64 I=1,N + STNUM=STNUM+(K4(I)-K3(I))**2 + STDEN=STDEN+(K5(I)-Y1(I))**2 + 64 CONTINUE + IF (STDEN.GT.0.D0) HLAMB=ABS(H)*SQRT(STNUM/STDEN) + IF (HLAMB.GT.6.1D0) THEN + NONSTI=0 + IASTI=IASTI+1 + IF (IASTI.EQ.15) THEN + IF (IPRINT.GT.0) WRITE (IPRINT,*) + & ' THE PROBLEM SEEMS TO BECOME STIFF AT X = ',X + IF (IPRINT.LE.0) GOTO 76 + END IF + ELSE + NONSTI=NONSTI+1 + IF (NONSTI.EQ.6) IASTI=0 + END IF + END IF +C ------- FINAL PREPARATION FOR DENSE OUTPUT + IF (IOUT.GE.2) THEN +C ---- SAVE THE FIRST FUNCTION EVALUATIONS + DO 62 J=1,NRD + I=ICOMP(J) + CONT(J)=Y(I) + YDIFF=K5(I)-Y(I) + CONT(J+NRD)=YDIFF + BSPL=H*K1(I)-YDIFF + CONT(J+NRD*2)=BSPL + CONT(J+NRD*3)=YDIFF-H*K4(I)-BSPL + CONT(J+NRD*4)=D41*K1(I)+D46*K6(I)+D47*K7(I)+D48*K8(I) + & +D49*K9(I)+D410*K10(I)+D411*K2(I)+D412*K3(I) + CONT(J+NRD*5)=D51*K1(I)+D56*K6(I)+D57*K7(I)+D58*K8(I) + & 
+D59*K9(I)+D510*K10(I)+D511*K2(I)+D512*K3(I) + CONT(J+NRD*6)=D61*K1(I)+D66*K6(I)+D67*K7(I)+D68*K8(I) + & +D69*K9(I)+D610*K10(I)+D611*K2(I)+D612*K3(I) + CONT(J+NRD*7)=D71*K1(I)+D76*K6(I)+D77*K7(I)+D78*K8(I) + & +D79*K9(I)+D710*K10(I)+D711*K2(I)+D712*K3(I) + 62 CONTINUE +C --- THE NEXT THREE FUNCTION EVALUATIONS + DO 51 I=1,N + 51 Y1(I)=Y(I)+H*(A141*K1(I)+A147*K7(I)+A148*K8(I) + & +A149*K9(I)+A1410*K10(I)+A1411*K2(I)+A1412*K3(I) + & +A1413*K4(I)) + CALL FCN(N,X+C14*H,Y1,K10,RPAR,IPAR) + DO 52 I=1,N + 52 Y1(I)=Y(I)+H*(A151*K1(I)+A156*K6(I)+A157*K7(I) + & +A158*K8(I)+A1511*K2(I)+A1512*K3(I)+A1513*K4(I) + & +A1514*K10(I)) + CALL FCN(N,X+C15*H,Y1,K2,RPAR,IPAR) + DO 53 I=1,N + 53 Y1(I)=Y(I)+H*(A161*K1(I)+A166*K6(I)+A167*K7(I) + & +A168*K8(I)+A169*K9(I)+A1613*K4(I)+A1614*K10(I) + & +A1615*K2(I)) + CALL FCN(N,X+C16*H,Y1,K3,RPAR,IPAR) + NFCN=NFCN+3 +C --- FINAL PREPARATION + DO 63 J=1,NRD + I=ICOMP(J) + CONT(J+NRD*4)=H*(CONT(J+NRD*4)+D413*K4(I)+D414*K10(I) + & +D415*K2(I)+D416*K3(I)) + CONT(J+NRD*5)=H*(CONT(J+NRD*5)+D513*K4(I)+D514*K10(I) + & +D515*K2(I)+D516*K3(I)) + CONT(J+NRD*6)=H*(CONT(J+NRD*6)+D613*K4(I)+D614*K10(I) + & +D615*K2(I)+D616*K3(I)) + CONT(J+NRD*7)=H*(CONT(J+NRD*7)+D713*K4(I)+D714*K10(I) + & +D715*K2(I)+D716*K3(I)) + 63 CONTINUE + HOUT=H + END IF + DO 67 I=1,N + K1(I)=K4(I) + 67 Y(I)=K5(I) + XOLD=X + X=XPH + IF (IOUT.GE.1) THEN + CALL SOLOUT(NACCPT+1,XOLD,X,Y,N,CONT,ICOMP,NRD, + & RPAR,IPAR,IRTRN) + IF (IRTRN.LT.0) GOTO 79 + END IF +C ------- NORMAL EXIT + IF (LAST) THEN + H=HNEW + IDID=1 + RETURN + END IF + IF(ABS(HNEW).GT.HMAX)HNEW=POSNEG*HMAX + IF(REJECT)HNEW=POSNEG*MIN(ABS(HNEW),ABS(H)) + REJECT=.FALSE. + ELSE +C --- STEP IS REJECTED + HNEW=H/MIN(FACC1,FAC11/SAFE) + REJECT=.TRUE. + IF(NACCPT.GE.1)NREJCT=NREJCT+1 + LAST=.FALSE. + END IF + H=HNEW + GOTO 1 +C --- FAIL EXIT + 76 CONTINUE + IDID=-4 + RETURN + 77 CONTINUE + IF (IPRINT.GT.0) WRITE(IPRINT,979)X + IF (IPRINT.GT.0) WRITE(IPRINT,*)' STEP SIZE TOO SMALL, H=',H + IDID=-3 + RETURN + 78 CONTINUE + IF (IPRINT.GT.0) WRITE(IPRINT,979)X + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' MORE THAN NMAX =',NMAX,'STEPS ARE NEEDED' + IDID=-2 + RETURN + 79 CONTINUE + IF (IPRINT.GT.0) WRITE(IPRINT,979)X + 979 FORMAT(' EXIT OF DOP853 AT X=',E18.4) + IDID=2 + RETURN + END +C + FUNCTION HINIT853(N,FCN,X,Y,XEND,POSNEG,F0,F1,Y1,IORD, + & HMAX,ATOL,RTOL,ITOL,RPAR,IPAR) +C ---------------------------------------------------------- +C ---- COMPUTATION OF AN INITIAL STEP SIZE GUESS +C ---------------------------------------------------------- + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + DIMENSION Y(N),Y1(N),F0(N),F1(N),ATOL(*),RTOL(*) + DIMENSION RPAR(*),IPAR(*) +C ---- COMPUTE A FIRST GUESS FOR EXPLICIT EULER AS +C ---- H = 0.01 * NORM (Y0) / NORM (F0) +C ---- THE INCREMENT FOR EXPLICIT EULER IS SMALL +C ---- COMPARED TO THE SOLUTION + DNF=0.0D0 + DNY=0.0D0 + ATOLI=ATOL(1) + RTOLI=RTOL(1) + IF (ITOL.EQ.0) THEN + DO 10 I=1,N + SK=ATOLI+RTOLI*ABS(Y(I)) + DNF=DNF+(F0(I)/SK)**2 + 10 DNY=DNY+(Y(I)/SK)**2 + ELSE + DO 11 I=1,N + SK=ATOL(I)+RTOL(I)*ABS(Y(I)) + DNF=DNF+(F0(I)/SK)**2 + 11 DNY=DNY+(Y(I)/SK)**2 + END IF + IF (DNF.LE.1.D-10.OR.DNY.LE.1.D-10) THEN + H=1.0D-6 + ELSE + H=SQRT(DNY/DNF)*0.01D0 + END IF + H=MIN(H,HMAX) + H=SIGN(H,POSNEG) +C ---- PERFORM AN EXPLICIT EULER STEP + DO 12 I=1,N + 12 Y1(I)=Y(I)+H*F0(I) + CALL FCN(N,X+H,Y1,F1,RPAR,IPAR) +C ---- ESTIMATE THE SECOND DERIVATIVE OF THE SOLUTION + DER2=0.0D0 + IF (ITOL.EQ.0) THEN + DO 15 I=1,N + SK=ATOLI+RTOLI*ABS(Y(I)) + 15 DER2=DER2+((F1(I)-F0(I))/SK)**2 + ELSE + DO 16 I=1,N + SK=ATOL(I)+RTOL(I)*ABS(Y(I)) 
+ 16 DER2=DER2+((F1(I)-F0(I))/SK)**2 + END IF + DER2=SQRT(DER2)/H +C ---- STEP SIZE IS COMPUTED SUCH THAT +C ---- H**IORD * MAX ( NORM (F0), NORM (DER2)) = 0.01 + DER12=MAX(ABS(DER2),SQRT(DNF)) + IF (DER12.LE.1.D-15) THEN + H1=MAX(1.0D-6,ABS(H)*1.0D-3) + ELSE + H1=(0.01D0/DER12)**(1.D0/IORD) + END IF + H=MIN(100*ABS(H),H1,HMAX) + HINIT853=SIGN(H,POSNEG) + RETURN + END +C + FUNCTION CONTD8(II,X,CON,ICOMP,ND) +C ---------------------------------------------------------- +C THIS FUNCTION CAN BE USED FOR CONINUOUS OUTPUT IN CONNECTION +C WITH THE OUTPUT-SUBROUTINE FOR DOP853. IT PROVIDES AN +C APPROXIMATION TO THE II-TH COMPONENT OF THE SOLUTION AT X. +C ---------------------------------------------------------- + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + DIMENSION CON(8*ND),ICOMP(ND) + COMMON /CONDO8/XOLD,H +C ----- COMPUTE PLACE OF II-TH COMPONENT + I=0 + DO 5 J=1,ND + IF (ICOMP(J).EQ.II) I=J + 5 CONTINUE + IF (I.EQ.0) THEN + WRITE (6,*) ' NO DENSE OUTPUT AVAILABLE FOR COMP.',II + RETURN + END IF + S=(X-XOLD)/H + S1=1.D0-S + CONPAR=CON(I+ND*4)+S*(CON(I+ND*5)+S1*(CON(I+ND*6)+S*CON(I+ND*7))) + CONTD8=CON(I)+S*(CON(I+ND)+S1*(CON(I+ND*2)+S*(CON(I+ND*3) + & +S1*CONPAR))) + RETURN + END + diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/dop/dopri5.f python-scipy-0.8.0+dfsg1/scipy/integrate/dop/dopri5.f --- python-scipy-0.7.2+dfsg1/scipy/integrate/dop/dopri5.f 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/dop/dopri5.f 2010-07-26 15:48:30.000000000 +0100 @@ -0,0 +1,693 @@ + SUBROUTINE DOPRI5(N,FCN,X,Y,XEND, + & RTOL,ATOL,ITOL, + & SOLOUT,IOUT, + & WORK,LWORK,IWORK,LIWORK,RPAR,IPAR,IDID) +C ---------------------------------------------------------- +C NUMERICAL SOLUTION OF A SYSTEM OF FIRST 0RDER +C ORDINARY DIFFERENTIAL EQUATIONS Y'=F(X,Y). +C THIS IS AN EXPLICIT RUNGE-KUTTA METHOD OF ORDER (4)5 +C DUE TO DORMAND & PRINCE (WITH STEPSIZE CONTROL AND +C DENSE OUTPUT). +C +C AUTHORS: E. HAIRER AND G. WANNER +C UNIVERSITE DE GENEVE, DEPT. DE MATHEMATIQUES +C CH-1211 GENEVE 24, SWITZERLAND +C E-MAIL: Ernst.Hairer@math.unige.ch +C Gerhard.Wanner@math.unige.ch +C +C THIS CODE IS DESCRIBED IN: +C E. HAIRER, S.P. NORSETT AND G. WANNER, SOLVING ORDINARY +C DIFFERENTIAL EQUATIONS I. NONSTIFF PROBLEMS. 2ND EDITION. +C SPRINGER SERIES IN COMPUTATIONAL MATHEMATICS, +C SPRINGER-VERLAG (1993) +C +C VERSION OF APRIL 25, 1996 +C (latest correction of a small bug: August 8, 2005) +C +C INPUT PARAMETERS +C ---------------- +C N DIMENSION OF THE SYSTEM +C +C FCN NAME (EXTERNAL) OF SUBROUTINE COMPUTING THE +C VALUE OF F(X,Y): +C SUBROUTINE FCN(N,X,Y,F,RPAR,IPAR) +C DOUBLE PRECISION X,Y(N),F(N) +C F(1)=... ETC. +C +C X INITIAL X-VALUE +C +C Y(N) INITIAL VALUES FOR Y +C +C XEND FINAL X-VALUE (XEND-X MAY BE POSITIVE OR NEGATIVE) +C +C RTOL,ATOL RELATIVE AND ABSOLUTE ERROR TOLERANCES. THEY +C CAN BE BOTH SCALARS OR ELSE BOTH VECTORS OF LENGTH N. +C +C ITOL SWITCH FOR RTOL AND ATOL: +C ITOL=0: BOTH RTOL AND ATOL ARE SCALARS. +C THE CODE KEEPS, ROUGHLY, THE LOCAL ERROR OF +C Y(I) BELOW RTOL*ABS(Y(I))+ATOL +C ITOL=1: BOTH RTOL AND ATOL ARE VECTORS. +C THE CODE KEEPS THE LOCAL ERROR OF Y(I) BELOW +C RTOL(I)*ABS(Y(I))+ATOL(I). +C +C SOLOUT NAME (EXTERNAL) OF SUBROUTINE PROVIDING THE +C NUMERICAL SOLUTION DURING INTEGRATION. +C IF IOUT.GE.1, IT IS CALLED AFTER EVERY SUCCESSFUL STEP. +C SUPPLY A DUMMY SUBROUTINE IF IOUT=0. +C IT MUST HAVE THE FORM +C SUBROUTINE SOLOUT (NR,XOLD,X,Y,N,CON,ICOMP,ND, +C RPAR,IPAR,IRTRN) +C DIMENSION Y(N),CON(5*ND),ICOMP(ND) +C .... 
+C SOLOUT FURNISHES THE SOLUTION "Y" AT THE NR-TH +C GRID-POINT "X" (THEREBY THE INITIAL VALUE IS +C THE FIRST GRID-POINT). +C "XOLD" IS THE PRECEEDING GRID-POINT. +C "IRTRN" SERVES TO INTERRUPT THE INTEGRATION. IF IRTRN +C IS SET <0, DOPRI5 WILL RETURN TO THE CALLING PROGRAM. +C IF THE NUMERICAL SOLUTION IS ALTERED IN SOLOUT, +C SET IRTRN = 2 +C +C ----- CONTINUOUS OUTPUT: ----- +C DURING CALLS TO "SOLOUT", A CONTINUOUS SOLUTION +C FOR THE INTERVAL [XOLD,X] IS AVAILABLE THROUGH +C THE FUNCTION +C >>> CONTD5(I,S,CON,ICOMP,ND) <<< +C WHICH PROVIDES AN APPROXIMATION TO THE I-TH +C COMPONENT OF THE SOLUTION AT THE POINT S. THE VALUE +C S SHOULD LIE IN THE INTERVAL [XOLD,X]. +C +C IOUT SWITCH FOR CALLING THE SUBROUTINE SOLOUT: +C IOUT=0: SUBROUTINE IS NEVER CALLED +C IOUT=1: SUBROUTINE IS USED FOR OUTPUT. +C IOUT=2: DENSE OUTPUT IS PERFORMED IN SOLOUT +C (IN THIS CASE WORK(5) MUST BE SPECIFIED) +C +C WORK ARRAY OF WORKING SPACE OF LENGTH "LWORK". +C WORK(1),...,WORK(20) SERVE AS PARAMETERS FOR THE CODE. +C FOR STANDARD USE, SET THEM TO ZERO BEFORE CALLING. +C "LWORK" MUST BE AT LEAST 8*N+5*NRDENS+21 +C WHERE NRDENS = IWORK(5) +C +C LWORK DECLARED LENGHT OF ARRAY "WORK". +C +C IWORK INTEGER WORKING SPACE OF LENGHT "LIWORK". +C IWORK(1),...,IWORK(20) SERVE AS PARAMETERS FOR THE CODE. +C FOR STANDARD USE, SET THEM TO ZERO BEFORE CALLING. +C "LIWORK" MUST BE AT LEAST NRDENS+21 . +C +C LIWORK DECLARED LENGHT OF ARRAY "IWORK". +C +C RPAR, IPAR REAL AND INTEGER PARAMETERS (OR PARAMETER ARRAYS) WHICH +C CAN BE USED FOR COMMUNICATION BETWEEN YOUR CALLING +C PROGRAM AND THE FCN, JAC, MAS, SOLOUT SUBROUTINES. +C +C----------------------------------------------------------------------- +C +C SOPHISTICATED SETTING OF PARAMETERS +C ----------------------------------- +C SEVERAL PARAMETERS (WORK(1),...,IWORK(1),...) ALLOW +C TO ADAPT THE CODE TO THE PROBLEM AND TO THE NEEDS OF +C THE USER. FOR ZERO INPUT, THE CODE CHOOSES DEFAULT VALUES. +C +C WORK(1) UROUND, THE ROUNDING UNIT, DEFAULT 2.3D-16. +C +C WORK(2) THE SAFETY FACTOR IN STEP SIZE PREDICTION, +C DEFAULT 0.9D0. +C +C WORK(3), WORK(4) PARAMETERS FOR STEP SIZE SELECTION +C THE NEW STEP SIZE IS CHOSEN SUBJECT TO THE RESTRICTION +C WORK(3) <= HNEW/HOLD <= WORK(4) +C DEFAULT VALUES: WORK(3)=0.2D0, WORK(4)=10.D0 +C +C WORK(5) IS THE "BETA" FOR STABILIZED STEP SIZE CONTROL +C (SEE SECTION IV.2). LARGER VALUES OF BETA ( <= 0.1 ) +C MAKE THE STEP SIZE CONTROL MORE STABLE. DOPRI5 NEEDS +C A LARGER BETA THAN HIGHAM & HALL. NEGATIVE WORK(5) +C PROVOKE BETA=0. +C DEFAULT 0.04D0. +C +C WORK(6) MAXIMAL STEP SIZE, DEFAULT XEND-X. +C +C WORK(7) INITIAL STEP SIZE, FOR WORK(7)=0.D0 AN INITIAL GUESS +C IS COMPUTED WITH HELP OF THE FUNCTION HINIT +C +C IWORK(1) THIS IS THE MAXIMAL NUMBER OF ALLOWED STEPS. +C THE DEFAULT VALUE (FOR IWORK(1)=0) IS 100000. +C +C IWORK(2) SWITCH FOR THE CHOICE OF THE COEFFICIENTS +C IF IWORK(2).EQ.1 METHOD DOPRI5 OF DORMAND AND PRINCE +C (TABLE 5.2 OF SECTION II.5). +C AT THE MOMENT THIS IS THE ONLY POSSIBLE CHOICE. +C THE DEFAULT VALUE (FOR IWORK(2)=0) IS IWORK(2)=1. +C +C IWORK(3) SWITCH FOR PRINTING ERROR MESSAGES +C IF IWORK(3).LT.0 NO MESSAGES ARE BEING PRINTED +C IF IWORK(3).GT.0 MESSAGES ARE PRINTED WITH +C WRITE (IWORK(3),*) ... +C DEFAULT VALUE (FOR IWORK(3)=0) IS IWORK(3)=6 +C +C IWORK(4) TEST FOR STIFFNESS IS ACTIVATED AFTER STEP NUMBER +C J*IWORK(4) (J INTEGER), PROVIDED IWORK(4).GT.0. 
+C FOR NEGATIVE IWORK(4) THE STIFFNESS TEST IS +C NEVER ACTIVATED; DEFAULT VALUE IS IWORK(4)=1000 +C +C IWORK(5) = NRDENS = NUMBER OF COMPONENTS, FOR WHICH DENSE OUTPUT +C IS REQUIRED; DEFAULT VALUE IS IWORK(5)=0; +C FOR 0 < NRDENS < N THE COMPONENTS (FOR WHICH DENSE +C OUTPUT IS REQUIRED) HAVE TO BE SPECIFIED IN +C IWORK(21),...,IWORK(NRDENS+20); +C FOR NRDENS=N THIS IS DONE BY THE CODE. +C +C---------------------------------------------------------------------- +C +C OUTPUT PARAMETERS +C ----------------- +C X X-VALUE FOR WHICH THE SOLUTION HAS BEEN COMPUTED +C (AFTER SUCCESSFUL RETURN X=XEND). +C +C Y(N) NUMERICAL SOLUTION AT X +C +C H PREDICTED STEP SIZE OF THE LAST ACCEPTED STEP +C +C IDID REPORTS ON SUCCESSFULNESS UPON RETURN: +C IDID= 1 COMPUTATION SUCCESSFUL, +C IDID= 2 COMPUT. SUCCESSFUL (INTERRUPTED BY SOLOUT) +C IDID=-1 INPUT IS NOT CONSISTENT, +C IDID=-2 LARGER NMAX IS NEEDED, +C IDID=-3 STEP SIZE BECOMES TOO SMALL. +C IDID=-4 PROBLEM IS PROBABLY STIFF (INTERRUPTED). +C +C IWORK(17) NFCN NUMBER OF FUNCTION EVALUATIONS +C IWORK(18) NSTEP NUMBER OF COMPUTED STEPS +C IWORK(19) NACCPT NUMBER OF ACCEPTED STEPS +C IWORK(20) NREJCT NUMBER OF REJECTED STEPS (DUE TO ERROR TEST), +C (STEP REJECTIONS IN THE FIRST STEP ARE NOT COUNTED) +C----------------------------------------------------------------------- +C *** *** *** *** *** *** *** *** *** *** *** *** *** +C DECLARATIONS +C *** *** *** *** *** *** *** *** *** *** *** *** *** + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + DIMENSION Y(N),ATOL(*),RTOL(*),WORK(LWORK),IWORK(LIWORK) + DIMENSION RPAR(*),IPAR(*) + LOGICAL ARRET + EXTERNAL FCN,SOLOUT +C *** *** *** *** *** *** *** +C SETTING THE PARAMETERS +C *** *** *** *** *** *** *** + NFCN=0 + NSTEP=0 + NACCPT=0 + NREJCT=0 + ARRET=.FALSE. +C -------- IPRINT FOR MONITORING THE PRINTING + IF(IWORK(3).EQ.0)THEN + IPRINT=6 + ELSE + IPRINT=IWORK(3) + END IF +C -------- NMAX , THE MAXIMAL NUMBER OF STEPS ----- + IF(IWORK(1).EQ.0)THEN + NMAX=100000 + ELSE + NMAX=IWORK(1) + IF(NMAX.LE.0)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' WRONG INPUT IWORK(1)=',IWORK(1) + ARRET=.TRUE. + END IF + END IF +C -------- METH COEFFICIENTS OF THE METHOD + IF(IWORK(2).EQ.0)THEN + METH=1 + ELSE + METH=IWORK(2) + IF(METH.LE.0.OR.METH.GE.4)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT IWORK(2)=',IWORK(2) + ARRET=.TRUE. + END IF + END IF +C -------- NSTIFF PARAMETER FOR STIFFNESS DETECTION + NSTIFF=IWORK(4) + IF (NSTIFF.EQ.0) NSTIFF=1000 + IF (NSTIFF.LT.0) NSTIFF=NMAX+10 +C -------- NRDENS NUMBER OF DENSE OUTPUT COMPONENTS + NRDENS=IWORK(5) + IF(NRDENS.LT.0.OR.NRDENS.GT.N)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT IWORK(5)=',IWORK(5) + ARRET=.TRUE. + ELSE + IF(NRDENS.GT.0.AND.IOUT.LT.2)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' WARNING: PUT IOUT=2 FOR DENSE OUTPUT ' + END IF + IF (NRDENS.EQ.N) THEN + DO 16 I=1,NRDENS + 16 IWORK(20+I)=I + END IF + END IF +C -------- UROUND SMALLEST NUMBER SATISFYING 1.D0+UROUND>1.D0 + IF(WORK(1).EQ.0.D0)THEN + UROUND=2.3D-16 + ELSE + UROUND=WORK(1) + IF(UROUND.LE.1.D-35.OR.UROUND.GE.1.D0)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' WHICH MACHINE DO YOU HAVE? YOUR UROUND WAS:',WORK(1) + ARRET=.TRUE. + END IF + END IF +C ------- SAFETY FACTOR ------------- + IF(WORK(2).EQ.0.D0)THEN + SAFE=0.9D0 + ELSE + SAFE=WORK(2) + IF(SAFE.GE.1.D0.OR.SAFE.LE.1.D-4)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT FOR SAFETY FACTOR WORK(2)=',WORK(2) + ARRET=.TRUE. 
+ END IF + END IF +C ------- FAC1,FAC2 PARAMETERS FOR STEP SIZE SELECTION + IF(WORK(3).EQ.0.D0)THEN + FAC1=0.2D0 + ELSE + FAC1=WORK(3) + END IF + IF(WORK(4).EQ.0.D0)THEN + FAC2=10.D0 + ELSE + FAC2=WORK(4) + END IF +C --------- BETA FOR STEP CONTROL STABILIZATION ----------- + IF(WORK(5).EQ.0.D0)THEN + BETA=0.04D0 + ELSE + IF(WORK(5).LT.0.D0)THEN + BETA=0.D0 + ELSE + BETA=WORK(5) + IF(BETA.GT.0.2D0)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' CURIOUS INPUT FOR BETA: WORK(5)=',WORK(5) + ARRET=.TRUE. + END IF + END IF + END IF +C -------- MAXIMAL STEP SIZE + IF(WORK(6).EQ.0.D0)THEN + HMAX=XEND-X + ELSE + HMAX=WORK(6) + END IF +C -------- INITIAL STEP SIZE + H=WORK(7) +C ------- PREPARE THE ENTRY-POINTS FOR THE ARRAYS IN WORK ----- + IEY1=21 + IEK1=IEY1+N + IEK2=IEK1+N + IEK3=IEK2+N + IEK4=IEK3+N + IEK5=IEK4+N + IEK6=IEK5+N + IEYS=IEK6+N + IECO=IEYS+N +C ------ TOTAL STORAGE REQUIREMENT ----------- + ISTORE=IEYS+5*NRDENS-1 + IF(ISTORE.GT.LWORK)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' INSUFFICIENT STORAGE FOR WORK, MIN. LWORK=',ISTORE + ARRET=.TRUE. + END IF + ICOMP=21 + ISTORE=ICOMP+NRDENS-1 + IF(ISTORE.GT.LIWORK)THEN + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' INSUFFICIENT STORAGE FOR IWORK, MIN. LIWORK=',ISTORE + ARRET=.TRUE. + END IF +C ------ WHEN A FAIL HAS OCCURED, WE RETURN WITH IDID=-1 + IF (ARRET) THEN + IDID=-1 + RETURN + END IF +C -------- CALL TO CORE INTEGRATOR ------------ + CALL DOPCOR(N,FCN,X,Y,XEND,HMAX,H,RTOL,ATOL,ITOL,IPRINT, + & SOLOUT,IOUT,IDID,NMAX,UROUND,METH,NSTIFF,SAFE,BETA,FAC1,FAC2, + & WORK(IEY1),WORK(IEK1),WORK(IEK2),WORK(IEK3),WORK(IEK4), + & WORK(IEK5),WORK(IEK6),WORK(IEYS),WORK(IECO),IWORK(ICOMP), + & NRDENS,RPAR,IPAR,NFCN,NSTEP,NACCPT,NREJCT) + WORK(7)=H + IWORK(17)=NFCN + IWORK(18)=NSTEP + IWORK(19)=NACCPT + IWORK(20)=NREJCT +C ----------- RETURN ----------- + RETURN + END +C +C +C +C ----- ... AND HERE IS THE CORE INTEGRATOR ---------- +C + SUBROUTINE DOPCOR(N,FCN,X,Y,XEND,HMAX,H,RTOL,ATOL,ITOL,IPRINT, + & SOLOUT,IOUT,IDID,NMAX,UROUND,METH,NSTIFF,SAFE,BETA,FAC1,FAC2, + & Y1,K1,K2,K3,K4,K5,K6,YSTI,CONT,ICOMP,NRD,RPAR,IPAR, + & NFCN,NSTEP,NACCPT,NREJCT) +C ---------------------------------------------------------- +C CORE INTEGRATOR FOR DOPRI5 +C PARAMETERS SAME AS IN DOPRI5 WITH WORKSPACE ADDED +C ---------------------------------------------------------- +C DECLARATIONS +C ---------------------------------------------------------- + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + DOUBLE PRECISION K1(N),K2(N),K3(N),K4(N),K5(N),K6(N) + DIMENSION Y(N),Y1(N),YSTI(N),ATOL(*),RTOL(*),RPAR(*),IPAR(*) + DIMENSION CONT(5*NRD),ICOMP(NRD) + LOGICAL REJECT,LAST + EXTERNAL FCN + COMMON /CONDO5/XOLD,HOUT +C *** *** *** *** *** *** *** +C INITIALISATIONS +C *** *** *** *** *** *** *** + IF (METH.EQ.1) CALL CDOPRI(C2,C3,C4,C5,E1,E3,E4,E5,E6,E7, + & A21,A31,A32,A41,A42,A43,A51,A52,A53,A54, + & A61,A62,A63,A64,A65,A71,A73,A74,A75,A76, + & D1,D3,D4,D5,D6,D7) + FACOLD=1.D-4 + EXPO1=0.2D0-BETA*0.75D0 + FACC1=1.D0/FAC1 + FACC2=1.D0/FAC2 + POSNEG=SIGN(1.D0,XEND-X) +C --- INITIAL PREPARATIONS + ATOLI=ATOL(1) + RTOLI=RTOL(1) + LAST=.FALSE. + HLAMB=0.D0 + IASTI=0 + CALL FCN(N,X,Y,K1,RPAR,IPAR) + HMAX=ABS(HMAX) + IORD=5 + IF (H.EQ.0.D0) H=HINIT(N,FCN,X,Y,XEND,POSNEG,K1,K2,K3,IORD, + & HMAX,ATOL,RTOL,ITOL,RPAR,IPAR) + NFCN=NFCN+2 + REJECT=.FALSE. 
+ XOLD=X + IF (IOUT.NE.0) THEN + IRTRN=1 + HOUT=H + CALL SOLOUT(NACCPT+1,XOLD,X,Y,N,CONT,ICOMP,NRD, + & RPAR,IPAR,IRTRN) + IF (IRTRN.LT.0) GOTO 79 + ELSE + IRTRN=0 + END IF +C --- BASIC INTEGRATION STEP + 1 CONTINUE + IF (NSTEP.GT.NMAX) GOTO 78 + IF (0.1D0*ABS(H).LE.ABS(X)*UROUND)GOTO 77 + IF ((X+1.01D0*H-XEND)*POSNEG.GT.0.D0) THEN + H=XEND-X + LAST=.TRUE. + END IF + NSTEP=NSTEP+1 +C --- THE FIRST 6 STAGES + IF (IRTRN.GE.2) THEN + CALL FCN(N,X,Y,K1,RPAR,IPAR) + END IF + DO 22 I=1,N + 22 Y1(I)=Y(I)+H*A21*K1(I) + CALL FCN(N,X+C2*H,Y1,K2,RPAR,IPAR) + DO 23 I=1,N + 23 Y1(I)=Y(I)+H*(A31*K1(I)+A32*K2(I)) + CALL FCN(N,X+C3*H,Y1,K3,RPAR,IPAR) + DO 24 I=1,N + 24 Y1(I)=Y(I)+H*(A41*K1(I)+A42*K2(I)+A43*K3(I)) + CALL FCN(N,X+C4*H,Y1,K4,RPAR,IPAR) + DO 25 I=1,N + 25 Y1(I)=Y(I)+H*(A51*K1(I)+A52*K2(I)+A53*K3(I)+A54*K4(I)) + CALL FCN(N,X+C5*H,Y1,K5,RPAR,IPAR) + DO 26 I=1,N + 26 YSTI(I)=Y(I)+H*(A61*K1(I)+A62*K2(I)+A63*K3(I)+A64*K4(I)+A65*K5(I)) + XPH=X+H + CALL FCN(N,XPH,YSTI,K6,RPAR,IPAR) + DO 27 I=1,N + 27 Y1(I)=Y(I)+H*(A71*K1(I)+A73*K3(I)+A74*K4(I)+A75*K5(I)+A76*K6(I)) + CALL FCN(N,XPH,Y1,K2,RPAR,IPAR) + IF (IOUT.GE.2) THEN + DO 40 J=1,NRD + I=ICOMP(J) + CONT(4*NRD+J)=H*(D1*K1(I)+D3*K3(I)+D4*K4(I)+D5*K5(I) + & +D6*K6(I)+D7*K2(I)) + 40 CONTINUE + END IF + DO 28 I=1,N + 28 K4(I)=(E1*K1(I)+E3*K3(I)+E4*K4(I)+E5*K5(I)+E6*K6(I)+E7*K2(I))*H + NFCN=NFCN+6 +C --- ERROR ESTIMATION + ERR=0.D0 + IF (ITOL.EQ.0) THEN + DO 41 I=1,N + SK=ATOLI+RTOLI*MAX(ABS(Y(I)),ABS(Y1(I))) + 41 ERR=ERR+(K4(I)/SK)**2 + ELSE + DO 42 I=1,N + SK=ATOL(I)+RTOL(I)*MAX(ABS(Y(I)),ABS(Y1(I))) + 42 ERR=ERR+(K4(I)/SK)**2 + END IF + ERR=SQRT(ERR/N) +C --- COMPUTATION OF HNEW + FAC11=ERR**EXPO1 +C --- LUND-STABILIZATION + FAC=FAC11/FACOLD**BETA +C --- WE REQUIRE FAC1 <= HNEW/H <= FAC2 + FAC=MAX(FACC2,MIN(FACC1,FAC/SAFE)) + HNEW=H/FAC + IF(ERR.LE.1.D0)THEN +C --- STEP IS ACCEPTED + FACOLD=MAX(ERR,1.0D-4) + NACCPT=NACCPT+1 +C ------- STIFFNESS DETECTION + IF (MOD(NACCPT,NSTIFF).EQ.0.OR.IASTI.GT.0) THEN + STNUM=0.D0 + STDEN=0.D0 + DO 64 I=1,N + STNUM=STNUM+(K2(I)-K6(I))**2 + STDEN=STDEN+(Y1(I)-YSTI(I))**2 + 64 CONTINUE + IF (STDEN.GT.0.D0) HLAMB=H*SQRT(STNUM/STDEN) + IF (HLAMB.GT.3.25D0) THEN + NONSTI=0 + IASTI=IASTI+1 + IF (IASTI.EQ.15) THEN + IF (IPRINT.GT.0) WRITE (IPRINT,*) + & ' THE PROBLEM SEEMS TO BECOME STIFF AT X = ',X + IF (IPRINT.LE.0) GOTO 76 + END IF + ELSE + NONSTI=NONSTI+1 + IF (NONSTI.EQ.6) IASTI=0 + END IF + END IF + IF (IOUT.GE.2) THEN + DO 43 J=1,NRD + I=ICOMP(J) + YD0=Y(I) + YDIFF=Y1(I)-YD0 + BSPL=H*K1(I)-YDIFF + CONT(J)=Y(I) + CONT(NRD+J)=YDIFF + CONT(2*NRD+J)=BSPL + CONT(3*NRD+J)=-H*K2(I)+YDIFF-BSPL + 43 CONTINUE + END IF + DO 44 I=1,N + K1(I)=K2(I) + 44 Y(I)=Y1(I) + XOLD=X + X=XPH + IF (IOUT.NE.0) THEN + HOUT=H + CALL SOLOUT(NACCPT+1,XOLD,X,Y,N,CONT,ICOMP,NRD, + & RPAR,IPAR,IRTRN) + IF (IRTRN.LT.0) GOTO 79 + END IF +C ------- NORMAL EXIT + IF (LAST) THEN + H=HNEW + IDID=1 + RETURN + END IF + IF(ABS(HNEW).GT.HMAX)HNEW=POSNEG*HMAX + IF(REJECT)HNEW=POSNEG*MIN(ABS(HNEW),ABS(H)) + REJECT=.FALSE. + ELSE +C --- STEP IS REJECTED + HNEW=H/MIN(FACC1,FAC11/SAFE) + REJECT=.TRUE. + IF(NACCPT.GE.1)NREJCT=NREJCT+1 + LAST=.FALSE. 
+ END IF + H=HNEW + GOTO 1 +C --- FAIL EXIT + 76 CONTINUE + IDID=-4 + RETURN + 77 CONTINUE + IF (IPRINT.GT.0) WRITE(IPRINT,979)X + IF (IPRINT.GT.0) WRITE(IPRINT,*)' STEP SIZE T0O SMALL, H=',H + IDID=-3 + RETURN + 78 CONTINUE + IF (IPRINT.GT.0) WRITE(IPRINT,979)X + IF (IPRINT.GT.0) WRITE(IPRINT,*) + & ' MORE THAN NMAX =',NMAX,'STEPS ARE NEEDED' + IDID=-2 + RETURN + 79 CONTINUE + IF (IPRINT.GT.0) WRITE(IPRINT,979)X + 979 FORMAT(' EXIT OF DOPRI5 AT X=',E18.4) + IDID=2 + RETURN + END +C + FUNCTION HINIT(N,FCN,X,Y,XEND,POSNEG,F0,F1,Y1,IORD, + & HMAX,ATOL,RTOL,ITOL,RPAR,IPAR) +C ---------------------------------------------------------- +C ---- COMPUTATION OF AN INITIAL STEP SIZE GUESS +C ---------------------------------------------------------- + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + DIMENSION Y(N),Y1(N),F0(N),F1(N),ATOL(*),RTOL(*) + DIMENSION RPAR(*),IPAR(*) +C ---- COMPUTE A FIRST GUESS FOR EXPLICIT EULER AS +C ---- H = 0.01 * NORM (Y0) / NORM (F0) +C ---- THE INCREMENT FOR EXPLICIT EULER IS SMALL +C ---- COMPARED TO THE SOLUTION + DNF=0.0D0 + DNY=0.0D0 + ATOLI=ATOL(1) + RTOLI=RTOL(1) + IF (ITOL.EQ.0) THEN + DO 10 I=1,N + SK=ATOLI+RTOLI*ABS(Y(I)) + DNF=DNF+(F0(I)/SK)**2 + 10 DNY=DNY+(Y(I)/SK)**2 + ELSE + DO 11 I=1,N + SK=ATOL(I)+RTOL(I)*ABS(Y(I)) + DNF=DNF+(F0(I)/SK)**2 + 11 DNY=DNY+(Y(I)/SK)**2 + END IF + IF (DNF.LE.1.D-10.OR.DNY.LE.1.D-10) THEN + H=1.0D-6 + ELSE + H=SQRT(DNY/DNF)*0.01D0 + END IF + H=MIN(H,HMAX) + H=SIGN(H,POSNEG) +C ---- PERFORM AN EXPLICIT EULER STEP + DO 12 I=1,N + 12 Y1(I)=Y(I)+H*F0(I) + CALL FCN(N,X+H,Y1,F1,RPAR,IPAR) +C ---- ESTIMATE THE SECOND DERIVATIVE OF THE SOLUTION + DER2=0.0D0 + IF (ITOL.EQ.0) THEN + DO 15 I=1,N + SK=ATOLI+RTOLI*ABS(Y(I)) + 15 DER2=DER2+((F1(I)-F0(I))/SK)**2 + ELSE + DO 16 I=1,N + SK=ATOL(I)+RTOL(I)*ABS(Y(I)) + 16 DER2=DER2+((F1(I)-F0(I))/SK)**2 + END IF + DER2=SQRT(DER2)/H +C ---- STEP SIZE IS COMPUTED SUCH THAT +C ---- H**IORD * MAX ( NORM (F0), NORM (DER2)) = 0.01 + DER12=MAX(ABS(DER2),SQRT(DNF)) + IF (DER12.LE.1.D-15) THEN + H1=MAX(1.0D-6,ABS(H)*1.0D-3) + ELSE + H1=(0.01D0/DER12)**(1.D0/IORD) + END IF + H=MIN(100*ABS(H),H1,HMAX) + HINIT=SIGN(H,POSNEG) + RETURN + END +C + FUNCTION CONTD5(II,X,CON,ICOMP,ND) +C ---------------------------------------------------------- +C THIS FUNCTION CAN BE USED FOR CONTINUOUS OUTPUT IN CONNECTION +C WITH THE OUTPUT-SUBROUTINE FOR DOPRI5. IT PROVIDES AN +C APPROXIMATION TO THE II-TH COMPONENT OF THE SOLUTION AT X. 
+C ---------------------------------------------------------- + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + DIMENSION CON(5*ND),ICOMP(ND) + COMMON /CONDO5/XOLD,H +C ----- COMPUTE PLACE OF II-TH COMPONENT + I=0 + DO 5 J=1,ND + IF (ICOMP(J).EQ.II) I=J + 5 CONTINUE + IF (I.EQ.0) THEN + WRITE (6,*) ' NO DENSE OUTPUT AVAILABLE FOR COMP.',II + RETURN + END IF + THETA=(X-XOLD)/H + THETA1=1.D0-THETA + CONTD5=CON(I)+THETA*(CON(ND+I)+THETA1*(CON(2*ND+I)+THETA* + & (CON(3*ND+I)+THETA1*CON(4*ND+I)))) + RETURN + END +C + SUBROUTINE CDOPRI(C2,C3,C4,C5,E1,E3,E4,E5,E6,E7, + & A21,A31,A32,A41,A42,A43,A51,A52,A53,A54, + & A61,A62,A63,A64,A65,A71,A73,A74,A75,A76, + & D1,D3,D4,D5,D6,D7) +C ---------------------------------------------------------- +C RUNGE-KUTTA COEFFICIENTS OF DORMAND AND PRINCE (1980) +C ---------------------------------------------------------- + IMPLICIT DOUBLE PRECISION (A-H,O-Z) + C2=0.2D0 + C3=0.3D0 + C4=0.8D0 + C5=8.D0/9.D0 + A21=0.2D0 + A31=3.D0/40.D0 + A32=9.D0/40.D0 + A41=44.D0/45.D0 + A42=-56.D0/15.D0 + A43=32.D0/9.D0 + A51=19372.D0/6561.D0 + A52=-25360.D0/2187.D0 + A53=64448.D0/6561.D0 + A54=-212.D0/729.D0 + A61=9017.D0/3168.D0 + A62=-355.D0/33.D0 + A63=46732.D0/5247.D0 + A64=49.D0/176.D0 + A65=-5103.D0/18656.D0 + A71=35.D0/384.D0 + A73=500.D0/1113.D0 + A74=125.D0/192.D0 + A75=-2187.D0/6784.D0 + A76=11.D0/84.D0 + E1=71.D0/57600.D0 + E3=-71.D0/16695.D0 + E4=71.D0/1920.D0 + E5=-17253.D0/339200.D0 + E6=22.D0/525.D0 + E7=-1.D0/40.D0 +C ---- DENSE OUTPUT OF SHAMPINE (1986) + D1=-12715105075.D0/11282082432.D0 + D3=87487479700.D0/32700410799.D0 + D4=-10690763975.D0/1880347072.D0 + D5=701980252875.D0/199316789632.D0 + D6=-1453857185.D0/822651844.D0 + D7=69997945.D0/29380423.D0 + RETURN + END + diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/dop.pyf python-scipy-0.8.0+dfsg1/scipy/integrate/dop.pyf --- python-scipy-0.7.2+dfsg1/scipy/integrate/dop.pyf 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/dop.pyf 2010-07-26 15:48:30.000000000 +0100 @@ -0,0 +1,80 @@ +!%f90 -*- f90 -*- +!Author: John Travers +!Date: 22 Feb 2009 + +python module __user__routines + interface + subroutine fcn(n,x,y,f,rpar,ipar) + integer intent(hide) :: n + double precision intent(in) :: x + double precision dimension(n),intent(in,c) :: y + double precision dimension(n),intent(out,c) :: f + double precision intent(hide) :: rpar + integer intent(hide) :: ipar + end subroutine fcn + subroutine solout(nr,xold,x,y,n,con,icomp,nd,rpar,ipar,irtn) + integer intent(in) :: nr + integer intent(hide) :: n + double precision intent(in) :: xold, x + double precision dimension(n),intent(c,in) :: y + integer intent(in) :: nd + integer dimension(nd), intent(in) :: icomp + double precision dimension(5*nd), intent(in) :: con + double precision intent(hide) :: rpar + integer intent(hide) :: ipar + integer intent(out) :: irtn + end subroutine solout + end interface +end python module __user__routines + +python module _dop + interface + subroutine dopri5(n,fcn,x,y,xend,rtol,atol,itol,solout,iout,work,lwork,iwork,liwork,rpar,ipar,idid) + use __user__routines + external fcn + external solout + integer intent(hide),depend(y) :: n = len(y) + double precision dimension(n),intent(in,out,copy) :: y + double precision intent(in,out):: x + double precision intent(in):: xend + double precision dimension(*),intent(in),check(len(atol)<& + &=1||len(atol)>=n),depend(n) :: atol + double precision dimension(*),intent(in),check(len(rtol)==len(atol)), & + depend(atol) :: rtol + integer intent(hide), depend(atol) :: itol = 
(len(atol)<=1?0:1) + integer intent(hide) :: iout=0 + double precision dimension(*), intent(in), check(len(work)>=8*n+21), & + :: work + integer intent(hide), depend(work) :: lwork = len(work) + integer intent(in,out), dimension(*), check(len(iwork)>=21) :: iwork + integer intent(hide), depend(iwork) :: liwork = len(iwork) + integer intent(out) :: idid + double precision intent(hide) :: rpar = 0.0 + integer intent(hide) :: ipar = 0 + end subroutine dopri5 + subroutine dop853(n,fcn,x,y,xend,rtol,atol,itol,solout,iout,work,lwork,iwork,liwork,rpar,ipar,idid) + use __user__routines + external fcn + external solout + integer intent(hide),depend(y) :: n = len(y) + double precision dimension(n),intent(in,out,copy) :: y + double precision intent(in,out):: x + double precision intent(in):: xend + double precision dimension(*),intent(in),check(len(atol)<& + &=1||len(atol)>=n),depend(n) :: atol + double precision dimension(*),intent(in),check(len(rtol)==len(atol)), & + depend(atol) :: rtol + integer intent(hide), depend(atol) :: itol = (len(atol)<=1?0:1) + integer intent(hide) :: iout=0 + double precision dimension(*), intent(in), check(len(work)>=8*n+21), & + :: work + integer intent(hide), depend(work) :: lwork = len(work) + integer intent(in,out), dimension(*), check(len(iwork)>=21) :: iwork + integer intent(hide), depend(iwork) :: liwork = len(iwork) + integer intent(out) :: idid + double precision intent(hide) :: rpar = 0.0 + integer intent(hide) :: ipar = 0 + end subroutine dop853 + end interface +end python module dop + diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/info.py python-scipy-0.8.0+dfsg1/scipy/integrate/info.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/info.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/info.py 2010-07-26 15:48:30.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 21, 2005 by - """ Integration routines ==================== diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/__init__.py python-scipy-0.8.0+dfsg1/scipy/integrate/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/__init__.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/__init__.py 2010-07-26 15:48:30.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 21, 2005 by - # # integrate - Integration routines # diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/__odepack.h python-scipy-0.8.0+dfsg1/scipy/integrate/__odepack.h --- python-scipy-0.7.2+dfsg1/scipy/integrate/__odepack.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/__odepack.h 2010-07-26 15:48:30.000000000 +0100 @@ -14,10 +14,18 @@ */ -#if defined(NO_APPEND_FORTRAN) -#define LSODA lsoda +#if defined(UPPERCASE_FORTRAN) + #if defined(NO_APPEND_FORTRAN) + /* nothing to do here */ + #else + #define LSODA LSODA_ + #endif #else -#define LSODA lsoda_ + #if defined(NO_APPEND_FORTRAN) + #define LSODA lsoda + #else + #define LSODA lsoda_ + #endif #endif void LSODA(); diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/odepack.py python-scipy-0.8.0+dfsg1/scipy/integrate/odepack.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/odepack.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/odepack.py 2010-07-26 15:48:30.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 21, 2005 by - # Author: Travis Oliphant __all__ = ['odeint'] @@ -21,7 +19,8 @@ ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, 
mxordn=12, mxords=5, printmessg=0): - """Integrate a system of ordinary differential equations. + """ + Integrate a system of ordinary differential equations. Solve a system of ordinary differential equations using lsoda from the FORTRAN library odepack. @@ -95,16 +94,16 @@ For the banded case, Dfun should return a matrix whose columns contain the non-zero bands (starting with the lowest diagonal). Thus, the return matrix from Dfun should - have shape len(y0) * (ml + mu + 1) when ml >=0 or mu >=0 + have shape ``len(y0) * (ml + mu + 1) when ml >=0 or mu >=0`` rtol, atol : float The input parameters rtol and atol determine the error control performed by the solver. The solver will control the vector, e, of estimated local errors in y, according to an - inequality of the form:: - max-norm of (e / ewt) <= 1 - where ewt is a vector of positive error weights computed as:: - ewt = rtol * abs(y) + atol + inequality of the form ``max-norm of (e / ewt) <= 1``, + where ewt is a vector of positive error weights computed as: + ``ewt = rtol * abs(y) + atol`` rtol and atol can be either vectors the same length as y or scalars. + Defaults to 1.49012e-8. tcrit : array Vector of critical points (e.g. singularities) where integration care should be taken. diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/ode.py python-scipy-0.8.0+dfsg1/scipy/integrate/ode.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/ode.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/ode.py 2010-07-26 15:48:30.000000000 +0100 @@ -1,4 +1,4 @@ -# Authors: Pearu Peterson, Pauli Virtanen +# Authors: Pearu Peterson, Pauli Virtanen, John Travers """ First-order ODE integrators @@ -29,6 +29,13 @@ y1 = integrator.integrate(t1,step=0,relax=0) flag = integrator.successful() +class complex_ode +----------------- + +This class has the same generic interface as ode, except it can handle complex +f, y and Jacobians by transparently translating them into the equivalent +real valued system. It supports the real valued solvers (i.e not zvode) and is +an alternative to ode with the zvode solver, sometimes performing better. """ integrator_info = \ @@ -97,6 +104,58 @@ failures, and for this problem one should instead use DVODE on the equivalent real system (in the real and imaginary parts of y). +dopri5 +~~~~~~ + + Numerical solution of a system of first order + ordinary differential equations y'=f(x,y). + this is an explicit runge-kutta method of order (4)5 + due to Dormand & Prince (with stepsize control and + dense output). + + Authors: E. Hairer and G. Wanner + Universite de Geneve, Dept. de Mathematiques + CH-1211 Geneve 24, Switzerland + e-mail: ernst.hairer@math.unige.ch + gerhard.wanner@math.unige.ch + + This code is described in: + E. Hairer, S.P. Norsett and G. Wanner, Solving Ordinary + Differential Equations i. Nonstiff Problems. 2nd edition. + Springer Series in Computational Mathematics, + Springer-Verlag (1993) + +This integrator accepts the following parameters in set_integrator() +method of the ode class: + +- atol : float or sequence + absolute tolerance for solution +- rtol : float or sequence + relative tolerance for solution +- nsteps : int + Maximum number of (internally defined) steps allowed during one + call to the solver. 
+- first_step : float +- max_step : float +- safety : float + Safety factor on new step selection (default 0.9) +- ifactor : float +- dfactor : float + Maximum factor to increase/decrease step sixe by in one step +- beta : float + Beta parameter for stabilised step size control. + +dop853 +~~~~~~ + + Numerical solution of a system of first 0rder + ordinary differential equations y'=f(x,y). + this is an explicit runge-kutta method of order 8(5,3) + due to Dormand & Prince (with stepsize control and + dense output). + + Options and references the same as dopri5. + """ if __doc__: @@ -143,12 +202,17 @@ # if myodeint.runner: # IntegratorBase.integrator_classes.append(myodeint) -__all__ = ['ode'] +__all__ = ['ode', 'complex_ode'] __version__ = "$Id$" __docformat__ = "restructuredtext en" -from numpy import asarray, array, zeros, int32, isscalar -import re, sys +import re +import warnings + +from numpy import asarray, array, zeros, int32, isscalar, real, imag + +import vode as _vode +import _dop #------------------------------------------------------------------------------ # User interface @@ -232,13 +296,15 @@ ---------- name : str Name of the integrator - integrator_params + integrator_params : Additional parameters for the integrator. """ integrator = find_integrator(name) if integrator is None: - print 'No integrator name match with %s or is not available.'\ - %(`name`) + # FIXME: this really should be raise an exception. Will that break + # any code? + warnings.warn('No integrator name match with %r or is not ' + 'available.' % name) else: self._integrator = integrator(**integrator_params) if not len(self.y): @@ -262,8 +328,10 @@ def successful(self): """Check if integration was successful.""" - try: self._integrator - except AttributeError: self.set_integrator('') + try: + self._integrator + except AttributeError: + self.set_integrator('') return self._integrator.success==1 def set_f_params(self,*args): @@ -276,6 +344,72 @@ self.jac_params = args return self +class complex_ode(ode): + """ A wrapper of ode for complex systems. """ + + def __init__(self, f, jac=None): + """ + Define equation y' = f(y,t), where y and f can be complex. + + Parameters + ---------- + f : f(t, y, *f_args) + Rhs of the equation. t is a scalar, y.shape == (n,). + f_args is set by calling set_f_params(*args) + jac : jac(t, y, *jac_args) + Jacobian of the rhs, jac[i,j] = d f[i] / d y[j] + jac_args is set by calling set_f_params(*args) + """ + self.cf = f + self.cjac = jac + if jac is not None: + ode.__init__(self, self._wrap, self._wrap_jac) + else: + ode.__init__(self, self._wrap, None) + + def _wrap(self, t, y, *f_args): + f = self.cf(*((t, y[::2] + 1j*y[1::2]) + f_args)) + self.tmp[::2] = real(f) + self.tmp[1::2] = imag(f) + return self.tmp + + def _wrap_jac(self, t, y, *jac_args): + jac = self.cjac(*((t, y[::2] + 1j*y[1::2]) + jac_args)) + self.jac_tmp[1::2,1::2] = self.jac_tmp[::2,::2] = real(jac) + self.jac_tmp[1::2,::2] = imag(jac) + self.jac_tmp[::2,1::2] = -self.jac_tmp[1::2,::2] + return self.jac_tmp + + def set_integrator(self, name, **integrator_params): + """ + Set integrator by name. + + Parameters + ---------- + name : str + Name of the integrator + integrator_params : + Additional parameters for the integrator. 
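The documentation added above describes the two user-visible features of this hunk: the complex_ode wrapper, which transparently rewrites a complex system as a real one of twice the size, and the new explicit Runge-Kutta integrators dopri5 and dop853. A minimal usage sketch, assuming a scipy build that already contains these 0.8.0 additions; the oscillator right-hand side, tolerances and final time are illustrative only::

    import numpy as np
    from scipy.integrate import ode, complex_ode

    # y' = 1j*w*y has the exact solution y(t) = y0 * exp(1j*w*t).
    w = 2.0
    f = lambda t, y: 1j * w * y

    solver = complex_ode(f)            # translates the complex system into a real one
    solver.set_integrator('dopri5', atol=1e-10, rtol=1e-8, nsteps=1000)
    solver.set_initial_value(1.0 + 0.0j, 0.0)
    y1 = solver.integrate(1.0)
    assert solver.successful()
    assert np.allclose(y1, np.exp(1j * w * 1.0))

    # The same oscillator as a real two-component system, solved with dop853.
    def f_real(t, y):
        return [-w * y[1], w * y[0]]   # d/dt (cos(w*t), sin(w*t))

    r = ode(f_real)
    r.set_integrator('dop853', atol=1e-10, rtol=1e-8)
    r.set_initial_value([1.0, 0.0], 0.0)
    y2 = r.integrate(1.0)
    assert np.allclose(y2, [np.cos(w), np.sin(w)])

The same set_integrator()/set_initial_value()/integrate() sequence is exercised by the TestComplexOde and dopri5/dop853 test cases added further down in this diff.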
+ """ + if name == 'zvode': + raise ValueError("zvode should be used with ode, not zode") + return ode.set_integrator(self, name, **integrator_params) + + def set_initial_value(self, y, t=0.0): + """Set initial conditions y(t) = y.""" + y = asarray(y) + self.tmp = zeros(y.size*2, 'float') + self.tmp[::2] = real(y) + self.tmp[1::2] = imag(y) + if self.cjac is not None: + self.jac_tmp = zeros((y.size*2, y.size*2), 'float') + return ode.set_initial_value(self, self.tmp, t) + + def integrate(self, t, step=0, relax=0): + """Find y=y(t), set y as an initial condition, and return y.""" + y = ode.integrate(self, t, step, relax) + return y[::2] + 1j*y[1::2] + #------------------------------------------------------------------------------ # ODE integrators #------------------------------------------------------------------------------ @@ -284,7 +418,7 @@ for cl in IntegratorBase.integrator_classes: if re.match(name,cl.__name__,re.I): return cl - return + return None class IntegratorBase(object): @@ -322,11 +456,7 @@ #XXX: __str__ method for getting visual state of the integrator class vode(IntegratorBase): - try: - import vode as _vode - except ImportError: - print sys.exc_value - _vode = None + runner = getattr(_vode,'dvode',None) messages = {-1:'Excess work done on this call. (Perhaps wrong MF.)', @@ -353,9 +483,12 @@ first_step = 0.0, # determined by solver ): - if re.match(method,r'adams',re.I): self.meth = 1 - elif re.match(method,r'bdf',re.I): self.meth = 2 - else: raise ValueError,'Unknown integration method %s'%(method) + if re.match(method,r'adams',re.I): + self.meth = 1 + elif re.match(method,r'bdf',re.I): + self.meth = 2 + else: + raise ValueError('Unknown integration method %s' % method) self.with_jacobian = with_jacobian self.rtol = rtol self.atol = atol @@ -409,7 +542,7 @@ elif mf in [24,25]: lrw = 22 + 11*n + (3*self.ml+2*self.mu)*n else: - raise ValueError,'Unexpected mf=%s'%(mf) + raise ValueError('Unexpected mf=%s' % mf) if miter in [0,3]: liw = 30 else: @@ -434,7 +567,7 @@ def run(self,*args): y1,t,istate = self.runner(*(args[:5]+tuple(self.call_args)+args[5:])) if istate <0: - print 'vode:',self.messages.get(istate,'Unexpected istate=%s'%istate) + warnings.warn('vode: ' + self.messages.get(istate,'Unexpected istate=%s'%istate)) self.success = 0 else: self.call_args[3] = 2 # upgrade istate from 1 to 2 @@ -454,16 +587,11 @@ self.call_args[2] = itask return r -if vode.runner: +if vode.runner is not None: IntegratorBase.integrator_classes.append(vode) class zvode(vode): - try: - import vode as _vode - except ImportError: - print sys.exc_value - _vode = None runner = getattr(_vode,'zvode',None) supports_run_relax = 1 @@ -553,12 +681,122 @@ def run(self,*args): y1,t,istate = self.runner(*(args[:5]+tuple(self.call_args)+args[5:])) if istate < 0: - print 'zvode:', self.messages.get(istate, - 'Unexpected istate=%s'%istate) + warnings.warn('zvode: ' + + self.messages.get(istate, 'Unexpected istate=%s'%istate)) self.success = 0 else: self.call_args[3] = 2 # upgrade istate from 1 to 2 return y1, t -if zvode.runner: +if zvode.runner is not None: IntegratorBase.integrator_classes.append(zvode) + +class dopri5(IntegratorBase): + + runner = getattr(_dop,'dopri5',None) + name = 'dopri5' + + messages = { 1 : 'computation successful', + 2 : 'comput. 
successful (interrupted by solout)', + -1 : 'input is not consistent', + -2 : 'larger nmax is needed', + -3 : 'step size becomes too small', + -4 : 'problem is probably stiff (interrupted)', + } + + def __init__(self, + rtol=1e-6,atol=1e-12, + nsteps = 500, + max_step = 0.0, + first_step = 0.0, # determined by solver + safety = 0.9, + ifactor = 10.0, + dfactor = 0.2, + beta = 0.0, + method = None + ): + self.rtol = rtol + self.atol = atol + self.nsteps = nsteps + self.max_step = max_step + self.first_step = first_step + self.safety = safety + self.ifactor = ifactor + self.dfactor = dfactor + self.beta = beta + self.success = 1 + + def reset(self,n,has_jac): + work = zeros((8*n+21,), float) + work[1] = self.safety + work[2] = self.dfactor + work[3] = self.ifactor + work[4] = self.beta + work[5] = self.max_step + work[6] = self.first_step + self.work = work + iwork = zeros((21,), int32) + iwork[0] = self.nsteps + self.iwork = iwork + self.call_args = [self.rtol,self.atol,self._solout,self.work,self.iwork] + self.success = 1 + + def run(self,f,jac,y0,t0,t1,f_params,jac_params): + x,y,iwork,idid = self.runner(*((f,t0,y0,t1) + tuple(self.call_args))) + if idid < 0: + warnings.warn(self.name + ': ' + + self.messages.get(idid, 'Unexpected idid=%s'%idid)) + self.success = 0 + return y,x + + def _solout(self, *args): + # dummy solout function + pass + +if dopri5.runner is not None: + IntegratorBase.integrator_classes.append(dopri5) + +class dop853(dopri5): + + runner = getattr(_dop,'dop853',None) + name = 'dop853' + + def __init__(self, + rtol=1e-6,atol=1e-12, + nsteps = 500, + max_step = 0.0, + first_step = 0.0, # determined by solver + safety = 0.9, + ifactor = 6.0, + dfactor = 0.3, + beta = 0.0, + method = None + ): + self.rtol = rtol + self.atol = atol + self.nsteps = nsteps + self.max_step = max_step + self.first_step = first_step + self.safety = safety + self.ifactor = ifactor + self.dfactor = dfactor + self.beta = beta + self.success = 1 + + def reset(self,n,has_jac): + work = zeros((11*n+21,), float) + work[1] = self.safety + work[2] = self.dfactor + work[3] = self.ifactor + work[4] = self.beta + work[5] = self.max_step + work[6] = self.first_step + self.work = work + iwork = zeros((21,), int32) + iwork[0] = self.nsteps + self.iwork = iwork + self.call_args = [self.rtol,self.atol,self._solout,self.work,self.iwork] + self.success = 1 + +if dop853.runner is not None: + IntegratorBase.integrator_classes.append(dop853) diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/__quadpack.h python-scipy-0.8.0+dfsg1/scipy/integrate/__quadpack.h --- python-scipy-0.7.2+dfsg1/scipy/integrate/__quadpack.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/__quadpack.h 2010-07-26 15:48:30.000000000 +0100 @@ -20,21 +20,35 @@ */ #if defined(NO_APPEND_FORTRAN) -#define DQAGSE dqagse -#define DQAGIE dqagie -#define DQAGPE dqagpe -#define DQAWOE dqawoe -#define DQAWFE dqawfe -#define DQAWSE dqawse -#define DQAWCE dqawce + #if defined(UPPERCASE_FORTRAN) + /* nothing to do here */ + #else + #define DQAGSE dqagse + #define DQAGIE dqagie + #define DQAGPE dqagpe + #define DQAWOE dqawoe + #define DQAWFE dqawfe + #define DQAWSE dqawse + #define DQAWCE dqawce + #endif #else -#define DQAGSE dqagse_ -#define DQAGIE dqagie_ -#define DQAGPE dqagpe_ -#define DQAWOE dqawoe_ -#define DQAWFE dqawfe_ -#define DQAWSE dqawse_ -#define DQAWCE dqawce_ + #if defined(UPPERCASE_FORTRAN) + #define DQAGSE DQAGSE_ + #define DQAGIE DQAGIE_ + #define DQAGPE DQAGPE_ + #define DQAWOE DQAWOE_ + #define DQAWFE 
DQAWFE_ + #define DQAWSE DQAWSE_ + #define DQAWCE DQAWCE_ + #else + #define DQAGSE dqagse_ + #define DQAGIE dqagie_ + #define DQAGPE dqagpe_ + #define DQAWOE dqawoe_ + #define DQAWFE dqawfe_ + #define DQAWSE dqawse_ + #define DQAWCE dqawce_ + #endif #endif void DQAGSE(); diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/quadpack.py python-scipy-0.8.0+dfsg1/scipy/integrate/quadpack.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/quadpack.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/quadpack.py 2010-07-26 15:48:30.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 21, 2005 by - # Author: Travis Oliphant 2001 __all__ = ['quad', 'dblquad', 'tplquad', 'quad_explain', 'Inf','inf'] @@ -11,6 +9,20 @@ error = _quadpack.error def quad_explain(output=sys.stdout): + """ + Print extra information about integrate.quad() parameters and returns. + + Parameters + ---------- + output : instance with "write" method + Information about `quad` is passed to ``output.write()``. + Default is ``sys.stdout``. + + Returns + ------- + None + + """ output.write(""" Extra information for quad() inputs and outputs: @@ -120,65 +132,115 @@ def quad(func, a, b, args=(), full_output=0, epsabs=1.49e-8, epsrel=1.49e-8, limit=50, points=None, weight=None, wvar=None, wopts=None, maxp1=50, limlst=50): - """Compute a definite integral. - - Description: + """ + Compute a definite integral. Integrate func from a to b (possibly infinite interval) using a technique - from the Fortran library QUADPACK. Run scipy.integrate.quad_explain() - for more information on the more esoteric inputs and outputs. + from the Fortran library QUADPACK. - Inputs: + If func takes many arguments, it is integrated along the axis corresponding + to the first argument. Use the keyword argument `args` to pass the other + arguments. + + Run scipy.integrate.quad_explain() for more information on the + more esoteric inputs and outputs. + + Parameters + ---------- + + func : function + A Python function or method to integrate. + a : float + Lower limit of integration (use -scipy.integrate.Inf for -infinity). + b : float + Upper limit of integration (use scipy.integrate.Inf for +infinity). + args : tuple, optional + extra arguments to pass to func + full_output : int + Non-zero to return a dictionary of integration information. + If non-zero, warning messages are also suppressed and the + message is appended to the output tuple. + + Returns + ------- + + y : float + The integral of func from a to b. + abserr : float + an estimate of the absolute error in the result. + + infodict : dict + a dictionary containing additional information. + Run scipy.integrate.quad_explain() for more information. + message : + a convergence message. + explain : + appended only with 'cos' or 'sin' weighting and infinite + integration limits, it contains an explanation of the codes in + infodict['ierlst'] + + Other Parameters + ---------------- + epsabs : + absolute error tolerance. + epsrel : + relative error tolerance. + limit : + an upper bound on the number of subintervals used in the adaptive + algorithm. + points : + a sequence of break points in the bounded integration interval + where local difficulties of the integrand may occur (e.g., + singularities, discontinuities). The sequence does not have + to be sorted. + weight : + string indicating weighting function. + wvar : + variables for use with weighting functions. 
+ limlst : + Upper bound on the number of cylces (>=3) for use with a sinusoidal + weighting and an infinite end-point. + wopts : + Optional input for reusing Chebyshev moments. + maxp1 : + An upper bound on the number of Chebyshev moments. + + See Also + -------- + dblquad, tplquad - double and triple integrals + fixed_quad - fixed-order Gaussian quadrature + quadrature - adaptive Gaussian quadrature + odeint, ode - ODE integrators + simps, trapz, romb - integrators for sampled data + scipy.special - for coefficients and roots of orthogonal polynomials + + Examples + -------- + + Calculate :math:`\\int^4_0 x^2 dx` and compare with an analytic result + + >>> from scipy import integrate + >>> x2 = lambda x: x**2 + >>> integrate.quad(x,0.,4.) + (21.333333333333332, 2.3684757858670003e-13) + >> print 4.**3/3 + 21.3333333333 + + Calculate :math:`\\int^\\infty_0 e^{-x} dx` + + >>> invexp = lambda x: exp(-x) + >>> integrate.quad(invexp,0,inf) + (0.99999999999999989, 5.8426061711142159e-11) + + + >>> f = lambda x,a : a*x + >>> y, err = integrate.quad(f, 0, 1, args=(1,)) + >>> y + 0.5 + >>> y, err = integrate.quad(f, 0, 1, args=(3,)) + >>> y + 1.5 - func -- a Python function or method to integrate. - a -- lower limit of integration (use -scipy.integrate.Inf for -infinity). - b -- upper limit of integration (use scipy.integrate.Inf for +infinity). - args -- extra arguments to pass to func. - full_output -- non-zero to return a dictionary of integration information. - If non-zero, warning messages are also suppressed and the - message is appended to the output tuple. - - Outputs: (y, abserr, {infodict, message, explain}) - - y -- the integral of func from a to b. - abserr -- an estimate of the absolute error in the result. - - infodict -- a dictionary containing additional information. - Run scipy.integrate.quad_explain() for more information. - message -- a convergence message. - explain -- appended only with 'cos' or 'sin' weighting and infinite - integration limits, it contains an explanation of the codes in - infodict['ierlst'] - - Additional Inputs: - - epsabs -- absolute error tolerance. - epsrel -- relative error tolerance. - limit -- an upper bound on the number of subintervals used in the adaptive - algorithm. - points -- a sequence of break points in the bounded integration interval - where local difficulties of the integrand may occur (e.g., - singularities, discontinuities). The sequence does not have - to be sorted. - - ** - ** Run scipy.integrate.quad_explain() for more information - ** on the following inputs - ** - weight -- string indicating weighting function. - wvar -- variables for use with weighting functions. - limlst -- Upper bound on the number of cylces (>=3) for use with a sinusoidal - weighting and an infinite end-point. - wopts -- Optional input for reusing Chebyshev moments. - maxp1 -- An upper bound on the number of Chebyshev moments. - - See also: - dblquad, tplquad - double and triple integrals - fixed_quad - fixed-order Gaussian quadrature - quadrature - adaptive Gaussian quadrature - odeint, ode - ODE integrators - simps, trapz, romb - integrators for sampled data - scipy.special - for coefficients and roots of orthogonal polynomials """ if type(args) != type(()): args = (args,) if (weight is None): @@ -312,39 +374,48 @@ return quad(func,a,b,args=myargs)[0] def dblquad(func, a, b, gfun, hfun, args=(), epsabs=1.49e-8, epsrel=1.49e-8): - """Compute a double (definite) integral. 
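The rewritten quad docstring above ends with doctests; the first of them presumably means quad(x2, 0., 4.), since x2 is the lambda defined just before it. A small self-checking sketch of the same calls, assuming only scipy.integrate as packaged here::

    import numpy as np
    from scipy.integrate import quad, Inf

    # Integrate x**2 from 0 to 4 (cf. the doctest above); the exact value is 4**3/3.
    x2 = lambda x: x**2
    y, abserr = quad(x2, 0., 4.)
    assert abs(y - 4.**3 / 3) < 1e-10

    # An infinite upper limit plus an extra parameter passed through args.
    y, abserr = quad(lambda x, a: a * np.exp(-x), 0, Inf, args=(2.0,))
    assert abs(y - 2.0) < 1e-8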
- - Description: - - Return the double integral of func2d(y,x) from x=a..b and y=gfun(x)..hfun(x). - - Inputs: - - func2d -- a Python function or method of at least two variables: y must be - the first argument and x the second argument. - (a,b) -- the limits of integration in x: a < b - gfun -- the lower boundary curve in y which is a function taking a single - floating point argument (x) and returning a floating point result: - a lambda function can be useful here. - hfun -- the upper boundary curve in y (same requirements as gfun). - args -- extra arguments to pass to func2d. - epsabs -- absolute tolerance passed directly to the inner 1-D quadrature - integration. - epsrel -- relative tolerance of the inner 1-D integrals. - - Outputs: (y, abserr) + """ + Compute the double integral of func2d(y,x) + from x=a..b and y=gfun(x)..hfun(x). - y -- the resultant integral. - abserr -- an estimate of the error. + Parameters + ----------- + func2d : function + a Python function or method of at least two variables: y must be + the first argument and x the second argument. + (a,b) : tuple + the limits of integration in x: a < b + gfun : function + the lower boundary curve in y which is a function taking a single + floating point argument (x) and returning a floating point result: + a lambda function can be useful here. + hfun : function + the upper boundary curve in y (same requirements as gfun). + args : + extra arguments to pass to func2d. + epsabs : float + absolute tolerance passed directly to the inner 1-D quadrature + integration. + epsrel : float + relative tolerance of the inner 1-D integrals. + + + Returns + ----------- + y : float + the resultant integral. + abserr : float + an estimate of the error. See also: - quad - single integral - tplquad - triple integral - fixed_quad - fixed-order Gaussian quadrature - quadrature - adaptive Gaussian quadrature - odeint, ode - ODE integrators - simps, trapz, romb - integrators for sampled data - scipy.special - for coefficients and roots of orthogonal polynomials + quad - single integral + tplquad - triple integral + fixed_quad - fixed-order Gaussian quadrature + quadrature - adaptive Gaussian quadrature + odeint, ode - ODE integrators + simps, trapz, romb - integrators for sampled data + scipy.special - for coefficients and roots of orthogonal polynomials + """ return quad(_infunc,a,b,(func,gfun,hfun,args),epsabs=epsabs,epsrel=epsrel) @@ -356,42 +427,57 @@ def tplquad(func, a, b, gfun, hfun, qfun, rfun, args=(), epsabs=1.49e-8, epsrel=1.49e-8): - """Compute a triple (definite) integral. - - Description: + """ + Compute a triple (definite) integral. - Return the triple integral of func3d(z, y,x) from x=a..b, y=gfun(x)..hfun(x), - and z=qfun(x,y)..rfun(x,y) + Return the triple integral of func3d(z, y,x) from + x=a..b, y=gfun(x)..hfun(x), and z=qfun(x,y)..rfun(x,y) - Inputs: + Parameters + ---------- + func3d : function + A Python function or method of at least three variables in the + order (z, y, x). + (a,b) : tuple + The limits of integration in x: a < b + gfun : function + The lower boundary curve in y which is a function taking a single + floating point argument (x) and returning a floating point result: + a lambda function can be useful here. + hfun : function + The upper boundary curve in y (same requirements as gfun). + qfun : function + The lower boundary surface in z. It must be a function that takes + two floats in the order (x, y) and returns a float. + rfun : function + The upper boundary surface in z. (Same requirements as qfun.) 
+ args : Arguments + Extra arguments to pass to func3d. + epsabs : float + Absolute tolerance passed directly to the innermost 1-D quadrature + integration. + epsrel : float + Relative tolerance of the innermost 1-D integrals. + + Returns + ------- + y : float + The resultant integral. + abserr : float + An estimate of the error. + + See Also + -------- + quad: Adaptive quadrature using QUADPACK + quadrature: Adaptive Gaussian quadrature + fixed_quad: Fixed-order Gaussian quadrature + dblquad: Double integrals + romb: Integrators for sampled data + trapz: Integrators for sampled data + simps: Integrators for sampled data + ode: ODE integrators + odeint: ODE integrators + scipy.special: For coefficients and roots of orthogonal polynomials - func3d -- a Python function or method of at least three variables in the - order (z, y, x). - (a,b) -- the limits of integration in x: a < b - gfun -- the lower boundary curve in y which is a function taking a single - floating point argument (x) and returning a floating point result: - a lambda function can be useful here. - hfun -- the upper boundary curve in y (same requirements as gfun). - qfun -- the lower boundary surface in z. It must be a function that takes - two floats in the order (x, y) and returns a float. - rfun -- the upper boundary surface in z. (Same requirements as qfun.) - args -- extra arguments to pass to func3d. - epsabs -- absolute tolerance passed directly to the innermost 1-D quadrature - integration. - epsrel -- relative tolerance of the innermost 1-D integrals. - - Outputs: (y, abserr) - - y -- the resultant integral. - abserr -- an estimate of the error. - - See also: - quad - single integral - dblquad - double integral - fixed_quad - fixed-order Gaussian quadrature - quadrature - adaptive Gaussian quadrature - odeint, ode - ODE integrators - simps, trapz, romb - integrators for sampled data - scipy.special - for coefficients and roots of orthogonal polynomials """ return dblquad(_infunc2,a,b,gfun,hfun,(func,qfun,rfun,args),epsabs=epsabs,epsrel=epsrel) diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/quadrature.py python-scipy-0.8.0+dfsg1/scipy/integrate/quadrature.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/quadrature.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/quadrature.py 2010-07-26 15:48:30.000000000 +0100 @@ -7,36 +7,42 @@ from numpy import sum, ones, add, diff, isinf, isscalar, \ asarray, real, trapz, arange, empty import numpy as np +import math def fixed_quad(func,a,b,args=(),n=5): - """Compute a definite integral using fixed-order Gaussian quadrature. - - Description: - - Integrate func from a to b using Gaussian quadrature of order n. - - Inputs: - - func -- a Python function or method to integrate - (must accept vector inputs) - a -- lower limit of integration - b -- upper limit of integration - args -- extra arguments to pass to function. - n -- order of quadrature integration. + """ + Compute a definite integral using fixed-order Gaussian quadrature. - Outputs: (val, None) + Integrate `func` from a to b using Gaussian quadrature of order n. - val -- Gaussian quadrature approximation to the integral. + Parameters + ---------- + func : callable + A Python function or method to integrate (must accept vector inputs). + a : float + Lower limit of integration. + b : float + Upper limit of integration. + args : tuple, optional + Extra arguments to pass to function, if any. + n : int, optional + Order of quadrature integration. Default is 5. 
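dblquad and tplquad, documented above, reduce multiple integrals to nested calls of quad, with the inner limits supplied as callables of the outer variables. A minimal sketch with integrands whose values are known in closed form; the unit disc and the unit cube are illustrative choices only::

    import numpy as np
    from scipy.integrate import dblquad, tplquad

    # Area of the unit disc: y runs between -sqrt(1-x**2) and +sqrt(1-x**2).
    area, abserr = dblquad(lambda y, x: 1.0, -1, 1,
                           lambda x: -np.sqrt(1 - x**2),
                           lambda x: np.sqrt(1 - x**2))
    assert abs(area - np.pi) < 1e-6

    # Volume of the unit cube, as a smoke test of the (z, y, x) argument order.
    vol, abserr = tplquad(lambda z, y, x: 1.0, 0, 1,
                          lambda x: 0.0, lambda x: 1.0,
                          lambda x, y: 0.0, lambda x, y: 1.0)
    assert abs(vol - 1.0) < 1e-10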
- See also: + Returns + ------- + val : float + Gaussian quadrature approximation to the integral - quad - adaptive quadrature using QUADPACK - dblquad, tplquad - double and triple integrals - romberg - adaptive Romberg quadrature - quadrature - adaptive Gaussian quadrature - romb, simps, trapz - integrators for sampled data - cumtrapz - cumulative integration for sampled data + See Also + -------- + quad : adaptive quadrature using QUADPACK + dblquad, tplquad : double and triple integrals + romberg : adaptive Romberg quadrature + quadrature : adaptive Gaussian quadrature + romb, simps, trapz : integrators for sampled data + cumtrapz : cumulative integration for sampled data ode, odeint - ODE integrators + """ [x,w] = p_roots(n) x = real(x) @@ -94,39 +100,52 @@ return vfunc def quadrature(func,a,b,args=(),tol=1.49e-8,maxiter=50, vec_func=True): - """Compute a definite integral using fixed-tolerance Gaussian quadrature. - - Description: + """ + Compute a definite integral using fixed-tolerance Gaussian quadrature. Integrate func from a to b using Gaussian quadrature - with absolute tolerance tol. + with absolute tolerance `tol`. - Inputs: + Parameters + ---------- + func : function + A Python function or method to integrate. + a : float + Lower limit of integration. + b : float + Upper limit of integration. + args : tuple, optional + Extra arguments to pass to function. + tol : float, optional + Iteration stops when error between last two iterates is less than + tolerance. + maxiter : int, optional + Maximum number of iterations. + vec_func : bool, optional + True or False if func handles arrays as arguments (is + a "vector" function). Default is True. + + Returns + ------- + val : float + Gaussian quadrature approximation (within tolerance) to integral. + err : float + Difference between last two estimates of the integral. + + See also + -------- + romberg: adaptive Romberg quadrature + fixed_quad: fixed-order Gaussian quadrature + quad: adaptive quadrature using QUADPACK + dblquad: double integrals + tplquad: triple integrals + romb: integrator for sampled data + simps: integrator for sampled data + trapz: integrator for sampled data + cumtrapz: cumulative integration for sampled data + ode: ODE integrator + odeint: ODE integrator - func -- a Python function or method to integrate. - a -- lower limit of integration. - b -- upper limit of integration. - args -- extra arguments to pass to function. - tol -- iteration stops when error between last two iterates is less than - tolerance. - maxiter -- maximum number of iterations. - vec_func -- True or False if func handles arrays as arguments (is - a "vector" function ). Default is True. - - Outputs: (val, err) - - val -- Gaussian quadrature approximation (within tolerance) to integral. - err -- Difference between last two estimates of the integral. - - See also: - - romberg - adaptive Romberg quadrature - fixed_quad - fixed-order Gaussian quadrature - quad - adaptive quadrature using QUADPACK - dblquad, tplquad - double and triple integrals - romb, simps, trapz - integrators for sampled data - cumtrapz - cumulative integration for sampled data - ode, odeint - ODE integrators """ err = 100.0 val = err @@ -147,20 +166,41 @@ return tuple(l) def cumtrapz(y, x=None, dx=1.0, axis=-1): - """Cumulatively integrate y(x) using samples along the given axis + """ + Cumulatively integrate y(x) using samples along the given axis and the composite trapezoidal rule. If x is None, spacing given by dx is assumed. 
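fixed_quad and quadrature, described above, are the Gaussian counterparts to quad: the first evaluates a single rule of order n, the second raises the order until successive estimates agree to tol and reports that difference as the error. A short sketch on a smooth integrand with a known value::

    import numpy as np
    from scipy.integrate import fixed_quad, quadrature

    # A single Gauss rule of order 5 is already very accurate for sin on [0, pi].
    val, _ = fixed_quad(np.sin, 0, np.pi, n=5)
    assert abs(val - 2.0) < 1e-6

    # The adaptive variant also returns the difference between the last two estimates.
    val, err = quadrature(np.sin, 0, np.pi, tol=1e-10)
    assert abs(val - 2.0) < 1e-8 and err < 1e-9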
- See also: + Parameters + ---------- + y : array + + x : array, optional + + dx : int, optional + + axis : int, optional + Specifies the axis to cumulate: + + - -1 --> X axis + - 0 --> Z axis + - 1 --> Y axis + + See Also + -------- + + quad: adaptive quadrature using QUADPACK + romberg: adaptive Romberg quadrature + quadrature: adaptive Gaussian quadrature + fixed_quad: fixed-order Gaussian quadrature + dblquad: double integrals + tplquad: triple integrals + romb: integrators for sampled data + trapz: integrators for sampled data + cumtrapz: cumulative integration for sampled data + ode: ODE integrators + odeint: ODE integrators - quad - adaptive quadrature using QUADPACK - romberg - adaptive Romberg quadrature - quadrature - adaptive Gaussian quadrature - fixed_quad - fixed-order Gaussian quadrature - dblquad, tplquad - double and triple integrals - romb, trapz - integrators for sampled data - cumtrapz - cumulative integration for sampled data - ode, odeint - ODE integrators """ y = asarray(y) if x is None: @@ -203,39 +243,57 @@ def simps(y, x=None, dx=1, axis=-1, even='avg'): - """Integrate y(x) using samples along the given axis and the composite + """ + Integrate y(x) using samples along the given axis and the composite Simpson's rule. If x is None, spacing of dx is assumed. If there are an even number of samples, N, then there are an odd number of intervals (N-1), but Simpson's rule requires an even number - of intervals. The parameter 'even' controls how this is handled as - follows: - - even='avg': Average two results: 1) use the first N-2 intervals with - a trapezoidal rule on the last interval and 2) use the last - N-2 intervals with a trapezoidal rule on the first interval - - even='first': Use Simpson's rule for the first N-2 intervals with - a trapezoidal rule on the last interval. + of intervals. The parameter 'even' controls how this is handled. - even='last': Use Simpson's rule for the last N-2 intervals with a - trapezoidal rule on the first interval. + Parameters + ---------- + y : array_like + Array to be integrated. + x : array_like, optional + If given, the points at which `y` is sampled. + dx : int, optional + Spacing of integration points along axis of `y`. Only used when + `x` is None. Default is 1. + axis : int, optional + Axis along which to integrate. Default is the last axis. + even : {'avg', 'first', 'str'}, optional + 'avg' : Average two results:1) use the first N-2 intervals with + a trapezoidal rule on the last interval and 2) use the last + N-2 intervals with a trapezoidal rule on the first interval. + + 'first' : Use Simpson's rule for the first N-2 intervals with + a trapezoidal rule on the last interval. + + 'last' : Use Simpson's rule for the last N-2 intervals with a + trapezoidal rule on the first interval. + + See Also + -------- + quad: adaptive quadrature using QUADPACK + romberg: adaptive Romberg quadrature + quadrature: adaptive Gaussian quadrature + fixed_quad: fixed-order Gaussian quadrature + dblquad: double integrals + tplquad: triple integrals + romb: integrators for sampled data + trapz: integrators for sampled data + cumtrapz: cumulative integration for sampled data + ode: ODE integrators + odeint: ODE integrators + Notes + ----- For an odd number of samples that are equally spaced the result is - exact if the function is a polynomial of order 3 or less. If - the samples are not equally spaced, then the result is exact only - if the function is a polynomial of order 2 or less. + exact if the function is a polynomial of order 3 or less. 
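cumtrapz and simps, documented above, work on sampled data rather than callables. The sketch below integrates samples of sin(x) over [0, pi]; the sample count is illustrative (101 samples give 100 intervals, so plain Simpson's rule applies without the 'even' handling)::

    import numpy as np
    from scipy.integrate import cumtrapz, simps

    x = np.linspace(0, np.pi, 101)
    y = np.sin(x)

    # Composite Simpson's rule on the samples; the exact integral is 2.
    assert abs(simps(y, x) - 2.0) < 1e-6

    # cumtrapz returns the running trapezoidal integral, one element shorter than y;
    # its last entry equals the full trapezoidal estimate.
    running = cumtrapz(y, x)
    assert running.shape == (100,)
    assert abs(running[-1] - np.trapz(y, x)) < 1e-12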
If + the samples are not equally spaced, then the result is exact only + if the function is a polynomial of order 2 or less. - See also: - - quad - adaptive quadrature using QUADPACK - romberg - adaptive Romberg quadrature - quadrature - adaptive Gaussian quadrature - fixed_quad - fixed-order Gaussian quadrature - dblquad, tplquad - double and triple integrals - romb, trapz - integrators for sampled data - cumtrapz - cumulative integration for sampled data - ode, odeint - ODE integrators """ y = asarray(y) nd = len(y.shape) @@ -292,20 +350,30 @@ return result def romb(y, dx=1.0, axis=-1, show=False): - """Romberg integration using samples of a function - - Inputs: + """ + Romberg integration using samples of a function - y - a vector of 2**k + 1 equally-spaced samples of a fucntion - dx - the sample spacing. - axis - the axis along which to integrate - show - When y is a single 1-d array, then if this argument is True - print the table showing Richardson extrapolation from the - samples. + Parameters + ----------- + y : array like + a vector of 2**k + 1 equally-spaced samples of a function + + dx : array like + the sample spacing. + + axis : array like? + the axis along which to integrate + + show : Boolean + When y is a single 1-d array, then if this argument is True + print the table showing Richardson extrapolation from the + samples. - Output: ret + Returns + ----------- - ret - The integrated result for each axis. + ret : array_like? + The integrated result for each axis. See also: @@ -317,6 +385,7 @@ simps, trapz - integrators for sampled data cumtrapz - cumulative integration for sampled data ode, odeint - ODE integrators + """ y = asarray(y) nd = len(y.shape) @@ -428,7 +497,7 @@ print '' print '%6s %9s %9s' % ('Steps', 'StepSize', 'Results') for i in range(len(resmat)): - print '%6d %9f' % (2**i, (interval[1]-interval[0])/(i+1.0)), + print '%6d %9f' % (2**i, (interval[1]-interval[0])/(2.**i)), for j in range(i+1): print '%9f' % (resmat[i][j]), print '' @@ -438,25 +507,83 @@ def romberg(function, a, b, args=(), tol=1.48E-8, show=False, divmax=10, vec_func=False): - """Romberg integration of a callable function or method. + """ + Romberg integration of a callable function or method. + + Returns the integral of `function` (a function of one variable) + over the interval (`a`, `b`). - Returns the integral of |function| (a function of one variable) - over |interval| (a sequence of length two containing the lower and - upper limit of the integration interval), calculated using - Romberg integration up to the specified |accuracy|. If |show| is 1, - the triangular array of the intermediate results will be printed. - If |vec_func| is True (default is False), then |function| is + If `show` is 1, the triangular array of the intermediate results + will be printed. If `vec_func` is True (default is False), then `function` is assumed to support vector arguments. - See also: + Parameters + ---------- + function : callable + Function to be integrated. + a : float + Lower limit of integration. + b : float + Upper limit of integration. + + Returns + -------- + results : float + Result of the integration. + + Other Parameters + ---------------- + args : tuple, optional + Extra arguments to pass to function. Each element of `args` will + be passed as a single argument to `func`. Default is to pass no + extra arguments. + tol : float, optional + The desired tolerance. Default is 1.48e-8. + show : bool, optional + Whether to print the results. Default is False. + divmax : int, optional + ?? 
Default is 10. + vec_func : bool, optional + Whether `func` handles arrays as arguments (i.e whether it is a + "vector" function). Default is False. + + See Also + -------- + fixed_quad : Fixed-order Gaussian quadrature. + quad : Adaptive quadrature using QUADPACK. + dblquad, tplquad : Double and triple integrals. + romb, simps, trapz : Integrators for sampled data. + cumtrapz : Cumulative integration for sampled data. + ode, odeint : ODE integrators. + + References + ---------- + .. [1] 'Romberg's method' http://en.wikipedia.org/wiki/Romberg%27s_method + + Examples + -------- + Integrate a gaussian from 0,1 and compare to the error function. + + >>> from scipy.special import erf + >>> gaussian = lambda x: 1/np.sqrt(np.pi) * np.exp(-x**2) + >>> result = romberg(gaussian, 0, 1, show=True) + Romberg integration of from [0, 1] + + :: + + Steps StepSize Results + 1 1.000000 0.385872 + 2 0.500000 0.412631 0.421551 + 4 0.250000 0.419184 0.421368 0.421356 + 8 0.125000 0.420810 0.421352 0.421350 0.421350 + 16 0.062500 0.421215 0.421350 0.421350 0.421350 0.421350 + 32 0.031250 0.421317 0.421350 0.421350 0.421350 0.421350 0.421350 + + The final result is 0.421350396475 after 33 function evaluations. + + >>> print 2*result,erf(1) + 0.84270079295 0.84270079295 - quad - adaptive quadrature using QUADPACK - quadrature - adaptive Gaussian quadrature - fixed_quad - fixed-order Gaussian quadrature - dblquad, tplquad - double and triple integrals - romb, simps, trapz - integrators for sampled data - cumtrapz - cumulative integration for sampled data - ode, odeint - ODE integrators """ if isinf(a) or isinf(b): raise ValueError("Romberg integration only available for finite limits.") @@ -544,35 +671,46 @@ } def newton_cotes(rn,equal=0): - r"""Return weights and error coefficient for Netwon-Cotes integration. + """ + Return weights and error coefficient for Newton-Cotes integration. + + Suppose we have (N+1) samples of f at the positions + x_0, x_1, ..., x_N. Then an N-point Newton-Cotes formula for the + integral between x_0 and x_N is: + + :math:`\\int_{x_0}^{x_N} f(x)dx = \\Delta x \\sum_{i=0}^{N} a_i f(x_i) + + B_N (\\Delta x)^{N+2} f^{N+1} (\\xi)` + + where :math:`\\xi \\in [x_0,x_N]` and :math:`\\Delta x = \\frac{x_N-x_0}{N}` + is the averages samples spacing. + + If the samples are equally-spaced and N is even, then the error + term is :math:`B_N (\\Delta x)^{N+3} f^{N+2}(\\xi)`. + + Parameters + ---------- + + rn : int + The integer order for equally-spaced data + or the relative positions of the samples with + the first sample at 0 and the last at N, where + N+1 is the length of rn. N is the order of the Newton + equal: int, optional + Set to 1 to enforce equally spaced data + + Returns + ------- + an : array + 1-d array of weights to apply to the function at + the provided sample positions. + B : float + error coefficient + + Notes + ----- + Normally, the Newton-Cotes rules are used on smaller integration + regions and a composite rule is used to return the total integral. - Suppose we have (N+1) samples of f at the positions - x_0, x_1, ..., x_N. Then an N-point Newton-Cotes formula for the - integral between x_0 and x_N is: - - $\int_{x_0}^{x_N} f(x)dx = \Delta x \sum_{i=0}^{N} a_i f(x_i) - + B_N (\Delta x)^{N+2} f^{N+1} (\xi)$ - - where $\xi \in [x_0,x_N]$ and $\Delta x = \frac{x_N-x_0}{N}$ is the - averages samples spacing. - - If the samples are equally-spaced and N is even, then the error - term is $B_N (\Delta x)^{N+3} f^{N+2}(\xi)$. 
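romb, described above, expects 2**k + 1 equally spaced samples and extrapolates the trapezoidal estimates, much like the romberg() table shown in the docstring above but starting from data instead of a callable. A minimal sketch with k = 6::

    import numpy as np
    from scipy.integrate import romb

    # 2**6 + 1 = 65 equally spaced samples of sin on [0, pi]; the exact integral is 2.
    x = np.linspace(0, np.pi, 65)
    result = romb(np.sin(x), dx=x[1] - x[0])
    assert abs(result - 2.0) < 1e-9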
- - Normally, the Newton-Cotes rules are used on smaller integration - regions and a composite rule is used to return the total integral. - - Inputs: - rn -- the integer order for equally-spaced data - or the relative positions of the samples with - the first sample at 0 and the last at N, where - N+1 is the length of rn. N is the order of the Newt - equal -- Set to 1 to enforce equally spaced data - - Outputs: - an -- 1-d array of weights to apply to the function at - the provided sample positions. - B -- error coefficient """ try: N = len(rn)-1 diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/SConscript python-scipy-0.8.0+dfsg1/scipy/integrate/SConscript --- python-scipy-0.7.2+dfsg1/scipy/integrate/SConscript 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/SConscript 2010-07-26 15:48:30.000000000 +0100 @@ -1,4 +1,4 @@ -# Last Change: Thu Jun 12 07:00 PM 2008 J +# Last Change: Wed Apr 08 11:00 PM 2009 J # vim:syntax=python from os.path import join as pjoin import warnings @@ -39,6 +39,10 @@ "dqwgts.f"]] quadpack = env.DistutilsStaticExtLibrary('quadpack', source = src) + +src = [pjoin('dop', f) for f in ['dop853.f', 'dopri5.f']] +env.DistutilsStaticExtLibrary('dop', source=src) + # Build odepack src = [pjoin("odepack", s) for s in [ "adjlr.f", "aigbt.f", "ainvg.f", "blkdta000.f", "bnorm.f", "cdrv.f", "cfode.f", "cntnzu.f", "ddasrt.f", @@ -68,3 +72,8 @@ # Build vode odenv.NumpyPythonExtension('vode', source = 'vode.pyf') + +# Dop extension +dopenv = env.Clone() +dopenv.Prepend(LIBS=['dop']) +dopenv.NumpyPythonExtension('_dop', source='dop.pyf') diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/setup.py python-scipy-0.8.0+dfsg1/scipy/integrate/setup.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/setup.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/setup.py 2010-07-26 15:48:30.000000000 +0100 @@ -18,6 +18,8 @@ sources=[join('quadpack','*.f')]) config.add_library('odepack', sources=[join('odepack','*.f')]) + config.add_library('dop', + sources=[join('dop','*.f')]) # should we try to weed through files and replace with calls to # LAPACK routines? # Yes, someday... @@ -33,6 +35,7 @@ depends=['quadpack.h','__quadpack.h']) # odepack libs = ['odepack','linpack_lite','mach'] + # Remove libraries key from blas_opt if 'libraries' in blas_opt: # key doesn't exist on OS X ... @@ -54,6 +57,11 @@ libraries=libs, **newblas) + # dop + config.add_extension('_dop', + sources=['dop.pyf'], + libraries=['dop']) + config.add_data_dir('tests') return config diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/tests/test_integrate.py python-scipy-0.8.0+dfsg1/scipy/integrate/tests/test_integrate.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/tests/test_integrate.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/tests/test_integrate.py 2010-07-26 15:48:30.000000000 +0100 @@ -1,4 +1,4 @@ -# Authors: Nils Wagner, Ed Schofield, Pauli Virtanen +# Authors: Nils Wagner, Ed Schofield, Pauli Virtanen, John Travers """ Tests for numerical integration. 
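The newton_cotes weights documented above are normalised so that the integral is the sample spacing times the weighted sum of function values; the unit tests added later in this diff check the first few orders against the classical rules. As an illustration, the N = 4 rule (Boole's rule) applied to sin(x) on [0, pi]::

    import numpy as np
    from scipy.integrate import newton_cotes

    # Boole's rule: the N = 4 weights for equally spaced samples.
    an, B = newton_cotes(4, equal=1)

    # Integral of sin over [0, pi] from 5 samples: dx * sum(a_i * f(x_i)).
    x = np.linspace(0, np.pi, 5)
    dx = x[1] - x[0]
    estimate = dx * np.dot(an, np.sin(x))
    assert abs(estimate - 2.0) < 5e-3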
""" @@ -8,7 +8,7 @@ allclose from numpy.testing import * -from scipy.integrate import odeint, ode +from scipy.integrate import odeint, ode, complex_ode #------------------------------------------------------------------------------ # Test ODE integrators @@ -69,6 +69,71 @@ self._do_problem(problem, 'zvode', 'adams') self._do_problem(problem, 'zvode', 'bdf') + def test_dopri5(self): + """Check the dopri5 solver""" + for problem_cls in PROBLEMS: + problem = problem_cls() + if problem.cmplx: continue + if problem.stiff: continue + if hasattr(problem, 'jac'): continue + self._do_problem(problem, 'dopri5') + + def test_dop853(self): + """Check the dop853 solver""" + for problem_cls in PROBLEMS: + problem = problem_cls() + if problem.cmplx: continue + if problem.stiff: continue + if hasattr(problem, 'jac'): continue + self._do_problem(problem, 'dop853') + +class TestComplexOde(TestCase): + """ + Check integrate.complex_ode + """ + def _do_problem(self, problem, integrator, method='adams'): + + # ode has callback arguments in different order than odeint + f = lambda t, z: problem.f(z, t) + jac = None + if hasattr(problem, 'jac'): + jac = lambda t, z: problem.jac(z, t) + ig = complex_ode(f, jac) + ig.set_integrator(integrator, + atol=problem.atol/10, + rtol=problem.rtol/10, + method=method) + ig.set_initial_value(problem.z0, t=0.0) + z = ig.integrate(problem.stop_t) + + assert ig.successful(), (problem, method) + assert problem.verify(array([z]), problem.stop_t), (problem, method) + + def test_vode(self): + """Check the vode solver""" + for problem_cls in PROBLEMS: + problem = problem_cls() + if not problem.stiff: + self._do_problem(problem, 'vode', 'adams') + else: + self._do_problem(problem, 'vode', 'bdf') + + def test_dopri5(self): + """Check the dopri5 solver""" + for problem_cls in PROBLEMS: + problem = problem_cls() + if problem.stiff: continue + if hasattr(problem, 'jac'): continue + self._do_problem(problem, 'dopri5') + + def test_dop853(self): + """Check the dop853 solver""" + for problem_cls in PROBLEMS: + problem = problem_cls() + if problem.stiff: continue + if hasattr(problem, 'jac'): continue + self._do_problem(problem, 'dop853') + #------------------------------------------------------------------------------ # Test problems #------------------------------------------------------------------------------ diff -Nru python-scipy-0.7.2+dfsg1/scipy/integrate/tests/test_quadrature.py python-scipy-0.8.0+dfsg1/scipy/integrate/tests/test_quadrature.py --- python-scipy-0.7.2+dfsg1/scipy/integrate/tests/test_quadrature.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/integrate/tests/test_quadrature.py 2010-07-26 15:48:30.000000000 +0100 @@ -3,7 +3,7 @@ from numpy import cos, sin, pi from numpy.testing import * -from scipy.integrate import quadrature, romberg, romb +from scipy.integrate import quadrature, romberg, romb, newton_cotes class TestQuadrature(TestCase): def quad(self, x, a, b, args): @@ -35,6 +35,45 @@ expected_val = 0.45969769413185085 assert_almost_equal(valmath, expected_val, decimal=7) + def test_newton_cotes(self): + """Test the first few degrees, for evenly spaced points.""" + n = 1 + wts, errcoff = newton_cotes(n, 1) + assert_equal(wts, n*numpy.array([0.5, 0.5])) + assert_almost_equal(errcoff, -n**3/12.0) + + n = 2 + wts, errcoff = newton_cotes(n, 1) + assert_almost_equal(wts, n*numpy.array([1.0, 4.0, 1.0])/6.0) + assert_almost_equal(errcoff, -n**5/2880.0) + + n = 3 + wts, errcoff = newton_cotes(n, 1) + assert_almost_equal(wts, n*numpy.array([1.0, 3.0, 
3.0, 1.0])/8.0) + assert_almost_equal(errcoff, -n**5/6480.0) + + n = 4 + wts, errcoff = newton_cotes(n, 1) + assert_almost_equal(wts, n*numpy.array([7.0, 32.0, 12.0, 32.0, 7.0])/90.0) + assert_almost_equal(errcoff, -n**7/1935360.0) + + def test_newton_cotes2(self): + """Test newton_cotes with points that are not evenly spaced.""" + + x = numpy.array([0.0, 1.5, 2.0]) + y = x**2 + wts, errcoff = newton_cotes(x) + exact_integral = 8.0/3 + numeric_integral = numpy.dot(wts, y) + assert_almost_equal(numeric_integral, exact_integral) + + x = numpy.array([0.0, 1.4, 2.1, 3.0]) + y = x**2 + wts, errcoff = newton_cotes(x) + exact_integral = 9.0 + numeric_integral = numpy.dot(wts, y) + assert_almost_equal(numeric_integral, exact_integral) + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/fitpack2.py python-scipy-0.8.0+dfsg1/scipy/interpolate/fitpack2.py --- python-scipy-0.7.2+dfsg1/scipy/interpolate/fitpack2.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/interpolate/fitpack2.py 2010-07-26 15:48:30.000000000 +0100 @@ -52,16 +52,61 @@ } class UnivariateSpline(object): - """ Univariate spline s(x) of degree k on the interval - [xb,xe] calculated from a given set of data points - (x,y). + """ + One-dimensional smoothing spline fit to a given set of data points. - Can include least-squares fitting. + Fits a spline y=s(x) of degree `k` to the provided `x`,`y` data. `s` + specifies the number of knots by specifying a smoothing condition. - See also: + Parameters + ---------- + x : sequence + input dimension of data points -- must be increasing + y : sequence + input dimension of data points + w : sequence or None, optional + weights for spline fitting. Must be positive. If None (default), + weights are all equal. + bbox : sequence or None, optional + 2-sequence specifying the boundary of the approximation interval. If + None (default), bbox=[x[0],x[-1]]. + k : int, optional + Degree of the smoothing spline. Must be <= 5. + s : float or None, optional + Positive smoothing factor used to choose the number of knots. Number + of knots will be increased until the smoothing condition is satisfied: + + sum((w[i]*(y[i]-s(x[i])))**2,axis=0) <= s + + If None (default), s=len(w) which should be a good value if 1/w[i] is + an estimate of the standard deviation of y[i]. If 0, spline will + interpolate through all data points. 
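The smoothing condition quoted above can be checked directly: after fitting, the weighted sum of squared residuals is available from the existing get_residual() accessor (not part of this hunk), and s=0 reproduces plain interpolation. A minimal sketch, with a fixed random seed so the numbers are reproducible::

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    np.random.seed(0)
    x = np.linspace(-3, 3, 100)
    y = np.exp(-x**2) + np.random.randn(100) / 10

    # With s given, FITPACK keeps sum((y - s(x))**2) at or just below s,
    # using far fewer knots than data points.
    s = 1.0
    spl = UnivariateSpline(x, y, s=s)
    assert spl.get_residual() < s * 1.01   # the bound is met to within FITPACK's tolerance
    assert len(spl.get_knots()) < len(x)   # a small knot set, not one knot per point

    # s=0 forces interpolation through every data point.
    interp = UnivariateSpline(x, y, s=0)
    assert np.allclose(interp(x), y)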
+ + + See Also + -------- + InterpolatedUnivariateSpline : Subclass with smoothing forced to 0 + LSQUnivariateSpline : Subclass in which knots are user-selected instead of + being set by smoothing condition + splrep : An older, non object-oriented wrapping of FITPACK + splev, sproot, splint, spalde + BivariateSpline : A similar class for two-dimensional spline interpolation + + + + Examples + -------- + >>> from numpy import linspace,exp + >>> from numpy.random import randn + >>> from scipy.interpolate import UnivariateSpline + >>> x = linspace(-3,3,100) + >>> y = exp(-x**2) + randn(100)/10 + >>> s = UnivariateSpline(x,y,s=1) + >>> xs = linspace(-3,3,1000) + >>> ys = s(xs) + + xs,ys is now a smoothed, super-sampled version of the noisy gaussian x,y - splrep, splev, sproot, spint, spalde - an older wrapping of FITPACK - BivariateSpline - a similar class for bivariate spline interpolation """ def __init__(self, x, y, w=None, bbox = [None]*2, k=3, s=None): @@ -217,8 +262,52 @@ 'finding roots unsupported for non-cubic splines' class InterpolatedUnivariateSpline(UnivariateSpline): - """ Interpolated univariate spline approximation. Identical to - UnivariateSpline with less error checking. + """ + One-dimensional interpolating spline for a given set of data points. + + Fits a spline y=s(x) of degree `k` to the provided `x`,`y` data. Spline + function passes through all provided points. Equivalent to + `UnivariateSpline` with s=0. + + Parameters + ---------- + x : sequence + input dimension of data points -- must be increasing + y : sequence + input dimension of data points + w : sequence or None, optional + weights for spline fitting. Must be positive. If None (default), + weights are all equal. + bbox : sequence or None, optional + 2-sequence specifying the boundary of the approximation interval. If + None (default), bbox=[x[0],x[-1]]. + k : int, optional + Degree of the smoothing spline. Must be <= 5. + + + See Also + -------- + UnivariateSpline : Superclass -- allows knots to be selected by a + smoothing condition + LSQUnivariateSpline : spline for which knots are user-selected + splrep : An older, non object-oriented wrapping of FITPACK + splev, sproot, splint, spalde + BivariateSpline : A similar class for two-dimensional spline interpolation + + + + Examples + -------- + >>> from numpy import linspace,exp + >>> from numpy.random import randn + >>> from scipy.interpolate import UnivariateSpline + >>> x = linspace(-3,3,100) + >>> y = exp(-x**2) + randn(100)/10 + >>> s = UnivariateSpline(x,y,s=1) + >>> xs = linspace(-3,3,1000) + >>> ys = s(xs) + + xs,ys is now a smoothed, super-sampled version of the noisy gaussian x,y """ @@ -241,9 +330,61 @@ self._reset_class() class LSQUnivariateSpline(UnivariateSpline): - """ Weighted least-squares univariate spline - approximation. Appears to be identical to UnivariateSpline with - more error checking. + """ + One-dimensional spline with explicit internal knots. + + Fits a spline y=s(x) of degree `k` to the provided `x`,`y` data. `t` + specifies the internal knots of the spline + + Parameters + ---------- + x : sequence + input dimension of data points -- must be increasing + y : sequence + input dimension of data points + t: sequence + interior knots of the spline. 
Must be in ascending order + and bbox[0]>> from numpy import linspace,exp + >>> from numpy.random import randn + >>> from scipy.interpolate import LSQUnivariateSpline + >>> x = linspace(-3,3,100) + >>> y = exp(-x**2) + randn(100)/10 + >>> t = [-1,0,1] + >>> s = LSQUnivariateSpline(x,y,t) + >>> xs = linspace(-3,3,1000) + >>> ys = s(xs) + + xs,ys is now a smoothed, super-sampled version of the noisy gaussian x,y + with knots [-3,-1,0,1,3] """ diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/fitpack.py python-scipy-0.8.0+dfsg1/scipy/interpolate/fitpack.py --- python-scipy-0.7.2+dfsg1/scipy/interpolate/fitpack.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/interpolate/fitpack.py 2010-07-26 15:48:30.000000000 +0100 @@ -432,43 +432,45 @@ #return l[0] def splev(x,tck,der=0): - """Evaulate a B-spline and its derivatives. - - Description: + """ + Evaluate a B-spline and its derivatives. - Given the knots and coefficients of a B-spline representation, evaluate - the value of the smoothing polynomial and it's derivatives. - This is a wrapper around the FORTRAN routines splev and splder of FITPACK. + Given the knots and coefficients of a B-spline representation, evaluate + the value of the smoothing polynomial and it's derivatives. + This is a wrapper around the FORTRAN routines splev and splder of FITPACK. - Inputs: + Parameters + ---------- + x (u) -- a 1-D array of points at which to return the value of the + smoothed spline or its derivatives. If tck was returned from + splprep, then the parameter values, u should be given. + tck -- A sequence of length 3 returned by splrep or splprep containg the + knots, coefficients, and degree of the spline. + der -- The order of derivative of the spline to compute (must be less than + or equal to k). - x (u) -- a 1-D array of points at which to return the value of the - smoothed spline or its derivatives. If tck was returned from - splprep, then the parameter values, u should be given. - tck -- A sequence of length 3 returned by splrep or splprep containg the - knots, coefficients, and degree of the spline. - der -- The order of derivative of the spline to compute (must be less than - or equal to k). - - Outputs: (y, ) - - y -- an array of values representing the spline function or curve. - If tck was returned from splrep, then this is a list of arrays - representing the curve in N-dimensional space. + Returns + ------- + y -- an array of values representing the spline function or curve. + If tck was returned from splrep, then this is a list of arrays + representing the curve in N-dimensional space. + + See Also + -------- + splprep, splrep, sproot, spalde, splint : evaluation, roots, integral + bisplrep, bisplev : bivariate splines + UnivariateSpline, BivariateSpline : + An alternative wrapping of the FITPACK functions. - See also: - splprep, splrep, sproot, spalde, splint - evaluation, roots, integral - bisplrep, bisplev - bivariate splines - UnivariateSpline, BivariateSpline - an alternative wrapping - of the FITPACK functions + References + ---------- + .. [1] C. de Boor, "On calculating with b-splines", J. Approximation + Theory, 6, p.50-62, 1972. + .. [2] M.G. Cox, "The numerical evaluation of b-splines", J. Inst. Maths + Applics, 10, p.134-149, 1972. + .. [3] P. Dierckx, "Curve and surface fitting with splines", Monographs + on Numerical Analysis, Oxford University Press, 1993. - Notes: - de Boor C : On calculating with b-splines, J. Approximation Theory - 6 (1972) 50-62. - Cox M.G. 
: The numerical evaluation of b-splines, J. Inst. Maths - Applics 10 (1972) 134-149. - Dierckx P. : Curve and surface fitting with splines, Monographs on - Numerical Analysis, Oxford University Press, 1993. """ t,c,k=tck try: @@ -489,37 +491,39 @@ return y[0] def splint(a,b,tck,full_output=0): - """Evaluate the definite integral of a B-spline. - - Description: - - Given the knots and coefficients of a B-spline, evaluate the definite - integral of the smoothing polynomial between two given points. + """ + Evaluate the definite integral of a B-spline. - Inputs: + Given the knots and coefficients of a B-spline, evaluate the definite + integral of the smoothing polynomial between two given points. - a, b -- The end-points of the integration interval. - tck -- A length 3 sequence describing the given spline (See splev). - full_output -- Non-zero to return optional output. + Parameters + ---------- + a, b -- The end-points of the integration interval. + tck -- A length 3 sequence describing the given spline (See splev). + full_output -- Non-zero to return optional output. - Outputs: (integral, {wrk}) + Returns + ------- + integral -- The resulting integral. - integral -- The resulting integral. - wrk -- An array containing the integrals of the normalized B-splines defined - on the set of knots. + wrk -- An array containing the integrals of the + normalized B-splines defined on the set of knots. + See Also + -------- + splprep, splrep, sproot, spalde, splev : evaluation, roots, integral + bisplrep, bisplev : bivariate splines + UnivariateSpline, BivariateSpline : + An alternative wrapping of the FITPACK functions. - See also: - splprep, splrep, sproot, spalde, splev - evaluation, roots, integral - bisplrep, bisplev - bivariate splines - UnivariateSpline, BivariateSpline - an alternative wrapping - of the FITPACK functions + References + ---------- + .. [1] P.W. Gaffney, The calculation of indefinite integrals of b-splines", + J. Inst. Maths Applics, 17, p.37-41, 1976. + .. [2] P. Dierckx, "Curve and surface fitting with splines", Monographs + on Numerical Analysis, Oxford University Press, 1993. - Notes: - Gaffney P.W. : The calculation of indefinite integrals of b-splines - J. Inst. Maths Applics 17 (1976) 37-41. - Dierckx P. : Curve and surface fitting with splines, Monographs on - Numerical Analysis, Oxford University Press, 1993. """ t,c,k=tck try: @@ -535,29 +539,44 @@ else: return aint def sproot(tck,mest=10): - """Find the roots of a cubic B-spline. + """ + Find the roots of a cubic B-spline. - Description: + Given the knots (>=8) and coefficients of a cubic B-spline return the + roots of the spline. - Given the knots (>=8) and coefficients of a cubic B-spline return the - roots of the spline. + Parameters + ---------- - Inputs: + tck -- A length 3 sequence describing the given spline (See splev). + The number of knots must be >= 8. The knots must be a montonically + increasing sequence. - tck -- A length 3 sequence describing the given spline (See splev). - The number of knots must be >= 8. The knots must be a montonically - increasing sequence. - mest -- An estimate of the number of zeros (Default is 10). + mest -- An estimate of the number of zeros (Default is 10) - Outputs: (zeros, ) - zeros -- An array giving the roots of the spline. 
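A minimal sketch of the (t, c, k) workflow that the reworked splev, splint and sproot docstrings describe, assuming a smooth sine sample built with splrep; values in the comments are only approximate:

    import numpy as np
    from scipy.interpolate import splrep, splev, splint, sproot

    x = np.linspace(0, 10, 50)
    tck = splrep(x, np.sin(x))                   # (knots, coefficients, degree), cubic by default

    ynew = splev(np.linspace(0, 10, 200), tck)   # evaluate the spline on a finer grid
    slope = splev(2.0, tck, der=1)               # first derivative at x = 2
    area = splint(0, np.pi, tck)                 # definite integral, roughly 2
    zeros = sproot(tck)                          # roots of the cubic spline, near pi, 2*pi, 3*pi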
+ Returns + ------- - See also: - splprep, splrep, splint, spalde, splev - evaluation, roots, integral - bisplrep, bisplev - bivariate splines - UnivariateSpline, BivariateSpline - an alternative wrapping - of the FITPACK functions + zeros -- An array giving the roots of the spline. + + See also + -------- + splprep, splrep, splint, spalde, splev : + evaluation, roots, integral + bisplrep, bisplev : + bivariate splines + UnivariateSpline, BivariateSpline : + An alternative wrapping of the FITPACK functions. + + References + ---------- + .. [1] C. de Boor, "On calculating with b-splines", J. Approximation + Theory, 6, p.50-62, 1972. + .. [2] M.G. Cox, "The numerical evaluation of b-splines", J. Inst. Maths + Applics, 10, p.134-149, 1972. + .. [3] P. Dierckx, "Curve and surface fitting with splines", Monographs + on Numerical Analysis, Oxford University Press, 1993. """ t,c,k=tck @@ -868,40 +887,49 @@ return dfitpack.dblint(tx,ty,c,kx,ky,xb,xe,yb,ye) def insert(x,tck,m=1,per=0): - """Insert knots into a B-spline. + """ + Insert knots into a B-spline. - Description: + Given the knots and coefficients of a B-spline representation, create a + new B-spline with a knot inserted m times at point x. + This is a wrapper around the FORTRAN routine insert of FITPACK. - Given the knots and coefficients of a B-spline representation, create a - new B-spline with a knot inserted m times at point x. - This is a wrapper around the FORTRAN routine insert of FITPACK. + Parameters + ---------- - Inputs: + x (u) -- A 1-D point at which to insert a new knot(s). If tck was returned + from splprep, then the parameter values, u should be given. + tck -- A sequence of length 3 returned by splrep or splprep containg the + knots, coefficients, and degree of the spline. + + m -- The number of times to insert the given knot (its multiplicity). - x (u) -- A 1-D point at which to insert a new knot(s). If tck was returned - from splprep, then the parameter values, u should be given. - tck -- A sequence of length 3 returned by splrep or splprep containg the - knots, coefficients, and degree of the spline. - m -- The number of times to insert the given knot (its multiplicity). - per -- If non-zero, input spline is considered periodic. + per -- If non-zero, input spline is considered periodic. - Outputs: tck + Returns + ------- - tck -- (t,c,k) a tuple containing the vector of knots, the B-spline + tck -- (t,c,k) a tuple containing the vector of knots, the B-spline coefficients, and the degree of the new spline. - Requirements: - t(k+1) <= x <= t(n-k), where k is the degree of the spline. - In case of a periodic spline (per != 0) there must be - either at least k interior knots t(j) satisfying t(k+1)>> KroghInterpolator([0,0,1],[0,2,0]) + + This constructs the quadratic 2*X**2-2*X. The derivative condition + is indicated by the repeated zero in the xi array; the corresponding + yi values are 0, the function value, and 2, the derivative value. + + For another example, given xi, yi, and a derivative ypi for each + point, appropriate arrays can be constructed as: + + >>> xi_k, yi_k = np.repeat(xi, 2), np.ravel(np.dstack((yi,ypi))) + >>> KroghInterpolator(xi_k, yi_k) + + To produce a vector-valued polynomial, supply a higher-dimensional + array for yi: + + >>> KroghInterpolator([0,1],[[2,3],[4,5]]) + + This constructs a linear polynomial giving (2,3) at 0 and (4,5) at 1. + """ self.xi = np.asarray(xi) self.yi = np.asarray(yi) @@ -97,7 +131,7 @@ whether the interpolator is vector-valued or scalar-valued. 
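A short sketch of the derivative-matching construction that the new KroghInterpolator docstring describes; a repeated abscissa in xi marks a derivative condition, and the repeat/dstack recipe below follows the docstring's own suggestion:

    import numpy as np
    from scipy.interpolate import KroghInterpolator

    # conditions p(0)=0, p'(0)=2, p(1)=0: the repeated 0 in xi flags the derivative value
    p = KroghInterpolator([0, 0, 1], [0, 2, 0])
    p(0.5)                                  # evaluate the resulting quadratic

    # interleave value/derivative pairs for every point, as the docstring outlines
    xi, yi, ypi = np.array([0., 1., 2.]), np.array([1., 0., 1.]), np.zeros(3)
    xi_k, yi_k = np.repeat(xi, 2), np.ravel(np.dstack((yi, ypi)))
    hermite = KroghInterpolator(xi_k, yi_k)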
If x is a vector, returns a vector of values. """ - if np.isscalar(x): + if _isscalar(x): scalar = True m = 1 else: @@ -155,7 +189,7 @@ [2.0,2.0], [3.0,3.0]]) """ - if np.isscalar(x): + if _isscalar(x): scalar = True m = 1 else: @@ -283,7 +317,7 @@ P = KroghInterpolator(xi, yi) if der==0: return P(x) - elif np.isscalar(der): + elif _isscalar(der): return P.derivative(x,der=der) else: return P.derivatives(x,der=np.amax(der)+1)[der] @@ -292,9 +326,9 @@ def approximate_taylor_polynomial(f,x,degree,scale,order=None): - """Estimate the Taylor polynomial of f at x by polynomial fitting + """ + Estimate the Taylor polynomial of f at x by polynomial fitting. - A polynomial Parameters ---------- f : callable @@ -302,32 +336,33 @@ a vector of x values. x : scalar The point at which the polynomial is to be evaluated. - degree : integer + degree : int The degree of the Taylor polynomial scale : scalar The width of the interval to use to evaluate the Taylor polynomial. Function values spread over a range this wide are used to fit the polynomial. Must be chosen carefully. - order : integer or None + order : int or None The order of the polynomial to be used in the fitting; f will be - evaluated order+1 times. If None, use degree. + evaluated ``order+1`` times. If None, use `degree`. Returns ------- - p : poly1d - the Taylor polynomial (translated to the origin, so that + p : poly1d instance + The Taylor polynomial (translated to the origin, so that for example p(0)=f(x)). Notes ----- - The appropriate choice of "scale" is a tradeoff - too large and the + The appropriate choice of "scale" is a trade-off; too large and the function differs from its Taylor polynomial too much to get a good - answer, too small and roundoff errors overwhelm the higher-order terms. + answer, too small and round-off errors overwhelm the higher-order terms. The algorithm used becomes numerically unstable around order 30 even under ideal circumstances. Choosing order somewhat larger than degree may improve the higher-order terms. + """ if order is None: order=degree @@ -492,7 +527,7 @@ weights, that is, it constructs an intermediate array of size N by M, where N is the degree of the polynomial. 
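The np.isscalar to _isscalar substitutions in these hunks exist because a zero-dimensional array is not a Python scalar; the small _isscalar helper added a little further down treats it as one, which is exactly what the new P(np.array(7)) assertions in test_polyint.py exercise. A brief sketch of the behaviour:

    import numpy as np

    np.isscalar(7.0)             # True
    np.isscalar(np.array(7.0))   # False: a 0-dim array is not a Python scalar

    def _isscalar(x):
        # same test as the helper added in polyint.py: scalars or 0-dim arrays
        return np.isscalar(x) or hasattr(x, 'shape') and x.shape == ()

    _isscalar(np.array(7.0))     # True, so interpolators return scalar-shaped output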
""" - scalar = np.isscalar(x) + scalar = _isscalar(x) x = np.atleast_1d(x) c = np.subtract.outer(x,self.xi) z = c==0 @@ -553,7 +588,7 @@ """ return BarycentricInterpolator(xi, yi)(x) - + class PiecewisePolynomial(object): """Piecewise polynomial curve specified by points and derivatives @@ -707,7 +742,7 @@ """ for i in xrange(len(xi)): - if orders is None or np.isscalar(orders): + if orders is None or _isscalar(orders): self.append(xi[i],yi[i],orders) else: self.append(xi[i],yi[i],orders[i]) @@ -723,7 +758,7 @@ ------- y : scalar or array-like of length R or length N or N by R """ - if np.isscalar(x): + if _isscalar(x): pos = np.clip(np.searchsorted(self.xi, x) - 1, 0, self.n-2) y = self.polynomials[pos](x) else: @@ -773,7 +808,7 @@ y : array-like of shape der by R or der by N or der by N by R """ - if np.isscalar(x): + if _isscalar(x): pos = np.clip(np.searchsorted(self.xi, x) - 1, 0, self.n-2) y = self.polynomials[pos].derivatives(x,der=der) else: @@ -827,7 +862,72 @@ P = PiecewisePolynomial(xi, yi, orders) if der==0: return P(x) - elif np.isscalar(der): + elif _isscalar(der): return P.derivative(x,der=der) else: return P.derivatives(x,der=np.amax(der)+1)[der] + +def _isscalar(x): + """Check whether x is if a scalar type, or 0-dim""" + return np.isscalar(x) or hasattr(x, 'shape') and x.shape == () + +def _edge_case(m0, d1): + return np.where((d1==0) | (m0==0), 0.0, 1.0/(1.0/m0+1.0/d1)) + +def _find_derivatives(x, y): + # Determine the derivatives at the points y_k, d_k, by using + # PCHIP algorithm is: + # We choose the derivatives at the point x_k by + # Let m_k be the slope of the kth segment (between k and k+1) + # If m_k=0 or m_{k-1}=0 or sgn(m_k) != sgn(m_{k-1}) then d_k == 0 + # else use weighted harmonic mean: + # w_1 = 2h_k + h_{k-1}, w_2 = h_k + 2h_{k-1} + # 1/d_k = 1/(w_1 + w_2)*(w_1 / m_k + w_2 / m_{k-1}) + # where h_k is the spacing between x_k and x_{k+1} + + hk = x[1:] - x[:-1] + mk = (y[1:] - y[:-1]) / hk + smk = np.sign(mk) + condition = ((smk[1:] != smk[:-1]) | (mk[1:]==0) | (mk[:-1]==0)) + + w1 = 2*hk[1:] + hk[:-1] + w2 = hk[1:] + 2*hk[:-1] + whmean = 1.0/(w1+w2)*(w1/mk[1:] + w2/mk[:-1]) + + dk = np.zeros_like(y) + dk[1:-1][condition] = 0.0 + dk[1:-1][~condition] = 1.0/whmean[~condition] + + # For end-points choose d_0 so that 1/d_0 = 1/m_0 + 1/d_1 unless + # one of d_1 or m_0 is 0, then choose d_0 = 0 + + dk[0] = _edge_case(mk[0],dk[1]) + dk[-1] = _edge_case(mk[-1],dk[-2]) + return dk + + +def pchip(x, y): + """PCHIP 1-d monotonic cubic interpolation + + Description + ----------- + x and y are arrays of values used to approximate some function f: + y = f(x) + This class factory function returns a callable class whose __call__ method + uses monotonic cubic, interpolation to find the value of new points. + + Parameters + ---------- + x : array + A 1D array of monotonically increasing real values. x cannot + include duplicate values (otherwise f is overspecified) + y : array + A 1-D array of real values. y's length along the interpolation + axis must be equal to the length of x. + + Assumes x is sorted in monotonic order (e.g. 
x[1] > x[0]) + """ + derivs = _find_derivatives(x,y) + return PiecewisePolynomial(x, zip(y, derivs), orders=3, direction=None) + + diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/rbf.py python-scipy-0.8.0+dfsg1/scipy/interpolate/rbf.py --- python-scipy-0.7.2+dfsg1/scipy/interpolate/rbf.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/interpolate/rbf.py 2010-07-26 15:48:30.000000000 +0100 @@ -3,6 +3,7 @@ Written by John Travers , February 2007 Based closely on Matlab code by Alex Chirokov Additional, large, improvements by Robert Hetland +Some additional alterations by Travis Oliphant Permission to use, modify, and distribute this software is given under the terms of the SciPy (BSD style) license. See LICENSE.txt that came with @@ -42,10 +43,11 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ -from numpy import (sqrt, log, asarray, newaxis, all, dot, float64, exp, eye, - isnan, float_) +from numpy import (sqrt, log, asarray, newaxis, all, dot, exp, eye, + float_) from scipy import linalg + class Rbf(object): """ Rbf(*args) @@ -58,17 +60,21 @@ *args : arrays x, y, z, ..., d, where x, y, z, ... are the coordinates of the nodes and d is the array of values at the nodes - function : str, optional + function : str or callable, optional The radial basis function, based on the radius, r, given by the norm (defult is Euclidean distance); the default is 'multiquadric':: 'multiquadric': sqrt((r/self.epsilon)**2 + 1) - 'inverse multiquadric': 1.0/sqrt((r/self.epsilon)**2 + 1) + 'inverse': 1.0/sqrt((r/self.epsilon)**2 + 1) 'gaussian': exp(-(r/self.epsilon)**2) 'linear': r 'cubic': r**3 'quintic': r**5 - 'thin-plate': r**2 * log(r) + 'thin_plate': r**2 * log(r) + + If callable, then it must take 2 arguments (self, r). The epsilon parameter + will be available as self.epsilon. Other keyword arguments passed in will + be available as well. 
epsilon : float, optional Adjustable constant for gaussian or multiquadrics functions @@ -99,25 +105,66 @@ def _euclidean_norm(self, x1, x2): return sqrt( ((x1 - x2)**2).sum(axis=0) ) - def _function(self, r): - if self.function.lower() == 'multiquadric': + def _h_multiquadric(self, r): return sqrt((1.0/self.epsilon*r)**2 + 1) - elif self.function.lower() == 'inverse multiquadric': + def _h_inverse_multiquadric(self, r): return 1.0/sqrt((1.0/self.epsilon*r)**2 + 1) - elif self.function.lower() == 'gaussian': + def _h_gaussian(self, r): return exp(-(1.0/self.epsilon*r)**2) - elif self.function.lower() == 'linear': - return r - elif self.function.lower() == 'cubic': - return r**3 - elif self.function.lower() == 'quintic': - return r**5 - elif self.function.lower() == 'thin-plate': - result = r**2 * log(r) - result[r == 0] = 0 # the spline is zero at zero - return result - else: - raise ValueError, 'Invalid basis function name' + def _h_linear(self, r): + return r + def _h_cubic(self, r): + return r**3 + def _h_quintic(self, r): + return r**5 + def _h_thin_plate(self, r): + result = r**2 * log(r) + result[r == 0] = 0 # the spline is zero at zero + return result + + # Setup self._function and do smoke test on initial r + def _init_function(self, r): + if isinstance(self.function, str): + self.function = self.function.lower() + _mapped = {'inverse': 'inverse_multiquadric', + 'inverse multiquadric': 'inverse_multiquadric', + 'thin-plate': 'thin_plate'} + if self.function in _mapped: + self.function = _mapped[self.function] + + func_name = "_h_" + self.function + if hasattr(self, func_name): + self._function = getattr(self, func_name) + else: + functionlist = [x[3:] for x in dir(self) if x.startswith('_h_')] + raise ValueError, "function must be a callable or one of ", \ + ", ".join(functionlist) + self._function = getattr(self, "_h_"+self.function) + elif callable(self.function): + import new + allow_one = False + if hasattr(self.function, 'func_code'): + val = self.function + allow_one = True + elif hasattr(self.function, "im_func"): + val = self.function.im_func + elif hasattr(self.function, "__call__"): + val = self.function.__call__.im_func + else: + raise ValueError, "Cannot determine number of arguments to function" + + argcount = val.func_code.co_argcount + if allow_one and argcount == 1: + self._function = self.function + elif argcount == 2: + self._function = new.instancemethod(self.function, self, Rbf) + else: + raise ValueError, "Function argument must take 1 or 2 arguments." + + a0 = self._function(r) + if a0.shape != r.shape: + raise ValueError, "Callable must take array and return array of the same shape" + return a0 def __init__(self, *args, **kwargs): self.xi = asarray([asarray(a, dtype=float_).flatten() @@ -131,10 +178,17 @@ self.norm = kwargs.pop('norm', self._euclidean_norm) r = self._call_norm(self.xi, self.xi) self.epsilon = kwargs.pop('epsilon', r.mean()) - self.function = kwargs.pop('function', 'multiquadric') self.smooth = kwargs.pop('smooth', 0.0) - self.A = self._function(r) - eye(self.N)*self.smooth + self.function = kwargs.pop('function', 'multiquadric') + + # attach anything left in kwargs to self + # for use by any user-callable function or + # to save on the object returned. 
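A sketch of the two call styles the reworked Rbf constructor documents: a plain one-argument callable, as in the new test_function_is_callable test, and a two-argument callable that gets bound as a method so it can read self.epsilon and any leftover keyword attached to the instance. The shape_param keyword below is purely illustrative:

    import numpy as np
    from scipy.interpolate import Rbf

    x = np.linspace(0, 10, 9)
    y = np.sin(x)

    # one-argument callable, mirroring the new regression test
    rbf_lin = Rbf(x, y, function=lambda r: r)

    # two-argument callable: bound to the instance, so self.epsilon and the
    # extra (hypothetical) shape_param keyword are both available on self
    def scaled_gaussian(self, r):
        return np.exp(-(r / (self.epsilon * self.shape_param))**2)

    rbf_g = Rbf(x, y, function=scaled_gaussian, shape_param=1.0)
    yi = rbf_g(x)    # with the default smooth=0.0 this should reproduce y at the nodes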
+ for item, value in kwargs.items(): + setattr(self, item, value) + + self.A = self._init_function(r) - eye(self.N)*self.smooth self.nodes = linalg.solve(self.A, self.di) def _call_norm(self, x1, x2): diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/SConscript python-scipy-0.8.0+dfsg1/scipy/interpolate/SConscript --- python-scipy-0.7.2+dfsg1/scipy/interpolate/SConscript 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/interpolate/SConscript 2010-07-26 15:48:30.000000000 +0100 @@ -10,6 +10,7 @@ config = env.NumpyConfigure(custom_tests = {'CheckF77Clib' : CheckF77Clib}) if not config.CheckF77Clib(): raise Exception("Could not check F77 runtime, needed for interpolate") +config.CheckF77Mangling() config.Finish() # Build fitpack diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/src/__fitpack.h python-scipy-0.8.0+dfsg1/scipy/interpolate/src/__fitpack.h --- python-scipy-0.7.2+dfsg1/scipy/interpolate/src/__fitpack.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/interpolate/src/__fitpack.h 2010-07-26 15:48:30.000000000 +0100 @@ -24,35 +24,56 @@ /* python files: (to be imported to Multipack.py) fitpack.py */ -#if defined(NO_APPEND_FORTRAN) -#define CURFIT curfit -#define PERCUR percur -#define SPALDE spalde -#define SPLDER splder -#define SPLEV splev -#define SPLINT splint -#define SPROOT sproot -#define PARCUR parcur -#define CLOCUR clocur -#define SURFIT surfit -#define BISPEV bispev -#define PARDER parder -#define INSERT insert +#if defined(UPPERCASE_FORTRAN) + #if defined(NO_APPEND_FORTRAN) + /* nothing to do */ + #else + #define CURFIT CURFIT_ + #define PERCUR PERCUR_ + #define SPALDE SPALDE_ + #define SPLDER SPLDER_ + #define SPLEV SPLEV_ + #define SPLINT SPLINT_ + #define SPROOT SPROOT_ + #define PARCUR PARCUR_ + #define CLOCUR CLOCUR_ + #define SURFIT SURFIT_ + #define BISPEV BISPEV_ + #define PARDER PARDER_ + #define INSERT INSERT_ + #endif #else -#define CURFIT curfit_ -#define PERCUR percur_ -#define SPALDE spalde_ -#define SPLDER splder_ -#define SPLEV splev_ -#define SPLINT splint_ -#define SPROOT sproot_ -#define PARCUR parcur_ -#define CLOCUR clocur_ -#define SURFIT surfit_ -#define BISPEV bispev_ -#define PARDER parder_ -#define INSERT insert_ + #if defined(NO_APPEND_FORTRAN) + #define CURFIT curfit + #define PERCUR percur + #define SPALDE spalde + #define SPLDER splder + #define SPLEV splev + #define SPLINT splint + #define SPROOT sproot + #define PARCUR parcur + #define CLOCUR clocur + #define SURFIT surfit + #define BISPEV bispev + #define PARDER parder + #define INSERT insert + #else + #define CURFIT curfit_ + #define PERCUR percur_ + #define SPALDE spalde_ + #define SPLDER splder_ + #define SPLEV splev_ + #define SPLINT splint_ + #define SPROOT sproot_ + #define PARCUR parcur_ + #define CLOCUR clocur_ + #define SURFIT surfit_ + #define BISPEV bispev_ + #define PARDER parder_ + #define INSERT insert_ + #endif #endif + void CURFIT(int*,int*,double*,double*,double*,double*,double*,int*,double*,int*,int*,double*,double*,double*,double*,int*,int*,int*); void PERCUR(int*,int*,double*,double*,double*,int*,double*,int*,int*,double*,double*,double*,double*,int*,int*,int*); void SPALDE(double*,int*,double*,int*,double*,double*,int*); diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/tests/test_fitpack.py python-scipy-0.8.0+dfsg1/scipy/interpolate/tests/test_fitpack.py --- python-scipy-0.7.2+dfsg1/scipy/interpolate/tests/test_fitpack.py 2010-03-03 14:34:11.000000000 +0000 +++ 
python-scipy-0.8.0+dfsg1/scipy/interpolate/tests/test_fitpack.py 2010-07-26 15:48:30.000000000 +0100 @@ -12,6 +12,8 @@ """ #import libwadpy +import warnings + from numpy.testing import * from numpy import array, diff from scipy.interpolate.fitpack2 import UnivariateSpline, LSQBivariateSpline, \ @@ -163,5 +165,9 @@ assert_almost_equal(zi, zi2) +# filter test_bilinearity and test_integral warnings +warnings.filterwarnings("ignore", "\nThe coefficients of the spline returned") +warnings.filterwarnings("ignore", "\nThe required storage space exceeds") + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/tests/test_polyint.py python-scipy-0.8.0+dfsg1/scipy/interpolate/tests/test_polyint.py --- python-scipy-0.7.2+dfsg1/scipy/interpolate/tests/test_polyint.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/interpolate/tests/test_polyint.py 2010-07-26 15:48:30.000000000 +0100 @@ -21,6 +21,7 @@ def test_scalar(self): P = KroghInterpolator(self.xs,self.ys) assert_almost_equal(self.true_poly(7),P(7)) + assert_almost_equal(self.true_poly(np.array(7)), P(np.array(7))) def test_derivatives(self): P = KroghInterpolator(self.xs,self.ys) @@ -75,6 +76,7 @@ def test_shapes_scalarvalue(self): P = KroghInterpolator(self.xs,self.ys) assert_array_equal(np.shape(P(0)), ()) + assert_array_equal(np.shape(P(np.array(0))), ()) assert_array_equal(np.shape(P([0])), (1,)) assert_array_equal(np.shape(P([0,1])), (2,)) @@ -82,6 +84,7 @@ P = KroghInterpolator(self.xs,self.ys) n = P.n assert_array_equal(np.shape(P.derivatives(0)), (n,)) + assert_array_equal(np.shape(P.derivatives(np.array(0))), (n,)) assert_array_equal(np.shape(P.derivatives([0])), (n,1)) assert_array_equal(np.shape(P.derivatives([0,1])), (n,2)) @@ -132,6 +135,7 @@ def test_scalar(self): P = BarycentricInterpolator(self.xs,self.ys) assert_almost_equal(self.true_poly(7),P(7)) + assert_almost_equal(self.true_poly(np.array(7)),P(np.array(7))) def test_delayed(self): P = BarycentricInterpolator(self.xs) @@ -155,6 +159,7 @@ def test_shapes_scalarvalue(self): P = BarycentricInterpolator(self.xs,self.ys) assert_array_equal(np.shape(P(0)), ()) + assert_array_equal(np.shape(P(np.array(0))), ()) assert_array_equal(np.shape(P([0])), (1,)) assert_array_equal(np.shape(P([0,1])), (2,)) @@ -189,6 +194,9 @@ P = PiecewisePolynomial(self.xi,self.yi,3) assert_almost_equal(P(self.test_xs[0]),self.spline_ys[0]) assert_almost_equal(P.derivative(self.test_xs[0],1),self.spline_yps[0]) + assert_almost_equal(P(np.array(self.test_xs[0])),self.spline_ys[0]) + assert_almost_equal(P.derivative(np.array(self.test_xs[0]),1), + self.spline_yps[0]) def test_derivative(self): P = PiecewisePolynomial(self.xi,self.yi,3) assert_almost_equal(P.derivative(self.test_xs,1),self.spline_yps) @@ -220,6 +228,7 @@ def test_shapes_scalarvalue(self): P = PiecewisePolynomial(self.xi,self.yi,4) assert_array_equal(np.shape(P(0)), ()) + assert_array_equal(np.shape(P(np.array(0))), ()) assert_array_equal(np.shape(P([0])), (1,)) assert_array_equal(np.shape(P([0,1])), (2,)) @@ -227,6 +236,7 @@ P = PiecewisePolynomial(self.xi,self.yi,4) n = 4 assert_array_equal(np.shape(P.derivative(0,1)), ()) + assert_array_equal(np.shape(P.derivative(np.array(0),1)), ()) assert_array_equal(np.shape(P.derivative([0],1)), (1,)) assert_array_equal(np.shape(P.derivative([0,1],1)), (2,)) diff -Nru python-scipy-0.7.2+dfsg1/scipy/interpolate/tests/test_rbf.py python-scipy-0.8.0+dfsg1/scipy/interpolate/tests/test_rbf.py --- 
python-scipy-0.7.2+dfsg1/scipy/interpolate/tests/test_rbf.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/interpolate/tests/test_rbf.py 2010-07-26 15:48:31.000000000 +0100 @@ -74,3 +74,22 @@ } for function in FUNCTIONS: yield check_rbf1d_regularity, function, tolerances.get(function, 1e-2) + +def test_default_construction(): + """Check that the Rbf class can be constructed with the default + multiquadric basis function. Regression test for ticket #1228.""" + x = linspace(0,10,9) + y = sin(x) + rbf = Rbf(x, y) + yi = rbf(x) + assert_array_almost_equal(y, yi) + + +def test_function_is_callable(): + """Check that the Rbf class can be constructed with function=callable.""" + x = linspace(0,10,9) + y = sin(x) + linfunc = lambda x:x + rbf = Rbf(x, y, function=linfunc) + yi = rbf(x) + assert_array_almost_equal(y, yi) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/arff/arffread.py python-scipy-0.8.0+dfsg1/scipy/io/arff/arffread.py --- python-scipy-0.7.2+dfsg1/scipy/io/arff/arffread.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/arff/arffread.py 2010-07-26 15:48:31.000000000 +0100 @@ -39,7 +39,7 @@ # To get attributes name enclosed with '' r_comattrval = re.compile(r"'(..+)'\s+(..+$)") -# To get attributes name enclosed with '', possibly spread accross multilines +# To get attributes name enclosed with '', possibly spread across multilines r_mcomattrval = re.compile(r"'([..\n]+)'\s+(..+$)") # To get normal attributes r_wcomattrval = re.compile(r"(\S+)\s+(..+$)") @@ -83,6 +83,7 @@ """If attribute is nominal, returns a list of the values""" return attribute.split(',') + def read_data_list(ofile): """Read each line of the iterable and put it in a list.""" data = [ofile.next()] @@ -91,6 +92,7 @@ data.extend([i for i in ofile]) return data + def get_ndata(ofile): """Read the whole file to get number of data attributes.""" data = [ofile.next()] @@ -101,25 +103,56 @@ loc += 1 return loc + def maxnomlen(atrv): - """Given a string contening a nominal type definition, returns the string - len of the biggest component. + """Given a string containing a nominal type definition, returns the + string len of the biggest component. A nominal type is defined as seomthing framed between brace ({}). - Example: maxnomlen("{floup, bouga, fl, ratata}") returns 6 (the size of - ratata, the longest nominal value).""" + Parameters + ---------- + atrv : str + Nominal type definition + + Returns + ------- + slen : int + length of longest component + + Examples + -------- + maxnomlen("{floup, bouga, fl, ratata}") returns 6 (the size of + ratata, the longest nominal value). + + >>> maxnomlen("{floup, bouga, fl, ratata}") + 6 + """ nomtp = get_nom_val(atrv) return max(len(i) for i in nomtp) + def get_nom_val(atrv): - """Given a string contening a nominal type, returns a tuple of the possible - values. + """Given a string containing a nominal type, returns a tuple of the + possible values. - A nominal type is defined as something framed between brace ({}). + A nominal type is defined as something framed between braces ({}). 
- Example: get_nom_val("{floup, bouga, fl, ratata}") returns ("floup", - "bouga", "fl", "ratata").""" + Parameters + ---------- + atrv : str + Nominal type definition + + Returns + ------- + poss_vals : tuple + possible values + + Examples + -------- + >>> get_nom_val("{floup, bouga, fl, ratata}") + ('floup', 'bouga', 'fl', 'ratata') + """ r_nominal = re.compile('{(..+)}') m = r_nominal.match(atrv) if m: @@ -127,12 +160,14 @@ else: raise ValueError("This does not look like a nominal string") + def go_data(ofile): """Skip header. the first next() call of the returned iterator will be the @data line""" return itertools.dropwhile(lambda x : not r_datameta.match(x), ofile) + #---------------- # Parsing header #---------------- @@ -141,28 +176,42 @@ Given a raw string attribute, try to get the name and type of the attribute. Constraints: - - The first line must start with @attribute (case insensitive, and - space like characters begore @attribute are allowed) - - Works also if the attribute is spread on multilines. - - Works if empty lines or comments are in between - - :Parameters: - attribute : str - the attribute string. - - :Returns: - name : str - name of the attribute - value : str - value of the attribute - next : str - next line to be parsed - - Example: - - if attribute is a string defined in python as r"floupi real", will - return floupi as name, and real as value. - - if attribute is r"'floupi 2' real", will return 'floupi 2' as name, - and real as value. """ + + * The first line must start with @attribute (case insensitive, and + space like characters before @attribute are allowed) + * Works also if the attribute is spread on multilines. + * Works if empty lines or comments are in between + + Parameters + ---------- + attribute : str + the attribute string. + + Returns + ------- + name : str + name of the attribute + value : str + value of the attribute + next : str + next line to be parsed + + Examples + -------- + If attribute is a string defined in python as r"floupi real", will + return floupi as name, and real as value. + + >>> iterable = iter([0] * 10) # dummy iterator + >>> tokenize_attribute(iterable, r"@attribute floupi real") + ('floupi', 'real', 0) + + If attribute is r"'floupi 2' real", will return 'floupi 2' as name, + and real as value. + + >>> tokenize_attribute(iterable, r" @attribute 'floupi 2' real ") + ('floupi 2', 'real', 0) + + """ sattr = attribute.strip() mattr = r_attribute.match(sattr) if mattr: @@ -186,6 +235,7 @@ raise ValueError("relational attributes not supported yet") return name, type, next + def tokenize_multilines(iterable, val): """Can tokenize an attribute spread over several lines.""" # If one line does not match, read all the following lines up to next @@ -205,6 +255,7 @@ raise ValueError("Cannot parse attribute names spread over multi "\ "lines yet") + def tokenize_single_comma(val): # XXX we match twice the same string (here and at the caller level). It is # stupid, but it is easier for now... @@ -219,6 +270,7 @@ raise ValueError("Error while tokenizing single %s" % val) return name, type + def tokenize_single_wcomma(val): # XXX we match twice the same string (here and at the caller level). It is # stupid, but it is easier for now... 
@@ -233,6 +285,7 @@ raise ValueError("Error while tokenizing single %s" % val) return name, type + def read_header(ofile): """Read the header of the iterable ofile.""" i = ofile.next() @@ -263,17 +316,39 @@ return relation, attributes + #-------------------- # Parsing actual data #-------------------- def safe_float(x): """given a string x, convert it to a float. If the stripped string is a ?, - return a Nan (missing value).""" + return a Nan (missing value). + + Parameters + ---------- + x : str + string to convert + + Returns + ------- + f : float + where float can be nan + + Examples + -------- + >>> safe_float('1') + 1.0 + >>> safe_float('1\\n') + 1.0 + >>> safe_float('?\\n') + nan + """ if x.strip() == '?': return np.nan else: return np.float(x) + def safe_nominal(value, pvalue): svalue = value.strip() if svalue in pvalue: @@ -283,41 +358,61 @@ else: raise ValueError("%s value not in %s" % (str(svalue), str(pvalue))) + def get_delim(line): """Given a string representing a line of data, check whether the - delimiter is ',' or space.""" - l = line.split(',') - if len(l) > 1: + delimiter is ',' or space. + + Parameters + ---------- + line : str + line of data + + Returns + ------- + delim : {',', ' '} + + Examples + -------- + >>> get_delim(',') + ',' + >>> get_delim(' ') + ' ' + >>> get_delim(', ') + ',' + >>> get_delim('x') + Traceback (most recent call last): + ... + ValueError: delimiter not understood: x + """ + if ',' in line: return ',' - else: - l = line.split(' ') - if len(l) > 1: - return ' ' - else: - raise ValueError("delimiter not understood: " + line) + if ' ' in line: + return ' ' + raise ValueError("delimiter not understood: " + line) + -class MetaData: +class MetaData(object): """Small container to keep useful informations on a ARFF dataset. Knows about attributes names and types. - :Example: - - data, meta = loadarff('iris.arff') - # This will print the attributes names of the iris.arff dataset - for i in meta: - print i - # This works too - meta.names() - # Getting attribute type - types = meta.types() - - :Note: - - Also maintains the list of attributes in order, i.e. doing for i in - meta, where meta is an instance of MetaData, will return the different - attribute names in the order they were defined. - + Example + ------- + data, meta = loadarff('iris.arff') + # This will print the attributes names of the iris.arff dataset + for i in meta: + print i + # This works too + meta.names() + # Getting attribute type + types = meta.types() + + Notes + ----- + Also maintains the list of attributes in order, i.e. doing for i in + meta, where meta is an instance of MetaData, will return the + different attribute names in the order they were defined. """ def __init__(self, rel, attr): self.name = rel @@ -357,31 +452,34 @@ """Return the list of attribute types.""" return [v[0] for v in self._attributes.values()] + def loadarff(filename): """Read an arff file. - :Args: - - filename: str - the name of the file - - :Returns: + Parameters + ---------- + filename : str + the name of the file + + Returns + ------- + data : record array + the data of the arff file. Each record corresponds to one attribute. + meta : MetaData + this contains information about the arff file, like type and + names of attributes, the relation (name of the dataset), etc... + + Notes + ----- - data: record array - the data of the arff file. Each record corresponds to one attribute. 
- meta: MetaData - this contains informations about the arff file, like type and names - of attributes, the relation (name of the dataset), etc... + This function should be able to read most arff files. Not + implemented functionalities include: - :Note: + * date type attributes + * string type attributes - This function should be able to read most arff files. Not implemented - functionalities include: - - date type attributes - - string type attributes - - It can read files with numeric and nominal attributes. - It can read files with sparse data (? in the file). + It can read files with numeric and nominal attributes. It can read + files with sparse data (? in the file). """ ofile = open(filename) @@ -491,6 +589,7 @@ data = np.fromiter(a, descr) return data, meta + #----- # Misc #----- @@ -498,6 +597,7 @@ nbfac = data.size * 1. / (data.size - 1) return np.nanmin(data), np.nanmax(data), np.mean(data), np.std(data) * nbfac + def print_attribute(name, tp, data): type = tp[0] if type == 'numeric' or type == 'real' or type == 'integer': @@ -511,6 +611,7 @@ msg += "}" print msg + def test_weka(filename): data, meta = loadarff(filename) print len(data.dtype) @@ -518,6 +619,10 @@ for i in meta: print_attribute(i,meta[i],data[i]) +# make sure nose does not find this as a test +test_weka.__test__ = False + + def floupi(filename): data, meta = loadarff(filename) from attrselect import print_dataset_info @@ -534,6 +639,7 @@ #else: # print "\tinstance %s is non numeric" % i + if __name__ == '__main__': #import glob #for i in glob.glob('arff.bak/data/*'): diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/array_import.py python-scipy-0.8.0+dfsg1/scipy/io/array_import.py --- python-scipy-0.7.2+dfsg1/scipy/io/array_import.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/array_import.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,501 +0,0 @@ -# Authors: Travis Oliphant, Trent Oliphant -# with support from Lee Barford's group at Agilent, Inc. -# - -"""This module allows for the loading of an array from an ASCII -Text File - -""" - -__all__ = ['read_array', 'write_array'] - -# Standard library imports. -import os -import re -import sys -import types - -# Numpy imports. -import numpy - -from numpy import array, take, concatenate, asarray, real, imag, \ - deprecate_with_doc -# Sadly, this module is still written with typecodes in mind. -from numpy.oldnumeric import Float - -# Local imports. -import numpyio - -default = None -_READ_BUFFER_SIZE = 1024*1024 -#_READ_BUFFER_SIZE = 1000 -#_READ_BUFFER_SIZE = 160 - -# ASCII Text object stream with automatic (un)compression and URL access. -# -# Adapted from -# TextFile class Written by: Konrad Hinsen -# -# Written by Travis Oliphant and Trent Oliphant -# with support from Agilent, Inc. -# - - -def convert_separator(sep): - newsep = '' - for k in sep: - if k in '.^$*+?{[\\|()': - newsep = newsep + '\\' + k - else: - newsep = newsep + k - return newsep - -def build_numberlist(lines): - if lines is default: - linelist = [-1] - else: - linelist = [] - errstr = "Argument lines must be a sequence of integers and/or range tuples." - try: - for num in lines[:-1]: # handle all but last element - if type(num) not in [types.IntType, types.TupleType]: - raise ValueError, errstr - if isinstance(num, types.IntType): - linelist.append(num) - else: - if not 1 < len(num) < 4: - raise ValueError, "Tuples must be valid range tuples." 
- linelist.extend(range(*num)) - except TypeError: - raise ValueError, errstr - num = lines[-1] - if type(num) is types.IntType: - linelist.append(num) - elif type(num) is types.TupleType: - if [types.IntType]*len(num) != map(type, num): - if len(num) > 1 and num[1] is not None: - raise ValueError, errstr - if len(num) == 1: - linelist.extend([num[0],-1]) - elif len(num) == 2: - if num[1] is None: - linelist.extend([num[0], -1]) - else: - linelist.extend(range(*num)) - elif len(num) == 3: - if num[1] is None: - linelist.extend([num[0], -num[2]]) - else: - linelist.extend(range(*num)) - else: - raise ValueError, errstr - return linelist - -def get_open_file(fileobject, mode='rb'): - try: - # this is the duck typing check: if fileobject - # can be used is os.path.expanduser, it is a string - # otherwise it is a fileobject - fileobject = os.path.expanduser(fileobject) - - if mode[0]=='r' and not os.path.exists(fileobject): - raise IOError, (2, 'No such file or directory: ' - + fileobject) - else: - try: - file = open(fileobject, mode) - except IOError, details: - file = None - if type(details) == type(()): - details = details + (fileobject,) - raise IOError, details - except (AttributeError, TypeError): - # it is assumed that the fileobject is a python - # file object if it can not be used in os.path.expanduser - file = fileobject - - return file - - -class ascii_stream(object): - """Text files with line iteration - - Ascii_stream instances can be used like normal read-only file objects - (i.e. by calling readline() and readlines()), but can - also be used as sequences of lines in for-loops. - - Finally, ascii_stream objects accept file names that start with '~' or - '~user' to indicate a home directory(for reading only). - - Constructor: ascii_stream(|fileobject|, |lines|,|comment|), - where |fileobject| is either an open python file object or - the name of the file, |lines| is a sequence of integers - or tuples(indicating ranges) of lines to be read, |comment| is the - comment line identifier """ - - def __init__(self, fileobject, lines=default, comment="#", - linesep='\n'): - if not isinstance(comment, types.StringType): - raise ValueError, "Comment must be a string." - self.linelist = build_numberlist(lines) - self.comment = comment - self.lencomment = len(comment) - self.file = get_open_file(fileobject, mode='r') - self.should_close_file = not (self.file is fileobject) - self._pos = self.file.tell() - self._lineindex = 0 - if self.linelist[-1] < 0: - self._linetoget = self.linelist[-1] - else: - self._linetoget = 0 - self._oldbuflines = 0 - self._linesplitter = linesep - self._buffer = self.readlines(_READ_BUFFER_SIZE) - self._totbuflines = len(self._buffer) - - def readlines(self, sizehint): - buffer = self.file.read(sizehint) - lines = buffer.split(self._linesplitter) - if len(buffer) < sizehint: # EOF - if buffer == '': - return [] - else: - return lines - else: - if len(lines) < 2: - raise ValueError, "Buffer size too small." 
- backup = len(lines[-1]) - self.file.seek(-backup, 1) - return lines[:-1] - - def __del__(self): - if hasattr(getattr(self, 'file', None),'close') and self.should_close_file: - self.file.close() - - def __getitem__(self, item): - while 1: - line = self.readnextline() - if line is None: - raise IndexError - if len(line) < self.lencomment or line[:self.lencomment] != self.comment: - break - return line - - def readnextline(self): - if self.linelist[self._lineindex] >= 0: - self._linetoget = self.linelist[self._lineindex] - self._lineindex += 1 - else: - self._linetoget = self._linetoget - self.linelist[self._lineindex] - while self._linetoget >= self._totbuflines: - self._buffer = self.readlines(_READ_BUFFER_SIZE) - self._oldbuflines = self._totbuflines - self._totbuflines += len(self._buffer) - if (self._totbuflines == self._oldbuflines): - return None - line = self._buffer[self._linetoget - self._oldbuflines] - return line - - def close(self): - self.file.close() - - def flush(self): - self.file.flush() - - -def move_past_spaces(firstline): - ind = 0 - firstline = firstline.lstrip() - while firstline[ind] not in [' ','\n','\t','\v','\f','\r']: - ind += 1 - return firstline[ind:], ind - - -def extract_columns(arlist, collist, atype, missing): - if collist[-1] < 0: - if len(collist) == 1: - toconvlist = arlist[::-collist[-1]] - else: - toconvlist = take(arlist,collist[:-1],0) - toconvlist = concatenate((toconvlist, - arlist[(collist[-2]-collist[-1])::(-collist[-1])])) - else: - toconvlist = take(arlist, collist,0) - - return numpyio.convert_objectarray(toconvlist, atype, missing) - - -# Given a string representing one line, a separator tuple, a list of -# columns to read for each element of the atype list and a missing -# value to insert when conversion fails. - -# Regular expressions for detecting complex numbers and for dealing -# with spaces between the real and imaginary parts - -_obj = re.compile(r""" - ([0-9.eE]+) # Real part - ([\t ]*) # Space between real and imaginary part - ([+-]) # +/- sign - ([\t ]*) # 0 or more spaces - (([0-9.eE]+[iIjJ]) - |([iIjJ][0-9.eE]+)) # Imaginary part - """, re.VERBOSE) - -_not_warned = 1 -def process_line(line, separator, collist, atype, missing): - global _not_warned - strlist = [] - line = _obj.sub(r"\1\3\5",line) # remove spaces between real - # and imaginary parts of complex numbers - - if _not_warned: - warn = 0 - if (_obj.search(line) is not None): - warn = 1 - for k in range(len(atype)): - if atype[k] in numpy.typecodes['Complex']: - warn = 0 - if warn: - numpy.disp("Warning: Complex data detected, but no requested typecode was complex.") - _not_warned = 0 - for mysep in separator[:-1]: - if mysep is None: - newline, ind = move_past_spaces(line) - strlist.append(line[:ind]) - line = newline - else: - ind = line.find(mysep) - strlist.append(line[:ind]) - line = line[ind+len(mysep):] - strlist.extend(line.split(separator[-1])) - arlist = array(strlist,'O') - N = len(atype) - vals = [None]*N - for k in range(len(atype)): - vals[k] = extract_columns(arlist, collist[k], atype[k], missing) - return vals - -def getcolumns(stream, columns, separator): - global _not_warned - comment = stream.comment - lenc = stream.lencomment - k, K = stream.linelist[0], len(stream._buffer) - while k < K: - firstline = stream._buffer[k] - if firstline != '' and firstline[:lenc] != comment: - break - k = k + 1 - if k == K: - raise ValueError, "First line to read not within %d lines of top." 
% K - firstline = stream._buffer[k] - N = len(columns) - collist = [None]*N - colsize = [None]*N - for k in range(N): - collist[k] = build_numberlist(columns[k]) - _not_warned = 0 - val = process_line(firstline, separator, collist, [Float]*N, 0) - for k in range(N): - colsize[k] = len(val[k]) - return colsize, collist - -def convert_to_equal_lists(cols, atype): - if not isinstance(cols, types.ListType): - cols = [cols] - if not isinstance(atype, types.ListType): - atype = [atype] - N = len(cols) - len(atype) - if N > 0: - atype.extend([atype[-1]]*N) - elif N < 0: - cols.extend([cols[-1]]*(-N)) - return cols, atype - - -@deprecate_with_doc(""" -The functionality of read_array is in numpy.loadtxt which allows the same -functionality using different syntax. -""") -def read_array(fileobject, separator=default, columns=default, comment="#", - lines=default, atype=Float, linesep='\n', - rowsize=10000, missing=0): - """Return an array or arrays from ascii_formatted data in |fileobject|. - - Inputs: - - fileobject -- An open file object or a string for a valid filename. - The string can be prepended by "~/" or "~/" to - read a file from the home directory. - separator -- a string or a tuple of strings to indicate the column - separators. If the length of the string tuple is less - than the total number of columns, then the last separator - is assumed to be the separator for the rest of the columns. - columns -- a tuple of integers and range-tuples which describe the - columns to read from the file. A negative entry in the - last column specifies the negative skip value to the end. - Example: columns=(1, 4, (5, 9), (11, 15, 3), 17, -2) - will read [1,4,5,6,7,8,11,14,17,19,21,23,...] - If multiple arrays are to be returned, then this argument - should be an ordered list of such tuples. There should be - one entry in the list for each arraytype in the atype list. - lines -- a tuple with the same structure as columns which indicates - the lines to read. - comment -- the comment character (line will be ignored even if it is - specified by the lines tuple) - linesep -- separator between rows. - missing -- value to insert in array when conversion to number fails. - atype -- the typecode of the output array. If multiple outputs are - desired, then this should be a list of typecodes. The columns - to fill the array represented by the given typecode is - determined from the columns argument. If the length of atype - does not match the length of the columns list, then, the - smallest one is expanded to match the largest by repeatedly - copying the last entry. - rowsize -- the allocation row size (array grows by this amount as - data is read in). - - Output -- the 1 or 2d array, or a tuple of output arrays of different - types, sorted in order of the first column to be placed - in the output array. - - """ - - global _not_warned - # Make separator into a tuple of separators. - if type(separator) in [types.StringType, type(default)]: - sep = (separator,) - else: - sep = tuple(separator) - # Create ascii_object from |fileobject| argument. 
- ascii_object = ascii_stream(fileobject, lines=lines, comment=comment, linesep=linesep) - columns, atype = convert_to_equal_lists(columns, atype) - numout = len(atype) - # Get the number of columns to read and expand the columns argument - colsize, collist = getcolumns(ascii_object, columns, sep) - # Intialize the output arrays - outrange = range(numout) - outarr = [] - typecodes = "".join(numpy.typecodes.values()) - for k in outrange: - if not atype[k] in typecodes: - raise ValueError, "One of the array types is invalid, k=%d" % k - outarr.append(numpy.zeros((rowsize, colsize[k]),atype[k])) - row = 0 - block_row = 0 - _not_warned = 1 - for line in ascii_object: - if line.strip() == '': - continue - vals = process_line(line, sep, collist, atype, missing) - for k in outrange: - outarr[k][row] = vals[k] - row += 1 - block_row += 1 - if block_row >= rowsize: - for k in outrange: - outarr[k].resize((outarr[k].shape[0] + rowsize,colsize[k])) - block_row = 0 - for k in outrange: - if outarr[k].shape[0] != row: - outarr[k].resize((row,colsize[k])) - a = outarr[k] - if a.shape[0] == 1 or a.shape[1] == 1: - outarr[k] = numpy.ravel(a) - if len(outarr) == 1: - return outarr[0] - else: - return tuple(outarr) - - -# takes 1-d array and returns a string -def str_array(arr, precision=5,col_sep=' ',row_sep="\n",ss=0): - thestr = [] - arr = asarray(arr) - N,M = arr.shape - thistype = arr.dtype.char - nofloat = (thistype in '1silbwu') or (thistype in 'Oc') - cmplx = thistype in 'FD' - fmtstr = "%%.%de" % precision - cmpnum = pow(10.0,-precision) - for n in xrange(N): - theline = [] - for m in xrange(M): - val = arr[n,m] - if ss and abs(val) < cmpnum: - val = 0*val - if nofloat or val==0: - thisval = str(val) - elif cmplx: - rval = real(val) - ival = imag(val) - thisval = eval('fmtstr % rval') - if (ival >= 0): - istr = eval('fmtstr % ival') - thisval = '%s+j%s' % (thisval, istr) - else: - istr = eval('fmtstr % abs(ival)') - thisval = '%s-j%s' % (thisval, istr) - else: - thisval = eval('fmtstr % val') - theline.append(thisval) - strline = col_sep.join(theline) - thestr.append(strline) - return row_sep.join(thestr) - - -@deprecate_with_doc(""" - -This function is replaced by numpy.savetxt which allows the same functionality -through a different syntax. -""") -def write_array(fileobject, arr, separator=" ", linesep='\n', - precision=5, suppress_small=0, keep_open=0): - """Write a rank-2 or less array to file represented by fileobject. - - Inputs: - - fileobject -- An open file object or a string to a valid filename. - arr -- The array to write. - separator -- separator to write between elements of the array. - linesep -- separator to write between rows of array - precision -- number of digits after the decimal place to write. - suppress_small -- non-zero to round small numbers down to 0.0 - keep_open = non-zero to return the open file, otherwise, the file is closed. - Outputs: - - file -- The open file (if keep_open is non-zero) - """ - # XXX: What to when appending to files ? 'wa' does not do what one might - # expect, and opening a file twice to create it first is not easily doable - # with get_open_file ? - file = get_open_file(fileobject, mode='w') - rank = numpy.rank(arr) - if rank > 2: - raise ValueError, "Can-only write up to 2-D arrays." 
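Per the deprecation notes attached to read_array and write_array, a rough numpy equivalent of the removed pair, assuming a whitespace-delimited text file with one header line; the file names here are only illustrative:

    import numpy as np

    data = np.loadtxt('test.txt', skiprows=1)               # read_array replacement
    np.savetxt('out.txt', data, fmt='%.5e', delimiter=' ')  # write_array replacement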
- - if rank == 0: - h = 1 - arr = numpy.reshape(arr, (1,1)) - elif rank == 1: - h = numpy.shape(arr)[0] - arr = numpy.reshape(arr, (h,1)) - else: - h = numpy.shape(arr)[0] - arr = asarray(arr) - - for ch in separator: - if ch in '0123456789-+FfeEgGjJIi.': - raise ValueError, "Bad string for separator" - - astr = str_array(arr, precision=precision, - col_sep=separator, row_sep=linesep, - ss = suppress_small) - file.write(astr) - file.write('\n') - if keep_open: - return file - else: - if file is sys.stdout or file is sys.stderr: - return - file.close() - return diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/data_store.py python-scipy-0.8.0+dfsg1/scipy/io/data_store.py --- python-scipy-0.7.2+dfsg1/scipy/io/data_store.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/data_store.py 2010-07-26 15:48:31.000000000 +0100 @@ -15,38 +15,11 @@ 1 """ -__all__ = ['save_as_module', - # The rest of these are all deprecated - 'save', 'create_module', - 'create_shelf', 'load'] +__all__ = ['save_as_module'] import dumb_shelve import os -from numpy import deprecate_with_doc, deprecate - -def _load(module): - """ Load data into module from a shelf with - the same name as the module. - """ - dir,filename = os.path.split(module.__file__) - filebase = filename.split('.')[0] - fn = os.path.join(dir, filebase) - f = dumb_shelve.open(fn, "r") - #exec( 'import ' + module.__name__) - for i in f.keys(): - exec( 'import ' + module.__name__+ ';' + - module.__name__+'.'+i + '=' + 'f["' + i + '"]') -# print i, 'loaded...' -# print 'done' - -load = deprecate_with_doc(""" -This is an internal function used with scipy.io.save_as_module - -If you are saving arrays into a module, you should think about using -HDF5 or .npz files instead. -""")(_load) - def _create_module(file_name): """ Create the module file. @@ -59,12 +32,6 @@ f.write('data_store._load(%s)' % module_name) f.close() -create_module = deprecate_with_doc(""" -This is an internal function used with scipy.io.save_as_module - -If you are saving arrays into a module, you should think about -using HDF5 or .npz files instead. -""")(_create_module) def _create_shelf(file_name,data): """Use this to write the data to a new file @@ -77,19 +44,19 @@ # print 'done' f.close() -create_shelf = deprecate_with_doc(""" -This is an internal function used with scipy.io.save_as_module -If you are saving arrays into a module, you should think about using -HDF5 or .npz files instead. -""")(_create_shelf) +def save_as_module(file_name=None,data=None): + """ + Save the dictionary "data" into a module and shelf named save. + Parameters + ---------- + file_name : str, optional + File name of the module to save. + data : dict, optional + The dictionary to store in the module. 
-def save_as_module(file_name=None,data=None): - """ Save the dictionary "data" into - a module and shelf named save """ _create_module(file_name) _create_shelf(file_name,data) -save = deprecate(save_as_module, 'save', 'save_as_module') diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/dumbdbm_patched.py python-scipy-0.8.0+dfsg1/scipy/io/dumbdbm_patched.py --- python-scipy-0.7.2+dfsg1/scipy/io/dumbdbm_patched.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/dumbdbm_patched.py 2010-07-26 15:48:31.000000000 +0100 @@ -77,6 +77,9 @@ f.close() return dat + def __contains__(self, key): + return key in self._index + def _addval(self, val): f = _open(self._datfile, 'rb+') f.seek(0, 2) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/examples/read_array_demo1.py python-scipy-0.8.0+dfsg1/scipy/io/examples/read_array_demo1.py --- python-scipy-0.7.2+dfsg1/scipy/io/examples/read_array_demo1.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/examples/read_array_demo1.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,58 +0,0 @@ -#========================================================================= -# NAME: read_array_demo1 -# -# DESCRIPTION: Examples to read 2 columns from a multicolumn ascii text -# file, skipping the first line of header. First example reads into -# 2 separate arrays. Second example reads into a single array. Data are -# then plotted. -# -# Here is the format of the file test.txt: -# -------- -# Some header to skip -# 1 2 3 -# 2 4 6 -# 3 6 9 -# 4 8 12 -# -# USAGE: -# python read_array_demo1.py -# -# PARAMETERS: -# -# DEPENDENCIES: -# matplotlib (pylab) -# test.txt -# -# -# AUTHOR: Simon J. Hook -# DATE : 09/23/2005 -# -# MODIFICATION HISTORY: -# -# COMMENT: -# -#============================================================================ - -from scipy import * -from scipy.io import read_array -from pylab import * - -def main(): - - # First example, read first and second column from ascii file. Skip first - # line of header. 
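The read_array calls in the demo just below skip the header line and pick individual columns; with numpy.loadtxt the same selection reads roughly as follows, assuming the four-row test.txt that is being removed alongside it:

    import numpy as np

    # skip the header, then take single columns and a two-column block
    x = np.loadtxt('test.txt', skiprows=1, usecols=(0,))
    y = np.loadtxt('test.txt', skiprows=1, usecols=(1,))
    z = np.loadtxt('test.txt', skiprows=1, usecols=(0, 2))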
- # Note use of (1,-1) in lines to skip first line and then read to end of file - # Note use of (0,) in columns to pick first column, since its a tuple need trailing comma - x=read_array("test.txt",lines=(1,-1), columns=(0,)) - y=read_array("test.txt",lines=(1,-1), columns=(1,)) - - #Second example, read the file into a single arry - z=read_array("test.txt",lines=(1,-1), columns=(0,2)) - - # Plot the data - plot(x,y,'r--',z[:,0],z[:,1]) - show() - -# The one and only main function -if __name__ == "__main__": - main() diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/examples/test.txt python-scipy-0.8.0+dfsg1/scipy/io/examples/test.txt --- python-scipy-0.7.2+dfsg1/scipy/io/examples/test.txt 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/examples/test.txt 1970-01-01 01:00:00.000000000 +0100 @@ -1,5 +0,0 @@ -some header to skip -1 2 3 -2 4 6 -3 6 9 -4 8 12 \ No newline at end of file diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/fopen.py python-scipy-0.8.0+dfsg1/scipy/io/fopen.py --- python-scipy-0.7.2+dfsg1/scipy/io/fopen.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/fopen.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,358 +0,0 @@ -## Automatically adapted for scipy Oct 05, 2005 by convertcode.py - -# Author: Travis Oliphant - -import struct -import sys -import types - -from numpy import * -import numpyio - -import warnings -warnings.warn('fopen module is deprecated, please use npfile instead', - DeprecationWarning, stacklevel=2) - -LittleEndian = (sys.byteorder == 'little') - -__all__ = ['fopen'] - -def getsize_type(mtype): - if mtype in ['B','uchar','byte','unsigned char','integer*1', 'int8']: - mtype = 'B' - elif mtype in ['S1', 'char', 'char*1']: - mtype = 'B' - elif mtype in ['b', 'schar', 'signed char']: - mtype = 'b' - elif mtype in ['h','short','int16','integer*2']: - mtype = 'h' - elif mtype in ['H','ushort','uint16','unsigned short']: - mtype = 'H' - elif mtype in ['i','int']: - mtype = 'i' - elif mtype in ['I','uint','uint32','unsigned int']: - mtype = 'I' - elif mtype in ['u4','int32','integer*4']: - mtype = 'u4' - elif mtype in ['f','float','float32','real*4', 'real']: - mtype = 'f' - elif mtype in ['d','double','float64','real*8', 'double precision']: - mtype = 'd' - elif mtype in ['F','complex float','complex*8','complex64']: - mtype = 'F' - elif mtype in ['D','complex*16','complex128','complex','complex double']: - mtype = 'D' - else: - mtype = obj2sctype(mtype) - - newarr = empty((1,),mtype) - return newarr.itemsize, newarr.dtype.char - -class fopen(object): - """Class for reading and writing binary files into numpy arrays. - - Inputs: - - file_name -- The complete path name to the file to open. - permission -- Open the file with given permissions: ('r', 'H', 'a') - for reading, writing, or appending. This is the same - as the mode argument in the builtin open command. - format -- The byte-ordering of the file: - (['native', 'n'], ['ieee-le', 'l'], ['ieee-be', 'B']) for - native, little-endian, or big-endian respectively. - - Attributes (Read only): - - bs -- non-zero if byte-swapping is performed on read and write. - format -- 'native', 'ieee-le', or 'ieee-be' - closed -- non-zero if the file is closed. - mode -- permissions with which this file was opened - name -- name of the file - """ - -# Methods: -# -# read -- read data from file and return numpy array -# write -- write to file from numpy array -# fort_read -- read Fortran-formatted binary data from the file. 
-# fort_write -- write Fortran-formatted binary data to the file. -# rewind -- rewind to beginning of file -# size -- get size of file -# seek -- seek to some position in the file -# tell -- return current position in file -# close -- close the file - - def __init__(self,file_name,permission='rb',format='n'): - if 'b' not in permission: permission += 'b' - if isinstance(file_name, basestring): - self.file = file(file_name, permission) - elif isinstance(file_name, file) and not file_name.closed: - # first argument is an open file - self.file = file_name - else: - raise TypeError, 'Need filename or open file as input' - self.setformat(format) - - def __del__(self): - try: - self.file.close() - except: - pass - - def close(self): - self.file.close() - - def seek(self, *args): - self.file.seek(*args) - - def tell(self): - return self.file.tell() - - def raw_read(self, size=-1): - """Read raw bytes from file as string.""" - return self.file.read(size) - - def raw_write(self, str): - """Write string to file as raw bytes.""" - return self.file.write(str) - - def setformat(self, format): - """Set the byte-order of the file.""" - if format in ['native','n','default']: - self.bs = False - self.format = 'native' - elif format in ['ieee-le','l','little-endian','le']: - self.bs = not LittleEndian - self.format = 'ieee-le' - elif format in ['ieee-be','B','big-endian','be']: - self.bs = LittleEndian - self.format = 'ieee-be' - else: - raise ValueError, "Unrecognized format: " + format - return - - def write(self,data,mtype=None,bs=None): - """Write to open file object the flattened numpy array data. - - Inputs: - - data -- the numpy array to write. - mtype -- a string indicating the binary type to write. - The default is the type of data. If necessary a cast is made. - unsigned byte : 'B', 'uchar', 'byte' 'unsigned char', 'int8', - 'integer*1' - character : 'S1', 'char', 'char*1' - signed char : 'b', 'schar', 'signed char' - short : 'h', 'short', 'int16', 'integer*2' - unsigned short : 'H', 'ushort','uint16','unsigned short' - int : 'i', 'int' - unsigned int : 'I', 'uint32','uint','unsigned int' - int32 : 'u4', 'int32', 'integer*4' - float : 'f', 'float', 'float32', 'real*4' - double : 'd', 'double', 'float64', 'real*8' - complex float : 'F', 'complex float', 'complex*8', 'complex64' - complex double : 'D', 'complex', 'complex double', 'complex*16', - 'complex128' - """ - if bs is None: - bs = self.bs - else: - bs = (bs == 1) - if isinstance(data, str): - N, buf = len(data), buffer(data) - data = ndarray(shape=(N,),dtype='B',buffer=buf) - else: - data = asarray(data) - if mtype is None: - mtype = data.dtype.char - howmany,mtype = getsize_type(mtype) - count = product(data.shape,axis=0) - numpyio.fwrite(self.file,count,data,mtype,bs) - return - - fwrite = write - - def read(self,count,stype,rtype=None,bs=None,c_is_b=0): - """Read data from file and return it in a numpy array. - - Inputs: - - count -- an integer specifying the number of elements of type - stype to read or a tuple indicating the shape of - the output array. - stype -- The data type of the stored data (see fwrite method). - rtype -- The type of the output array. Same as stype if None. - bs -- Whether or not to byteswap (or use self.bs if None) - c_is_b --- If non-zero then the count is an integer - specifying the total number of bytes to read - (must be a multiple of the size of stype). - - Outputs: (output,) - - output -- a numpy array of type rtype. 
- """ - if bs is None: - bs = self.bs - else: - bs = (bs == 1) - howmany,stype = getsize_type(stype) - shape = None - if c_is_b: - if count % howmany != 0: - raise ValueError, "When c_is_b is non-zero then " \ - "count is bytes\nand must be multiple of basic size." - count = count / howmany - elif type(count) in [types.TupleType, types.ListType]: - shape = list(count) - # allow -1 to specify unknown dimension size as in reshape - minus_ones = shape.count(-1) - if minus_ones == 0: - count = product(shape,axis=0) - elif minus_ones == 1: - now = self.tell() - self.seek(0,2) - end = self.tell() - self.seek(now) - remaining_bytes = end - now - know_dimensions_size = -product(count,axis=0) * getsize_type(stype)[0] - unknown_dimension_size, illegal = divmod(remaining_bytes, - know_dimensions_size) - if illegal: - raise ValueError("unknown dimension doesn't match filesize") - shape[shape.index(-1)] = unknown_dimension_size - count = product(shape,axis=0) - else: - raise ValueError( - "illegal count; can only specify one unknown dimension") - shape = tuple(shape) - if rtype is None: - rtype = stype - else: - howmany,rtype = getsize_type(rtype) - if count == 0: - return zeros(0,rtype) - retval = numpyio.fread(self.file, count, stype, rtype, bs) - if shape is not None: - retval = resize(retval, shape) - return retval - - fread = read - - def rewind(self,howmany=None): - """Rewind a file to its beginning or by a specified amount. - """ - if howmany is None: - self.seek(0) - else: - self.seek(-howmany,1) - - def size(self): - """Return the size of the file. - """ - try: - sz = self.thesize - except AttributeError: - curpos = self.tell() - self.seek(0,2) - sz = self.tell() - self.seek(curpos) - self.thesize = sz - return sz - - def fort_write(self,fmt,*args): - """Write a Fortran binary record. - - Inputs: - - fmt -- If a string then it represents the same format string as - used by struct.pack. The remaining arguments are passed - to struct.pack. - - If fmt is an array, then this array will be written as - a Fortran record using the output type args[0]. - - *args -- Arguments representing data to write. - """ - if self.format == 'ieee-le': - nfmt = " 0: - sz,mtype = getsize_type(args[0]) - else: - sz,mtype = getsize_type(fmt.dtype.char) - count = product(fmt.shape,axis=0) - strlen = struct.pack(nfmt,count*sz) - self.write(strlen) - numpyio.fwrite(self.file,count,fmt,mtype,self.bs) - self.write(strlen) - else: - raise TypeError, "Unknown type in first argument" - - def fort_read(self,fmt,dtype=None): - """Read a Fortran binary record. - - Inputs: - - fmt -- If dtype is not given this represents a struct.pack - format string to interpret the next record. Otherwise this - argument is ignored. - dtype -- If dtype is not None, then read in the next record as - an array of type dtype. - - Outputs: (data,) - - data -- If dtype is None, then data is a tuple containing the output - of struct.unpack on the next Fortan record. - If dtype is a datatype string, then the next record is - read in as a 1-D array of type datatype. - """ - lookup_dict = {'ieee-le':"<",'ieee-be':">",'native':''} - if dtype is None: - fmt = lookup_dict[self.format] + fmt - numbytes = struct.calcsize(fmt) - nn = struct.calcsize("i"); - if (self.raw_read(nn) == ''): - raise ValueError, "Unexpected end of file..." - strdata = self.raw_read(numbytes) - if strdata == '': - raise ValueError, "Unexpected end of file..." - data = struct.unpack(fmt,strdata) - if (self.raw_read(nn) == ''): - raise ValueError, "Unexpected end of file..." 
- return data - else: # Ignore format string and read in next record as an array. - fmt = lookup_dict[self.format] + "i" - nn = struct.calcsize(fmt) - nbytestr = self.raw_read(nn) - if nbytestr == '': - raise ValueError, "Unexpected end of file..." - nbytes = struct.unpack(fmt,nbytestr)[0] - howmany, dtype = getsize_type(dtype) - ncount = nbytes / howmany - if ncount*howmany != nbytes: - self.rewind(4) - raise ValueError, "A mismatch between the type requested and the data stored." - if ncount < 0: - raise ValueError, "Negative number of bytes to read:\n file is probably not opened with correct endian-ness." - if ncount == 0: - raise ValueError, "End of file? Zero-bytes to read." - retval = numpyio.fread(self.file, ncount, dtype, dtype, self.bs) - if len(retval) == 1: - retval = retval[0] - if (self.raw_read(nn) == ''): - raise ValueError, "Unexpected end of file..." - return retval diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/info.py python-scipy-0.8.0+dfsg1/scipy/io/info.py --- python-scipy-0.7.2+dfsg1/scipy/io/info.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/info.py 2010-07-26 15:48:31.000000000 +0100 @@ -2,33 +2,19 @@ Data input and output ===================== - Classes - - npfile -- a class for reading and writing numpy arrays from / to binary files - Cache - DataSource - Repository - Functions - read_array -- reading ascii streams into NumPy arrays - write_array -- write an array to an ascii stream - loadmat -- read a MATLAB style mat file (version 4 and 5) - savemat -- write a MATLAB (version <= 4) style mat file - - fread -- low-level reading - fwrite -- low-level writing - bswap -- in-place byte-swapping - packbits -- Pack a binary array of 1's and 0's into an array of bytes - unpackbits -- Unpack an array packed by packbits. - - save --- simple storing of Python dictionary into module + loadmat -- read a MATLAB style mat file (version 4 through 7.1) + savemat -- write a MATLAB (version through 7.1) style mat file + netcdf_file -- read NetCDF files (version of ``pupynere`` package) + save_as_module -- simple storing of Python dictionary into module that can then be imported and the data accessed as attributes of the module. - mminfo -- query matrix info from Matrix Market formatted file mmread -- read matrix from Matrix Market formatted file mmwrite -- write matrix to Matrix Market formatted file + wavfile -- module to read / write wav files using numpy arrays + arrf -- read files in Arff format """ postpone_import = 1 diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/__init__.py python-scipy-0.8.0+dfsg1/scipy/io/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/io/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/__init__.py 2010-07-26 15:48:31.000000000 +0100 @@ -4,74 +4,8 @@ from info import __doc__ -from numpy import deprecate_with_doc +from numpy import deprecate -# These are all deprecated (until the end deprecated tag) -from npfile import npfile -from data_store import save, load, create_module, create_shelf -from array_import import read_array, write_array -from pickler import objload, objsave - -from numpyio import packbits, unpackbits, bswap, fread, fwrite, \ - convert_objectarray - -fread = deprecate_with_doc(""" -scipy.io.fread is can be replaced with raw reading capabilities of NumPy -including fromfile as well as memory-mapping capabilities. -""")(fread) - -fwrite = deprecate_with_doc(""" -scipy.io.fwrite can be replaced with raw writing capabilities of -NumPy. 
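A minimal sketch of the NumPy replacement these deprecation notes point to (file name and dtype chosen arbitrarily for illustration):

    import numpy as np

    data = np.arange(10, dtype='<f8')            # little-endian doubles
    data.tofile('data.bin')                      # raw write, covering the fwrite case
    back = np.fromfile('data.bin', dtype='<f8')  # raw read, covering the fread case
    swapped = back.byteswap(True)                # in-place byte swap, covering bswap
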
Also, remember that files can be directly memory-mapped into NumPy -arrays which is often a better way of reading especially large files. - -Look at the tofile methods as well as save and savez for writing arrays into -easily transported files of data. -""")(fwrite) - -bswap = deprecate_with_doc(""" -scipy.io.bswap is easily replaced with the byteswap method on an array. -out = scipy.io.bswap(arr) --> out = arr.byteswap(True) -""")(bswap) - -packbits = deprecate_with_doc(""" -The functionality of scipy.io.packbits is now available as numpy.packbits -The calling convention is a bit different as the 2-d case is not specialized. - -However, you can simulate scipy.packbits by raveling the last 2 dimensions -of the array and calling numpy.packbits with an axis=-1 keyword: - -def scipy_packbits(inp): - a = np.asarray(inp) - if a.ndim < 2: - return np.packbits(a) - oldshape = a.shape - newshape = oldshape[:-2] + (oldshape[-2]*oldshape[-1],) - a = np.reshape(a, newshape) - return np.packbits(a, axis=-1).ravel() -""")(packbits) - -unpackbits = deprecate_with_doc(""" -The functionality of scipy.io.unpackbits is now available in numpy.unpackbits -The calling convention is different however as the 2-d case is no longer -specialized. - -Thus, the scipy.unpackbits behavior must be simulated using numpy.unpackbits. - -def scipy_unpackbits(inp, els_per_slice, out_type=None): - inp = np.asarray(inp) - num4els = ((els_per_slice-1) >> 3) + 1 - inp = np.reshape(inp, (-1,num4els)) - res = np.unpackbits(inp, axis=-1)[:,:els_per_slice] - return res.ravel() -""")(unpackbits) - -convert_objectarray = deprecate_with_doc(""" -The same functionality can be obtained using NumPy string arrays and the -.astype method (except for the optional missing value feature). -""")(convert_objectarray) - -# end deprecated # matfile read and write from matlab.mio import loadmat, savemat diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/benchmarks/bench_structarr.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/benchmarks/bench_structarr.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/benchmarks/bench_structarr.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/benchmarks/bench_structarr.py 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,43 @@ +from __future__ import division +from numpy.testing import * + +from cStringIO import StringIO + +import numpy as np +import scipy.io as sio + + +def make_structarr(n_vars, n_fields, n_structs): + var_dict = {} + for vno in range(n_vars): + vname = 'var%00d' % vno + end_dtype = [('f%d' % d, 'i4', 10) for d in range(n_fields)] + s_arrs = np.zeros((n_structs,), dtype=end_dtype) + var_dict[vname] = s_arrs + return var_dict + + +def bench_run(): + str_io = StringIO() + print + print 'Read / writing matlab structs' + print '='*60 + print ' write | read | vars | fields | structs ' + print '-'*60 + print + for n_vars, n_fields, n_structs in ( + (10, 10, 20),): + var_dict = make_structarr(n_vars, n_fields, n_structs) + str_io = StringIO() + write_time = measure('sio.savemat(str_io, var_dict)') + read_time = measure('sio.loadmat(str_io)') + print '%.5f | %.5f | %5d | %5d | %5d ' % ( + write_time, + read_time, + n_vars, + n_fields, + n_structs) + + +if __name__ == '__main__' : + bench_run() diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/byteordercodes.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/byteordercodes.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/byteordercodes.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/byteordercodes.py 
2010-07-26 15:48:31.000000000 +0100 @@ -18,20 +18,22 @@ 'swapped': ('swapped', 'S')} def to_numpy_code(code): - ''' Convert various order codings to numpy format + """ + Convert various order codings to numpy format. + Parameters ---------- - code : {'little','big','l','b','le','be','<','>', - 'native','=', - 'swapped', 's'} string - code is converted to lower case before parsing + code : str + The code to convert. It is converted to lower case before parsing. + Legal values are: + 'little', 'big', 'l', 'b', 'le', 'be', '<', '>', 'native', '=', + 'swapped', 's'. Returns ------- - out_code : {'<','>'} string - where '<' is the numpy dtype code for little - endian, and '>' is the code for big endian - + out_code : {'<', '>'} + Here '<' is the numpy dtype code for little endian, + and '>' is the code for big endian. Examples -------- @@ -48,7 +50,8 @@ >>> sc = to_numpy_code('swapped') >>> sc == '>' if sys_is_le else sc == '<' True - ''' + + """ code = code.lower() if code is None: return native_code diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/__init__.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/__init__.py 2010-07-26 15:48:31.000000000 +0100 @@ -3,3 +3,4 @@ from numpy.testing import Tester test = Tester().test +bench = Tester().bench diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio4.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio4.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio4.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio4.py 2010-07-26 15:48:31.000000000 +0100 @@ -7,8 +7,11 @@ import scipy.sparse -from miobase import MatFileReader, MatArrayReader, MatMatrixGetter, \ - MatFileWriter, MatStreamWriter, docfiller, matdims +from miobase import MatFileReader, docfiller, matdims, \ + read_dtype, convert_dtypes, arr_to_chars, arr_dtype_number, \ + MatWriteError + +from mio_utils import squeeze_element, chars_to_strings SYS_LITTLE_ENDIAN = sys.byteorder == 'little' @@ -62,16 +65,38 @@ 4: 'Cray', #!! } +class VarHeader4(object): + # Mat4 variables never logical or global + is_logical = False + is_global = False -class Mat4ArrayReader(MatArrayReader): - ''' Class for reading Mat4 arrays - ''' - - def matrix_getter_factory(self): - ''' Read header, return matrix getter ''' - data = self.read_dtype(self.dtypes['header']) - header = {} - header['name'] = self.read_ztstring(int(data['namlen'])) + def __init__(self, + name, + dtype, + mclass, + dims, + is_complex): + self.name = name + self.dtype = dtype + self.mclass = mclass + self.dims = dims + self.is_complex = is_complex + + +class VarReader4(object): + ''' Class to read matlab 4 variables ''' + + def __init__(self, file_reader): + self.file_reader = file_reader + self.mat_stream = file_reader.mat_stream + self.dtypes = file_reader.dtypes + self.chars_as_strings = file_reader.chars_as_strings + self.squeeze_me = file_reader.squeeze_me + + def read_header(self): + ''' Reads and return header for variable ''' + data = read_dtype(self.mat_stream, self.dtypes['header']) + name = self.mat_stream.read(int(data['namlen'])).strip('\x00') if data['mopt'] < 0 or data['mopt'] > 5000: ValueError, 'Mat 4 mopt wrong format, byteswapping problem?' M,rest = divmod(data['mopt'], 1000) @@ -80,40 +105,45 @@ T = rest if O != 0: raise ValueError, 'O in MOPT integer should be 0, wrong format?' 
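As a quick illustration of the MOPT decoding performed in read_header above (the value 52 is hypothetical): the decimal digits are peeled off with divmod, and only O is required to be zero.

    mopt = 52                      # hypothetical header value
    M, rest = divmod(mopt, 1000)   # M = 0, the numeric-format / byte-order digit
    O, rest = divmod(rest, 100)    # O = 0, must be zero in a valid file
    P, T = divmod(rest, 10)        # P = 5 and T = 2, the data-type and matrix-class codes
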
- header['dtype'] = self.dtypes[P] - header['mclass'] = T - header['dims'] = (data['mrows'], data['ncols']) - header['is_complex'] = data['imagf'] == 1 - remaining_bytes = header['dtype'].itemsize * np.product(header['dims']) - if header['is_complex'] and not header['mclass'] == mxSPARSE_CLASS: - remaining_bytes *= 2 - next_pos = self.mat_stream.tell() + remaining_bytes - if T == mxFULL_CLASS: - getter = Mat4FullGetter(self, header) - elif T == mxCHAR_CLASS: - getter = Mat4CharGetter(self, header) - elif T == mxSPARSE_CLASS: - getter = Mat4SparseGetter(self, header) + dims = (data['mrows'], data['ncols']) + is_complex = data['imagf'] == 1 + dtype = self.dtypes[P] + return VarHeader4( + name, + dtype, + T, + dims, + is_complex) + + def array_from_header(self, hdr, process=True): + mclass = hdr.mclass + if mclass == mxFULL_CLASS: + arr = self.read_full_array(hdr) + elif mclass == mxCHAR_CLASS: + arr = self.read_char_array(hdr) + if process and self.chars_as_strings: + arr = chars_to_strings(arr) + elif mclass == mxSPARSE_CLASS: + # no current processing (below) makes sense for sparse + return self.read_sparse_array(hdr) else: - raise TypeError, 'No reader for class code %s' % T - getter.next_position = next_pos - return getter - - -class Mat4MatrixGetter(MatMatrixGetter): - - # Mat4 variables never global or logical - is_global = False - is_logical = False + raise TypeError, 'No reader for class code %s' % mclass + if process and self.squeeze_me: + return squeeze_element(arr) + return arr + + def read_sub_array(self, hdr, copy=True): + ''' Mat4 read always uses header dtype and dims + hdr : object + object with attributes 'dtype', 'dims' + copy : bool + copies array if True (default True) + (buffer is usually read only) - def read_array(self, copy=True): - ''' Mat4 read array always uses header dtype and dims - copy - copies array if True - (buffer is usually read only) - a_dtype is assumed to be correct endianness + self.dtype is assumed to be correct endianness ''' - dt = self.header['dtype'] - dims = self.header['dims'] + dt = hdr.dtype + dims = hdr.dims num_bytes = dt.itemsize for d in dims: num_bytes *= d @@ -125,53 +155,45 @@ arr = arr.copy() return arr - -class Mat4FullGetter(Mat4MatrixGetter): - def __init__(self, array_reader, header): - super(Mat4FullGetter, self).__init__(array_reader, header) - if header['is_complex']: - self.mat_dtype = np.dtype(np.complex128) - else: - self.mat_dtype = np.dtype(np.float64) - - def get_raw_array(self): - if self.header['is_complex']: + def read_full_array(self, hdr): + ''' Full (rather than sparse matrix) getter + ''' + if hdr.is_complex: # avoid array copy to save memory - res = self.read_array(copy=False) - res_j = self.read_array(copy=False) + res = self.read_sub_array(hdr, copy=False) + res_j = self.read_sub_array(hdr, copy=False) return res + (res_j * 1j) - return self.read_array() + return self.read_sub_array(hdr) + def read_char_array(self, hdr): + ''' Ascii text matrix (char matrix) reader -class Mat4CharGetter(Mat4MatrixGetter): - def get_raw_array(self): - arr = self.read_array().astype(np.uint8) + ''' + arr = self.read_sub_array(hdr).astype(np.uint8) # ascii to unicode S = arr.tostring().decode('ascii') - return np.ndarray(shape=self.header['dims'], + return np.ndarray(shape=hdr.dims, dtype=np.dtype('U1'), buffer = np.array(S)).copy() + def read_sparse_array(self, hdr): + ''' Read sparse matrix type -class Mat4SparseGetter(Mat4MatrixGetter): - ''' Read sparse matrix type - - Matlab (TM) 4 real sparse arrays are saved in a N+1 by 3 
array - format, where N is the number of non-zero values. Column 1 values - [0:N] are the (1-based) row indices of the each non-zero value, - column 2 [0:N] are the column indices, column 3 [0:N] are the - (real) values. The last values [-1,0:2] of the rows, column - indices are shape[0] and shape[1] respectively of the output - matrix. The last value for the values column is a padding 0. mrows - and ncols values from the header give the shape of the stored - matrix, here [N+1, 3]. Complex data is saved as a 4 column - matrix, where the fourth column contains the imaginary component; - the last value is again 0. Complex sparse data do _not_ have the - header imagf field set to True; the fact that the data are complex - is only detectable because there are 4 storage columns - ''' - def get_raw_array(self): - res = self.read_array() + Matlab (TM) 4 real sparse arrays are saved in a N+1 by 3 array + format, where N is the number of non-zero values. Column 1 values + [0:N] are the (1-based) row indices of the each non-zero value, + column 2 [0:N] are the column indices, column 3 [0:N] are the + (real) values. The last values [-1,0:2] of the rows, column + indices are shape[0] and shape[1] respectively of the output + matrix. The last value for the values column is a padding 0. mrows + and ncols values from the header give the shape of the stored + matrix, here [N+1, 3]. Complex data is saved as a 4 column + matrix, where the fourth column contains the imaginary component; + the last value is again 0. Complex sparse data do _not_ have the + header imagf field set to True; the fact that the data are complex + is only detectable because there are 4 storage columns + ''' + res = self.read_sub_array(hdr) tmp = res[:-1,:] dims = res[-1,0:2] I = np.ascontiguousarray(tmp[:,0],dtype='intc') #fixes byte order also @@ -195,41 +217,153 @@ %(matstream_arg)s %(load_args)s ''' - self._array_reader = Mat4ArrayReader( - mat_stream, - None, - None, - ) super(MatFile4Reader, self).__init__(mat_stream, *args, **kwargs) - self._array_reader.processor_func = self.processor_func - - def set_dtypes(self): - self.dtypes = self.convert_dtypes(mdtypes_template) - self._array_reader.dtypes = self.dtypes - - def matrix_getter_factory(self): - return self._array_reader.matrix_getter_factory() - + self._matrix_reader = None + def guess_byte_order(self): self.mat_stream.seek(0) - mopt = self.read_dtype(np.dtype('i4')) + mopt = read_dtype(self.mat_stream, np.dtype('i4')) self.mat_stream.seek(0) if mopt < 0 or mopt > 5000: return SYS_LITTLE_ENDIAN and '>' or '<' return SYS_LITTLE_ENDIAN and '<' or '>' + def initialize_read(self): + ''' Run when beginning read of variables -class Mat4MatrixWriter(MatStreamWriter): + Sets up readers from parameters in `self` + ''' + self.dtypes = convert_dtypes(mdtypes_template, self.byte_order) + self._matrix_reader = VarReader4(self) + + def read_var_header(self): + ''' Read header, return header, next position + + Header has to define at least .name and .is_global + + Parameters + ---------- + None + + Returns + ------- + header : object + object that can be passed to self.read_var_array, and that + has attributes .name and .is_global + next_position : int + position in stream of next variable + ''' + hdr = self._matrix_reader.read_header() + n = reduce(lambda x, y: x*y, hdr.dims, 1) # fast product + remaining_bytes = hdr.dtype.itemsize * n + if hdr.is_complex and not hdr.mclass == mxSPARSE_CLASS: + remaining_bytes *= 2 + next_position = self.mat_stream.tell() + remaining_bytes + return hdr, 
next_position + + def read_var_array(self, header, process=True): + ''' Read array, given `header` + + Parameters + ---------- + header : header object + object with fields defining variable header + process : {True, False} bool, optional + If True, apply recursive post-processing during loading of + array. + + Returns + ------- + arr : array + array with post-processing applied or not according to + `process`. + ''' + return self._matrix_reader.array_from_header(header, process) + + def get_variables(self, variable_names=None): + ''' get variables from stream as dictionary + + variable_names - optional list of variable names to get + + If variable_names is None, then get all variables in file + ''' + if isinstance(variable_names, basestring): + variable_names = [variable_names] + self.mat_stream.seek(0) + # set up variable reader + self.initialize_read() + mdict = {} + while not self.end_of_stream(): + hdr, next_position = self.read_var_header() + name = hdr.name + if variable_names and name not in variable_names: + self.mat_stream.seek(next_position) + continue + mdict[name] = self.read_var_array(hdr) + self.mat_stream.seek(next_position) + if variable_names: + variable_names.remove(name) + if len(variable_names) == 0: + break + return mdict + + +def arr_to_2d(arr, oned_as='row'): + ''' Make ``arr`` exactly two dimensional + + If `arr` has more than 2 dimensions, then, for the sake of + compatibility with previous versions of scipy, we reshape to 2D + preserving the last dimension and increasing the first dimension. + In future versions we will raise an error, as this is at best a very + counterinituitive thing to do. + + Parameters + ---------- + arr : array + oned_as : {'row', 'column'} + Whether to reshape 1D vectors as row vectors or column vectors. 
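A shape-only sketch of what matdims and arr_to_2d do with oned_as and with higher-dimensional input (values are arbitrary):

    import numpy as np

    v = np.arange(3.0)
    v.reshape(1, -1).shape              # (1, 3): oned_as='row'
    v.reshape(-1, 1).shape              # (3, 1): oned_as='column'
    a = np.zeros((2, 3, 4))
    a.reshape(-1, a.shape[-1]).shape    # (6, 4): >2-D collapsed onto the last axis
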
+ See documentation for ``matdims`` for more detail + + Returns + ------- + arr2d : array + 2D version of the array + ''' + dims = matdims(arr, oned_as) + if len(dims) > 2: + warnings.warn('Matlab 4 files only support <=2 ' + 'dimensions; the next version of scipy will ' + 'raise an error when trying to write >2D arrays ' + 'to matlab 4 format files', + DeprecationWarning, + ) + return arr.reshape((-1,dims[-1])) + return arr.reshape(dims) + + +class VarWriter4(object): + def __init__(self, file_writer): + self.file_stream = file_writer.file_stream + self.oned_as = file_writer.oned_as + + def write_bytes(self, arr): + self.file_stream.write(arr.tostring(order='F')) - def write_header(self, P=0, T=0, imagf=0, dims=None): + def write_string(self, s): + self.file_stream.write(s) + + def write_header(self, name, shape, P=0, T=0, imagf=0): ''' Write header for given data options + + Parameters + ---------- + name : str + shape : sequence + Shape of array as it will be read in matlab P - mat4 data type T - mat4 matrix class imagf - complex flag - dims - matrix dimensions ''' - if dims is None: - dims = self.arr.shape header = np.empty((), mdtypes_template['header']) M = not SYS_LITTLE_ENDIAN O = 0 @@ -237,72 +371,89 @@ O * 100 + P * 10 + T) - header['mrows'] = dims[0] - header['ncols'] = dims[1] + header['mrows'] = shape[0] + header['ncols'] = shape[1] header['imagf'] = imagf - header['namlen'] = len(self.name) + 1 + header['namlen'] = len(name) + 1 self.write_bytes(header) - self.write_string(self.name + '\0') - - def arr_to_2d(self): - dims = matdims(self.arr, self.oned_as) - self.arr.shape = dims - if len(dims) > 2: - self.arr = self.arr.reshape(-1,dims[-1]) + self.write_string(name + '\0') - def write(self): - assert False, 'Not implemented' + def write(self, arr, name): + ''' Write matrix `arr`, with name `name` - -class Mat4NumericWriter(Mat4MatrixWriter): - - def write(self): - self.arr_to_2d() - imagf = self.arr.dtype.kind == 'c' + Parameters + ---------- + arr : array-like + array to write + name : str + name in matlab workspace + ''' + # we need to catch sparse first, because np.asarray returns an + # an object array for scipy.sparse + if scipy.sparse.issparse(arr): + self.write_sparse(arr, name) + return + arr = np.asarray(arr) + dt = arr.dtype + if not dt.isnative: + arr = arr.astype(dt.newbyteorder('=')) + dtt = dt.type + if dtt is np.object_: + raise TypeError, 'Cannot save object arrays in Mat4' + elif dtt is np.void: + raise TypeError, 'Cannot save void type arrays' + elif dtt in (np.unicode_, np.string_): + self.write_char(arr, name) + return + self.write_numeric(arr, name) + + def write_numeric(self, arr, name): + arr = arr_to_2d(arr, self.oned_as) + imagf = arr.dtype.kind == 'c' try: - P = np_to_mtypes[self.arr.dtype.str[1:]] + P = np_to_mtypes[arr.dtype.str[1:]] except KeyError: if imagf: - self.arr = self.arr.astype('c128') + arr = arr.astype('c128') else: - self.arr = self.arr.astype('f8') + arr = arr.astype('f8') P = miDOUBLE - self.write_header(P=P, + self.write_header(name, + arr.shape, + P=P, T=mxFULL_CLASS, imagf=imagf) if imagf: - self.write_bytes(self.arr.real) - self.write_bytes(self.arr.imag) + self.write_bytes(arr.real) + self.write_bytes(arr.imag) else: - self.write_bytes(self.arr) + self.write_bytes(arr) - -class Mat4CharWriter(Mat4MatrixWriter): - - def write(self): - self.arr_to_chars() - self.arr_to_2d() - dims = self.arr.shape - self.write_header(P=miUINT8, - T=mxCHAR_CLASS) - if self.arr.dtype.kind == 'U': + def write_char(self, arr, name): + arr = 
arr_to_chars(arr) + arr = arr_to_2d(arr, self.oned_as) + dims = arr.shape + self.write_header( + name, + dims, + P=miUINT8, + T=mxCHAR_CLASS) + if arr.dtype.kind == 'U': # Recode unicode to ascii n_chars = np.product(dims) st_arr = np.ndarray(shape=(), - dtype=self.arr_dtype_number(n_chars), - buffer=self.arr) + dtype=arr_dtype_number(arr, n_chars), + buffer=arr) st = st_arr.item().encode('ascii') - self.arr = np.ndarray(shape=dims, dtype='S1', buffer=st) - self.write_bytes(self.arr) - + arr = np.ndarray(shape=dims, dtype='S1', buffer=st) + self.write_bytes(arr) -class Mat4SparseWriter(Mat4MatrixWriter): - - def write(self): + def write_sparse(self, arr, name): ''' Sparse matrices are 2D - See docstring for Mat4SparseGetter + + See docstring for VarReader4.read_sparse_array ''' - A = self.arr.tocoo() #convert to sparse COO format (ijv) + A = arr.tocoo() #convert to sparse COO format (ijv) imagf = A.dtype.kind == 'c' ijv = np.zeros((A.nnz + 1, 3+imagf), dtype='f8') ijv[:-1,0] = A.row @@ -314,43 +465,43 @@ else: ijv[:-1,2] = A.data ijv[-1,0:2] = A.shape - self.write_header(P=miDOUBLE, - T=mxSPARSE_CLASS, - dims=ijv.shape) + self.write_header( + name, + ijv.shape, + P=miDOUBLE, + T=mxSPARSE_CLASS) self.write_bytes(ijv) -def matrix_writer_factory(stream, arr, name, oned_as='row'): - ''' Factory function to return matrix writer given variable to write - stream - file or file-like stream to write to - arr - array to write - name - name in matlab (TM) workspace - ''' - if scipy.sparse.issparse(arr): - return Mat4SparseWriter(stream, arr, name, oned_as) - arr = np.array(arr) - dtt = arr.dtype.type - if dtt is np.object_: - raise TypeError, 'Cannot save object arrays in Mat4' - elif dtt is np.void: - raise TypeError, 'Cannot save void type arrays' - elif dtt in (np.unicode_, np.string_): - return Mat4CharWriter(stream, arr, name, oned_as) - else: - return Mat4NumericWriter(stream, arr, name, oned_as) - - -class MatFile4Writer(MatFileWriter): +class MatFile4Writer(object): ''' Class for writing matlab 4 format files ''' def __init__(self, file_stream, oned_as=None): self.file_stream = file_stream if oned_as is None: oned_as = 'row' self.oned_as = oned_as + self._matrix_writer = None + + def put_variables(self, mdict, write_header=None): + ''' Write variables in `mdict` to stream - def put_variables(self, mdict): + Parameters + ---------- + mdict : mapping + mapping with method ``items`` return name, contents pairs + where ``name`` which will appeak in the matlab workspace in + file load, and ``contents`` is something writeable to a + matlab file, such as a numpy array. + write_header : {None, True, False} + If True, then write the matlab file header before writing the + variables. If None (the default) then write the file header + if we are at position 0 in the stream. By setting False + here, and setting the stream position to the end of the file, + you can append variables to a matlab file + ''' + # there is no header for a matlab 4 mat file, so we ignore the + # ``write_header`` input argument. 
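A hedged usage sketch for the MatFile4Writer / VarWriter4 pair defined in this hunk (the output file name is made up; scipy.io.savemat with format='4' is the usual front end):

    import numpy as np
    from scipy.io.matlab.mio4 import MatFile4Writer   # module path as in this patch

    fp = open('example_v4.mat', 'wb')                 # hypothetical output file
    writer = MatFile4Writer(fp, oned_as='row')
    writer.put_variables({'x': np.arange(3.0),        # written as a numeric matrix
                          'msg': 'hello'})            # written as a char matrix
    fp.close()
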
It's there for compatibility + # with the matlab 5 version of this method + self._matrix_writer = VarWriter4(self) for name, var in mdict.items(): - matrix_writer_factory(self.file_stream, - var, - name, - self.oned_as).write() + self._matrix_writer.write(var, name) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio5_params.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio5_params.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio5_params.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio5_params.py 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,97 @@ +''' Constants and classes for matlab 5 read and write + +See also mio5_utils.pyx where these same constants arise as c enums. + +If you make changes in this file, don't forget to change mio5_utils.pyx +''' + +import numpy as np + + +miINT8 = 1 +miUINT8 = 2 +miINT16 = 3 +miUINT16 = 4 +miINT32 = 5 +miUINT32 = 6 +miSINGLE = 7 +miDOUBLE = 9 +miINT64 = 12 +miUINT64 = 13 +miMATRIX = 14 +miCOMPRESSED = 15 +miUTF8 = 16 +miUTF16 = 17 +miUTF32 = 18 + +mxCELL_CLASS = 1 +mxSTRUCT_CLASS = 2 +# The March 2008 edition of "Matlab 7 MAT-File Format" says that +# mxOBJECT_CLASS = 3, whereas matrix.h says that mxLOGICAL = 3. +# Matlab 2008a appears to save logicals as type 9, so we assume that +# the document is correct. See type 18, below. +mxOBJECT_CLASS = 3 +mxCHAR_CLASS = 4 +mxSPARSE_CLASS = 5 +mxDOUBLE_CLASS = 6 +mxSINGLE_CLASS = 7 +mxINT8_CLASS = 8 +mxUINT8_CLASS = 9 +mxINT16_CLASS = 10 +mxUINT16_CLASS = 11 +mxINT32_CLASS = 12 +mxUINT32_CLASS = 13 +# The following are not in the March 2008 edition of "Matlab 7 +# MAT-File Format," but were guessed from matrix.h. +mxINT64_CLASS = 14 +mxUINT64_CLASS = 15 +mxFUNCTION_CLASS = 16 +# Not doing anything with these at the moment. +mxOPAQUE_CLASS = 17 # This appears to be a function workspace +# https://www-old.cae.wisc.edu/pipermail/octave-maintainers/2007-May/002824.html +mxOBJECT_CLASS_FROM_MATRIX_H = 18 + + +class mat_struct(object): + ''' Placeholder for holding read data from structs + + We deprecate this method of holding struct information, and will + soon remove it, in favor of the recarray method (see loadmat + docstring) + ''' + pass + + +class MatlabObject(np.ndarray): + ''' ndarray Subclass to contain matlab object ''' + def __new__(cls, input_array, classname=None): + # Input array is an already formed ndarray instance + # We first cast to be our class type + obj = np.asarray(input_array).view(cls) + # add the new attribute to the created instance + obj.classname = classname + # Finally, we must return the newly created object: + return obj + + def __array_finalize__(self,obj): + # reset the attribute from passed original object + self.classname = getattr(obj, 'classname', None) + # We do not need to return anything + + +class MatlabFunction(np.ndarray): + ''' Subclass to signal this is a matlab function ''' + def __new__(cls, input_array): + obj = np.asarray(input_array).view(cls) + return obj + + +class MatlabOpaque(np.ndarray): + ''' Subclass to signal this is a matlab opaque matrix ''' + def __new__(cls, input_array): + obj = np.asarray(input_array).view(cls) + return obj + + +OPAQUE_DTYPE = np.dtype( + [('s0', 'O'), ('s1', 'O'), ('s2', 'O'), ('arr', 'O')]) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio5.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio5.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio5.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio5.py 2010-07-26 15:48:31.000000000 +0100 @@ -7,6 +7,67 @@ 
(as of December 5 2008) ''' +''' +================================= + Note on functions and mat files +================================= + +The document above does not give any hints as to the storage of matlab +function handles, or anonymous function handles. I had therefore to +guess the format of matlab arrays of ``mxFUNCTION_CLASS`` and +``mxOPAQUE_CLASS`` by looking at example mat files. + +``mxFUNCTION_CLASS`` stores all types of matlab functions. It seems to +contain a struct matrix with a set pattern of fields. For anonymous +functions, a sub-fields of one of these fields seems to contain the +well-named ``mxOPAQUE_CLASS``. This seems to cotain: + +* array flags as for any matlab matrix +* 3 int8 strings +* a matrix + +It seems that, whenever the mat file contains a ``mxOPAQUE_CLASS`` +instance, there is also an un-named matrix (name == '') at the end of +the mat file. I'll call this the ``__function_workspace__`` matrix. + +When I saved two anonymous functions in a mat file, or appended another +anonymous function to the mat file, there was still only one +``__function_workspace__`` un-named matrix at the end, but larger than +that for a mat file with a single anonymous function, suggesting that +the workspaces for the two functions had been merged. + +The ``__function_workspace__`` matrix appears to be of double class +(``mxCLASS_DOUBLE``), but stored as uint8, the memory for which is in +the format of a mini .mat file, without the first 124 bytes of the file +header (the description and the subsystem_offset), but with the version +U2 bytes, and the S2 endian test bytes. There follow 4 zero bytes, +presumably for 8 byte padding, and then a series of ``miMATRIX`` +entries, as in a standard mat file. The ``miMATRIX`` entries appear to +be series of un-named (name == '') matrices, and may also contain arrays +of this same mini-mat format. + +I guess that: + +* saving an anonymous function back to a mat file will need the + associated ``__function_workspace__`` matrix saved as well for the + anonymous function to work correctly. +* appending to a mat file that has a ``__function_workspace__`` would + involve first pulling off this workspace, appending, checking whether + there were any more anonymous functions appended, and then somehow + merging the relevant workspaces, and saving at the end of the mat + file. + +The mat files I was playing with are in ``tests/data``: + +* sqr.mat +* parabola.mat +* some_functions.mat + +See ``tests/test_mio.py:test_mio_funcs.py`` for a debugging +script I was working with. + +''' + # Small fragments of current code adapted from matfile.py by Heiko # Henkelmann @@ -14,83 +75,30 @@ import time import sys import zlib -from StringIO import StringIO -from cStringIO import StringIO as cStringIO -from copy import copy as pycopy +from cStringIO import StringIO import warnings import numpy as np import scipy.sparse -import byteordercodes -from miobase import MatFileReader, MatArrayReader, MatMatrixGetter, \ - MatFileWriter, MatStreamWriter, docfiller, matdims, \ - MatReadError - -miINT8 = 1 -miUINT8 = 2 -miINT16 = 3 -miUINT16 = 4 -miINT32 = 5 -miUINT32 = 6 -miSINGLE = 7 -miDOUBLE = 9 -miINT64 = 12 -miUINT64 = 13 -miMATRIX = 14 -miCOMPRESSED = 15 -miUTF8 = 16 -miUTF16 = 17 -miUTF32 = 18 - -mxCELL_CLASS = 1 -mxSTRUCT_CLASS = 2 -# The March 2008 edition of "Matlab 7 MAT-File Format" says that -# mxOBJECT_CLASS = 3, whereas matrix.h says that mxLOGICAL = 3. -# Matlab 2008a appears to save logicals as type 9, so we assume that -# the document is correct. 
See type 18, below. -mxOBJECT_CLASS = 3 -mxCHAR_CLASS = 4 -mxSPARSE_CLASS = 5 -mxDOUBLE_CLASS = 6 -mxSINGLE_CLASS = 7 -mxINT8_CLASS = 8 -mxUINT8_CLASS = 9 -mxINT16_CLASS = 10 -mxUINT16_CLASS = 11 -mxINT32_CLASS = 12 -mxUINT32_CLASS = 13 -# The following are not in the March 2008 edition of "Matlab 7 -# MAT-File Format," but were guessed from matrix.h. -mxINT64_CLASS = 14 -mxUINT64_CLASS = 15 -mxFUNCTION_CLASS = 16 -# Not doing anything with these at the moment. -mxOPAQUE_CLASS = 17 # This appears to be a function workspace -# https://www-old.cae.wisc.edu/pipermail/octave-maintainers/2007-May/002824.html -mxOBJECT_CLASS_FROM_MATRIX_H = 18 - -mxmap = { # Sometimes good for debug prints - mxCELL_CLASS: 'mxCELL_CLASS', - mxSTRUCT_CLASS: 'mxSTRUCT_CLASS', - mxOBJECT_CLASS: 'mxOBJECT_CLASS', - mxCHAR_CLASS: 'mxCHAR_CLASS', - mxSPARSE_CLASS: 'mxSPARSE_CLASS', - mxDOUBLE_CLASS: 'mxDOUBLE_CLASS', - mxSINGLE_CLASS: 'mxSINGLE_CLASS', - mxINT8_CLASS: 'mxINT8_CLASS', - mxUINT8_CLASS: 'mxUINT8_CLASS', - mxINT16_CLASS: 'mxINT16_CLASS', - mxUINT16_CLASS: 'mxUINT16_CLASS', - mxINT32_CLASS: 'mxINT32_CLASS', - mxUINT32_CLASS: 'mxUINT32_CLASS', - mxINT64_CLASS: 'mxINT64_CLASS', - mxUINT64_CLASS: 'mxUINT64_CLASS', - mxFUNCTION_CLASS: 'mxFUNCTION_CLASS', - mxOPAQUE_CLASS: 'mxOPAQUE_CLASS', - mxOBJECT_CLASS_FROM_MATRIX_H: 'mxOBJECT_CLASS_FROM_MATRIX_H', -} +from miobase import MatFileReader, docfiller, matdims, \ + read_dtype, convert_dtypes, arr_to_chars, arr_dtype_number, \ + MatWriteError, MatReadError + +# Reader object for matlab 5 format variables +from mio5_utils import VarReader5 + +# Constants and helper objects +from mio5_params import MatlabObject, MatlabFunction, \ + miINT8, miUINT8, miINT16, miUINT16, miINT32, miUINT32, \ + miSINGLE, miDOUBLE, miINT64, miUINT64, miMATRIX, \ + miCOMPRESSED, miUTF8, miUTF16, miUTF32, \ + mxCELL_CLASS, mxSTRUCT_CLASS, mxOBJECT_CLASS, mxCHAR_CLASS, \ + mxSPARSE_CLASS, mxDOUBLE_CLASS, mxSINGLE_CLASS, mxINT8_CLASS, \ + mxUINT8_CLASS, mxINT16_CLASS, mxUINT16_CLASS, mxINT32_CLASS, \ + mxUINT32_CLASS, mxINT64_CLASS, mxUINT64_CLASS + mdtypes_template = { miINT8: 'i1', @@ -182,427 +190,60 @@ miUTF32: {'codec': 'utf_32','width': 4}, } -miUINT16_codec = sys.getdefaultencoding() - -mx_numbers = ( - mxDOUBLE_CLASS, - mxSINGLE_CLASS, - mxINT8_CLASS, - mxUINT8_CLASS, - mxINT16_CLASS, - mxUINT16_CLASS, - mxINT32_CLASS, - mxUINT32_CLASS, - mxINT64_CLASS, - mxUINT64_CLASS, - ) - - -class mat_struct(object): - ''' Placeholder for holding read data from structs - - We will deprecate this method of holding struct information in a - future version of scipy, in favor of the recarray method (see - loadmat docstring) - ''' - pass - - -class MatlabObject(np.ndarray): - ''' ndarray Subclass to contain matlab object ''' - def __new__(cls, input_array, classname=None): - # Input array is an already formed ndarray instance - # We first cast to be our class type - obj = np.asarray(input_array).view(cls) - # add the new attribute to the created instance - obj.classname = classname - # Finally, we must return the newly created object: - return obj - - def __array_finalize__(self,obj): - # reset the attribute from passed original object - self.classname = getattr(obj, 'classname', None) - # We do not need to return anything - - -class MatlabFunction(np.ndarray): - ''' Subclass to signal this is a matlab function ''' - def __new__(cls, input_array): - obj = np.asarray(input_array).view(cls) - -class MatlabBinaryBlock(object): - ''' Class to contain matlab unreadable blocks ''' - def __init__(self, 
binaryblock, endian): - self.binaryblock = binaryblock - self.endian = endian +def convert_codecs(template, byte_order): + ''' Convert codec template mapping to byte order + Set codecs not on this system to None -class Mat5ArrayReader(MatArrayReader): - ''' Class to get Mat5 arrays - - Provides element reader functions, header reader, matrix reader - factory function - ''' - - def __init__(self, - mat_stream, - dtypes, - processor_func, - codecs, - class_dtypes, - struct_as_record): - super(Mat5ArrayReader, self).__init__(mat_stream, - dtypes, - processor_func) - self.codecs = codecs - self.class_dtypes = class_dtypes - self.struct_as_record = struct_as_record - - def read_element(self, copy=True): - raw_tag = self.mat_stream.read(8) - tag = np.ndarray(shape=(), - dtype=self.dtypes['tag_full'], - buffer=raw_tag) - mdtype = tag['mdtype'].item() - # Byte count if this is small data element - byte_count = mdtype >> 16 - if byte_count: # small data element format - if byte_count > 4: - raise ValueError, 'Too many bytes for sde format' - mdtype = mdtype & 0xFFFF - if mdtype == miMATRIX: - raise TypeError('Cannot have matrix in SDE format') - raw_str = raw_tag[4:byte_count+4] - else: # regular element - byte_count = tag['byte_count'].item() - # Deal with miMATRIX type (cannot pass byte string) - if mdtype == miMATRIX: - return self.current_getter(byte_count).get_array() - # All other types can be read from string - raw_str = self.mat_stream.read(byte_count) - # Seek to next 64-bit boundary - mod8 = byte_count % 8 - if mod8: - self.mat_stream.seek(8 - mod8, 1) - - if mdtype in self.codecs: # encoded char data - codec = self.codecs[mdtype] - if not codec: - raise TypeError, 'Do not support encoding %d' % mdtype - el = raw_str.decode(codec) - else: # numeric data - dt = self.dtypes[mdtype] - el_count = byte_count // dt.itemsize - el = np.ndarray(shape=(el_count,), - dtype=dt, - buffer=raw_str) - if copy: - el = el.copy() - - return el - - def matrix_getter_factory(self): - ''' Returns reader for next matrix at top level ''' - tag = self.read_dtype(self.dtypes['tag_full']) - mdtype = tag['mdtype'].item() - byte_count = tag['byte_count'].item() - next_pos = self.mat_stream.tell() + byte_count - if mdtype == miCOMPRESSED: - getter = Mat5ZArrayReader(self, byte_count).matrix_getter_factory() - elif not mdtype == miMATRIX: - raise TypeError, \ - 'Expecting miMATRIX type here, got %d' % mdtype - else: - getter = self.current_getter(byte_count) - getter.next_position = next_pos - return getter - - def current_getter(self, byte_count): - ''' Return matrix getter for current stream position - - Returns matrix getters at top level and sub levels - ''' - if not byte_count: # an empty miMATRIX can contain no bytes - return Mat5EmptyMatrixGetter(self) - af = self.read_dtype(self.dtypes['array_flags']) - header = {} - flags_class = af['flags_class'] - mc = flags_class & 0xFF - header['mclass'] = mc - header['is_logical'] = flags_class >> 9 & 1 - header['is_global'] = flags_class >> 10 & 1 - header['is_complex'] = flags_class >> 11 & 1 - header['nzmax'] = af['nzmax'] - ''' Here I am playing with a binary block read of - untranslatable data. I am not using this at the moment because - reading it has the side effect of making opposite ending mat - files unwritable on the round trip. 
- - if mc == mxFUNCTION_CLASS: - # we can't read these, and want to keep track of the byte - # count - so we need to avoid the following unpredictable - # length element reads - return Mat5BinaryBlockGetter(self, - header, - af, - byte_count) - ''' - header['dims'] = self.read_element() - header['name'] = self.read_element().tostring() - # maybe a dictionary mapping here as a dispatch table - if mc in mx_numbers: - return Mat5NumericMatrixGetter(self, header) - if mc == mxSPARSE_CLASS: - return Mat5SparseMatrixGetter(self, header) - if mc == mxCHAR_CLASS: - return Mat5CharMatrixGetter(self, header) - if mc == mxCELL_CLASS: - return Mat5CellMatrixGetter(self, header) - if mc == mxSTRUCT_CLASS: - return Mat5StructMatrixGetter(self, header) - if mc == mxOBJECT_CLASS: - return Mat5ObjectMatrixGetter(self, header) - if mc == mxFUNCTION_CLASS: - return Mat5FunctionGetter(self, header) - raise TypeError, 'No reader for class code %s' % mc - - -class Mat5ZArrayReader(Mat5ArrayReader): - ''' Getter for compressed arrays - - Sets up reader for gzipped stream on init, providing wrapper - for this new sub-stream. - - ''' - def __init__(self, array_reader, byte_count): - super(Mat5ZArrayReader, self).__init__( - cStringIO(zlib.decompress( - array_reader.mat_stream.read(byte_count))), - array_reader.dtypes, - array_reader.processor_func, - array_reader.codecs, - array_reader.class_dtypes, - array_reader.struct_as_record) - - -class Mat5MatrixGetter(MatMatrixGetter): - ''' Base class for getting Mat5 matrices - - Gets current read information from passed array_reader - ''' - - def __init__(self, array_reader, header): - super(Mat5MatrixGetter, self).__init__(array_reader, header) - self.class_dtypes = array_reader.class_dtypes - self.codecs = array_reader.codecs - self.is_global = header['is_global'] - self.mat_dtype = None - - def read_element(self, *args, **kwargs): - return self.array_reader.read_element(*args, **kwargs) - - -class Mat5EmptyMatrixGetter(Mat5MatrixGetter): - ''' Dummy class to return empty array for empty matrix - ''' - def __init__(self, array_reader): - self.array_reader = array_reader - self.mat_stream = array_reader.mat_stream - self.header = {} - self.name = '' - self.is_global = False - self.mat_dtype = 'f8' - - def get_raw_array(self): - return np.array([[]]) - - -class Mat5NumericMatrixGetter(Mat5MatrixGetter): - - def __init__(self, array_reader, header): - super(Mat5NumericMatrixGetter, self).__init__(array_reader, header) - if header['is_logical']: - self.mat_dtype = np.dtype('bool') - else: - self.mat_dtype = self.class_dtypes[header['mclass']] - - def get_raw_array(self): - if self.header['is_complex']: - # avoid array copy to save memory - res = self.read_element(copy=False) - res_j = self.read_element(copy=False) - res = res + (res_j * 1j) - else: - res = self.read_element() - return np.ndarray(shape=self.header['dims'], - dtype=res.dtype, - buffer=res, - order='F') - - -class Mat5SparseMatrixGetter(Mat5MatrixGetter): - def get_raw_array(self): - rowind = self.read_element() - indptr = self.read_element() - if self.header['is_complex']: - # avoid array copy to save memory - data = self.read_element(copy=False) - data_j = self.read_element(copy=False) - data = data + (data_j * 1j) - else: - data = self.read_element() - ''' From the matlab (TM) API documentation, last found here: - http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_external/ - rowind are simply the row indices for all the (nnz) non-zero - entries in the sparse array. 
rowind has nzmax entries, so - may well have more entries than nnz, the actual number - of non-zero entries, but rowind[nnz:] can be discarded - and should be 0. indptr has length (number of columns + 1), - and is such that, if D = diff(colind), D[j] gives the number - of non-zero entries in column j. Because rowind values are - stored in column order, this gives the column corresponding to - each rowind - ''' - M,N = self.header['dims'] - indptr = indptr[:N+1] - nnz = indptr[-1] - rowind = rowind[:nnz] - data = data[:nnz] - return scipy.sparse.csc_matrix( - (data,rowind,indptr), - shape=(M,N)) - - -class Mat5CharMatrixGetter(Mat5MatrixGetter): - def get_raw_array(self): - res = self.read_element() - # Convert non-string types to unicode - if isinstance(res, np.ndarray): - if res.dtype.type == np.uint16: - codec = miUINT16_codec - if self.codecs['uint16_len'] == 1: - res = res.astype(np.uint8) - elif res.dtype.type in (np.uint8, np.int8): - codec = 'ascii' - else: - raise TypeError, 'Did not expect type %s' % res.dtype - res = res.tostring().decode(codec) - return np.ndarray(shape=self.header['dims'], - dtype=np.dtype('U1'), - buffer=np.array(res), - order='F').copy() - - -class Mat5CellMatrixGetter(Mat5MatrixGetter): - def get_raw_array(self): - # Account for fortran indexing of cells - tupdims = tuple(self.header['dims'][::-1]) - length = np.product(tupdims) - result = np.empty(length, dtype=object) - for i in range(length): - result[i] = self.get_item() - return result.reshape(tupdims).T - - def get_item(self): - return self.read_element() - - -class Mat5StructMatrixGetter(Mat5MatrixGetter): - def __init__(self, array_reader, header): - super(Mat5StructMatrixGetter, self).__init__(array_reader, header) - self.struct_as_record = array_reader.struct_as_record - - def get_raw_array(self): - namelength = self.read_element()[0] - names = self.read_element() - field_names = [names[i:i+namelength].tostring().strip('\x00') - for i in xrange(0,len(names),namelength)] - tupdims = tuple(self.header['dims'][::-1]) - length = np.product(tupdims) - if self.struct_as_record: - if not len(field_names): - # If there are no field names, there is no dtype - # representation we can use, falling back to empty - # object - return np.empty(tupdims, dtype=object).T - dtype = [(field_name, object) for field_name in field_names] - result = np.empty(length, dtype=dtype) - for i in range(length): - for field_name in field_names: - result[i][field_name] = self.read_element() - else: # Backward compatibility with previous format - self.obj_template = mat_struct() - self.obj_template._fieldnames = field_names - result = np.empty(length, dtype=object) - for i in range(length): - item = pycopy(self.obj_template) - for name in field_names: - item.__dict__[name] = self.read_element() - result[i] = item - return result.reshape(tupdims).T - - -class Mat5ObjectMatrixGetter(Mat5StructMatrixGetter): - def get_raw_array(self): - '''Matlab objects are like structs, with an extra classname field''' - classname = self.read_element().tostring() - result = super(Mat5ObjectMatrixGetter, self).get_raw_array() - return MatlabObject(result, classname) - - -class Mat5FunctionGetter(Mat5ObjectMatrixGetter): - ''' Class to provide warning and message string for unreadable - matlab function data - ''' - def get_raw_array(self): - raise MatReadError('Cannot read matlab functions') - - -class Mat5BinaryBlockGetter(object): - ''' Class to read in unreadable binary blocks - - This class could be used to read in matlab functions + Parameters + 
---------- + template : mapping + key, value are respectively codec name, and root name for codec + (without byte order suffix) + byte_order : {'<', '>'} + code for little or big endian + + Returns + ------- + codecs : dict + key, value are name, codec (as in .encode(codec)) ''' + codecs = {} + postfix = byte_order == '<' and '_le' or '_be' + for k, v in template.items(): + codec = v['codec'] + try: + " ".encode(codec) + except LookupError: + codecs[k] = None + continue + if v['width'] > 1: + codec += postfix + codecs[k] = codec + return codecs.copy() - def __init__(self, - array_reader, - header, - array_flags, - byte_count): - self.array_reader = array_reader - self.header = header - self.array_flags = array_flags - arr_str = array_flags.tostring() - self.binaryblock = array_reader.mat_stream.read( - byte_count-len(array_flags.tostring())) - stream = StringIO(self.binaryblock) - reader = Mat5ArrayReader( - stream, - array_reader.dtypes, - lambda x : None, - array_reader.codecs, - array_reader.class_dtypes, - False) - self.header['dims'] = reader.read_element() - self.header['name'] = reader.read_element().tostring() - self.name = self.header['name'] - self.is_global = header['is_global'] - - def get_array(self): - dt = self.array_reader.dtypes[miINT32] - endian = byteordercodes.to_numpy_code(dt.byteorder) - data = self.array_flags.tostring() + self.binaryblock - return MatlabBinaryBlock(data, endian) - class MatFile5Reader(MatFileReader): ''' Reader for Mat 5 mat files Adds the following attribute to base class - - uint16_codec - char codec to use for uint16 char arrays - (defaults to system default codec) - ''' + + uint16_codec - char codec to use for uint16 char arrays + (defaults to system default codec) + + Uses variable reader that has the following stardard interface (see + abstract class in ``miobase``:: + + __init__(self, file_reader) + read_header(self) + array_from_header(self) + + and added interface:: + + set_stream(self, stream) + read_full_tag(self) + + ''' @docfiller def __init__(self, mat_stream, @@ -611,7 +252,7 @@ squeeze_me=False, chars_as_strings=True, matlab_compatible=False, - struct_as_record=None, # default False, for now + struct_as_record=True, uint16_codec=None ): '''Initializer for matlab 5 file format reader @@ -623,24 +264,6 @@ Set codec to use for uint16 char arrays (e.g. 'utf-8'). 
Use system default codec if None ''' - # Deal with deprecations - if struct_as_record is None: - warnings.warn("Using struct_as_record default value (False)" + - " This will change to True in future versions", - FutureWarning, stacklevel=2) - struct_as_record = False - self.codecs = {} - # Missing inputs to array reader set later (processor func - # below, dtypes, codecs via our own set_dtype function, called - # from parent __init__) - self._array_reader = Mat5ArrayReader( - mat_stream, - None, - None, - None, - None, - struct_as_record - ) super(MatFile5Reader, self).__init__( mat_stream, byte_order, @@ -648,52 +271,19 @@ squeeze_me, chars_as_strings, matlab_compatible, + struct_as_record ) - self._array_reader.processor_func = self.processor_func - self.uint16_codec = uint16_codec - - def get_uint16_codec(self): - return self._uint16_codec - def set_uint16_codec(self, uint16_codec): + # Set uint16 codec if not uint16_codec: uint16_codec = sys.getdefaultencoding() - # Set length of miUINT16 char encoding - self.codecs['uint16_len'] = len(" ".encode(uint16_codec)) \ - - len(" ".encode(uint16_codec)) - self.codecs['uint16_codec'] = uint16_codec - self._array_reader.codecs = self.codecs - self._uint16_codec = uint16_codec - uint16_codec = property(get_uint16_codec, - set_uint16_codec, - None, - 'get/set uint16_codec') - - def set_dtypes(self): - ''' Set dtypes and codecs ''' - self.dtypes = self.convert_dtypes(mdtypes_template) - self.class_dtypes = self.convert_dtypes(mclass_dtypes_template) - codecs = {} - postfix = self.order_code == '<' and '_le' or '_be' - for k, v in codecs_template.items(): - codec = v['codec'] - try: - " ".encode(codec) - except LookupError: - codecs[k] = None - continue - if v['width'] > 1: - codec += postfix - codecs[k] = codec - self.codecs.update(codecs) - self.update_array_reader() - - def update_array_reader(self): - self._array_reader.codecs = self.codecs - self._array_reader.dtypes = self.dtypes - self._array_reader.class_dtypes = self.class_dtypes - - def matrix_getter_factory(self): - return self._array_reader.matrix_getter_factory() + self.uint16_codec = uint16_codec + # placeholders for dtypes, codecs - see initialize_read + self.dtypes = None + self.class_dtypes = None + self.codecs = None + # placeholders for readers - see initialize_read method + self._file_reader = None + self._matrix_reader = None def guess_byte_order(self): ''' Guess byte order. @@ -703,41 +293,253 @@ self.mat_stream.seek(0) return mi == 'IM' and '<' or '>' - def file_header(self): + def read_file_header(self): ''' Read in mat 5 file header ''' hdict = {} - hdr = self.read_dtype(self.dtypes['file_header']) + hdr = read_dtype(self.mat_stream, self.dtypes['file_header']) hdict['__header__'] = hdr['description'].item().strip(' \t\n\000') v_major = hdr['version'] >> 8 v_minor = hdr['version'] & 0xFF hdict['__version__'] = '%d.%d' % (v_major, v_minor) return hdict + def initialize_read(self): + ''' Run when beginning read of variables + + Sets up readers from parameters in `self` + ''' + self.dtypes = convert_dtypes(mdtypes_template, self.byte_order) + self.class_dtypes = convert_dtypes(mclass_dtypes_template, + self.byte_order) + self.codecs = convert_codecs(codecs_template, self.byte_order) + uint16_codec = self.uint16_codec + # Set length of miUINT16 char encoding + self.codecs['uint16_len'] = len(" ".encode(uint16_codec)) \ + - len(" ".encode(uint16_codec)) + self.codecs['uint16_codec'] = uint16_codec + # reader for top level stream. 
We need this extra top-level + # reader because we use the matrix_reader object to contain + # compressed matrices (so they have their own stream) + self._file_reader = VarReader5(self) + # reader for matrix streams + self._matrix_reader = VarReader5(self) + + def read_var_header(self): + ''' Read header, return header, next position + + Header has to define at least .name and .is_global + + Parameters + ---------- + None + + Returns + ------- + header : object + object that can be passed to self.read_var_array, and that + has attributes .name and .is_global + next_position : int + position in stream of next variable + ''' + mdtype, byte_count = self._file_reader.read_full_tag() + assert byte_count > 0 + next_pos = self.mat_stream.tell() + byte_count + if mdtype == miCOMPRESSED: + # make new stream from compressed data + data = self.mat_stream.read(byte_count) + # Some matlab files contain zlib streams without valid + # Z_STREAM_END termination. To get round this, we use the + # decompressobj object, that allows you to decode an + # incomplete stream. See discussion at + # http://bugs.python.org/issue8672 + dcor = zlib.decompressobj() + stream = StringIO(dcor.decompress(data)) + # Check the stream is not so broken as to leave cruft behind + assert dcor.flush() == '' + del data + self._matrix_reader.set_stream(stream) + mdtype, byte_count = self._matrix_reader.read_full_tag() + else: + self._matrix_reader.set_stream(self.mat_stream) + if not mdtype == miMATRIX: + raise TypeError, \ + 'Expecting miMATRIX type here, got %d' % mdtype + header = self._matrix_reader.read_header() + return header, next_pos + + def read_var_array(self, header, process=True): + ''' Read array, given `header` + + Parameters + ---------- + header : header object + object with fields defining variable header + process : {True, False} bool, optional + If True, apply recursive post-processing during loading of + array. + + Returns + ------- + arr : array + array with post-processing applied or not according to + `process`. 
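The comment above on miCOMPRESSED variables explains why read_var_header decompresses through zlib.decompressobj rather than zlib.decompress: some MATLAB files ship zlib streams without a valid Z_STREAM_END terminator. A minimal standalone sketch of the difference between the two APIs (an editor's illustration, not part of the patch; the payload and the four-byte truncation are contrived to mimic such a file):

import zlib

# A zlib stream whose trailing checksum / Z_STREAM_END marker is missing,
# similar to the malformed miCOMPRESSED variables described above.
payload = b'x' * 1000
truncated = zlib.compress(payload)[:-4]   # drop the 4-byte adler32 trailer

try:
    zlib.decompress(truncated)            # the one-shot API insists on a complete stream
except zlib.error as err:
    print('zlib.decompress failed: %s' % err)

# decompressobj() hands back whatever it could decode from the partial stream
dcor = zlib.decompressobj()
recovered = dcor.decompress(truncated)
assert recovered == payload               # all data recovered despite the truncation
print(repr(dcor.flush()))                 # empty string -> no cruft left behind, as checked above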
+ ''' + return self._matrix_reader.array_from_header(header, process) + + def get_variables(self, variable_names=None): + ''' get variables from stream as dictionary + + variable_names - optional list of variable names to get + + If variable_names is None, then get all variables in file + ''' + if isinstance(variable_names, basestring): + variable_names = [variable_names] + self.mat_stream.seek(0) + # Here we pass all the parameters in self to the reading objects + self.initialize_read() + mdict = self.read_file_header() + mdict['__globals__'] = [] + while not self.end_of_stream(): + hdr, next_position = self.read_var_header() + name = hdr.name + if name == '': + # can only be a matlab 7 function workspace + name = '__function_workspace__' + # We want to keep this raw because mat_dtype processing + # will break the format (uint8 as mxDOUBLE_CLASS) + process = False + else: + process = True + if variable_names and name not in variable_names: + self.mat_stream.seek(next_position) + continue + try: + res = self.read_var_array(hdr, process) + except MatReadError, err: + warnings.warn( + 'Unreadable variable "%s", because "%s"' % \ + (name, err), + Warning, stacklevel=2) + res = "Read error: %s" % err + self.mat_stream.seek(next_position) + mdict[name] = res + if hdr.is_global: + mdict['__globals__'].append(name) + if variable_names: + variable_names.remove(name) + if len(variable_names) == 0: + break + return mdict + + +def to_writeable(source): + ''' Convert input object ``source`` to something we can write + + Parameters + ---------- + source : object + + Returns + ------- + arr : ndarray + + Examples + -------- + >>> to_writeable(np.array([1])) # pass through ndarrays + array([1]) + >>> expected = np.array([(1, 2)], dtype=[('a', '|O8'), ('b', '|O8')]) + >>> np.all(to_writeable({'a':1,'b':2}) == expected) + True + >>> np.all(to_writeable({'a':1,'b':2, '_c':3}) == expected) + True + >>> np.all(to_writeable({'a':1,'b':2, 100:3}) == expected) + True + >>> np.all(to_writeable({'a':1,'b':2, '99':3}) == expected) + True + >>> class klass(object): pass + >>> c = klass + >>> c.a = 1 + >>> c.b = 2 + >>> np.all(to_writeable({'a':1,'b':2}) == expected) + True + >>> to_writeable([]) + array([], dtype=float64) + >>> to_writeable(()) + array([], dtype=float64) + >>> to_writeable(None) + + >>> to_writeable('a string').dtype + dtype('|S8') + >>> to_writeable(1) + array(1) + >>> to_writeable([1]) + array([1]) + >>> to_writeable([1]) + array([1]) + >>> to_writeable(object()) # not convertable + + dict keys with legal characters are convertible + + >>> to_writeable({'a':1})['a'] + array([1], dtype=object) + + but not with illegal characters + + >>> to_writeable({'1':1}) is None + True + >>> to_writeable({'_a':1}) is None + True + ''' + if isinstance(source, np.ndarray): + return source + if source is None: + return None + # Objects that have dicts + if hasattr(source, '__dict__'): + source = dict((key, value) for key, value in source.__dict__.items() + if not key.startswith('_')) + # Mappings or object dicts + if hasattr(source, 'keys'): + dtype = [] + values = [] + for field, value in source.items(): + if (isinstance(field, basestring) and + not field[0] in '_0123456789'): + dtype.append((field,object)) + values.append(value) + if dtype: + return np.array( [tuple(values)] ,dtype) + else: + return None + # Next try and convert to an array + narr = np.asanyarray(source) + if narr.dtype.type in (np.object, np.object_) and \ + narr.shape == () and narr == source: + # No interesting conversion possible + return 
None + return narr + -class Mat5MatrixWriter(MatStreamWriter): +class VarWriter5(object): ''' Generic matlab matrix writing class ''' mat_tag = np.zeros((), mdtypes_template['tag_full']) mat_tag['mdtype'] = miMATRIX - default_mclass = None # default class for header writing - def __init__(self, - file_stream, - arr, - name, - is_global=False, - unicode_strings=False, - long_field_names=False, - oned_as='column'): - super(Mat5MatrixWriter, self).__init__(file_stream, - arr, - name, - oned_as) - self.is_global = is_global - self.unicode_strings = unicode_strings - self.long_field_names = long_field_names - self.oned_as = oned_as - def write_dtype(self, arr): - self.file_stream.write(arr.tostring()) + def __init__(self, file_writer): + self.file_stream = file_writer.file_stream + self.unicode_strings=file_writer.unicode_strings + self.long_field_names=file_writer.long_field_names + self.oned_as = file_writer.oned_as + # These are used for top level writes, and unset after + self._var_name = None + self._var_is_global = False + + def write_bytes(self, arr): + self.file_stream.write(arr.tostring(order='F')) + + def write_string(self, s): + self.file_stream.write(s) def write_element(self, arr, mdtype=None): ''' write tag and data ''' @@ -755,41 +557,43 @@ tag['byte_count_mdtype'] = (byte_count << 16) + mdtype # if arr.tostring is < 4, the element will be zero-padded as needed. tag['data'] = arr.tostring(order='F') - self.write_dtype(tag) + self.write_bytes(tag) def write_regular_element(self, arr, mdtype, byte_count): # write tag, data tag = np.zeros((), mdtypes_template['tag_full']) tag['mdtype'] = mdtype tag['byte_count'] = byte_count - padding = (8 - tag['byte_count']) % 8 - self.write_dtype(tag) + self.write_bytes(tag) self.write_bytes(arr) # pad to next 64-bit boundary - self.write_bytes(np.zeros((padding,),'u1')) - - def write_header(self, mclass=None, - is_global=False, + bc_mod_8 = byte_count % 8 + if bc_mod_8: + self.file_stream.write('\x00' * (8-bc_mod_8)) + + def write_header(self, + shape, + mclass, is_complex=False, is_logical=False, - nzmax=0, - shape=None): + nzmax=0): ''' Write header for given data options + shape : sequence + array shape mclass - mat5 matrix class - is_global - True if matrix is global is_complex - True if matrix is complex is_logical - True if matrix is logical nzmax - max non zero elements for sparse arrays - shape : {None, tuple} optional - directly specify shape if this is not the same as for - self.arr + + We get the name and the global flag from the object, and reset + them to defaults after we've used them ''' - if mclass is None: - mclass = self.default_mclass - if shape is None: - shape = matdims(self.arr, self.oned_as) + # get name and is_global from one-shot object store + name = self._var_name + is_global = self._var_is_global + # initialize the top-level matrix tag, store position self._mat_tag_pos = self.file_stream.tell() - self.write_dtype(self.mat_tag) + self.write_bytes(self.mat_tag) # write array flags (complex, global, logical, class, nzmax) af = np.zeros((), mdtypes_template['array_flags']) af['data_type'] = miUINT32 @@ -797,133 +601,179 @@ flags = is_complex << 3 | is_global << 2 | is_logical << 1 af['flags_class'] = mclass | flags << 8 af['nzmax'] = nzmax - self.write_dtype(af) + self.write_bytes(af) + # shape self.write_element(np.array(shape, dtype='i4')) # write name - self.write_element(np.array([ord(c) for c in self.name], 'i1')) - - def update_matrix_tag(self): + name = np.asarray(name) + if name == '': # empty string 
zero-terminated + self.write_smalldata_element(name, miINT8, 0) + else: + self.write_element(name, miINT8) + # reset the one-shot store to defaults + self._var_name = '' + self._var_is_global = False + + def update_matrix_tag(self, start_pos): curr_pos = self.file_stream.tell() - self.file_stream.seek(self._mat_tag_pos) - self.mat_tag['byte_count'] = curr_pos - self._mat_tag_pos - 8 - self.write_dtype(self.mat_tag) + self.file_stream.seek(start_pos) + self.mat_tag['byte_count'] = curr_pos - start_pos - 8 + self.write_bytes(self.mat_tag) self.file_stream.seek(curr_pos) - def write(self): - raise NotImplementedError - - def make_writer_getter(self): - ''' Make writer getter for this stream ''' - return Mat5WriterGetter(self.unicode_strings, - self.long_field_names, - self.oned_as) + def write_top(self, arr, name, is_global): + """ Write variable at top level of mat file + + Parameters + ---------- + arr : array-like + array-like object to create writer for + name : str, optional + name as it will appear in matlab workspace + default is empty string + is_global : {False, True} optional + whether variable will be global on load into matlab + """ + # these are set before the top-level header write, and unset at + # the end of the same write, because they do not apply for lower levels + self._var_is_global = is_global + self._var_name = name + # write the header and data + self.write(arr) + + def write(self, arr): + ''' Write `arr` to stream at top and sub levels + Parameters + ---------- + arr : array-like + array-like object to create writer for + ''' + # store position, so we can update the matrix tag + mat_tag_pos = self.file_stream.tell() + # First check if these are sparse + if scipy.sparse.issparse(arr): + self.write_sparse(arr) + self.update_matrix_tag(mat_tag_pos) + return + # Try to convert things that aren't arrays + narr = to_writeable(arr) + if narr is None: + raise TypeError('Could not convert %s (type %s) to array' + % (arr, type(arr))) + if isinstance(narr, MatlabObject): + self.write_object(narr) + elif isinstance(narr, MatlabFunction): + raise MatWriteError('Cannot write matlab functions') + elif narr.dtype.fields: # struct array + self.write_struct(narr) + elif narr.dtype.hasobject: # cell array + self.write_cells(narr) + elif narr.dtype.kind in ('U', 'S'): + if self.unicode_strings: + codec='UTF8' + else: + codec = 'ascii' + self.write_char(narr, codec) + else: + self.write_numeric(narr) + self.update_matrix_tag(mat_tag_pos) -class Mat5NumericWriter(Mat5MatrixWriter): - default_mclass = None # can be any numeric type - def write(self): - imagf = self.arr.dtype.kind == 'c' + def write_numeric(self, arr): + imagf = arr.dtype.kind == 'c' try: - mclass = np_to_mxtypes[self.arr.dtype.str[1:]] + mclass = np_to_mxtypes[arr.dtype.str[1:]] except KeyError: if imagf: - self.arr = self.arr.astype('c128') + arr = arr.astype('c128') else: - self.arr = self.arr.astype('f8') + arr = arr.astype('f8') mclass = mxDOUBLE_CLASS - self.write_header(mclass=mclass,is_complex=imagf) + self.write_header(matdims(arr, self.oned_as), + mclass, + is_complex=imagf) if imagf: - self.write_element(self.arr.real) - self.write_element(self.arr.imag) + self.write_element(arr.real) + self.write_element(arr.imag) else: - self.write_element(self.arr) - self.update_matrix_tag() - + self.write_element(arr) -class Mat5CharWriter(Mat5MatrixWriter): - codec='ascii' - default_mclass = mxCHAR_CLASS - def write(self): - self.arr_to_chars() + def write_char(self, arr, codec='ascii'): + ''' Write string array `arr` with 
given `codec` + ''' + if arr.size == 0 or np.all(arr == ''): + # This an empty string array or a string array containing + # only empty strings. Matlab cannot distiguish between a + # string array that is empty, and a string array containing + # only empty strings, because it stores strings as arrays of + # char. There is no way of having an array of char that is + # not empty, but contains an empty string. We have to + # special-case the array-with-empty-strings because even + # empty strings have zero padding, which would otherwise + # appear in matlab as a string with a space. + shape = (0,) * np.max([arr.ndim, 2]) + self.write_header(shape, mxCHAR_CLASS) + self.write_smalldata_element(arr, miUTF8, 0) + return + # non-empty string. + # + # Convert to char array + arr = arr_to_chars(arr) # We have to write the shape directly, because we are going # recode the characters, and the resulting stream of chars # may have a different length - shape = self.arr.shape - self.write_header(shape=shape) - # We need to do our own transpose (not using the normal - # write routines that do this for us) - arr = self.arr.T.copy() - if self.arr.dtype.kind == 'U' and arr.size: - # Recode unicode using self.codec + shape = arr.shape + self.write_header(shape, mxCHAR_CLASS) + if arr.dtype.kind == 'U' and arr.size: + # Make one long string from all the characters. We need to + # transpose here, because we're flattening the array, before + # we write the bytes. The bytes have to be written in + # Fortran order. n_chars = np.product(shape) st_arr = np.ndarray(shape=(), - dtype=self.arr_dtype_number(n_chars), - buffer=arr) - st = st_arr.item().encode(self.codec) + dtype=arr_dtype_number(arr, n_chars), + buffer=arr.T.copy()) # Fortran order + # Recode with codec to give byte string + st = st_arr.item().encode(codec) + # Reconstruct as one-dimensional byte array arr = np.ndarray(shape=(len(st),), - dtype='u1', + dtype='S1', buffer=st) self.write_element(arr, mdtype=miUTF8) - self.update_matrix_tag() - -class Mat5UniCharWriter(Mat5CharWriter): - codec='UTF8' - - -class Mat5SparseWriter(Mat5MatrixWriter): - default_mclass = mxSPARSE_CLASS - def write(self): + def write_sparse(self, arr): ''' Sparse matrices are 2D ''' - A = self.arr.tocsc() # convert to sparse CSC format + A = arr.tocsc() # convert to sparse CSC format A.sort_indices() # MATLAB expects sorted row indices is_complex = (A.dtype.kind == 'c') nz = A.nnz - self.write_header(is_complex=is_complex, + self.write_header(matdims(arr, self.oned_as), + mxSPARSE_CLASS, + is_complex=is_complex, nzmax=nz) self.write_element(A.indices.astype('i4')) self.write_element(A.indptr.astype('i4')) self.write_element(A.data.real) if is_complex: self.write_element(A.data.imag) - self.update_matrix_tag() - -class Mat5CellWriter(Mat5MatrixWriter): - default_mclass = mxCELL_CLASS - def write(self): - self.write_header() - self._write_items() - - def _write_items(self): + def write_cells(self, arr): + self.write_header(matdims(arr, self.oned_as), + mxCELL_CLASS) # loop over data, column major - A = np.atleast_2d(self.arr).flatten('F') - MWG = self.make_writer_getter() + A = np.atleast_2d(arr).flatten('F') for el in A: - MW = MWG.matrix_writer_factory(self.file_stream, el) - MW.write() - self.update_matrix_tag() - - -class Mat5BinaryBlockWriter(Mat5MatrixWriter): - ''' class to write untranslatable binary blocks ''' - def write(self): - # check endian - # write binary block as is - pass - -class Mat5StructWriter(Mat5CellWriter): - ''' class to write matlab structs + self.write(el) 
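The write() dispatch and write_cells() shown above are what turn an object-dtype ndarray (or anything to_writeable converts into one) into a MATLAB cell array: each element is written recursively through self.write(). A short usage sketch from the caller's side (an editor's illustration, not part of the patch; the file name cells_demo.mat and the sample values are made up):

import numpy as np
from scipy.io import savemat, loadmat

# An object-dtype array maps to a MATLAB cell array on write.
cells = np.empty((2,), dtype=object)
cells[0] = np.arange(3)      # a numeric cell
cells[1] = 'hello'           # a char cell

savemat('cells_demo.mat', {'c': cells}, oned_as='column')

back = loadmat('cells_demo.mat')
print(back['c'].dtype)       # object: cell arrays load back as object arrays
print(back['c'][0, 0])       # the numeric cell, reshaped per oned_as on write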
- Differs from cell writing class in writing field names, - and in mx class - ''' - default_mclass = mxSTRUCT_CLASS + def write_struct(self, arr): + self.write_header(matdims(arr, self.oned_as), + mxSTRUCT_CLASS) + self._write_items(arr) - def _write_items(self): + def _write_items(self, arr): # write fieldnames - fieldnames = [f[0] for f in self.arr.dtype.descr] + fieldnames = [f[0] for f in arr.dtype.descr] length = max([len(fieldname) for fieldname in fieldnames])+1 max_length = (self.long_field_names and 64) or 32 if length > max_length: @@ -934,189 +784,23 @@ self.write_element( np.array(fieldnames, dtype='S%d'%(length)), mdtype=miINT8) - A = np.atleast_2d(self.arr).flatten('F') - MWG = self.make_writer_getter() + A = np.atleast_2d(arr).flatten('F') for el in A: for f in fieldnames: - MW = MWG.matrix_writer_factory(self.file_stream, el[f]) - MW.write() - self.update_matrix_tag() - - -class Mat5ObjectWriter(Mat5StructWriter): - ''' class to write matlab objects - - Same as writing structs, except different mx class, and extra - classname element after header - ''' - default_mclass = mxOBJECT_CLASS - def write(self): - self.write_header() - self.write_element(np.array(self.arr.classname, dtype='S'), - mdtype=miINT8) - self._write_items() - + self.write(el[f]) -class Mat5WriterGetter(object): - ''' Wraps options, provides methods for getting Writer objects ''' - @docfiller - def __init__(self, - unicode_strings=True, - long_field_names=False, - oned_as='column'): - ''' Initialize writer getter - - Parameters - ---------- - unicode_strings : bool - If True, write unicode strings - %(long_fields)s - %(oned_as)s - ''' - self.unicode_strings = unicode_strings - self.long_field_names = long_field_names - self.oned_as = oned_as - - def to_writeable(self, source): - ''' Convert input object ``source`` to something we can write - - Parameters - ---------- - source : object - - Returns - ------- - arr : ndarray - - Examples - -------- - >>> mwg = Mat5WriterGetter() - >>> mwg.to_writeable(np.array([1])) # pass through ndarrays - array([1]) - >>> expected = np.array([(1, 2)], dtype=[('a', '|O8'), ('b', '|O8')]) - >>> np.all(mwg.to_writeable({'a':1,'b':2}) == expected) - True - >>> np.all(mwg.to_writeable({'a':1,'b':2, '_c':3}) == expected) - True - >>> np.all(mwg.to_writeable({'a':1,'b':2, 100:3}) == expected) - True - >>> np.all(mwg.to_writeable({'a':1,'b':2, '99':3}) == expected) - True - >>> class klass(object): pass - >>> c = klass - >>> c.a = 1 - >>> c.b = 2 - >>> np.all(mwg.to_writeable({'a':1,'b':2}) == expected) - True - >>> mwg.to_writeable([]) - array([], dtype=float64) - >>> mwg.to_writeable(()) - array([], dtype=float64) - >>> mwg.to_writeable(None) - - >>> mwg.to_writeable('a string').dtype - dtype('|S8') - >>> mwg.to_writeable(1) - array(1) - >>> mwg.to_writeable([1]) - array([1]) - >>> mwg.to_writeable([1]) - array([1]) - >>> mwg.to_writeable(object()) # not convertable - - dict keys with legal characters are convertible - - >>> mwg.to_writeable({'a':1})['a'] - array([1], dtype=object) - - but not with illegal characters - - >>> mwg.to_writeable({'1':1}) is None - True - >>> mwg.to_writeable({'_a':1}) is None - True - ''' - if isinstance(source, np.ndarray): - return source - if source is None: - return None - # Objects that have dicts - if hasattr(source, '__dict__'): - source = dict((key, value) for key, value in source.__dict__.items() - if not key.startswith('_')) - # Mappings or object dicts - if hasattr(source, 'keys'): - dtype = [] - values = [] - for field, value in 
source.items(): - if (isinstance(field, basestring) and - not field[0] in '_0123456789'): - dtype.append((field,object)) - values.append(value) - if dtype: - return np.array( [tuple(values)] ,dtype) - else: - return None - # Next try and convert to an array - narr = np.asanyarray(source) - if narr.dtype.type in (np.object, np.object_) and \ - narr.shape == () and narr == source: - # No interesting conversion possible - return None - return narr - - def matrix_writer_factory(self, stream, arr, name='', is_global=False): - ''' Factory function to return matrix writer given variable to write - - Parameters - ---------- - stream : fileobj - stream to write to - arr : array-like - array-like object to create writer for - name : string - name as it will appear in matlab workspace - default is empty string - is_global : {False, True} optional - whether variable will be global on load into matlab - - Returns - ------- - writer : matrix writer object + def write_object(self, arr): + '''Same as writing structs, except different mx class, and extra + classname element after header ''' - # First check if these are sparse - if scipy.sparse.issparse(arr): - return Mat5SparseWriter(stream, arr, name, is_global) - # Try to convert things that aren't arrays - narr = self.to_writeable(arr) - if narr is None: - raise TypeError('Could not convert %s (type %s) to array' - % (arr, type(arr))) - args = (stream, - narr, - name, - is_global, - self.unicode_strings, - self.long_field_names, - self.oned_as) - if isinstance(narr, MatlabBinaryBlock): - return Mat5BinaryBlockWriter(*args) - if isinstance(narr, MatlabObject): - return Mat5ObjectWriter(*args) - if narr.dtype.fields: # struct array - return Mat5StructWriter(*args) - if narr.dtype.hasobject: # cell array - return Mat5CellWriter(*args) - if narr.dtype.kind in ('U', 'S'): - if self.unicode_strings: - return Mat5UniCharWriter(*args) - else: - return Mat5CharWriter(*args) - else: - return Mat5NumericWriter(*args) + self.write_header(matdims(arr, self.oned_as), + mxOBJECT_CLASS) + self.write_element(np.array(arr.classname, dtype='S'), + mdtype=miINT8) + self._write_items(arr) -class MatFile5Writer(MatFileWriter): +class MatFile5Writer(object): ''' Class for writing mat5 files ''' @docfiller def __init__(self, file_stream, @@ -1136,22 +820,24 @@ %(long_fields)s %(oned_as)s ''' - super(MatFile5Writer, self).__init__(file_stream) + self.file_stream = file_stream self.do_compression = do_compression + self.unicode_strings = unicode_strings if global_vars: self.global_vars = global_vars else: self.global_vars = [] + self.long_field_names = long_field_names # deal with deprecations if oned_as is None: warnings.warn("Using oned_as default value ('column')" + " This will change to 'row' in future versions", FutureWarning, stacklevel=2) oned_as = 'column' - self.writer_getter = Mat5WriterGetter( - unicode_strings, - long_field_names, - oned_as) + self.oned_as = oned_as + self._matrix_writer = None + + def write_file_header(self): # write header hdr = np.zeros((), mdtypes_template['file_header']) hdr['description']='MATLAB 5.0 MAT-file Platform: %s, Created on: %s' \ @@ -1160,58 +846,43 @@ hdr['endian_test']=np.ndarray(shape=(), dtype='S2', buffer=np.uint16(0x4d49)) - file_stream.write(hdr.tostring()) + self.file_stream.write(hdr.tostring()) - def get_unicode_strings(self): - return self.writer_getter.unicode_strings - def set_unicode_strings(self, unicode_strings): - self.writer_getter.unicode_strings = unicode_strings - unicode_strings = property(get_unicode_strings, - 
set_unicode_strings, - None, - 'get/set unicode strings property') - - def get_long_field_names(self): - return self.writer_getter.long_field_names - def set_long_field_names(self, long_field_names): - self.writer_getter.long_field_names = long_field_names - long_field_names = property(get_long_field_names, - set_long_field_names, - None, - 'enable writing 32-63 character field ' - 'names for Matlab 7.6+') - - def get_oned_as(self): - return self.writer_getter.oned_as - def set_oned_as(self, oned_as): - self.writer_getter.oned_as = oned_as - oned_as = property(get_oned_as, - set_oned_as, - None, - 'get/set oned_as property') + def put_variables(self, mdict, write_header=None): + ''' Write variables in `mdict` to stream - def put_variables(self, mdict): + Parameters + ---------- + mdict : mapping + mapping with method ``items`` return name, contents pairs + where ``name`` which will appeak in the matlab workspace in + file load, and ``contents`` is something writeable to a + matlab file, such as a numpy array. + write_header : {None, True, False} + If True, then write the matlab file header before writing the + variables. If None (the default) then write the file header + if we are at position 0 in the stream. By setting False + here, and setting the stream position to the end of the file, + you can append variables to a matlab file + ''' + # write header if requested, or None and start of file + if write_header is None: + write_header = self.file_stream.tell() == 0 + if write_header: + self.write_file_header() + self._matrix_writer = VarWriter5(self) for name, var in mdict.items(): if name[0] == '_': continue is_global = name in self.global_vars if self.do_compression: stream = StringIO() - mat_writer = self.writer_getter.matrix_writer_factory( - stream, - var, - name, - is_global) - mat_writer.write() + self._matrix_writer.file_stream = stream + self._matrix_writer.write_top(var, name, is_global) out_str = zlib.compress(stream.getvalue()) tag = np.empty((), mdtypes_template['tag_full']) tag['mdtype'] = miCOMPRESSED tag['byte_count'] = len(out_str) self.file_stream.write(tag.tostring() + out_str) else: # not compressing - mat_writer = self.writer_getter.matrix_writer_factory( - self.file_stream, - var, - name, - is_global) - mat_writer.write() + self._matrix_writer.write_top(var, name, is_global) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio5_utils.c python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio5_utils.c --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio5_utils.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio5_utils.c 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,12185 @@ +/* Generated by Cython 0.12.1 on Wed May 26 12:20:26 2010 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#include "structmember.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. 
+#else + +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#if PY_VERSION_HEX < 0x02040000 + #define METH_COEXIST 0 + #define PyDict_CheckExact(op) (Py_TYPE(op) == &PyDict_Type) + #define PyDict_Contains(d,o) PySequence_Contains(d,o) +#endif + +#if PY_VERSION_HEX < 0x02050000 + typedef int Py_ssize_t; + #define PY_SSIZE_T_MAX INT_MAX + #define PY_SSIZE_T_MIN INT_MIN + #define PY_FORMAT_SIZE_T "" + #define PyInt_FromSsize_t(z) PyInt_FromLong(z) + #define PyInt_AsSsize_t(o) PyInt_AsLong(o) + #define PyNumber_Index(o) PyNumber_Int(o) + #define PyIndex_Check(o) PyNumber_Check(o) + #define PyErr_WarnEx(category, message, stacklevel) PyErr_Warn(category, message) +#endif + +#if PY_VERSION_HEX < 0x02060000 + #define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt) + #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) + #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) + #define PyVarObject_HEAD_INIT(type, size) \ + PyObject_HEAD_INIT(type) size, + #define PyType_Modified(t) + + typedef struct { + void *buf; + PyObject *obj; + Py_ssize_t len; + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; + void *internal; + } Py_buffer; + + #define PyBUF_SIMPLE 0 + #define PyBUF_WRITABLE 0x0001 + #define PyBUF_FORMAT 0x0004 + #define PyBUF_ND 0x0008 + #define PyBUF_STRIDES (0x0010 | PyBUF_ND) + #define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) + #define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) + #define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) + #define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#endif + +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" +#endif + +#if PY_MAJOR_VERSION >= 3 + #define Py_TPFLAGS_CHECKTYPES 0 + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif + +#if (PY_VERSION_HEX < 0x02060000) || (PY_MAJOR_VERSION >= 3) + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyString_Type PyUnicode_Type + #define PyString_CheckExact PyUnicode_CheckExact +#else + #define PyBytes_Type PyString_Type + #define PyBytes_CheckExact PyString_CheckExact +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) + +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyMethod_New(func, self, klass) PyInstanceMethod_New(func) +#endif + +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#else + #define _USE_MATH_DEFINES +#endif + +#if 
PY_VERSION_HEX < 0x02050000 + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),((char *)(n))) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),((char *)(n)),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),((char *)(n))) +#else + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),(n)) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),(n),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),(n)) +#endif + +#if PY_VERSION_HEX < 0x02050000 + #define __Pyx_NAMESTR(n) ((char *)(n)) + #define __Pyx_DOCSTR(n) ((char *)(n)) +#else + #define __Pyx_NAMESTR(n) (n) + #define __Pyx_DOCSTR(n) (n) +#endif +#ifdef __cplusplus +#define __PYX_EXTERN_C extern "C" +#else +#define __PYX_EXTERN_C extern +#endif +#include +#define __PYX_HAVE_API__scipy__io__matlab__mio5_utils +#include "stdio.h" +#include "stdlib.h" +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" +#include "numpy_rephrasing.h" + +#ifndef CYTHON_INLINE + #if defined(__GNUC__) + #define CYTHON_INLINE __inline__ + #elif defined(_MSC_VER) + #define CYTHON_INLINE __inline + #else + #define CYTHON_INLINE + #endif +#endif + +typedef struct {PyObject **p; char *s; const long n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/ + + +/* Type Conversion Predeclarations */ + +#if PY_MAJOR_VERSION < 3 +#define __Pyx_PyBytes_FromString PyString_FromString +#define __Pyx_PyBytes_FromStringAndSize PyString_FromStringAndSize +#define __Pyx_PyBytes_AsString PyString_AsString +#else +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +#define __Pyx_PyBytes_AsString PyBytes_AsString +#endif + +#define __Pyx_PyBytes_FromUString(s) __Pyx_PyBytes_FromString((char*)s) +#define __Pyx_PyBytes_AsUString(s) ((unsigned char*) __Pyx_PyBytes_AsString(s)) + +#define __Pyx_PyBool_FromLong(b) ((b) ? (Py_INCREF(Py_True), Py_True) : (Py_INCREF(Py_False), Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x); + +#if !defined(T_PYSSIZET) +#if PY_VERSION_HEX < 0x02050000 +#define T_PYSSIZET T_INT +#elif !defined(T_LONGLONG) +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : -1)) +#else +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : \ + ((sizeof(Py_ssize_t) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))) +#endif +#endif + + +#if !defined(T_ULONGLONG) +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : -1)))) +#else +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(x) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))))) +#endif +#if !defined(T_LONGLONG) +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? 
T_LONG : -1)))) +#else +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : \ + ((sizeof(x) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))))) +#endif + +#define __Pyx_T_FLOATING(x) \ + ((sizeof(x) == sizeof(float)) ? T_FLOAT : \ + ((sizeof(x) == sizeof(double)) ? T_DOUBLE : -1)) + +#if !defined(T_SIZET) +#if !defined(T_ULONGLONG) +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : -1)) +#else +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(size_t) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))) +#endif +#endif + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject*); + +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) + + +#ifdef __GNUC__ +/* Test for GCC > 2.95 */ +#if __GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)) +#define likely(x) __builtin_expect(!!(x), 1) +#define unlikely(x) __builtin_expect(!!(x), 0) +#else /* __GNUC__ > 2 ... */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ > 2 ... */ +#else /* __GNUC__ */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; +static const char **__pyx_f; + + +#if !defined(CYTHON_CCOMPLEX) + #if defined(__cplusplus) + #define CYTHON_CCOMPLEX 1 + #elif defined(_Complex_I) + #define CYTHON_CCOMPLEX 1 + #else + #define CYTHON_CCOMPLEX 0 + #endif +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #include + #else + #include + #endif +#endif + +#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) + #undef _Complex_I + #define _Complex_I 1.0fj +#endif + +typedef npy_int8 __pyx_t_5numpy_int8_t; + +typedef npy_int16 __pyx_t_5numpy_int16_t; + +typedef npy_int32 __pyx_t_5numpy_int32_t; + +typedef npy_int64 __pyx_t_5numpy_int64_t; + +typedef npy_uint8 __pyx_t_5numpy_uint8_t; + +typedef npy_uint16 __pyx_t_5numpy_uint16_t; + +typedef npy_uint32 __pyx_t_5numpy_uint32_t; + +typedef npy_uint64 __pyx_t_5numpy_uint64_t; + +typedef npy_float32 __pyx_t_5numpy_float32_t; + +typedef npy_float64 __pyx_t_5numpy_float64_t; + +typedef npy_long __pyx_t_5numpy_int_t; + +typedef npy_longlong __pyx_t_5numpy_long_t; + +typedef npy_intp __pyx_t_5numpy_intp_t; + +typedef npy_uintp __pyx_t_5numpy_uintp_t; + +typedef npy_ulong __pyx_t_5numpy_uint_t; + +typedef npy_ulonglong __pyx_t_5numpy_ulong_t; + +typedef npy_double __pyx_t_5numpy_float_t; + +typedef npy_double __pyx_t_5numpy_double_t; + +typedef npy_longdouble __pyx_t_5numpy_longdouble_t; + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< double > __pyx_t_double_complex; + #else + typedef double _Complex __pyx_t_double_complex; + #endif +#else + typedef struct { double real, imag; } __pyx_t_double_complex; +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< float > __pyx_t_float_complex; + #else + typedef float 
_Complex __pyx_t_float_complex; + #endif +#else + typedef struct { float real, imag; } __pyx_t_float_complex; +#endif + +/* Type declarations */ + +typedef npy_cfloat __pyx_t_5numpy_cfloat_t; + +typedef npy_cdouble __pyx_t_5numpy_cdouble_t; + +typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t; + +typedef npy_cdouble __pyx_t_5numpy_complex_t; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/streams.pxd":6 + * cdef object fobj + * + * cpdef int seek(self, long int offset, int whence=*) except -1 # <<<<<<<<<<<<<< + * cpdef long int tell(self) except -1 + * cdef int read_into(self, void *buf, size_t n) except -1 + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek { + int __pyx_n; + int whence; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/streams.pxd":9 + * cpdef long int tell(self) except -1 + * cdef int read_into(self, void *buf, size_t n) except -1 + * cdef object read_string(self, size_t n, void **pp, int copy=*) # <<<<<<<<<<<<<< + * + * cpdef GenericStream make_stream(object fobj) + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string { + int __pyx_n; + int copy; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":64 + * + * + * cdef enum: # <<<<<<<<<<<<<< + * miINT8 = 1 + * miUINT8 = 2 + */ + +enum { + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miINT8 = 1, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUINT8 = 2, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miINT16 = 3, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUINT16 = 4, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miINT32 = 5, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUINT32 = 6, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miSINGLE = 7, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miDOUBLE = 9, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miINT64 = 12, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUINT64 = 13, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miMATRIX = 14, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miCOMPRESSED = 15, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUTF8 = 16, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUTF16 = 17, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUTF32 = 18 +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":81 + * miUTF32 = 18 + * + * cdef enum: # see comments in mio5_params # <<<<<<<<<<<<<< + * mxCELL_CLASS = 1 + * mxSTRUCT_CLASS = 2 + */ + +enum { + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxCELL_CLASS = 1, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxSTRUCT_CLASS = 2, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxOBJECT_CLASS = 3, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxCHAR_CLASS = 4, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxSPARSE_CLASS = 5, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxDOUBLE_CLASS = 6, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxSINGLE_CLASS = 7, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT8_CLASS = 8, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT8_CLASS = 9, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT16_CLASS = 10, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT16_CLASS = 11, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT32_CLASS = 12, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT32_CLASS = 13, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT64_CLASS = 14, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT64_CLASS = 15, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxFUNCTION_CLASS = 16, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxOPAQUE_CLASS = 17, + __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxOBJECT_CLASS_FROM_MATRIX_H = 18 +}; + +/* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":288 + * return 1 + * + * cdef object read_element(self, # <<<<<<<<<<<<<< + * cnp.uint32_t *mdtype_ptr, + * cnp.uint32_t *byte_count_ptr, + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element { + int __pyx_n; + int copy; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":383 + * self.cstream.seek(8 - mod8, 1) + * + * cpdef inline cnp.ndarray read_numeric(self, int copy=True): # <<<<<<<<<<<<<< + * ''' Read numeric data element into ndarray + * + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric { + int __pyx_n; + int copy; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":561 + * return size + * + * cdef read_mi_matrix(self, int process=1): # <<<<<<<<<<<<<< + * ''' Read header with matrix at sub-levels + * + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_mi_matrix { + int __pyx_n; + int process; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":593 + * return self.array_from_header(header, process) + * + * cpdef array_from_header(self, VarHeader5 header, int process=1): # <<<<<<<<<<<<<< + * ''' Read array of any class, given matrix `header` + * + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header { + int __pyx_n; + int process; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/streams.pxd":3 + * # -*- python -*- or rather like + * + * cdef class GenericStream: # <<<<<<<<<<<<<< + * cdef object fobj + * + */ + +struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream { + PyObject_HEAD + struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *__pyx_vtab; + PyObject *fobj; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":115 + * + * + * cdef class VarHeader5: # <<<<<<<<<<<<<< + * cdef readonly object name + * cdef readonly int mclass + */ + +struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 { + PyObject_HEAD + PyObject *name; + int mclass; + PyObject *dims; + __pyx_t_5numpy_int32_t dims_ptr[32]; + int n_dims; + int is_complex; + int is_logical; + int is_global; + size_t nzmax; +}; + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":127 + * + * + * cdef class VarReader5: # <<<<<<<<<<<<<< + * cdef public int is_swapped, little_endian + * cdef int struct_as_record + */ + +struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 { + PyObject_HEAD + struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_vtab; + int is_swapped; + int little_endian; + int struct_as_record; + PyObject *codecs; + PyObject *uint16_codec; + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *cstream; + PyObject *dtypes[20]; + PyObject *class_dtypes[20]; + PyObject *preader; + PyArray_Descr *U1_dtype; + PyArray_Descr *bool_dtype; + int mat_dtype; + int squeeze_me; + int chars_as_strings; +}; + + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/streams.pxd":3 + * # -*- python -*- or rather like + * + * cdef class GenericStream: # <<<<<<<<<<<<<< + * cdef object fobj + * + */ + +struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream { + int (*seek)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, long, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek *__pyx_optional_args); + long (*tell)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, int __pyx_skip_dispatch); + int 
(*read_into)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, void *, size_t); + PyObject *(*read_string)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, size_t, void **, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string *__pyx_optional_args); +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *__pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream; + + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":127 + * + * + * cdef class VarReader5: # <<<<<<<<<<<<<< + * cdef public int is_swapped, little_endian + * cdef int struct_as_record + */ + +struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 { + int (*cread_tag)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *, char *); + PyObject *(*read_element)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *, void **, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element *__pyx_optional_args); + void (*read_element_into)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *, void *); + PyArrayObject *(*read_numeric)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric *__pyx_optional_args); + PyObject *(*read_int8_string)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *); + int (*read_into_int32s)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_int32_t *); + void (*cread_full_tag)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *); + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *(*read_header)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, int __pyx_skip_dispatch); + size_t (*size_from_header)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *); + PyObject *(*read_mi_matrix)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_mi_matrix *__pyx_optional_args); + PyObject *(*array_from_header)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header *__pyx_optional_args); + PyArrayObject *(*read_real_complex)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch); + PyObject *(*read_sparse)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *); + PyArrayObject *(*read_char)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch); + PyArrayObject *(*read_cells)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch); + PyObject *(*cread_fieldnames)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, int *); + PyArrayObject *(*read_struct)(struct 
__pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch); + PyArrayObject *(*read_opaque)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch); +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_vtabptr_5scipy_2io_6matlab_10mio5_utils_VarReader5; + +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif + +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct * __Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); + end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; + } + #define __Pyx_RefNannySetupContext(name) void *__pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) + #define __Pyx_RefNannyFinishContext() __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r);} } while(0) +#else + #define __Pyx_RefNannySetupContext(name) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) +#endif /* CYTHON_REFNANNY */ +#define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);} } while(0) +#define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r);} } while(0) + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, PyObject* kw_name); /*proto*/ + +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); /*proto*/ + +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[], PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, const char* function_name); /*proto*/ + +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); + +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void); + +static PyObject *__Pyx_UnpackItem(PyObject *, Py_ssize_t index); /*proto*/ +static int __Pyx_EndUnpack(PyObject *); /*proto*/ + +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); /*proto*/ + +static CYTHON_INLINE PyObject* __Pyx_PyObject_Append(PyObject* L, PyObject* x) { + if (likely(PyList_CheckExact(L))) { + if (PyList_Append(L, x) < 0) return NULL; + Py_INCREF(Py_None); + return Py_None; /* this is just to have an accurate signature */ + } + else { + PyObject *r, *m; + m = __Pyx_GetAttrString(L, "append"); + if (!m) return NULL; + r 
= PyObject_CallFunctionObjArgs(m, x, NULL); + Py_DECREF(m); + return r; + } +} + + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { + PyObject *r; + if (!j) return NULL; + r = PyObject_GetItem(o, j); + Py_DECREF(j); + return r; +} + + +#define __Pyx_GetItemInt_List(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ + __Pyx_GetItemInt_List_Fast(o, i, size <= sizeof(long)) : \ + __Pyx_GetItemInt_Generic(o, to_py_func(i))) + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, int fits_long) { + if (likely(o != Py_None)) { + if (likely((0 <= i) & (i < PyList_GET_SIZE(o)))) { + PyObject *r = PyList_GET_ITEM(o, i); + Py_INCREF(r); + return r; + } + else if ((-PyList_GET_SIZE(o) <= i) & (i < 0)) { + PyObject *r = PyList_GET_ITEM(o, PyList_GET_SIZE(o) + i); + Py_INCREF(r); + return r; + } + } + return __Pyx_GetItemInt_Generic(o, fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i)); +} + +#define __Pyx_GetItemInt_Tuple(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ + __Pyx_GetItemInt_Tuple_Fast(o, i, size <= sizeof(long)) : \ + __Pyx_GetItemInt_Generic(o, to_py_func(i))) + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, int fits_long) { + if (likely(o != Py_None)) { + if (likely((0 <= i) & (i < PyTuple_GET_SIZE(o)))) { + PyObject *r = PyTuple_GET_ITEM(o, i); + Py_INCREF(r); + return r; + } + else if ((-PyTuple_GET_SIZE(o) <= i) & (i < 0)) { + PyObject *r = PyTuple_GET_ITEM(o, PyTuple_GET_SIZE(o) + i); + Py_INCREF(r); + return r; + } + } + return __Pyx_GetItemInt_Generic(o, fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i)); +} + + +#define __Pyx_GetItemInt(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ + __Pyx_GetItemInt_Fast(o, i, size <= sizeof(long)) : \ + __Pyx_GetItemInt_Generic(o, to_py_func(i))) + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int fits_long) { + PyObject *r; + if (PyList_CheckExact(o) && ((0 <= i) & (i < PyList_GET_SIZE(o)))) { + r = PyList_GET_ITEM(o, i); + Py_INCREF(r); + } + else if (PyTuple_CheckExact(o) && ((0 <= i) & (i < PyTuple_GET_SIZE(o)))) { + r = PyTuple_GET_ITEM(o, i); + Py_INCREF(r); + } + else if (Py_TYPE(o)->tp_as_sequence && Py_TYPE(o)->tp_as_sequence->sq_item && (likely(i >= 0))) { + r = PySequence_GetItem(o, i); + } + else { + r = __Pyx_GetItemInt_Generic(o, fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i)); + } + return r; +} + +static CYTHON_INLINE long __Pyx_NegateNonNeg(long b) { return unlikely(b < 0) ? b : !b; } +static CYTHON_INLINE PyObject* __Pyx_PyBoolOrNull_FromLong(long b) { + return unlikely(b < 0) ? 
NULL : __Pyx_PyBool_FromLong(b); +} + +/* Run-time type information about structs used with buffers */ +struct __Pyx_StructField_; + +typedef struct { + const char* name; /* for error messages only */ + struct __Pyx_StructField_* fields; + size_t size; /* sizeof(type) */ + char typegroup; /* _R_eal, _C_omplex, Signed _I_nt, _U_nsigned int, _S_truct, _P_ointer, _O_bject */ +} __Pyx_TypeInfo; + +typedef struct __Pyx_StructField_ { + __Pyx_TypeInfo* type; + const char* name; + size_t offset; +} __Pyx_StructField; + +typedef struct { + __Pyx_StructField* field; + size_t parent_offset; +} __Pyx_BufFmt_StackElem; + + +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info); +static int __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj, __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack); + +static void __Pyx_RaiseBufferFallbackError(void); /*proto*/ +static void __Pyx_RaiseBufferIndexError(int axis); /*proto*/ +#define __Pyx_BufPtrStrided1d(type, buf, i0, s0) (type)((char*)buf + i0 * s0) + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ + +static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); /* proto */ + +#define UNARY_NEG_WOULD_OVERFLOW(x) (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) + +static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); + +static void __Pyx_UnpackTupleError(PyObject *, Py_ssize_t index); /*proto*/ + +static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact); /*proto*/ +#if PY_MAJOR_VERSION < 3 +static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); +static void __Pyx_ReleaseBuffer(Py_buffer *view); +#else +#define __Pyx_GetBuffer PyObject_GetBuffer +#define __Pyx_ReleaseBuffer PyBuffer_Release +#endif + +Py_ssize_t __Pyx_zeros[] = {0}; +Py_ssize_t __Pyx_minusones[] = {-1}; + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list); /*proto*/ + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name); /*proto*/ + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_uint32(npy_uint32); + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_int32(npy_int32); + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #define __Pyx_CREAL(z) ((z).real()) + #define __Pyx_CIMAG(z) ((z).imag()) + #else + #define __Pyx_CREAL(z) (__real__(z)) + #define __Pyx_CIMAG(z) (__imag__(z)) + #endif +#else + #define __Pyx_CREAL(z) ((z).real) + #define __Pyx_CIMAG(z) ((z).imag) +#endif + +#if defined(_WIN32) && defined(__cplusplus) && CYTHON_CCOMPLEX + #define __Pyx_SET_CREAL(z,x) ((z).real(x)) + #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) +#else + #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) + #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) +#endif + +static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); + +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq(a, b) ((a)==(b)) + #define __Pyx_c_sum(a, b) ((a)+(b)) + #define __Pyx_c_diff(a, b) ((a)-(b)) + #define __Pyx_c_prod(a, b) ((a)*(b)) + #define __Pyx_c_quot(a, b) ((a)/(b)) + #define __Pyx_c_neg(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero(z) ((z)==(double)0) + #define __Pyx_c_conj(z) (::std::conj(z)) + /*#define __Pyx_c_abs(z) (::std::abs(z))*/ + #else + #define __Pyx_c_is_zero(z) 
((z)==0) + #define __Pyx_c_conj(z) (conj(z)) + /*#define __Pyx_c_abs(z) (cabs(z))*/ + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex); + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex); + /*static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex);*/ +#endif + +static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float); + +#if CYTHON_CCOMPLEX + #define __Pyx_c_eqf(a, b) ((a)==(b)) + #define __Pyx_c_sumf(a, b) ((a)+(b)) + #define __Pyx_c_difff(a, b) ((a)-(b)) + #define __Pyx_c_prodf(a, b) ((a)*(b)) + #define __Pyx_c_quotf(a, b) ((a)/(b)) + #define __Pyx_c_negf(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zerof(z) ((z)==(float)0) + #define __Pyx_c_conjf(z) (::std::conj(z)) + /*#define __Pyx_c_absf(z) (::std::abs(z))*/ + #else + #define __Pyx_c_is_zerof(z) ((z)==0) + #define __Pyx_c_conjf(z) (conjf(z)) + /*#define __Pyx_c_absf(z) (cabsf(z))*/ + #endif +#else + static CYTHON_INLINE int __Pyx_c_eqf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sumf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_difff(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prodf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quotf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_negf(__pyx_t_float_complex); + static CYTHON_INLINE int __Pyx_c_is_zerof(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conjf(__pyx_t_float_complex); + /*static CYTHON_INLINE float __Pyx_c_absf(__pyx_t_float_complex);*/ +#endif + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject *); + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject *); + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject *); + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject *); + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject *); + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject *); + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject *); + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject *); + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject *); + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject *); + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject *); + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject *); + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject *); + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject *); + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject *); + +static CYTHON_INLINE npy_uint32 
__Pyx_PyInt_from_py_npy_uint32(PyObject *); + +static void __Pyx_WriteUnraisable(const char *name); /*proto*/ + +static int __Pyx_SetVtable(PyObject *dict, void *vtable); /*proto*/ + +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, long size, int strict); /*proto*/ + +static PyObject *__Pyx_ImportModule(const char *name); /*proto*/ + +static int __Pyx_GetVtable(PyObject *dict, void *vtabptr); /*proto*/ + +static int __Pyx_ImportFunction(PyObject *module, const char *funcname, void (**f)(void), const char *sig); /*proto*/ + +static void __Pyx_AddTraceback(const char *funcname); /*proto*/ + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/ +/* Module declarations from python_version */ + +/* Module declarations from python_ref */ + +/* Module declarations from python_exc */ + +/* Module declarations from python_module */ + +/* Module declarations from python_mem */ + +/* Module declarations from python_tuple */ + +/* Module declarations from python_list */ + +/* Module declarations from stdio */ + +/* Module declarations from python_object */ + +/* Module declarations from python_sequence */ + +/* Module declarations from python_mapping */ + +/* Module declarations from python_iterator */ + +/* Module declarations from python_type */ + +/* Module declarations from python_number */ + +/* Module declarations from python_int */ + +/* Module declarations from python_bool */ + +/* Module declarations from python_unicode */ + +/* Module declarations from python_long */ + +/* Module declarations from python_float */ + +/* Module declarations from python_complex */ + +/* Module declarations from python_string */ + +/* Module declarations from python_dict */ + +/* Module declarations from python_instance */ + +/* Module declarations from python_function */ + +/* Module declarations from python_method */ + +/* Module declarations from python_weakref */ + +/* Module declarations from python_getargs */ + +/* Module declarations from python_cobject */ + +/* Module declarations from python_oldbuffer */ + +/* Module declarations from python_set */ + +/* Module declarations from python_buffer */ + +/* Module declarations from python_bytes */ + +/* Module declarations from python_pycapsule */ + +/* Module declarations from python */ + +/* Module declarations from stdlib */ + +/* Module declarations from numpy */ + +/* Module declarations from numpy */ + +static PyTypeObject *__pyx_ptype_5numpy_dtype = 0; +static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0; +static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0; +static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0; +static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0; +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *, PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *, PyObject *, PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *, PyObject *, PyObject *, PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/ +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject 
*__pyx_f_5numpy_get_array_base(PyArrayObject *); /*proto*/ +/* Module declarations from scipy.io.matlab.streams */ + +static PyTypeObject *__pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream = 0; +static struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *(*__pyx_f_5scipy_2io_6matlab_7streams_make_stream)(PyObject *, int __pyx_skip_dispatch); /*proto*/ +/* Module declarations from scipy.io.matlab.mio5_utils */ + +static PyTypeObject *__pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5 = 0; +static PyTypeObject *__pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarReader5 = 0; +static PyArray_Descr *__pyx_v_5scipy_2io_6matlab_10mio5_utils_OPAQUE_DTYPE = 0; +static __pyx_t_5numpy_uint32_t __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4(__pyx_t_5numpy_uint32_t, int __pyx_skip_dispatch); /*proto*/ +static __Pyx_TypeInfo __Pyx_TypeInfo_object = { "Python object", NULL, sizeof(PyObject *), 'O' }; +#define __Pyx_MODULE_NAME "scipy.io.matlab.mio5_utils" +int __pyx_module_is_main_scipy__io__matlab__mio5_utils = 0; + +/* Implementation of scipy.io.matlab.mio5_utils */ +static PyObject *__pyx_builtin_basestring; +static PyObject *__pyx_builtin_ValueError; +static PyObject *__pyx_builtin_TypeError; +static PyObject *__pyx_builtin_range; +static PyObject *__pyx_builtin_object; +static PyObject *__pyx_builtin_RuntimeError; +static char __pyx_k_1[] = "> 8 & 0xff00u)) | + * (u4 >> 24)) # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = ((((__pyx_v_u4 << 24) | ((__pyx_v_u4 << 8) & 0xff0000U)) | ((__pyx_v_u4 >> 8) & 0xff00U)) | (__pyx_v_u4 >> 24)); + goto __pyx_L0; + + __pyx_r = 0; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":108 + * + * + * cpdef cnp.uint32_t byteswap_u4(cnp.uint32_t u4): # <<<<<<<<<<<<<< + * return ((u4 << 24) | + * ((u4 << 8) & 0xff0000U) | + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_byteswap_u4(PyObject *__pyx_self, PyObject *__pyx_arg_u4); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_byteswap_u4(PyObject *__pyx_self, PyObject *__pyx_arg_u4) { + __pyx_t_5numpy_uint32_t __pyx_v_u4; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("byteswap_u4"); + __pyx_self = __pyx_self; + assert(__pyx_arg_u4); { + __pyx_v_u4 = __Pyx_PyInt_from_py_npy_uint32(__pyx_arg_u4); if (unlikely((__pyx_v_u4 == (npy_uint32)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 108; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.byteswap_u4"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_PyInt_to_py_npy_uint32(__pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4(__pyx_v_u4, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 108; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.byteswap_u4"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":148 + * int chars_as_strings + * + * def __new__(self, preader): # <<<<<<<<<<<<<< + * self.is_swapped = preader.byte_order == 
swapped_code + * if self.is_swapped: + */ + +static int __pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5___new__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static int __pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5___new__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_preader = 0; + PyObject *__pyx_v_key; + PyObject *__pyx_v_dt; + PyObject *__pyx_v_bool_dtype; + int __pyx_r; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + Py_ssize_t __pyx_t_6; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + Py_ssize_t __pyx_t_9; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__preader,0}; + __Pyx_RefNannySetupContext("__cinit__"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[1] = {0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__preader); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "__new__") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 148; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_preader = values[0]; + } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { + goto __pyx_L5_argtuple_error; + } else { + __pyx_v_preader = PyTuple_GET_ITEM(__pyx_args, 0); + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("__new__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 148; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.__cinit__"); + return -1; + __pyx_L4_argument_unpacking_done:; + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF(__pyx_v_preader); + __pyx_v_key = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_dt = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_bool_dtype = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":149 + * + * def __new__(self, preader): + * self.is_swapped = preader.byte_order == swapped_code # <<<<<<<<<<<<<< + * if self.is_swapped: + * self.little_endian = not sys_is_le + */ + __pyx_t_1 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__byte_order); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 149; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__swapped_code); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 149; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_t_2, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 149; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_4 = __Pyx_PyInt_AsInt(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 149; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 
0; + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->is_swapped = __pyx_t_4; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":150 + * def __new__(self, preader): + * self.is_swapped = preader.byte_order == swapped_code + * if self.is_swapped: # <<<<<<<<<<<<<< + * self.little_endian = not sys_is_le + * else: + */ + __pyx_t_4 = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->is_swapped; + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":151 + * self.is_swapped = preader.byte_order == swapped_code + * if self.is_swapped: + * self.little_endian = not sys_is_le # <<<<<<<<<<<<<< + * else: + * self.little_endian = sys_is_le + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__sys_is_le); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_5 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->little_endian = (!__pyx_t_5); + goto __pyx_L6; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":153 + * self.little_endian = not sys_is_le + * else: + * self.little_endian = sys_is_le # <<<<<<<<<<<<<< + * # option affecting reading of matlab struct arrays + * self.struct_as_record = preader.struct_as_record + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__sys_is_le); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 153; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = __Pyx_PyInt_AsInt(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 153; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->little_endian = __pyx_t_4; + } + __pyx_L6:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":155 + * self.little_endian = sys_is_le + * # option affecting reading of matlab struct arrays + * self.struct_as_record = preader.struct_as_record # <<<<<<<<<<<<<< + * # store codecs for text matrix reading + * self.codecs = preader.codecs + */ + __pyx_t_3 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__struct_as_record); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = __Pyx_PyInt_AsInt(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->struct_as_record = __pyx_t_4; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":157 + * self.struct_as_record = preader.struct_as_record + * # store codecs for text matrix reading + * self.codecs = preader.codecs # <<<<<<<<<<<<<< + * self.uint16_codec = preader.uint16_codec + * # set c-optimized stream object from python file-like object + */ + __pyx_t_3 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__codecs); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno 
= 157; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->codecs); + __Pyx_DECREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->codecs); + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->codecs = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":158 + * # store codecs for text matrix reading + * self.codecs = preader.codecs + * self.uint16_codec = preader.uint16_codec # <<<<<<<<<<<<<< + * # set c-optimized stream object from python file-like object + * self.set_stream(preader.mat_stream) + */ + __pyx_t_3 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__uint16_codec); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->uint16_codec); + __Pyx_DECREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->uint16_codec); + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->uint16_codec = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":160 + * self.uint16_codec = preader.uint16_codec + * # set c-optimized stream object from python file-like object + * self.set_stream(preader.mat_stream) # <<<<<<<<<<<<<< + * # options for element processing + * self.mat_dtype = preader.mat_dtype + */ + __pyx_t_3 = PyObject_GetAttr(__pyx_v_self, __pyx_n_s__set_stream); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 160; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__mat_stream); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 160; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 160; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 160; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":162 + * self.set_stream(preader.mat_stream) + * # options for element processing + * self.mat_dtype = preader.mat_dtype # <<<<<<<<<<<<<< + * self.chars_as_strings = preader.chars_as_strings + * self.squeeze_me = preader.squeeze_me + */ + __pyx_t_2 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__mat_dtype); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 162; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = __Pyx_PyInt_AsInt(__pyx_t_2); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 162; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 
*)__pyx_v_self)->mat_dtype = __pyx_t_4; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":163 + * # options for element processing + * self.mat_dtype = preader.mat_dtype + * self.chars_as_strings = preader.chars_as_strings # <<<<<<<<<<<<<< + * self.squeeze_me = preader.squeeze_me + * # copy refs to dtypes into object pointer array. Store preader + */ + __pyx_t_2 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__chars_as_strings); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 163; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = __Pyx_PyInt_AsInt(__pyx_t_2); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 163; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->chars_as_strings = __pyx_t_4; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":164 + * self.mat_dtype = preader.mat_dtype + * self.chars_as_strings = preader.chars_as_strings + * self.squeeze_me = preader.squeeze_me # <<<<<<<<<<<<<< + * # copy refs to dtypes into object pointer array. Store preader + * # to keep preader.dtypes, class_dtypes alive. We only need the + */ + __pyx_t_2 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__squeeze_me); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 164; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = __Pyx_PyInt_AsInt(__pyx_t_2); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 164; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->squeeze_me = __pyx_t_4; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":168 + * # to keep preader.dtypes, class_dtypes alive. 
We only need the + * # integer-keyed dtypes + * self.preader = preader # <<<<<<<<<<<<<< + * for key, dt in preader.dtypes.items(): + * if isinstance(key, basestring): + */ + __Pyx_INCREF(__pyx_v_preader); + __Pyx_GIVEREF(__pyx_v_preader); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->preader); + __Pyx_DECREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->preader); + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->preader = __pyx_v_preader; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":169 + * # integer-keyed dtypes + * self.preader = preader + * for key, dt in preader.dtypes.items(): # <<<<<<<<<<<<<< + * if isinstance(key, basestring): + * continue + */ + __pyx_t_2 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__dtypes); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__items); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (PyList_CheckExact(__pyx_t_2) || PyTuple_CheckExact(__pyx_t_2)) { + __pyx_t_6 = 0; __pyx_t_1 = __pyx_t_2; __Pyx_INCREF(__pyx_t_1); + } else { + __pyx_t_6 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + } + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + for (;;) { + if (likely(PyList_CheckExact(__pyx_t_1))) { + if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_1)) break; + __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; + } else if (likely(PyTuple_CheckExact(__pyx_t_1))) { + if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_1)) break; + __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_2); __pyx_t_6++; + } else { + __pyx_t_2 = PyIter_Next(__pyx_t_1); + if (!__pyx_t_2) { + if (unlikely(PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + break; + } + __Pyx_GOTREF(__pyx_t_2); + } + if (PyTuple_CheckExact(__pyx_t_2) && likely(PyTuple_GET_SIZE(__pyx_t_2) == 2)) { + PyObject* tuple = __pyx_t_2; + __pyx_t_3 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_3); + __pyx_t_7 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_7); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_key); + __pyx_v_key = __pyx_t_3; + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_dt); + __pyx_v_dt = __pyx_t_7; + __pyx_t_7 = 0; + } else { + __pyx_t_8 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_8); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_3 = __Pyx_UnpackItem(__pyx_t_8, 0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_7 = __Pyx_UnpackItem(__pyx_t_8, 1); if (unlikely(!__pyx_t_7)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + if (__Pyx_EndUnpack(__pyx_t_8) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __Pyx_DECREF(__pyx_v_key); + __pyx_v_key = __pyx_t_3; + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_dt); + __pyx_v_dt = __pyx_t_7; + __pyx_t_7 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":170 + * self.preader = preader + * for key, dt in preader.dtypes.items(): + * if isinstance(key, basestring): # <<<<<<<<<<<<<< + * continue + * self.dtypes[key] = dt + */ + __pyx_t_5 = PyObject_IsInstance(__pyx_v_key, __pyx_builtin_basestring); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 170; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_t_5) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":171 + * for key, dt in preader.dtypes.items(): + * if isinstance(key, basestring): + * continue # <<<<<<<<<<<<<< + * self.dtypes[key] = dt + * # copy refs to class_dtypes into object pointer array + */ + goto __pyx_L7_continue; + goto __pyx_L9; + } + __pyx_L9:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":172 + * if isinstance(key, basestring): + * continue + * self.dtypes[key] = dt # <<<<<<<<<<<<<< + * # copy refs to class_dtypes into object pointer array + * for key, dt in preader.class_dtypes.items(): + */ + __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_v_key); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 172; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + (((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->dtypes[__pyx_t_9]) = ((PyObject *)__pyx_v_dt); + __pyx_L7_continue:; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":174 + * self.dtypes[key] = dt + * # copy refs to class_dtypes into object pointer array + * for key, dt in preader.class_dtypes.items(): # <<<<<<<<<<<<<< + * if isinstance(key, basestring): + * continue + */ + __pyx_t_1 = PyObject_GetAttr(__pyx_v_preader, __pyx_n_s__class_dtypes); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__items); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_2, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyList_CheckExact(__pyx_t_1) || PyTuple_CheckExact(__pyx_t_1)) { + __pyx_t_6 = 0; __pyx_t_2 = __pyx_t_1; __Pyx_INCREF(__pyx_t_2); + } else { + __pyx_t_6 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + for (;;) { + if (likely(PyList_CheckExact(__pyx_t_2))) { + if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_2)) break; + __pyx_t_1 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_1); 
__pyx_t_6++; + } else if (likely(PyTuple_CheckExact(__pyx_t_2))) { + if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_2)) break; + __pyx_t_1 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_6); __Pyx_INCREF(__pyx_t_1); __pyx_t_6++; + } else { + __pyx_t_1 = PyIter_Next(__pyx_t_2); + if (!__pyx_t_1) { + if (unlikely(PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + break; + } + __Pyx_GOTREF(__pyx_t_1); + } + if (PyTuple_CheckExact(__pyx_t_1) && likely(PyTuple_GET_SIZE(__pyx_t_1) == 2)) { + PyObject* tuple = __pyx_t_1; + __pyx_t_7 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_7); + __pyx_t_3 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_key); + __pyx_v_key = __pyx_t_7; + __pyx_t_7 = 0; + __Pyx_DECREF(__pyx_v_dt); + __pyx_v_dt = __pyx_t_3; + __pyx_t_3 = 0; + } else { + __pyx_t_8 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_8); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_7 = __Pyx_UnpackItem(__pyx_t_8, 0); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __pyx_t_3 = __Pyx_UnpackItem(__pyx_t_8, 1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + if (__Pyx_EndUnpack(__pyx_t_8) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; + __Pyx_DECREF(__pyx_v_key); + __pyx_v_key = __pyx_t_7; + __pyx_t_7 = 0; + __Pyx_DECREF(__pyx_v_dt); + __pyx_v_dt = __pyx_t_3; + __pyx_t_3 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":175 + * # copy refs to class_dtypes into object pointer array + * for key, dt in preader.class_dtypes.items(): + * if isinstance(key, basestring): # <<<<<<<<<<<<<< + * continue + * self.class_dtypes[key] = dt + */ + __pyx_t_5 = PyObject_IsInstance(__pyx_v_key, __pyx_builtin_basestring); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 175; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_t_5) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":176 + * for key, dt in preader.class_dtypes.items(): + * if isinstance(key, basestring): + * continue # <<<<<<<<<<<<<< + * self.class_dtypes[key] = dt + * # cache correctly byte ordered dtypes + */ + goto __pyx_L10_continue; + goto __pyx_L12; + } + __pyx_L12:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":177 + * if isinstance(key, basestring): + * continue + * self.class_dtypes[key] = dt # <<<<<<<<<<<<<< + * # cache correctly byte ordered dtypes + * if self.little_endian: + */ + __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_v_key); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 177; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + (((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->class_dtypes[__pyx_t_9]) = ((PyObject *)__pyx_v_dt); + __pyx_L10_continue:; + } + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":179 + * self.class_dtypes[key] = dt + * # cache correctly byte ordered dtypes + * if self.little_endian: # <<<<<<<<<<<<<< + * 
self.U1_dtype = np.dtype('<U1') + * else: + */ + __pyx_t_4 = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->little_endian; + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":180 + * # cache correctly byte ordered dtypes + * if self.little_endian: + * self.U1_dtype = np.dtype('<U1') # <<<<<<<<<<<<<< + * else: + * self.U1_dtype = np.dtype('>U1') + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__dtype); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GIVEREF(__pyx_t_3); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->U1_dtype); + __Pyx_DECREF(((PyObject *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->U1_dtype)); + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->U1_dtype = ((PyArray_Descr *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L13; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":182 + * self.U1_dtype = np.dtype('<U1') + * else: + * self.U1_dtype = np.dtype('>U1') # <<<<<<<<<<<<<< + * bool_dtype = np.dtype('bool') + * + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__dtype); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_2)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_2)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_2)); + __pyx_t_1 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_dtype))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GIVEREF(__pyx_t_1); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->U1_dtype); +
__Pyx_DECREF(((PyObject *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->U1_dtype)); + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->U1_dtype = ((PyArray_Descr *)__pyx_t_1); + __pyx_t_1 = 0; + } + __pyx_L13:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":183 + * else: + * self.U1_dtype = np.dtype('>U1') + * bool_dtype = np.dtype('bool') # <<<<<<<<<<<<<< + * + * def set_stream(self, fobj): + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 183; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__dtype); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 183; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 183; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_n_s__bool)); + PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_n_s__bool)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__bool)); + __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 183; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_bool_dtype); + __pyx_v_bool_dtype = __pyx_t_2; + __pyx_t_2 = 0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.__cinit__"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_key); + __Pyx_DECREF(__pyx_v_dt); + __Pyx_DECREF(__pyx_v_bool_dtype); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF(__pyx_v_preader); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":185 + * bool_dtype = np.dtype('bool') + * + * def set_stream(self, fobj): # <<<<<<<<<<<<<< + * ''' Set stream of best type from file-like `fobj` + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_set_stream(PyObject *__pyx_v_self, PyObject *__pyx_v_fobj); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_set_stream[] = " Set stream of best type from file-like `fobj`\n\n Called from Python when initiating a variable read\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_set_stream(PyObject *__pyx_v_self, PyObject *__pyx_v_fobj) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("set_stream"); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":190 + * Called from Python when initiating a variable read + * ''' + * self.cstream = streams.make_stream(fobj) # <<<<<<<<<<<<<< + * + * def read_tag(self): + */ + __pyx_t_1 = ((PyObject *)__pyx_f_5scipy_2io_6matlab_7streams_make_stream(__pyx_v_fobj, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 
*)__pyx_v_self)->cstream); + __Pyx_DECREF(((PyObject *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->cstream)); + ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->cstream = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_t_1); + __pyx_t_1 = 0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.set_stream"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":192 + * self.cstream = streams.make_stream(fobj) + * + * def read_tag(self): # <<<<<<<<<<<<<< + * ''' Read tag mdtype and byte_count + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_tag(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_tag[] = " Read tag mdtype and byte_count\n\n Does necessary swapping and takes account of SDE formats.\n\n See also ``read_full_tag`` method.\n \n Returns\n -------\n mdtype : int\n matlab data type code\n byte_count : int\n number of bytes following that comprise the data\n tag_data : None or str\n Any data from the tag itself. This is None for a full tag,\n and string length `byte_count` if this is a small data\n element.\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_tag(PyObject *__pyx_v_self, PyObject *unused) { + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + char __pyx_v_tag_ptr[4]; + int __pyx_v_tag_res; + PyObject *__pyx_v_tag_data = 0; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + __Pyx_RefNannySetupContext("read_tag"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":213 + * cdef char tag_ptr[4] + * cdef int tag_res + * cdef object tag_data = None # <<<<<<<<<<<<<< + * tag_res = self.cread_tag(&mdtype, &byte_count, tag_ptr) + * if tag_res == 2: # sde format + */ + __Pyx_INCREF(Py_None); + __pyx_v_tag_data = Py_None; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":214 + * cdef int tag_res + * cdef object tag_data = None + * tag_res = self.cread_tag(&mdtype, &byte_count, tag_ptr) # <<<<<<<<<<<<<< + * if tag_res == 2: # sde format + * tag_data = tag_ptr[:byte_count] + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->cread_tag(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), (&__pyx_v_mdtype), (&__pyx_v_byte_count), __pyx_v_tag_ptr); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 214; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_tag_res = __pyx_t_1; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":215 + * cdef object tag_data = None + * tag_res = self.cread_tag(&mdtype, &byte_count, tag_ptr) + * if tag_res == 2: # sde format # <<<<<<<<<<<<<< + * tag_data = tag_ptr[:byte_count] + * return (mdtype, byte_count, tag_data) + */ + __pyx_t_2 = (__pyx_v_tag_res == 2); + if (__pyx_t_2) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":216 + * tag_res = 
self.cread_tag(&mdtype, &byte_count, tag_ptr) + * if tag_res == 2: # sde format + * tag_data = tag_ptr[:byte_count] # <<<<<<<<<<<<<< + * return (mdtype, byte_count, tag_data) + * + */ + __pyx_t_3 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_tag_ptr + 0, __pyx_v_byte_count - 0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 216; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __Pyx_DECREF(__pyx_v_tag_data); + __pyx_v_tag_data = ((PyObject *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L5; + } + __pyx_L5:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":217 + * if tag_res == 2: # sde format + * tag_data = tag_ptr[:byte_count] + * return (mdtype, byte_count, tag_data) # <<<<<<<<<<<<<< + * + * cdef int cread_tag(self, + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_3 = __Pyx_PyInt_to_py_npy_uint32(__pyx_v_mdtype); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 217; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = __Pyx_PyInt_to_py_npy_uint32(__pyx_v_byte_count); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 217; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 217; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __Pyx_INCREF(__pyx_v_tag_data); + PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_tag_data); + __Pyx_GIVEREF(__pyx_v_tag_data); + __pyx_t_3 = 0; + __pyx_t_4 = 0; + __pyx_r = __pyx_t_5; + __pyx_t_5 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_tag"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XDECREF(__pyx_v_tag_data); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":219 + * return (mdtype, byte_count, tag_data) + * + * cdef int cread_tag(self, # <<<<<<<<<<<<<< + * cnp.uint32_t *mdtype_ptr, + * cnp.uint32_t *byte_count_ptr, + */ + +static int __pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_tag(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, __pyx_t_5numpy_uint32_t *__pyx_v_mdtype_ptr, __pyx_t_5numpy_uint32_t *__pyx_v_byte_count_ptr, char *__pyx_v_data_ptr) { + __pyx_t_5numpy_uint16_t __pyx_v_mdtype_sde; + __pyx_t_5numpy_uint16_t __pyx_v_byte_count_sde; + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t *__pyx_v_u4_ptr; + __pyx_t_5numpy_uint32_t __pyx_v_u4s[2]; + int __pyx_r; + int __pyx_t_1; + __pyx_t_5numpy_uint16_t __pyx_t_2; + int __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + __Pyx_RefNannySetupContext("cread_tag"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":234 + * cdef cnp.uint16_t mdtype_sde, byte_count_sde + * cdef cnp.uint32_t mdtype + * cdef cnp.uint32_t* u4_ptr = data_ptr # <<<<<<<<<<<<<< + * cdef cnp.uint32_t u4s[2] + * # First read 8 bytes. The 8 bytes can be in one of two formats. 
+ */ + __pyx_v_u4_ptr = ((__pyx_t_5numpy_uint32_t *)__pyx_v_data_ptr); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":262 + * # first four bytes are two little-endian uint16 values, first + * # ``mdtype`` and second ``byte_count``. + * self.cstream.read_into(u4s, 8) # <<<<<<<<<<<<<< + * if self.is_swapped: + * mdtype = byteswap_u4(u4s[0]) + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self->cstream->__pyx_vtab)->read_into(__pyx_v_self->cstream, ((void *)__pyx_v_u4s), 8); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 262; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":263 + * # ``mdtype`` and second ``byte_count``. + * self.cstream.read_into(u4s, 8) + * if self.is_swapped: # <<<<<<<<<<<<<< + * mdtype = byteswap_u4(u4s[0]) + * else: + */ + __pyx_t_1 = __pyx_v_self->is_swapped; + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":264 + * self.cstream.read_into(u4s, 8) + * if self.is_swapped: + * mdtype = byteswap_u4(u4s[0]) # <<<<<<<<<<<<<< + * else: + * mdtype = u4s[0] + */ + __pyx_v_mdtype = __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4((__pyx_v_u4s[0]), 0); + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":266 + * mdtype = byteswap_u4(u4s[0]) + * else: + * mdtype = u4s[0] # <<<<<<<<<<<<<< + * # The most significant two bytes of a U4 *mdtype* will always be + * # 0, if they are not, this must be SDE format + */ + __pyx_v_mdtype = (__pyx_v_u4s[0]); + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":269 + * # The most significant two bytes of a U4 *mdtype* will always be + * # 0, if they are not, this must be SDE format + * byte_count_sde = mdtype >> 16 # <<<<<<<<<<<<<< + * if byte_count_sde: # small data element format + * mdtype_sde = mdtype & 0xffff + */ + __pyx_v_byte_count_sde = (__pyx_v_mdtype >> 16); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":270 + * # 0, if they are not, this must be SDE format + * byte_count_sde = mdtype >> 16 + * if byte_count_sde: # small data element format # <<<<<<<<<<<<<< + * mdtype_sde = mdtype & 0xffff + * if byte_count_sde > 4: + */ + __pyx_t_2 = __pyx_v_byte_count_sde; + if (__pyx_t_2) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":271 + * byte_count_sde = mdtype >> 16 + * if byte_count_sde: # small data element format + * mdtype_sde = mdtype & 0xffff # <<<<<<<<<<<<<< + * if byte_count_sde > 4: + * raise ValueError('Error in SDE format data') + */ + __pyx_v_mdtype_sde = (__pyx_v_mdtype & 0xffff); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":272 + * if byte_count_sde: # small data element format + * mdtype_sde = mdtype & 0xffff + * if byte_count_sde > 4: # <<<<<<<<<<<<<< + * raise ValueError('Error in SDE format data') + * return -1 + */ + __pyx_t_3 = (__pyx_v_byte_count_sde > 4); + if (__pyx_t_3) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":273 + * mdtype_sde = mdtype & 0xffff + * if byte_count_sde > 4: + * raise ValueError('Error in SDE format data') # <<<<<<<<<<<<<< + * return -1 + * u4_ptr[0] = u4s[1] + */ + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_3)); + PyTuple_SET_ITEM(__pyx_t_4, 0, 
((PyObject *)__pyx_kp_s_3)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_3)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":274 + * if byte_count_sde > 4: + * raise ValueError('Error in SDE format data') + * return -1 # <<<<<<<<<<<<<< + * u4_ptr[0] = u4s[1] + * mdtype_ptr[0] = mdtype_sde + */ + __pyx_r = -1; + goto __pyx_L0; + goto __pyx_L5; + } + __pyx_L5:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":275 + * raise ValueError('Error in SDE format data') + * return -1 + * u4_ptr[0] = u4s[1] # <<<<<<<<<<<<<< + * mdtype_ptr[0] = mdtype_sde + * byte_count_ptr[0] = byte_count_sde + */ + (__pyx_v_u4_ptr[0]) = (__pyx_v_u4s[1]); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":276 + * return -1 + * u4_ptr[0] = u4s[1] + * mdtype_ptr[0] = mdtype_sde # <<<<<<<<<<<<<< + * byte_count_ptr[0] = byte_count_sde + * return 2 + */ + (__pyx_v_mdtype_ptr[0]) = __pyx_v_mdtype_sde; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":277 + * u4_ptr[0] = u4s[1] + * mdtype_ptr[0] = mdtype_sde + * byte_count_ptr[0] = byte_count_sde # <<<<<<<<<<<<<< + * return 2 + * # regular element + */ + (__pyx_v_byte_count_ptr[0]) = __pyx_v_byte_count_sde; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":278 + * mdtype_ptr[0] = mdtype_sde + * byte_count_ptr[0] = byte_count_sde + * return 2 # <<<<<<<<<<<<<< + * # regular element + * if self.is_swapped: + */ + __pyx_r = 2; + goto __pyx_L0; + goto __pyx_L4; + } + __pyx_L4:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":280 + * return 2 + * # regular element + * if self.is_swapped: # <<<<<<<<<<<<<< + * byte_count_ptr[0] = byteswap_u4(u4s[1]) + * else: + */ + __pyx_t_1 = __pyx_v_self->is_swapped; + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":281 + * # regular element + * if self.is_swapped: + * byte_count_ptr[0] = byteswap_u4(u4s[1]) # <<<<<<<<<<<<<< + * else: + * byte_count_ptr[0] = u4s[1] + */ + (__pyx_v_byte_count_ptr[0]) = __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4((__pyx_v_u4s[1]), 0); + goto __pyx_L6; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":283 + * byte_count_ptr[0] = byteswap_u4(u4s[1]) + * else: + * byte_count_ptr[0] = u4s[1] # <<<<<<<<<<<<<< + * mdtype_ptr[0] = mdtype + * u4_ptr[0] = 0 + */ + (__pyx_v_byte_count_ptr[0]) = (__pyx_v_u4s[1]); + } + __pyx_L6:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":284 + * else: + * byte_count_ptr[0] = u4s[1] + * mdtype_ptr[0] = mdtype # <<<<<<<<<<<<<< + * u4_ptr[0] = 0 + * return 1 + */ + (__pyx_v_mdtype_ptr[0]) = __pyx_v_mdtype; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":285 + * byte_count_ptr[0] = u4s[1] + * mdtype_ptr[0] = mdtype + * u4_ptr[0] = 0 # <<<<<<<<<<<<<< + * return 1 + * + */ + (__pyx_v_u4_ptr[0]) = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":286 + * mdtype_ptr[0] = mdtype + * u4_ptr[0] = 0 + * return 1 # <<<<<<<<<<<<<< + * + * cdef object read_element(self, + */ + __pyx_r = 1; + goto __pyx_L0; + + 
__pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.cread_tag"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":288 + * return 1 + * + * cdef object read_element(self, # <<<<<<<<<<<<<< + * cnp.uint32_t *mdtype_ptr, + * cnp.uint32_t *byte_count_ptr, + */ + +static PyObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, __pyx_t_5numpy_uint32_t *__pyx_v_mdtype_ptr, __pyx_t_5numpy_uint32_t *__pyx_v_byte_count_ptr, void **__pyx_v_pp, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element *__pyx_optional_args) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":292 + * cnp.uint32_t *byte_count_ptr, + * void **pp, + * int copy=True): # <<<<<<<<<<<<<< + * ''' Read data element into string buffer, return buffer + * + */ + int __pyx_v_copy = ((int)1); + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + char __pyx_v_tag_data[4]; + PyObject *__pyx_v_data; + int __pyx_v_mod8; + int __pyx_v_tag_res; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string __pyx_t_4; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek __pyx_t_5; + char *__pyx_t_6; + __Pyx_RefNannySetupContext("read_element"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_copy = __pyx_optional_args->copy; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_data = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":328 + * cdef int tag_res = self.cread_tag(mdtype_ptr, + * byte_count_ptr, + * tag_data) # <<<<<<<<<<<<<< + * mdtype = mdtype_ptr[0] + * byte_count = byte_count_ptr[0] + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->cread_tag(__pyx_v_self, __pyx_v_mdtype_ptr, __pyx_v_byte_count_ptr, __pyx_v_tag_data); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 326; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_tag_res = __pyx_t_1; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":329 + * byte_count_ptr, + * tag_data) + * mdtype = mdtype_ptr[0] # <<<<<<<<<<<<<< + * byte_count = byte_count_ptr[0] + * if tag_res == 1: # full format + */ + __pyx_v_mdtype = (__pyx_v_mdtype_ptr[0]); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":330 + * tag_data) + * mdtype = mdtype_ptr[0] + * byte_count = byte_count_ptr[0] # <<<<<<<<<<<<<< + * if tag_res == 1: # full format + * data = self.cstream.read_string( + */ + __pyx_v_byte_count = (__pyx_v_byte_count_ptr[0]); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":331 + * mdtype = mdtype_ptr[0] + * byte_count = byte_count_ptr[0] + * if tag_res == 1: # full format # <<<<<<<<<<<<<< + * data = self.cstream.read_string( + * byte_count, + */ + __pyx_t_2 = (__pyx_v_tag_res == 1); + if (__pyx_t_2) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":335 + * byte_count, + * pp, + * copy) # <<<<<<<<<<<<<< + * # Seek to next 64-bit boundary + * mod8 = byte_count % 8 + */ + 
__pyx_t_4.__pyx_n = 1; + __pyx_t_4.copy = __pyx_v_copy; + __pyx_t_3 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self->cstream->__pyx_vtab)->read_string(__pyx_v_self->cstream, __pyx_v_byte_count, __pyx_v_pp, &__pyx_t_4); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 332; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_data); + __pyx_v_data = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":337 + * copy) + * # Seek to next 64-bit boundary + * mod8 = byte_count % 8 # <<<<<<<<<<<<<< + * if mod8: + * self.cstream.seek(8 - mod8, 1) + */ + __pyx_v_mod8 = (__pyx_v_byte_count % 8); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":338 + * # Seek to next 64-bit boundary + * mod8 = byte_count % 8 + * if mod8: # <<<<<<<<<<<<<< + * self.cstream.seek(8 - mod8, 1) + * else: # SDE format, make safer home for data + */ + __pyx_t_1 = __pyx_v_mod8; + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":339 + * mod8 = byte_count % 8 + * if mod8: + * self.cstream.seek(8 - mod8, 1) # <<<<<<<<<<<<<< + * else: # SDE format, make safer home for data + * data = PyString_FromStringAndSize(tag_data, byte_count) + */ + __pyx_t_5.__pyx_n = 1; + __pyx_t_5.whence = 1; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self->cstream->__pyx_vtab)->seek(__pyx_v_self->cstream, (8 - __pyx_v_mod8), 0, &__pyx_t_5); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 339; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L4; + } + __pyx_L4:; + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":341 + * self.cstream.seek(8 - mod8, 1) + * else: # SDE format, make safer home for data + * data = PyString_FromStringAndSize(tag_data, byte_count) # <<<<<<<<<<<<<< + * pp[0] = data + * return data + */ + __pyx_t_3 = PyString_FromStringAndSize(__pyx_v_tag_data, __pyx_v_byte_count); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 341; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_data); + __pyx_v_data = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":342 + * else: # SDE format, make safer home for data + * data = PyString_FromStringAndSize(tag_data, byte_count) + * pp[0] = data # <<<<<<<<<<<<<< + * return data + * + */ + __pyx_t_6 = __Pyx_PyBytes_AsString(__pyx_v_data); if (unlikely((!__pyx_t_6) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 342; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + (__pyx_v_pp[0]) = ((char *)__pyx_t_6); + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":343 + * data = PyString_FromStringAndSize(tag_data, byte_count) + * pp[0] = data + * return data # <<<<<<<<<<<<<< + * + * cdef void read_element_into(self, + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_data); + __pyx_r = __pyx_v_data; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_element"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_data); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":345 + * return data + * + * cdef void read_element_into(self, # <<<<<<<<<<<<<< + * cnp.uint32_t *mdtype_ptr, + * cnp.uint32_t *byte_count_ptr, + */ + +static void __pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element_into(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, __pyx_t_5numpy_uint32_t *__pyx_v_mdtype_ptr, __pyx_t_5numpy_uint32_t *__pyx_v_byte_count_ptr, void *__pyx_v_ptr) { + int __pyx_v_mod8; + int __pyx_v_res; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + int __pyx_t_1; + int __pyx_t_2; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek __pyx_t_3; + __Pyx_RefNannySetupContext("read_element_into"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":374 + * mdtype_ptr, + * byte_count_ptr, + * ptr) # <<<<<<<<<<<<<< + * cdef cnp.uint32_t byte_count = byte_count_ptr[0] + * if res == 1: # full format + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->cread_tag(__pyx_v_self, __pyx_v_mdtype_ptr, __pyx_v_byte_count_ptr, ((char *)__pyx_v_ptr)); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 371; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_res = __pyx_t_1; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":375 + * byte_count_ptr, + * ptr) + * cdef cnp.uint32_t byte_count = byte_count_ptr[0] # <<<<<<<<<<<<<< + * if res == 1: # full format + * res = self.cstream.read_into(ptr, byte_count) + */ + __pyx_v_byte_count = (__pyx_v_byte_count_ptr[0]); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":376 + * ptr) + * cdef cnp.uint32_t byte_count = byte_count_ptr[0] + * if res == 1: # full format # <<<<<<<<<<<<<< + * res = self.cstream.read_into(ptr, byte_count) + * # Seek to next 64-bit boundary + */ + __pyx_t_2 = (__pyx_v_res == 1); + if (__pyx_t_2) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":377 + * cdef cnp.uint32_t byte_count = byte_count_ptr[0] + * if res == 1: # full format + * res = self.cstream.read_into(ptr, byte_count) # <<<<<<<<<<<<<< + * # Seek to next 64-bit boundary + * mod8 = byte_count % 8 + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self->cstream->__pyx_vtab)->read_into(__pyx_v_self->cstream, __pyx_v_ptr, __pyx_v_byte_count); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 377; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_res = __pyx_t_1; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":379 + * res = self.cstream.read_into(ptr, byte_count) + * # Seek to next 64-bit boundary + * mod8 = byte_count % 8 # <<<<<<<<<<<<<< + * if mod8: + * self.cstream.seek(8 - mod8, 1) + */ + __pyx_v_mod8 = (__pyx_v_byte_count % 8); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":380 + * # Seek to next 64-bit boundary + * mod8 = byte_count % 8 + * if mod8: # <<<<<<<<<<<<<< + * self.cstream.seek(8 - mod8, 1) + * + */ + __pyx_t_1 = __pyx_v_mod8; + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":381 + * mod8 = byte_count % 8 + * if mod8: + * self.cstream.seek(8 - mod8, 1) # <<<<<<<<<<<<<< + * + * cpdef inline cnp.ndarray read_numeric(self, int copy=True): + */ + __pyx_t_3.__pyx_n = 1; + __pyx_t_3.whence = 1; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream 
*)__pyx_v_self->cstream->__pyx_vtab)->seek(__pyx_v_self->cstream, (8 - __pyx_v_mod8), 0, &__pyx_t_3); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 381; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L4; + } + __pyx_L4:; + goto __pyx_L3; + } + __pyx_L3:; + + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_WriteUnraisable("scipy.io.matlab.mio5_utils.VarReader5.read_element_into"); + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":383 + * self.cstream.seek(8 - mod8, 1) + * + * cpdef inline cnp.ndarray read_numeric(self, int copy=True): # <<<<<<<<<<<<<< + * ''' Read numeric data element into ndarray + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static CYTHON_INLINE PyArrayObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric *__pyx_optional_args) { + int __pyx_v_copy = ((int)1); + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + void *__pyx_v_data_ptr; + npy_intp __pyx_v_el_count; + PyArrayObject *__pyx_v_el; + PyObject *__pyx_v_data = 0; + PyArray_Descr *__pyx_v_dt = 0; + int __pyx_v_flags; + PyArrayObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element __pyx_t_4; + PyObject *__pyx_t_5; + int __pyx_t_6; + __Pyx_RefNannySetupContext("read_numeric"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_copy = __pyx_optional_args->copy; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_el = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__read_numeric); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric)) { + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyInt_FromLong(__pyx_v_copy); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":396 + * cdef cnp.ndarray el + * cdef object data = self.read_element( + * &mdtype, &byte_count, &data_ptr, copy) # <<<<<<<<<<<<<< + * cdef cnp.dtype dt = self.dtypes[mdtype] + * el_count = byte_count // dt.itemsize + */ + __pyx_t_4.__pyx_n = 1; + __pyx_t_4.copy = __pyx_v_copy; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_element(__pyx_v_self, (&__pyx_v_mdtype), (&__pyx_v_byte_count), ((void **)(&__pyx_v_data_ptr)), &__pyx_t_4); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 395; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_v_data = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":397 + * cdef object data = self.read_element( + * &mdtype, &byte_count, &data_ptr, copy) + * cdef cnp.dtype dt = self.dtypes[mdtype] # <<<<<<<<<<<<<< + * el_count = byte_count // dt.itemsize + * cdef int flags = 0 + */ + __pyx_t_5 = (__pyx_v_self->dtypes[__pyx_v_mdtype]); + __Pyx_INCREF(((PyObject *)((PyArray_Descr *)__pyx_t_5))); + __pyx_v_dt = ((PyArray_Descr *)__pyx_t_5); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":398 + * &mdtype, &byte_count, &data_ptr, copy) + * cdef cnp.dtype dt = self.dtypes[mdtype] + * el_count = byte_count // dt.itemsize # <<<<<<<<<<<<<< + * cdef int flags = 0 + * if copy: + */ + if (unlikely(__pyx_v_dt->elsize == 0)) { + PyErr_Format(PyExc_ZeroDivisionError, "integer division or modulo by zero"); + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 398; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_v_el_count = (__pyx_v_byte_count / __pyx_v_dt->elsize); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":399 + * cdef cnp.dtype dt = self.dtypes[mdtype] + * el_count = byte_count // dt.itemsize + * cdef int flags = 0 # <<<<<<<<<<<<<< + * if copy: + * flags = cnp.NPY_WRITEABLE + */ + __pyx_v_flags = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":400 + * el_count = byte_count // dt.itemsize + * cdef int flags = 0 + * if copy: # <<<<<<<<<<<<<< + * flags = cnp.NPY_WRITEABLE + * Py_INCREF( dt) + */ + __pyx_t_6 = __pyx_v_copy; + if (__pyx_t_6) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":401 + * cdef int flags = 0 + * if copy: + * flags = cnp.NPY_WRITEABLE # <<<<<<<<<<<<<< + * Py_INCREF( dt) + * el = PyArray_NewFromDescr(&PyArray_Type, + */ + __pyx_v_flags = NPY_WRITEABLE; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":402 + * if copy: + * flags = cnp.NPY_WRITEABLE + * Py_INCREF( dt) # <<<<<<<<<<<<<< + * el = PyArray_NewFromDescr(&PyArray_Type, + * dt, + */ + Py_INCREF(((PyObject *)__pyx_v_dt)); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":410 + * data_ptr, + * flags, + * NULL) # <<<<<<<<<<<<<< + * Py_INCREF( data) + * PyArray_Set_BASE(el, data) + */ + __pyx_t_1 = ((PyObject *)PyArray_NewFromDescr((&PyArray_Type), __pyx_v_dt, 1, (&__pyx_v_el_count), NULL, ((void *)__pyx_v_data_ptr), __pyx_v_flags, ((PyObject *)NULL))); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 403; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + 
__Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_el)); + __pyx_v_el = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":411 + * flags, + * NULL) + * Py_INCREF( data) # <<<<<<<<<<<<<< + * PyArray_Set_BASE(el, data) + * return el + */ + Py_INCREF(__pyx_v_data); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":412 + * NULL) + * Py_INCREF( data) + * PyArray_Set_BASE(el, data) # <<<<<<<<<<<<<< + * return el + * + */ + PyArray_Set_BASE(__pyx_v_el, __pyx_v_data); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":413 + * Py_INCREF( data) + * PyArray_Set_BASE(el, data) + * return el # <<<<<<<<<<<<<< + * + * cdef inline object read_int8_string(self): + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __Pyx_INCREF(((PyObject *)__pyx_v_el)); + __pyx_r = __pyx_v_el; + goto __pyx_L0; + + __pyx_r = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_numeric"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_el); + __Pyx_XDECREF(__pyx_v_data); + __Pyx_XDECREF((PyObject *)__pyx_v_dt); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":383 + * self.cstream.seek(8 - mod8, 1) + * + * cpdef inline cnp.ndarray read_numeric(self, int copy=True): # <<<<<<<<<<<<<< + * ''' Read numeric data element into ndarray + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric[] = " Read numeric data element into ndarray\n\n Reads element, then casts to ndarray. \n\n The type of the array is given by the ``mdtype`` returned via\n ``read_element``. 
\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + int __pyx_v_copy; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric __pyx_t_2; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__copy,0}; + __Pyx_RefNannySetupContext("read_numeric"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[1] = {0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + if (kw_args > 1) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__copy); + if (unlikely(value)) { values[0] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "read_numeric") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + if (values[0]) { + __pyx_v_copy = __Pyx_PyInt_AsInt(values[0]); if (unlikely((__pyx_v_copy == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } else { + __pyx_v_copy = ((int)1); + } + } else { + __pyx_v_copy = ((int)1); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 1: __pyx_v_copy = __Pyx_PyInt_AsInt(PyTuple_GET_ITEM(__pyx_args, 0)); if (unlikely((__pyx_v_copy == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("read_numeric", 0, 0, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_numeric"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __Pyx_XDECREF(__pyx_r); + __pyx_t_2.__pyx_n = 1; + __pyx_t_2.copy = __pyx_v_copy; + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->read_numeric(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), 1, &__pyx_t_2)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 383; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_numeric"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":415 + * return el + * + * cdef inline object read_int8_string(self): # <<<<<<<<<<<<<< + * ''' Read, return int8 type string + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_int8_string(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self) { + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + 
__pyx_t_5numpy_uint32_t __pyx_v_byte_count; + void *__pyx_v_ptr; + PyObject *__pyx_v_data; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + __Pyx_RefNannySetupContext("read_int8_string"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_data = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":427 + * void *ptr + * object data + * data = self.read_element(&mdtype, &byte_count, &ptr) # <<<<<<<<<<<<<< + * if mdtype != miINT8: + * raise TypeError('Expecting miINT8 as data type') + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_element(__pyx_v_self, (&__pyx_v_mdtype), (&__pyx_v_byte_count), (&__pyx_v_ptr), NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 427; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_data); + __pyx_v_data = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":428 + * object data + * data = self.read_element(&mdtype, &byte_count, &ptr) + * if mdtype != miINT8: # <<<<<<<<<<<<<< + * raise TypeError('Expecting miINT8 as data type') + * return data + */ + __pyx_t_2 = (__pyx_v_mdtype != __pyx_e_5scipy_2io_6matlab_10mio5_utils_miINT8); + if (__pyx_t_2) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":429 + * data = self.read_element(&mdtype, &byte_count, &ptr) + * if mdtype != miINT8: + * raise TypeError('Expecting miINT8 as data type') # <<<<<<<<<<<<<< + * return data + * + */ + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 429; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_4)); + PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_kp_s_4)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_4)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_TypeError, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 429; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 429; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L3; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":430 + * if mdtype != miINT8: + * raise TypeError('Expecting miINT8 as data type') + * return data # <<<<<<<<<<<<<< + * + * cdef int read_into_int32s(self, cnp.int32_t *int32p) except -1: + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_data); + __pyx_r = __pyx_v_data; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_int8_string"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_data); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":432 + * return data + * + * cdef int read_into_int32s(self, cnp.int32_t *int32p) except -1: # <<<<<<<<<<<<<< + * ''' Read int32 values into pre-allocated memory + * + */ + +static int __pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_into_int32s(struct 
__pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, __pyx_t_5numpy_int32_t *__pyx_v_int32p) { + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + int __pyx_v_i; + int __pyx_v_n_ints; + int __pyx_r; + int __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + __Pyx_RefNannySetupContext("read_into_int32s"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":449 + * cnp.uint32_t mdtype, byte_count + * int i + * self.read_element_into(&mdtype, &byte_count, int32p) # <<<<<<<<<<<<<< + * if mdtype != miINT32: + * raise TypeError('Expecting miINT32 as data type') + */ + ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_element_into(__pyx_v_self, (&__pyx_v_mdtype), (&__pyx_v_byte_count), ((void *)__pyx_v_int32p)); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":450 + * int i + * self.read_element_into(&mdtype, &byte_count, int32p) + * if mdtype != miINT32: # <<<<<<<<<<<<<< + * raise TypeError('Expecting miINT32 as data type') + * return -1 + */ + __pyx_t_1 = (__pyx_v_mdtype != __pyx_e_5scipy_2io_6matlab_10mio5_utils_miINT32); + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":451 + * self.read_element_into(&mdtype, &byte_count, int32p) + * if mdtype != miINT32: + * raise TypeError('Expecting miINT32 as data type') # <<<<<<<<<<<<<< + * return -1 + * cdef int n_ints = byte_count // 4 + */ + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 451; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_5)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_5)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_5)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_TypeError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 451; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 451; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":452 + * if mdtype != miINT32: + * raise TypeError('Expecting miINT32 as data type') + * return -1 # <<<<<<<<<<<<<< + * cdef int n_ints = byte_count // 4 + * if self.is_swapped: + */ + __pyx_r = -1; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":453 + * raise TypeError('Expecting miINT32 as data type') + * return -1 + * cdef int n_ints = byte_count // 4 # <<<<<<<<<<<<<< + * if self.is_swapped: + * for i in range(n_ints): + */ + __pyx_v_n_ints = (__pyx_v_byte_count / 4); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":454 + * return -1 + * cdef int n_ints = byte_count // 4 + * if self.is_swapped: # <<<<<<<<<<<<<< + * for i in range(n_ints): + * int32p[i] = byteswap_u4(int32p[i]) + */ + __pyx_t_4 = __pyx_v_self->is_swapped; + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":455 + * cdef int n_ints = byte_count // 4 + * if self.is_swapped: + * for i in range(n_ints): # <<<<<<<<<<<<<< + * int32p[i] = byteswap_u4(int32p[i]) + * return n_ints + */ + __pyx_t_4 = __pyx_v_n_ints; + for (__pyx_t_5 = 0; 
__pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { + __pyx_v_i = __pyx_t_5; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":456 + * if self.is_swapped: + * for i in range(n_ints): + * int32p[i] = byteswap_u4(int32p[i]) # <<<<<<<<<<<<<< + * return n_ints + * + */ + (__pyx_v_int32p[__pyx_v_i]) = __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4((__pyx_v_int32p[__pyx_v_i]), 0); + } + goto __pyx_L4; + } + __pyx_L4:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":457 + * for i in range(n_ints): + * int32p[i] = byteswap_u4(int32p[i]) + * return n_ints # <<<<<<<<<<<<<< + * + * def read_full_tag(self): + */ + __pyx_r = __pyx_v_n_ints; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_into_int32s"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":459 + * return n_ints + * + * def read_full_tag(self): # <<<<<<<<<<<<<< + * ''' Python method for reading full u4, u4 tag from stream + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_full_tag(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_full_tag[] = " Python method for reading full u4, u4 tag from stream\n\n Returns\n -------\n mdtype : int32\n matlab data type code\n byte_count : int32\n number of data bytes following\n\n Notes\n -----\n Assumes tag is in fact full, that is, is not a small data\n element. This means it can skip some checks and makes it\n slightly faster than ``read_tag``\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_full_tag(PyObject *__pyx_v_self, PyObject *unused) { + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + __Pyx_RefNannySetupContext("read_full_tag"); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":476 + * ''' + * cdef cnp.uint32_t mdtype, byte_count + * self.cread_full_tag(&mdtype, &byte_count) # <<<<<<<<<<<<<< + * return mdtype, byte_count + * + */ + ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->cread_full_tag(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), (&__pyx_v_mdtype), (&__pyx_v_byte_count)); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":477 + * cdef cnp.uint32_t mdtype, byte_count + * self.cread_full_tag(&mdtype, &byte_count) + * return mdtype, byte_count # <<<<<<<<<<<<<< + * + * cdef void cread_full_tag(self, + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_PyInt_to_py_npy_uint32(__pyx_v_mdtype); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 477; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyInt_to_py_npy_uint32(__pyx_v_byte_count); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 477; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 477; __pyx_clineno = __LINE__; goto __pyx_L1_error;} 
+ __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_1 = 0; + __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_full_tag"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":479 + * return mdtype, byte_count + * + * cdef void cread_full_tag(self, # <<<<<<<<<<<<<< + * cnp.uint32_t* mdtype, + * cnp.uint32_t* byte_count): + */ + +static void __pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_full_tag(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, __pyx_t_5numpy_uint32_t *__pyx_v_mdtype, __pyx_t_5numpy_uint32_t *__pyx_v_byte_count) { + __pyx_t_5numpy_uint32_t __pyx_v_u4s[2]; + int __pyx_t_1; + __Pyx_RefNannySetupContext("cread_full_tag"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":484 + * ''' C method for reading full u4, u4 tag from stream''' + * cdef cnp.uint32_t u4s[2] + * self.cstream.read_into(u4s, 8) # <<<<<<<<<<<<<< + * if self.is_swapped: + * mdtype[0] = byteswap_u4(u4s[0]) + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self->cstream->__pyx_vtab)->read_into(__pyx_v_self->cstream, ((void *)__pyx_v_u4s), 8); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 484; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":485 + * cdef cnp.uint32_t u4s[2] + * self.cstream.read_into(u4s, 8) + * if self.is_swapped: # <<<<<<<<<<<<<< + * mdtype[0] = byteswap_u4(u4s[0]) + * byte_count[0] = byteswap_u4(u4s[1]) + */ + __pyx_t_1 = __pyx_v_self->is_swapped; + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":486 + * self.cstream.read_into(u4s, 8) + * if self.is_swapped: + * mdtype[0] = byteswap_u4(u4s[0]) # <<<<<<<<<<<<<< + * byte_count[0] = byteswap_u4(u4s[1]) + * else: + */ + (__pyx_v_mdtype[0]) = __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4((__pyx_v_u4s[0]), 0); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":487 + * if self.is_swapped: + * mdtype[0] = byteswap_u4(u4s[0]) + * byte_count[0] = byteswap_u4(u4s[1]) # <<<<<<<<<<<<<< + * else: + * mdtype[0] = u4s[0] + */ + (__pyx_v_byte_count[0]) = __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4((__pyx_v_u4s[1]), 0); + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":489 + * byte_count[0] = byteswap_u4(u4s[1]) + * else: + * mdtype[0] = u4s[0] # <<<<<<<<<<<<<< + * byte_count[0] = u4s[1] + * + */ + (__pyx_v_mdtype[0]) = (__pyx_v_u4s[0]); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":490 + * else: + * mdtype[0] = u4s[0] + * byte_count[0] = u4s[1] # <<<<<<<<<<<<<< + * + * cpdef VarHeader5 read_header(self): + */ + (__pyx_v_byte_count[0]) = (__pyx_v_u4s[1]); + } + __pyx_L3:; + + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_WriteUnraisable("scipy.io.matlab.mio5_utils.VarReader5.cread_full_tag"); + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); +} 
+ +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":492 + * byte_count[0] = u4s[1] + * + * cpdef VarHeader5 read_header(self): # <<<<<<<<<<<<<< + * ''' Return matrix header for current stream position + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, int __pyx_skip_dispatch) { + __pyx_t_5numpy_uint32_t __pyx_v_u4s[2]; + __pyx_t_5numpy_uint32_t __pyx_v_flags_class; + __pyx_t_5numpy_uint32_t __pyx_v_nzmax; + __pyx_t_5numpy_uint16_t __pyx_v_mc; + int __pyx_v_i; + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header; + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + int __pyx_t_3; + int __pyx_t_4; + int __pyx_t_5; + __Pyx_RefNannySetupContext("read_header"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_header = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)Py_None); __Pyx_INCREF(Py_None); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__read_header); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 492; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header)) { + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyObject_Call(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 492; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 492; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_t_2); + __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":506 + * VarHeader5 header + * # Read and discard mdtype and byte_count + * self.cstream.read_into(u4s, 8) # <<<<<<<<<<<<<< + * # get array flags and nzmax + * self.cstream.read_into(u4s, 8) + */ + __pyx_t_3 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self->cstream->__pyx_vtab)->read_into(__pyx_v_self->cstream, ((void *)__pyx_v_u4s), 8); if (unlikely(__pyx_t_3 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 506; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":508 + * self.cstream.read_into(u4s, 8) + * # get array flags and nzmax + * self.cstream.read_into(u4s, 8) # <<<<<<<<<<<<<< + * if self.is_swapped: + * flags_class = byteswap_u4(u4s[0]) + */ + __pyx_t_3 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self->cstream->__pyx_vtab)->read_into(__pyx_v_self->cstream, ((void 
*)__pyx_v_u4s), 8); if (unlikely(__pyx_t_3 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 508; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":509 + * # get array flags and nzmax + * self.cstream.read_into(u4s, 8) + * if self.is_swapped: # <<<<<<<<<<<<<< + * flags_class = byteswap_u4(u4s[0]) + * nzmax = byteswap_u4(u4s[1]) + */ + __pyx_t_3 = __pyx_v_self->is_swapped; + if (__pyx_t_3) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":510 + * self.cstream.read_into(u4s, 8) + * if self.is_swapped: + * flags_class = byteswap_u4(u4s[0]) # <<<<<<<<<<<<<< + * nzmax = byteswap_u4(u4s[1]) + * else: + */ + __pyx_v_flags_class = __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4((__pyx_v_u4s[0]), 0); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":511 + * if self.is_swapped: + * flags_class = byteswap_u4(u4s[0]) + * nzmax = byteswap_u4(u4s[1]) # <<<<<<<<<<<<<< + * else: + * flags_class = u4s[0] + */ + __pyx_v_nzmax = __pyx_f_5scipy_2io_6matlab_10mio5_utils_byteswap_u4((__pyx_v_u4s[1]), 0); + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":513 + * nzmax = byteswap_u4(u4s[1]) + * else: + * flags_class = u4s[0] # <<<<<<<<<<<<<< + * nzmax = u4s[1] + * header = VarHeader5() + */ + __pyx_v_flags_class = (__pyx_v_u4s[0]); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":514 + * else: + * flags_class = u4s[0] + * nzmax = u4s[1] # <<<<<<<<<<<<<< + * header = VarHeader5() + * mc = flags_class & 0xFF + */ + __pyx_v_nzmax = (__pyx_v_u4s[1]); + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":515 + * flags_class = u4s[0] + * nzmax = u4s[1] + * header = VarHeader5() # <<<<<<<<<<<<<< + * mc = flags_class & 0xFF + * header.mclass = mc + */ + __pyx_t_1 = PyObject_Call(((PyObject *)((PyObject*)__pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5)), ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 515; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_header)); + __pyx_v_header = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":516 + * nzmax = u4s[1] + * header = VarHeader5() + * mc = flags_class & 0xFF # <<<<<<<<<<<<<< + * header.mclass = mc + * header.is_logical = flags_class >> 9 & 1 + */ + __pyx_v_mc = (__pyx_v_flags_class & 0xFF); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":517 + * header = VarHeader5() + * mc = flags_class & 0xFF + * header.mclass = mc # <<<<<<<<<<<<<< + * header.is_logical = flags_class >> 9 & 1 + * header.is_global = flags_class >> 10 & 1 + */ + __pyx_v_header->mclass = __pyx_v_mc; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":518 + * mc = flags_class & 0xFF + * header.mclass = mc + * header.is_logical = flags_class >> 9 & 1 # <<<<<<<<<<<<<< + * header.is_global = flags_class >> 10 & 1 + * header.is_complex = flags_class >> 11 & 1 + */ + __pyx_v_header->is_logical = ((__pyx_v_flags_class >> 9) & 1); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":519 + * header.mclass = mc + * header.is_logical = flags_class >> 9 & 1 + * header.is_global = flags_class >> 10 & 1 # <<<<<<<<<<<<<< + * header.is_complex = flags_class >> 11 & 1 + * header.nzmax = nzmax + */ + 
__pyx_v_header->is_global = ((__pyx_v_flags_class >> 10) & 1); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":520 + * header.is_logical = flags_class >> 9 & 1 + * header.is_global = flags_class >> 10 & 1 + * header.is_complex = flags_class >> 11 & 1 # <<<<<<<<<<<<<< + * header.nzmax = nzmax + * # all miMATRIX types except the mxOPAQUE_CLASS have dims and a + */ + __pyx_v_header->is_complex = ((__pyx_v_flags_class >> 11) & 1); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":521 + * header.is_global = flags_class >> 10 & 1 + * header.is_complex = flags_class >> 11 & 1 + * header.nzmax = nzmax # <<<<<<<<<<<<<< + * # all miMATRIX types except the mxOPAQUE_CLASS have dims and a + * # name. + */ + __pyx_v_header->nzmax = __pyx_v_nzmax; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":524 + * # all miMATRIX types except the mxOPAQUE_CLASS have dims and a + * # name. + * if mc == mxOPAQUE_CLASS: # <<<<<<<<<<<<<< + * header.name = None + * header.dims = None + */ + __pyx_t_4 = (__pyx_v_mc == __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxOPAQUE_CLASS); + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":525 + * # name. + * if mc == mxOPAQUE_CLASS: + * header.name = None # <<<<<<<<<<<<<< + * header.dims = None + * return header + */ + __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(Py_None); + __Pyx_GOTREF(__pyx_v_header->name); + __Pyx_DECREF(__pyx_v_header->name); + __pyx_v_header->name = Py_None; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":526 + * if mc == mxOPAQUE_CLASS: + * header.name = None + * header.dims = None # <<<<<<<<<<<<<< + * return header + * header.n_dims = self.read_into_int32s(header.dims_ptr) + */ + __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(Py_None); + __Pyx_GOTREF(__pyx_v_header->dims); + __Pyx_DECREF(__pyx_v_header->dims); + __pyx_v_header->dims = Py_None; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":527 + * header.name = None + * header.dims = None + * return header # <<<<<<<<<<<<<< + * header.n_dims = self.read_into_int32s(header.dims_ptr) + * if header.n_dims > _MAT_MAXDIMS: + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __Pyx_INCREF(((PyObject *)__pyx_v_header)); + __pyx_r = __pyx_v_header; + goto __pyx_L0; + goto __pyx_L4; + } + __pyx_L4:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":528 + * header.dims = None + * return header + * header.n_dims = self.read_into_int32s(header.dims_ptr) # <<<<<<<<<<<<<< + * if header.n_dims > _MAT_MAXDIMS: + * raise ValueError('Too many dimensions (%d) for numpy arrays' + */ + __pyx_t_3 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_into_int32s(__pyx_v_self, __pyx_v_header->dims_ptr); if (unlikely(__pyx_t_3 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 528; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_header->n_dims = __pyx_t_3; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":529 + * return header + * header.n_dims = self.read_into_int32s(header.dims_ptr) + * if header.n_dims > _MAT_MAXDIMS: # <<<<<<<<<<<<<< + * raise ValueError('Too many dimensions (%d) for numpy arrays' + * % header.n_dims) + */ + __pyx_t_4 = (__pyx_v_header->n_dims > 32); + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":531 + * if header.n_dims > _MAT_MAXDIMS: + * raise ValueError('Too many dimensions (%d) for numpy arrays' + * % header.n_dims) # <<<<<<<<<<<<<< + * # convert dims to 
list + * header.dims = [] + */ + __pyx_t_1 = PyInt_FromLong(__pyx_v_header->n_dims); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 531; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Remainder(((PyObject *)__pyx_kp_s_6), __pyx_t_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 531; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 530; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_1, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 530; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_Raise(__pyx_t_2, 0, 0); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 530; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L5; + } + __pyx_L5:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":533 + * % header.n_dims) + * # convert dims to list + * header.dims = [] # <<<<<<<<<<<<<< + * for i in range(header.n_dims): + * header.dims.append(header.dims_ptr[i]) + */ + __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 533; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + __Pyx_GIVEREF(((PyObject *)__pyx_t_2)); + __Pyx_GOTREF(__pyx_v_header->dims); + __Pyx_DECREF(__pyx_v_header->dims); + __pyx_v_header->dims = ((PyObject *)__pyx_t_2); + __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":534 + * # convert dims to list + * header.dims = [] + * for i in range(header.n_dims): # <<<<<<<<<<<<<< + * header.dims.append(header.dims_ptr[i]) + * header.name = self.read_int8_string() + */ + __pyx_t_3 = __pyx_v_header->n_dims; + for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_3; __pyx_t_5+=1) { + __pyx_v_i = __pyx_t_5; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":535 + * header.dims = [] + * for i in range(header.n_dims): + * header.dims.append(header.dims_ptr[i]) # <<<<<<<<<<<<<< + * header.name = self.read_int8_string() + * return header + */ + __pyx_t_2 = __Pyx_PyInt_to_py_npy_int32((__pyx_v_header->dims_ptr[__pyx_v_i])); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 535; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = __Pyx_PyObject_Append(__pyx_v_header->dims, __pyx_t_2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 535; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":536 + * for i in range(header.n_dims): + * header.dims.append(header.dims_ptr[i]) + * header.name = self.read_int8_string() # <<<<<<<<<<<<<< + * return header + * + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_int8_string(__pyx_v_self); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 536; __pyx_clineno = __LINE__; goto __pyx_L1_error;} 
+ __Pyx_GOTREF(__pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __Pyx_GOTREF(__pyx_v_header->name); + __Pyx_DECREF(__pyx_v_header->name); + __pyx_v_header->name = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":537 + * header.dims.append(header.dims_ptr[i]) + * header.name = self.read_int8_string() + * return header # <<<<<<<<<<<<<< + * + * cdef inline size_t size_from_header(self, VarHeader5 header): + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __Pyx_INCREF(((PyObject *)__pyx_v_header)); + __pyx_r = __pyx_v_header; + goto __pyx_L0; + + __pyx_r = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_header"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":492 + * byte_count[0] = u4s[1] + * + * cpdef VarHeader5 read_header(self): # <<<<<<<<<<<<<< + * ''' Return matrix header for current stream position + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header[] = " Return matrix header for current stream position\n\n Returns matrix headers at top level and sub levels\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header(PyObject *__pyx_v_self, PyObject *unused) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("read_header"); + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->read_header(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), 1)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 492; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_header"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":539 + * return header + * + * cdef inline size_t size_from_header(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Supporting routine for calculating array sizes from header + * + */ + +static CYTHON_INLINE size_t __pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_size_from_header(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header) { + size_t __pyx_v_size; + PyObject *__pyx_v_i; + size_t __pyx_r; + Py_ssize_t __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + Py_ssize_t __pyx_t_4; + __Pyx_RefNannySetupContext("size_from_header"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF((PyObject *)__pyx_v_header); + __pyx_v_i = Py_None; __Pyx_INCREF(Py_None); + + /* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":556 + * ''' + * # calculate number of items in array from dims product + * cdef size_t size = 1 # <<<<<<<<<<<<<< + * for i in range(header.n_dims): + * size *= header.dims_ptr[i] + */ + __pyx_v_size = 1; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":557 + * # calculate number of items in array from dims product + * cdef size_t size = 1 + * for i in range(header.n_dims): # <<<<<<<<<<<<<< + * size *= header.dims_ptr[i] + * return size + */ + __pyx_t_2 = PyInt_FromLong(__pyx_v_header->n_dims); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 557; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 557; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_builtin_range, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 557; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyList_CheckExact(__pyx_t_2) || PyTuple_CheckExact(__pyx_t_2)) { + __pyx_t_1 = 0; __pyx_t_3 = __pyx_t_2; __Pyx_INCREF(__pyx_t_3); + } else { + __pyx_t_1 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 557; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + } + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + for (;;) { + if (likely(PyList_CheckExact(__pyx_t_3))) { + if (__pyx_t_1 >= PyList_GET_SIZE(__pyx_t_3)) break; + __pyx_t_2 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_2); __pyx_t_1++; + } else if (likely(PyTuple_CheckExact(__pyx_t_3))) { + if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break; + __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_2); __pyx_t_1++; + } else { + __pyx_t_2 = PyIter_Next(__pyx_t_3); + if (!__pyx_t_2) { + if (unlikely(PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 557; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + break; + } + __Pyx_GOTREF(__pyx_t_2); + } + __Pyx_DECREF(__pyx_v_i); + __pyx_v_i = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":558 + * cdef size_t size = 1 + * for i in range(header.n_dims): + * size *= header.dims_ptr[i] # <<<<<<<<<<<<<< + * return size + * + */ + __pyx_t_4 = __Pyx_PyIndex_AsSsize_t(__pyx_v_i); if (unlikely((__pyx_t_4 == (Py_ssize_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 558; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_size *= (__pyx_v_header->dims_ptr[__pyx_t_4]); + } + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":559 + * for i in range(header.n_dims): + * size *= header.dims_ptr[i] + * return size # <<<<<<<<<<<<<< + * + * cdef read_mi_matrix(self, int process=1): + */ + __pyx_r = __pyx_v_size; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_WriteUnraisable("scipy.io.matlab.mio5_utils.VarReader5.size_from_header"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_i); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_RefNannyFinishContext(); + return 
__pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":561 + * return size + * + * cdef read_mi_matrix(self, int process=1): # <<<<<<<<<<<<<< + * ''' Read header with matrix at sub-levels + * + */ + +static PyObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_mi_matrix(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_mi_matrix *__pyx_optional_args) { + int __pyx_v_process = ((int)1); + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header; + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header __pyx_t_5; + __Pyx_RefNannySetupContext("read_mi_matrix"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_process = __pyx_optional_args->process; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_header = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)Py_None); __Pyx_INCREF(Py_None); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":582 + * object arr + * # read full tag + * self.cread_full_tag(&mdtype, &byte_count) # <<<<<<<<<<<<<< + * if mdtype != miMATRIX: + * raise TypeError('Expecting matrix here') + */ + ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->cread_full_tag(__pyx_v_self, (&__pyx_v_mdtype), (&__pyx_v_byte_count)); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":583 + * # read full tag + * self.cread_full_tag(&mdtype, &byte_count) + * if mdtype != miMATRIX: # <<<<<<<<<<<<<< + * raise TypeError('Expecting matrix here') + * if byte_count == 0: # empty matrix + */ + __pyx_t_1 = (__pyx_v_mdtype != __pyx_e_5scipy_2io_6matlab_10mio5_utils_miMATRIX); + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":584 + * self.cread_full_tag(&mdtype, &byte_count) + * if mdtype != miMATRIX: + * raise TypeError('Expecting matrix here') # <<<<<<<<<<<<<< + * if byte_count == 0: # empty matrix + * if process and self.squeeze_me: + */ + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 584; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_7)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_7)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_7)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_TypeError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 584; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 584; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L3; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":585 + * if mdtype != miMATRIX: + * raise TypeError('Expecting matrix here') + * if byte_count == 0: # empty matrix # <<<<<<<<<<<<<< + * if process and self.squeeze_me: + * return np.array([]) + */ + __pyx_t_1 = (__pyx_v_byte_count == 0); + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":586 + * 
raise TypeError('Expecting matrix here') + * if byte_count == 0: # empty matrix + * if process and self.squeeze_me: # <<<<<<<<<<<<<< + * return np.array([]) + * else: + */ + if (__pyx_v_process) { + __pyx_t_1 = __pyx_v_self->squeeze_me; + } else { + __pyx_t_1 = __pyx_v_process; + } + if (__pyx_t_1) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":587 + * if byte_count == 0: # empty matrix + * if process and self.squeeze_me: + * return np.array([]) # <<<<<<<<<<<<<< + * else: + * return np.array([[]]) + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__array); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_t_3)); + __Pyx_GIVEREF(((PyObject *)__pyx_t_3)); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + goto __pyx_L5; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":589 + * return np.array([]) + * else: + * return np.array([[]]) # <<<<<<<<<<<<<< + * header = self.read_header() + * return self.array_from_header(header, process) + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__array); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + PyList_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_t_3)); + __Pyx_GIVEREF(((PyObject *)__pyx_t_3)); + __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_t_2)); + __Pyx_GIVEREF(((PyObject *)__pyx_t_2)); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + } + __pyx_L5:; + goto __pyx_L4; + } + __pyx_L4:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":590 + * else: + * return np.array([[]]) + * header = self.read_header() # <<<<<<<<<<<<<< + * return self.array_from_header(header, process) + * + */ + __pyx_t_2 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_header(__pyx_v_self, 0)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 590; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(((PyObject *)__pyx_v_header)); + __pyx_v_header = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_t_2); + __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":591 + * return np.array([[]]) + * header = self.read_header() + * return self.array_from_header(header, process) # <<<<<<<<<<<<<< + * + * cpdef array_from_header(self, VarHeader5 header, int process=1): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_5.__pyx_n = 1; + __pyx_t_5.process = __pyx_v_process; + __pyx_t_2 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->array_from_header(__pyx_v_self, __pyx_v_header, 0, &__pyx_t_5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_mi_matrix"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":593 + * return self.array_from_header(header, process) + * + * cpdef array_from_header(self, VarHeader5 header, int process=1): # <<<<<<<<<<<<<< + * ''' Read array of any class, given matrix `header` + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header *__pyx_optional_args) { + int __pyx_v_process = ((int)1); + PyObject *__pyx_v_arr; + PyArray_Descr *__pyx_v_mat_dtype; + int __pyx_v_mc; + PyObject *__pyx_v_classname; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + PyObject *__pyx_t_6; + __Pyx_RefNannySetupContext("array_from_header"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_process = __pyx_optional_args->process; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF((PyObject *)__pyx_v_header); + 
__pyx_v_arr = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_mat_dtype = ((PyArray_Descr *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_classname = Py_None; __Pyx_INCREF(Py_None); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__array_from_header); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header)) { + __Pyx_XDECREF(__pyx_r); + __pyx_t_2 = PyInt_FromLong(__pyx_v_process); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_v_header)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_header)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_header)); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":611 + * object arr + * cnp.dtype mat_dtype + * cdef int mc = header.mclass # <<<<<<<<<<<<<< + * if (mc == mxDOUBLE_CLASS + * or mc == mxSINGLE_CLASS + */ + __pyx_v_mc = __pyx_v_header->mclass; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":612 + * cnp.dtype mat_dtype + * cdef int mc = header.mclass + * if (mc == mxDOUBLE_CLASS # <<<<<<<<<<<<<< + * or mc == mxSINGLE_CLASS + * or mc == mxINT8_CLASS + */ + switch (__pyx_v_mc) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":613 + * cdef int mc = header.mclass + * if (mc == mxDOUBLE_CLASS + * or mc == mxSINGLE_CLASS # <<<<<<<<<<<<<< + * or mc == mxINT8_CLASS + * or mc == mxUINT8_CLASS + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxDOUBLE_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":614 + * if (mc == mxDOUBLE_CLASS + * or mc == mxSINGLE_CLASS + * or mc == mxINT8_CLASS # <<<<<<<<<<<<<< + * or mc == mxUINT8_CLASS + * or mc == mxINT16_CLASS + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxSINGLE_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":615 + * or mc == mxSINGLE_CLASS + * or mc == mxINT8_CLASS + * or mc == mxUINT8_CLASS # <<<<<<<<<<<<<< + * or mc == mxINT16_CLASS + * or mc == mxUINT16_CLASS + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT8_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":616 + * or mc == mxINT8_CLASS + * or mc == mxUINT8_CLASS + * or mc == mxINT16_CLASS # <<<<<<<<<<<<<< + * or mc == mxUINT16_CLASS + * or mc == mxINT32_CLASS + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT8_CLASS: + + /* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":617 + * or mc == mxUINT8_CLASS + * or mc == mxINT16_CLASS + * or mc == mxUINT16_CLASS # <<<<<<<<<<<<<< + * or mc == mxINT32_CLASS + * or mc == mxUINT32_CLASS + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT16_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":618 + * or mc == mxINT16_CLASS + * or mc == mxUINT16_CLASS + * or mc == mxINT32_CLASS # <<<<<<<<<<<<<< + * or mc == mxUINT32_CLASS + * or mc == mxINT64_CLASS + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT16_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":619 + * or mc == mxUINT16_CLASS + * or mc == mxINT32_CLASS + * or mc == mxUINT32_CLASS # <<<<<<<<<<<<<< + * or mc == mxINT64_CLASS + * or mc == mxUINT64_CLASS): # numeric matrix + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT32_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":620 + * or mc == mxINT32_CLASS + * or mc == mxUINT32_CLASS + * or mc == mxINT64_CLASS # <<<<<<<<<<<<<< + * or mc == mxUINT64_CLASS): # numeric matrix + * arr = self.read_real_complex(header) + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT32_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":621 + * or mc == mxUINT32_CLASS + * or mc == mxINT64_CLASS + * or mc == mxUINT64_CLASS): # numeric matrix # <<<<<<<<<<<<<< + * arr = self.read_real_complex(header) + * if process and self.mat_dtype: # might need to recast + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxINT64_CLASS: + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxUINT64_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":622 + * or mc == mxINT64_CLASS + * or mc == mxUINT64_CLASS): # numeric matrix + * arr = self.read_real_complex(header) # <<<<<<<<<<<<<< + * if process and self.mat_dtype: # might need to recast + * if header.is_logical: + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_real_complex(__pyx_v_self, __pyx_v_header, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 622; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":623 + * or mc == mxUINT64_CLASS): # numeric matrix + * arr = self.read_real_complex(header) + * if process and self.mat_dtype: # might need to recast # <<<<<<<<<<<<<< + * if header.is_logical: + * mat_dtype = self.bool_dtype + */ + if (__pyx_v_process) { + __pyx_t_4 = __pyx_v_self->mat_dtype; + } else { + __pyx_t_4 = __pyx_v_process; + } + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":624 + * arr = self.read_real_complex(header) + * if process and self.mat_dtype: # might need to recast + * if header.is_logical: # <<<<<<<<<<<<<< + * mat_dtype = self.bool_dtype + * else: + */ + __pyx_t_5 = __pyx_v_header->is_logical; + if (__pyx_t_5) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":625 + * if process and self.mat_dtype: # might need to recast + * if header.is_logical: + * mat_dtype = self.bool_dtype # <<<<<<<<<<<<<< + * else: + * mat_dtype = self.class_dtypes[mc] + */ + __Pyx_INCREF(((PyObject *)__pyx_v_self->bool_dtype)); + __Pyx_DECREF(((PyObject *)__pyx_v_mat_dtype)); + __pyx_v_mat_dtype = __pyx_v_self->bool_dtype; + goto __pyx_L4; + } + /*else*/ { + + /* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":627 + * mat_dtype = self.bool_dtype + * else: + * mat_dtype = self.class_dtypes[mc] # <<<<<<<<<<<<<< + * arr = arr.astype(mat_dtype) + * elif mc == mxSPARSE_CLASS: + */ + __pyx_t_6 = (__pyx_v_self->class_dtypes[__pyx_v_mc]); + if (!(likely(((((PyObject *)__pyx_t_6)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_t_6), __pyx_ptype_5numpy_dtype))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 627; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_INCREF(((PyObject *)__pyx_t_6)); + __Pyx_DECREF(((PyObject *)__pyx_v_mat_dtype)); + __pyx_v_mat_dtype = ((PyArray_Descr *)((PyObject *)__pyx_t_6)); + } + __pyx_L4:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":628 + * else: + * mat_dtype = self.class_dtypes[mc] + * arr = arr.astype(mat_dtype) # <<<<<<<<<<<<<< + * elif mc == mxSPARSE_CLASS: + * arr = self.read_sparse(header) + */ + __pyx_t_1 = PyObject_GetAttr(__pyx_v_arr, __pyx_n_s__astype); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_v_mat_dtype)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_mat_dtype)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_mat_dtype)); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 628; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L3; + } + __pyx_L3:; + break; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":629 + * mat_dtype = self.class_dtypes[mc] + * arr = arr.astype(mat_dtype) + * elif mc == mxSPARSE_CLASS: # <<<<<<<<<<<<<< + * arr = self.read_sparse(header) + * # no current processing makes sense for sparse + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxSPARSE_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":630 + * arr = arr.astype(mat_dtype) + * elif mc == mxSPARSE_CLASS: + * arr = self.read_sparse(header) # <<<<<<<<<<<<<< + * # no current processing makes sense for sparse + * return arr + */ + __pyx_t_3 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_sparse(__pyx_v_self, __pyx_v_header); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 630; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":632 + * arr = self.read_sparse(header) + * # no current processing makes sense for sparse + * return arr # <<<<<<<<<<<<<< + * elif mc == mxCHAR_CLASS: + * arr = self.read_char(header) + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_arr); + __pyx_r = __pyx_v_arr; + goto __pyx_L0; + break; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":633 + * # no current processing makes sense for sparse + * return arr + * elif mc == mxCHAR_CLASS: # <<<<<<<<<<<<<< + * arr = self.read_char(header) + * if process and self.chars_as_strings: + */ + case 
__pyx_e_5scipy_2io_6matlab_10mio5_utils_mxCHAR_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":634 + * return arr + * elif mc == mxCHAR_CLASS: + * arr = self.read_char(header) # <<<<<<<<<<<<<< + * if process and self.chars_as_strings: + * arr = chars_to_strings(arr) + */ + __pyx_t_3 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_char(__pyx_v_self, __pyx_v_header, 0)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 634; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":635 + * elif mc == mxCHAR_CLASS: + * arr = self.read_char(header) + * if process and self.chars_as_strings: # <<<<<<<<<<<<<< + * arr = chars_to_strings(arr) + * elif mc == mxCELL_CLASS: + */ + if (__pyx_v_process) { + __pyx_t_4 = __pyx_v_self->chars_as_strings; + } else { + __pyx_t_4 = __pyx_v_process; + } + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":636 + * arr = self.read_char(header) + * if process and self.chars_as_strings: + * arr = chars_to_strings(arr) # <<<<<<<<<<<<<< + * elif mc == mxCELL_CLASS: + * arr = self.read_cells(header) + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__chars_to_strings); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 636; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 636; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_arr); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_arr); + __Pyx_GIVEREF(__pyx_v_arr); + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 636; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L5; + } + __pyx_L5:; + break; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":637 + * if process and self.chars_as_strings: + * arr = chars_to_strings(arr) + * elif mc == mxCELL_CLASS: # <<<<<<<<<<<<<< + * arr = self.read_cells(header) + * elif mc == mxSTRUCT_CLASS: + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxCELL_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":638 + * arr = chars_to_strings(arr) + * elif mc == mxCELL_CLASS: + * arr = self.read_cells(header) # <<<<<<<<<<<<<< + * elif mc == mxSTRUCT_CLASS: + * arr = self.read_struct(header) + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_cells(__pyx_v_self, __pyx_v_header, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 638; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_1; + __pyx_t_1 = 0; + break; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":639 + * elif mc == mxCELL_CLASS: + * arr = self.read_cells(header) + * elif mc == mxSTRUCT_CLASS: # <<<<<<<<<<<<<< + * arr = self.read_struct(header) + * elif mc == mxOBJECT_CLASS: # like structs, but with classname + */ + case 
__pyx_e_5scipy_2io_6matlab_10mio5_utils_mxSTRUCT_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":640 + * arr = self.read_cells(header) + * elif mc == mxSTRUCT_CLASS: + * arr = self.read_struct(header) # <<<<<<<<<<<<<< + * elif mc == mxOBJECT_CLASS: # like structs, but with classname + * classname = self.read_int8_string() + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_struct(__pyx_v_self, __pyx_v_header, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 640; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_1; + __pyx_t_1 = 0; + break; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":641 + * elif mc == mxSTRUCT_CLASS: + * arr = self.read_struct(header) + * elif mc == mxOBJECT_CLASS: # like structs, but with classname # <<<<<<<<<<<<<< + * classname = self.read_int8_string() + * arr = self.read_struct(header) + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxOBJECT_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":642 + * arr = self.read_struct(header) + * elif mc == mxOBJECT_CLASS: # like structs, but with classname + * classname = self.read_int8_string() # <<<<<<<<<<<<<< + * arr = self.read_struct(header) + * arr = mio5p.MatlabObject(arr, classname) + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_int8_string(__pyx_v_self); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 642; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_classname); + __pyx_v_classname = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":643 + * elif mc == mxOBJECT_CLASS: # like structs, but with classname + * classname = self.read_int8_string() + * arr = self.read_struct(header) # <<<<<<<<<<<<<< + * arr = mio5p.MatlabObject(arr, classname) + * elif mc == mxFUNCTION_CLASS: # just a matrix of struct type + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_struct(__pyx_v_self, __pyx_v_header, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 643; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":644 + * classname = self.read_int8_string() + * arr = self.read_struct(header) + * arr = mio5p.MatlabObject(arr, classname) # <<<<<<<<<<<<<< + * elif mc == mxFUNCTION_CLASS: # just a matrix of struct type + * arr = self.read_mi_matrix() + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__mio5p); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 644; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__MatlabObject); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 644; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 644; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(__pyx_v_arr); + 
PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_arr); + __Pyx_GIVEREF(__pyx_v_arr); + __Pyx_INCREF(__pyx_v_classname); + PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_classname); + __Pyx_GIVEREF(__pyx_v_classname); + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 644; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_3; + __pyx_t_3 = 0; + break; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":645 + * arr = self.read_struct(header) + * arr = mio5p.MatlabObject(arr, classname) + * elif mc == mxFUNCTION_CLASS: # just a matrix of struct type # <<<<<<<<<<<<<< + * arr = self.read_mi_matrix() + * arr = mio5p.MatlabFunction(arr) + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxFUNCTION_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":646 + * arr = mio5p.MatlabObject(arr, classname) + * elif mc == mxFUNCTION_CLASS: # just a matrix of struct type + * arr = self.read_mi_matrix() # <<<<<<<<<<<<<< + * arr = mio5p.MatlabFunction(arr) + * # to make them more re-writeable - don't squeeze + */ + __pyx_t_3 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_mi_matrix(__pyx_v_self, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 646; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":647 + * elif mc == mxFUNCTION_CLASS: # just a matrix of struct type + * arr = self.read_mi_matrix() + * arr = mio5p.MatlabFunction(arr) # <<<<<<<<<<<<<< + * # to make them more re-writeable - don't squeeze + * return arr + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__mio5p); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 647; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__MatlabFunction); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 647; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 647; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_arr); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_arr); + __Pyx_GIVEREF(__pyx_v_arr); + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 647; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":649 + * arr = mio5p.MatlabFunction(arr) + * # to make them more re-writeable - don't squeeze + * return arr # <<<<<<<<<<<<<< + * elif mc == mxOPAQUE_CLASS: + * arr = self.read_opaque(header) + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_arr); + __pyx_r = __pyx_v_arr; + goto __pyx_L0; + break; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":650 + * # to make them more re-writeable - don't 
squeeze + * return arr + * elif mc == mxOPAQUE_CLASS: # <<<<<<<<<<<<<< + * arr = self.read_opaque(header) + * arr = mio5p.MatlabOpaque(arr) + */ + case __pyx_e_5scipy_2io_6matlab_10mio5_utils_mxOPAQUE_CLASS: + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":651 + * return arr + * elif mc == mxOPAQUE_CLASS: + * arr = self.read_opaque(header) # <<<<<<<<<<<<<< + * arr = mio5p.MatlabOpaque(arr) + * # to make them more re-writeable - don't squeeze + */ + __pyx_t_2 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_opaque(__pyx_v_self, __pyx_v_header, 0)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 651; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":652 + * elif mc == mxOPAQUE_CLASS: + * arr = self.read_opaque(header) + * arr = mio5p.MatlabOpaque(arr) # <<<<<<<<<<<<<< + * # to make them more re-writeable - don't squeeze + * return arr + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__mio5p); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 652; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__MatlabOpaque); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 652; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 652; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_arr); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_arr); + __Pyx_GIVEREF(__pyx_v_arr); + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 652; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_arr); + __pyx_v_arr = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":654 + * arr = mio5p.MatlabOpaque(arr) + * # to make them more re-writeable - don't squeeze + * return arr # <<<<<<<<<<<<<< + * if process and self.squeeze_me: + * return squeeze_element(arr) + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_arr); + __pyx_r = __pyx_v_arr; + goto __pyx_L0; + break; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":655 + * # to make them more re-writeable - don't squeeze + * return arr + * if process and self.squeeze_me: # <<<<<<<<<<<<<< + * return squeeze_element(arr) + * return arr + */ + if (__pyx_v_process) { + __pyx_t_4 = __pyx_v_self->squeeze_me; + } else { + __pyx_t_4 = __pyx_v_process; + } + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":656 + * return arr + * if process and self.squeeze_me: + * return squeeze_element(arr) # <<<<<<<<<<<<<< + * return arr + * + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__squeeze_element); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 656; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 656; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_arr); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_arr); + __Pyx_GIVEREF(__pyx_v_arr); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 656; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + goto __pyx_L6; + } + __pyx_L6:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":657 + * if process and self.squeeze_me: + * return squeeze_element(arr) + * return arr # <<<<<<<<<<<<<< + * + * cpdef cnp.ndarray read_real_complex(self, VarHeader5 header): + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_arr); + __pyx_r = __pyx_v_arr; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.array_from_header"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_arr); + __Pyx_DECREF((PyObject *)__pyx_v_mat_dtype); + __Pyx_DECREF(__pyx_v_classname); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":593 + * return self.array_from_header(header, process) + * + * cpdef array_from_header(self, VarHeader5 header, int process=1): # <<<<<<<<<<<<<< + * ''' Read array of any class, given matrix `header` + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header[] = " Read array of any class, given matrix `header`\n\n Parameters\n ----------\n header : VarHeader5\n array header object\n process : int, optional\n If not zero, apply post-processing on returned array\n \n Returns\n -------\n arr : array or sparse array\n read array\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header = 0; + int __pyx_v_process; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header __pyx_t_2; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__header,&__pyx_n_s__process,0}; + __Pyx_RefNannySetupContext("array_from_header"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[2] = {0,0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__header); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (kw_args > 1) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__process); + if (unlikely(value)) { values[1] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if 
(unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "array_from_header") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_header = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)values[0]); + if (values[1]) { + __pyx_v_process = __Pyx_PyInt_AsInt(values[1]); if (unlikely((__pyx_v_process == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } else { + __pyx_v_process = ((int)1); + } + } else { + __pyx_v_process = ((int)1); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: __pyx_v_process = __Pyx_PyInt_AsInt(PyTuple_GET_ITEM(__pyx_args, 1)); if (unlikely((__pyx_v_process == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + case 1: __pyx_v_header = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)PyTuple_GET_ITEM(__pyx_args, 0)); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("array_from_header", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.array_from_header"); + return NULL; + __pyx_L4_argument_unpacking_done:; + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_header), __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5, 1, "header", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_XDECREF(__pyx_r); + __pyx_t_2.__pyx_n = 1; + __pyx_t_2.process = __pyx_v_process; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->array_from_header(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), __pyx_v_header, 1, &__pyx_t_2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.array_from_header"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":659 + * return arr + * + * cpdef cnp.ndarray read_real_complex(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read real / complex matrices from stream ''' + * cdef: + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static PyArrayObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header, int __pyx_skip_dispatch) { + PyArrayObject *__pyx_v_res; + PyArrayObject *__pyx_v_res_j; + PyArrayObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + struct 
__pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric __pyx_t_5; + __Pyx_RefNannySetupContext("read_real_complex"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF((PyObject *)__pyx_v_header); + __pyx_v_res = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_res_j = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__read_real_complex); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 659; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex)) { + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 659; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_v_header)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_header)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_header)); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 659; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 659; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":663 + * cdef: + * cnp.ndarray res, res_j + * if header.is_complex: # <<<<<<<<<<<<<< + * # avoid array copy to save memory + * res = self.read_numeric(False) + */ + __pyx_t_4 = __pyx_v_header->is_complex; + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":665 + * if header.is_complex: + * # avoid array copy to save memory + * res = self.read_numeric(False) # <<<<<<<<<<<<<< + * res_j = self.read_numeric(False) + * res = res + (res_j * 1j) + */ + __pyx_t_5.__pyx_n = 1; + __pyx_t_5.copy = 0; + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, &__pyx_t_5)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 665; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_res)); + __pyx_v_res = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":666 + * # avoid array copy to save memory + * res = self.read_numeric(False) + * res_j = self.read_numeric(False) # <<<<<<<<<<<<<< + * res = res + (res_j * 1j) + * else: + */ + __pyx_t_5.__pyx_n = 1; + __pyx_t_5.copy = 0; + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, &__pyx_t_5)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 666; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_res_j)); + __pyx_v_res_j = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":667 + * res = self.read_numeric(False) + * res_j = self.read_numeric(False) + * res = res + (res_j * 1j) # <<<<<<<<<<<<<< + * else: + * res = self.read_numeric() + */ + __pyx_t_1 = PyComplex_FromDoubles(0.0, 1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 667; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Multiply(((PyObject *)__pyx_v_res_j), __pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 667; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyNumber_Add(((PyObject *)__pyx_v_res), __pyx_t_3); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 667; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 667; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_res)); + __pyx_v_res = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":669 + * res = res + (res_j * 1j) + * else: + * res = self.read_numeric() # <<<<<<<<<<<<<< + * return res.reshape(header.dims[::-1]).T + * + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, NULL)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 669; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_res)); + __pyx_v_res = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":670 + * else: + * res = self.read_numeric() + * return res.reshape(header.dims[::-1]).T # <<<<<<<<<<<<<< + * + * cdef object read_sparse(self, VarHeader5 header): + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_res), __pyx_n_s__reshape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 670; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PySlice_New(Py_None, Py_None, __pyx_int_neg_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 670; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyObject_GetItem(__pyx_v_header->dims, __pyx_t_3); if (!__pyx_t_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 670; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 670; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 670; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); 
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__T); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 670; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 670; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_real_complex"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_res); + __Pyx_DECREF((PyObject *)__pyx_v_res_j); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":659 + * return arr + * + * cpdef cnp.ndarray read_real_complex(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read real / complex matrices from stream ''' + * cdef: + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex[] = " Read real / complex matrices from stream "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex(PyObject *__pyx_v_self, PyObject *__pyx_v_header) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("read_real_complex"); + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_header), __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5, 1, "header", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 659; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->read_real_complex(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_v_header), 1)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 659; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_real_complex"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":672 + * return res.reshape(header.dims[::-1]).T + * + * cdef object read_sparse(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read sparse matrices from stream ''' + * cdef cnp.ndarray rowind, indptr, data, data_j + */ + +static PyObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_sparse(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, 
struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header) { + PyArrayObject *__pyx_v_rowind; + PyArrayObject *__pyx_v_indptr; + PyArrayObject *__pyx_v_data; + PyArrayObject *__pyx_v_data_j; + size_t __pyx_v_M; + size_t __pyx_v_N; + size_t __pyx_v_nnz; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + int __pyx_t_2; + struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + size_t __pyx_t_5; + size_t __pyx_t_6; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + PyObject *__pyx_t_9 = NULL; + PyObject *__pyx_t_10 = NULL; + __Pyx_RefNannySetupContext("read_sparse"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF((PyObject *)__pyx_v_header); + __pyx_v_rowind = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_indptr = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_data = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_data_j = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":676 + * cdef cnp.ndarray rowind, indptr, data, data_j + * cdef size_t M, N, nnz + * rowind = self.read_numeric() # <<<<<<<<<<<<<< + * indptr = self.read_numeric() + * if header.is_complex: + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, NULL)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 676; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_rowind)); + __pyx_v_rowind = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":677 + * cdef size_t M, N, nnz + * rowind = self.read_numeric() + * indptr = self.read_numeric() # <<<<<<<<<<<<<< + * if header.is_complex: + * # avoid array copy to save memory + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, NULL)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 677; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_indptr)); + __pyx_v_indptr = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":678 + * rowind = self.read_numeric() + * indptr = self.read_numeric() + * if header.is_complex: # <<<<<<<<<<<<<< + * # avoid array copy to save memory + * data = self.read_numeric(False) + */ + __pyx_t_2 = __pyx_v_header->is_complex; + if (__pyx_t_2) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":680 + * if header.is_complex: + * # avoid array copy to save memory + * data = self.read_numeric(False) # <<<<<<<<<<<<<< + * data_j = self.read_numeric(False) + * data = data + (data_j * 1j) + */ + __pyx_t_3.__pyx_n = 1; + __pyx_t_3.copy = 0; + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, &__pyx_t_3)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 680; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_data)); + __pyx_v_data = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":681 + * # avoid 
array copy to save memory + * data = self.read_numeric(False) + * data_j = self.read_numeric(False) # <<<<<<<<<<<<<< + * data = data + (data_j * 1j) + * else: + */ + __pyx_t_3.__pyx_n = 1; + __pyx_t_3.copy = 0; + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, &__pyx_t_3)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 681; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_data_j)); + __pyx_v_data_j = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":682 + * data = self.read_numeric(False) + * data_j = self.read_numeric(False) + * data = data + (data_j * 1j) # <<<<<<<<<<<<<< + * else: + * data = self.read_numeric() + */ + __pyx_t_1 = PyComplex_FromDoubles(0.0, 1.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 682; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = PyNumber_Multiply(((PyObject *)__pyx_v_data_j), __pyx_t_1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 682; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyNumber_Add(((PyObject *)__pyx_v_data), __pyx_t_4); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 682; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 682; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_data)); + __pyx_v_data = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":684 + * data = data + (data_j * 1j) + * else: + * data = self.read_numeric() # <<<<<<<<<<<<<< + * ''' From the matlab (TM) API documentation, last found here: + * http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_external/ + */ + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_numeric(__pyx_v_self, 0, NULL)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 684; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_v_data)); + __pyx_v_data = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":697 + * to each rowind + * ''' + * M,N = header.dims # <<<<<<<<<<<<<< + * indptr = indptr[:N+1] + * nnz = indptr[-1] + */ + if (PyTuple_CheckExact(__pyx_v_header->dims) && likely(PyTuple_GET_SIZE(__pyx_v_header->dims) == 2)) { + PyObject* tuple = __pyx_v_header->dims; + __pyx_t_1 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_1); + __pyx_t_5 = __Pyx_PyInt_AsSize_t(__pyx_t_1); if (unlikely((__pyx_t_5 == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_4 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_4); + __pyx_t_6 = __Pyx_PyInt_AsSize_t(__pyx_t_4); if (unlikely((__pyx_t_6 == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_v_M = __pyx_t_5; + __pyx_v_N = __pyx_t_6; + } else { + __pyx_t_7 = PyObject_GetIter(__pyx_v_header->dims); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __pyx_t_1 = __Pyx_UnpackItem(__pyx_t_7, 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_6 = __Pyx_PyInt_AsSize_t(__pyx_t_1); if (unlikely((__pyx_t_6 == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_4 = __Pyx_UnpackItem(__pyx_t_7, 1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_5 = __Pyx_PyInt_AsSize_t(__pyx_t_4); if (unlikely((__pyx_t_5 == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (__Pyx_EndUnpack(__pyx_t_7) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 697; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; + __pyx_v_M = __pyx_t_6; + __pyx_v_N = __pyx_t_5; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":698 + * ''' + * M,N = header.dims + * indptr = indptr[:N+1] # <<<<<<<<<<<<<< + * nnz = indptr[-1] + * rowind = rowind[:nnz] + */ + __pyx_t_4 = PySequence_GetSlice(((PyObject *)__pyx_v_indptr), 0, (__pyx_v_N + 1)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 698; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 698; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_indptr)); + __pyx_v_indptr = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":699 + * M,N = header.dims + * indptr = indptr[:N+1] + * nnz = indptr[-1] # <<<<<<<<<<<<<< + * rowind = rowind[:nnz] + * data = data[:nnz] + */ + __pyx_t_4 = __Pyx_GetItemInt(((PyObject *)__pyx_v_indptr), -1, sizeof(long), PyInt_FromLong); if (!__pyx_t_4) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 699; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_5 = __Pyx_PyInt_AsSize_t(__pyx_t_4); if (unlikely((__pyx_t_5 == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 699; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_v_nnz = __pyx_t_5; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":700 + * indptr = indptr[:N+1] + * nnz = indptr[-1] + * rowind = rowind[:nnz] # <<<<<<<<<<<<<< + * data = data[:nnz] + * return scipy.sparse.csc_matrix( + */ + __pyx_t_4 = PySequence_GetSlice(((PyObject *)__pyx_v_rowind), 0, __pyx_v_nnz); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 700; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 700; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_rowind)); + __pyx_v_rowind = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":701 + * nnz = indptr[-1] + * rowind = rowind[:nnz] + * data = data[:nnz] # <<<<<<<<<<<<<< + * return scipy.sparse.csc_matrix( + * (data,rowind,indptr), + */ + __pyx_t_4 = PySequence_GetSlice(((PyObject *)__pyx_v_data), 0, __pyx_v_nnz); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 701; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_data)); + __pyx_v_data = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":702 + * rowind = rowind[:nnz] + * data = data[:nnz] + * return scipy.sparse.csc_matrix( # <<<<<<<<<<<<<< + * (data,rowind,indptr), + * shape=(M,N)) + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__scipy); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__sparse); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__csc_matrix); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":703 + * data = data[:nnz] + * return scipy.sparse.csc_matrix( + * (data,rowind,indptr), # <<<<<<<<<<<<<< + * shape=(M,N)) + * + */ + __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 703; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_v_data)); + PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_v_data)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_data)); + __Pyx_INCREF(((PyObject *)__pyx_v_rowind)); + PyTuple_SET_ITEM(__pyx_t_1, 1, ((PyObject *)__pyx_v_rowind)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_rowind)); + __Pyx_INCREF(((PyObject *)__pyx_v_indptr)); + PyTuple_SET_ITEM(__pyx_t_1, 2, ((PyObject *)__pyx_v_indptr)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_indptr)); + __pyx_t_7 = PyTuple_New(1); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":702 + * rowind = rowind[:nnz] + * data = data[:nnz] + * return scipy.sparse.csc_matrix( # <<<<<<<<<<<<<< + * (data,rowind,indptr), + * shape=(M,N)) + */ + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":704 
+ * return scipy.sparse.csc_matrix( + * (data,rowind,indptr), + * shape=(M,N)) # <<<<<<<<<<<<<< + * + * cpdef cnp.ndarray read_char(self, VarHeader5 header): + */ + __pyx_t_8 = __Pyx_PyInt_FromSize_t(__pyx_v_M); if (unlikely(!__pyx_t_8)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 704; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_8); + __pyx_t_9 = __Pyx_PyInt_FromSize_t(__pyx_v_N); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 704; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_9); + __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 704; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_10); + PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_8); + __Pyx_GIVEREF(__pyx_t_8); + PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_9); + __Pyx_GIVEREF(__pyx_t_9); + __pyx_t_8 = 0; + __pyx_t_9 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__shape), __pyx_t_10) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; + __pyx_t_10 = PyEval_CallObjectWithKeywords(__pyx_t_4, __pyx_t_7, ((PyObject *)__pyx_t_1)); if (unlikely(!__pyx_t_10)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 702; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_10); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; + __pyx_r = __pyx_t_10; + __pyx_t_10 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_XDECREF(__pyx_t_8); + __Pyx_XDECREF(__pyx_t_9); + __Pyx_XDECREF(__pyx_t_10); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_sparse"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_rowind); + __Pyx_DECREF((PyObject *)__pyx_v_indptr); + __Pyx_DECREF((PyObject *)__pyx_v_data); + __Pyx_DECREF((PyObject *)__pyx_v_data_j); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":706 + * shape=(M,N)) + * + * cpdef cnp.ndarray read_char(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read char matrices from stream as arrays + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static PyArrayObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header, int __pyx_skip_dispatch) { + __pyx_t_5numpy_uint32_t __pyx_v_mdtype; + __pyx_t_5numpy_uint32_t __pyx_v_byte_count; + char *__pyx_v_data_ptr; + PyObject *__pyx_v_data; + PyObject *__pyx_v_codec; + PyArrayObject *__pyx_v_arr; + size_t __pyx_v_length; + PyArray_Descr *__pyx_v_dt = 0; + PyObject *__pyx_v_uc_str; + PyArrayObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element __pyx_t_4; + PyObject *__pyx_t_5; + int __pyx_t_6; + PyObject *__pyx_t_7 = NULL; + int __pyx_t_8; + int __pyx_t_9; + 
__Pyx_RefNannySetupContext("read_char"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF((PyObject *)__pyx_v_header); + __pyx_v_data = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_codec = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_arr = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_uc_str = Py_None; __Pyx_INCREF(Py_None); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__read_char); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 706; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char)) { + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 706; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_v_header)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_header)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_header)); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 706; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 706; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":732 + * object data, res, codec + * cnp.ndarray arr + * cdef size_t length = self.size_from_header(header) # <<<<<<<<<<<<<< + * data = self.read_element( + * &mdtype, &byte_count, &data_ptr, True) + */ + __pyx_v_length = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->size_from_header(__pyx_v_self, __pyx_v_header); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":734 + * cdef size_t length = self.size_from_header(header) + * data = self.read_element( + * &mdtype, &byte_count, &data_ptr, True) # <<<<<<<<<<<<<< + * # Character data can be of apparently numerical types, + * # specifically np.uint8, np.int8, np.uint16. 
np.unit16 can have + */ + __pyx_t_4.__pyx_n = 1; + __pyx_t_4.copy = 1; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_element(__pyx_v_self, (&__pyx_v_mdtype), (&__pyx_v_byte_count), ((void **)(&__pyx_v_data_ptr)), &__pyx_t_4); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 733; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_data); + __pyx_v_data = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":739 + * # a length 1 type encoding, like ascii, or length 2 type + * # encoding + * cdef cnp.dtype dt = self.dtypes[mdtype] # <<<<<<<<<<<<<< + * if mdtype == miUINT16: + * codec = self.uint16_codec + */ + __pyx_t_5 = (__pyx_v_self->dtypes[__pyx_v_mdtype]); + __Pyx_INCREF(((PyObject *)((PyArray_Descr *)__pyx_t_5))); + __pyx_v_dt = ((PyArray_Descr *)__pyx_t_5); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":740 + * # encoding + * cdef cnp.dtype dt = self.dtypes[mdtype] + * if mdtype == miUINT16: # <<<<<<<<<<<<<< + * codec = self.uint16_codec + * if self.codecs['uint16_len'] == 1: # need LSBs only + */ + __pyx_t_6 = (__pyx_v_mdtype == __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUINT16); + if (__pyx_t_6) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":741 + * cdef cnp.dtype dt = self.dtypes[mdtype] + * if mdtype == miUINT16: + * codec = self.uint16_codec # <<<<<<<<<<<<<< + * if self.codecs['uint16_len'] == 1: # need LSBs only + * arr = np.ndarray(shape=(length,), + */ + __Pyx_INCREF(__pyx_v_self->uint16_codec); + __Pyx_DECREF(__pyx_v_codec); + __pyx_v_codec = __pyx_v_self->uint16_codec; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":742 + * if mdtype == miUINT16: + * codec = self.uint16_codec + * if self.codecs['uint16_len'] == 1: # need LSBs only # <<<<<<<<<<<<<< + * arr = np.ndarray(shape=(length,), + * dtype=dt, + */ + __pyx_t_1 = PyObject_GetItem(__pyx_v_self->codecs, ((PyObject *)__pyx_n_s__uint16_len)); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_RichCompare(__pyx_t_1, __pyx_int_1, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 742; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":743 + * codec = self.uint16_codec + * if self.codecs['uint16_len'] == 1: # need LSBs only + * arr = np.ndarray(shape=(length,), # <<<<<<<<<<<<<< + * dtype=dt, + * buffer=data) + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__ndarray); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = 
__LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_v_length); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_7 = PyTuple_New(1); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__shape), __pyx_t_7) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":744 + * if self.codecs['uint16_len'] == 1: # need LSBs only + * arr = np.ndarray(shape=(length,), + * dtype=dt, # <<<<<<<<<<<<<< + * buffer=data) + * data = arr.astype(np.uint8).tostring() + */ + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__dtype), ((PyObject *)__pyx_v_dt)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":745 + * arr = np.ndarray(shape=(length,), + * dtype=dt, + * buffer=data) # <<<<<<<<<<<<<< + * data = arr.astype(np.uint8).tostring() + * elif mdtype == miINT8 or mdtype == miUINT8: + */ + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__buffer), __pyx_v_data) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_7 = PyEval_CallObjectWithKeywords(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + if (!(likely(((__pyx_t_7) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_7, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 743; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_arr)); + __pyx_v_arr = ((PyArrayObject *)__pyx_t_7); + __pyx_t_7 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":746 + * dtype=dt, + * buffer=data) + * data = arr.astype(np.uint8).tostring() # <<<<<<<<<<<<<< + * elif mdtype == miINT8 or mdtype == miUINT8: + * codec = 'ascii' + */ + __pyx_t_7 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__astype); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__uint8); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = 
PyObject_Call(__pyx_t_7, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__tostring); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_3, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 746; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_data); + __pyx_v_data = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L4; + } + __pyx_L4:; + goto __pyx_L3; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":747 + * buffer=data) + * data = arr.astype(np.uint8).tostring() + * elif mdtype == miINT8 or mdtype == miUINT8: # <<<<<<<<<<<<<< + * codec = 'ascii' + * elif mdtype in self.codecs: # encoded char data + */ + __pyx_t_6 = (__pyx_v_mdtype == __pyx_e_5scipy_2io_6matlab_10mio5_utils_miINT8); + if (!__pyx_t_6) { + __pyx_t_8 = (__pyx_v_mdtype == __pyx_e_5scipy_2io_6matlab_10mio5_utils_miUINT8); + __pyx_t_9 = __pyx_t_8; + } else { + __pyx_t_9 = __pyx_t_6; + } + if (__pyx_t_9) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":748 + * data = arr.astype(np.uint8).tostring() + * elif mdtype == miINT8 or mdtype == miUINT8: + * codec = 'ascii' # <<<<<<<<<<<<<< + * elif mdtype in self.codecs: # encoded char data + * codec = self.codecs[mdtype] + */ + __Pyx_INCREF(((PyObject *)__pyx_n_s__ascii)); + __Pyx_DECREF(__pyx_v_codec); + __pyx_v_codec = ((PyObject *)__pyx_n_s__ascii); + goto __pyx_L3; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":749 + * elif mdtype == miINT8 or mdtype == miUINT8: + * codec = 'ascii' + * elif mdtype in self.codecs: # encoded char data # <<<<<<<<<<<<<< + * codec = self.codecs[mdtype] + * if not codec: + */ + __pyx_t_1 = __Pyx_PyInt_to_py_npy_uint32(__pyx_v_mdtype); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 749; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_9 = ((PySequence_Contains(__pyx_v_self->codecs, __pyx_t_1))); if (unlikely(__pyx_t_9 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 749; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (__pyx_t_9) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":750 + * codec = 'ascii' + * elif mdtype in self.codecs: # encoded char data + * codec = self.codecs[mdtype] # <<<<<<<<<<<<<< + * if not codec: + * raise TypeError('Do not support encoding %d' % mdtype) + */ + __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_self->codecs, __pyx_v_mdtype, sizeof(__pyx_t_5numpy_uint32_t)+1, __Pyx_PyInt_to_py_npy_uint32); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 750; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_codec); + __pyx_v_codec = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":751 + * elif mdtype in self.codecs: # encoded char data + * codec = self.codecs[mdtype] + * if not codec: # <<<<<<<<<<<<<< + * raise TypeError('Do not support encoding %d' % mdtype) + * else: + */ + 
__pyx_t_9 = __Pyx_PyObject_IsTrue(__pyx_v_codec); if (unlikely(__pyx_t_9 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 751; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_6 = (!__pyx_t_9); + if (__pyx_t_6) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":752 + * codec = self.codecs[mdtype] + * if not codec: + * raise TypeError('Do not support encoding %d' % mdtype) # <<<<<<<<<<<<<< + * else: + * raise ValueError('Type %d does not appear to be char type' + */ + __pyx_t_1 = __Pyx_PyInt_to_py_npy_uint32(__pyx_v_mdtype); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 752; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Remainder(((PyObject *)__pyx_kp_s_8), __pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 752; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 752; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_builtin_TypeError, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 752; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 752; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L5; + } + __pyx_L5:; + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":755 + * else: + * raise ValueError('Type %d does not appear to be char type' + * % mdtype) # <<<<<<<<<<<<<< + * uc_str = data.decode(codec) + * # cast to array to deal with 2, 4 byte width characters + */ + __pyx_t_3 = __Pyx_PyInt_to_py_npy_uint32(__pyx_v_mdtype); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 755; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyNumber_Remainder(((PyObject *)__pyx_kp_s_9), __pyx_t_3); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 755; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 754; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 754; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_1, 0, 0); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 754; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":756 + * raise ValueError('Type %d does not appear to be char type' + * % mdtype) + * uc_str = data.decode(codec) # <<<<<<<<<<<<<< + * # cast to array to deal with 2, 4 byte width characters + * arr = np.array(uc_str) + */ + __pyx_t_1 = 
PyObject_GetAttr(__pyx_v_data, __pyx_n_s__decode); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 756; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 756; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_codec); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_codec); + __Pyx_GIVEREF(__pyx_v_codec); + __pyx_t_7 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 756; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_uc_str); + __pyx_v_uc_str = __pyx_t_7; + __pyx_t_7 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":758 + * uc_str = data.decode(codec) + * # cast to array to deal with 2, 4 byte width characters + * arr = np.array(uc_str) # <<<<<<<<<<<<<< + * dt = self.U1_dtype + * # could take this to numpy C-API level, but probably not worth + */ + __pyx_t_7 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 758; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_7, __pyx_n_s__array); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 758; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; + __pyx_t_7 = PyTuple_New(1); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 758; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __Pyx_INCREF(__pyx_v_uc_str); + PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_v_uc_str); + __Pyx_GIVEREF(__pyx_v_uc_str); + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_7, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 758; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 758; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_arr)); + __pyx_v_arr = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":759 + * # cast to array to deal with 2, 4 byte width characters + * arr = np.array(uc_str) + * dt = self.U1_dtype # <<<<<<<<<<<<<< + * # could take this to numpy C-API level, but probably not worth + * # it + */ + __Pyx_INCREF(((PyObject *)__pyx_v_self->U1_dtype)); + __Pyx_DECREF(((PyObject *)__pyx_v_dt)); + __pyx_v_dt = __pyx_v_self->U1_dtype; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":762 + * # could take this to numpy C-API level, but probably not worth + * # it + * return np.ndarray(shape=header.dims, # <<<<<<<<<<<<<< + * dtype=dt, + * buffer=arr, + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_7 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__ndarray); if (unlikely(!__pyx_t_7)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_7); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__shape), __pyx_v_header->dims) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":763 + * # it + * return np.ndarray(shape=header.dims, + * dtype=dt, # <<<<<<<<<<<<<< + * buffer=arr, + * order='F') + */ + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__dtype), ((PyObject *)__pyx_v_dt)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":764 + * return np.ndarray(shape=header.dims, + * dtype=dt, + * buffer=arr, # <<<<<<<<<<<<<< + * order='F') + * + */ + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__buffer), ((PyObject *)__pyx_v_arr)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__order), ((PyObject *)__pyx_n_s__F)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyEval_CallObjectWithKeywords(__pyx_t_7, ((PyObject *)__pyx_empty_tuple), ((PyObject *)__pyx_t_1)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_7); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_char"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_data); + __Pyx_DECREF(__pyx_v_codec); + __Pyx_DECREF((PyObject *)__pyx_v_arr); + __Pyx_XDECREF((PyObject *)__pyx_v_dt); + __Pyx_DECREF(__pyx_v_uc_str); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":706 + * shape=(M,N)) + * + * cpdef cnp.ndarray read_char(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read char matrices from stream as arrays + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char[] = " Read char matrices from stream as arrays\n\n Matrices of char are likely to be converted to matrices of\n string by later processing in ``array_from_header``\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char(PyObject *__pyx_v_self, PyObject *__pyx_v_header) { + PyObject 
*__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("read_char"); + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_header), __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5, 1, "header", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 706; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->read_char(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_v_header), 1)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 706; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_char"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":767 + * order='F') + * + * cpdef cnp.ndarray read_cells(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read cell array from stream ''' + * cdef: + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static PyArrayObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header, int __pyx_skip_dispatch) { + size_t __pyx_v_i; + PyArrayObject *__pyx_v_result; + PyObject *__pyx_v_tupdims; + size_t __pyx_v_length; + Py_buffer __pyx_bstruct_result; + Py_ssize_t __pyx_bstride_0_result = 0; + Py_ssize_t __pyx_bshape_0_result = 0; + PyArrayObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyArrayObject *__pyx_t_5 = NULL; + int __pyx_t_6; + PyObject *__pyx_t_7 = NULL; + PyObject *__pyx_t_8 = NULL; + PyObject *__pyx_t_9 = NULL; + size_t __pyx_t_10; + size_t __pyx_t_11; + size_t __pyx_t_12; + PyObject **__pyx_t_13; + __Pyx_RefNannySetupContext("read_cells"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF((PyObject *)__pyx_v_header); + __pyx_v_result = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_tupdims = Py_None; __Pyx_INCREF(Py_None); + __pyx_bstruct_result.buf = NULL; + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__read_cells); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 767; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells)) { + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 767; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + 
__Pyx_INCREF(((PyObject *)__pyx_v_header)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_header)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_header)); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 767; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 767; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":773 + * cnp.ndarray[object, ndim=1] result + * # Account for fortran indexing of cells + * tupdims = tuple(header.dims[::-1]) # <<<<<<<<<<<<<< + * cdef size_t length = self.size_from_header(header) + * result = np.empty(length, dtype=object) + */ + __pyx_t_1 = PySlice_New(Py_None, Py_None, __pyx_int_neg_1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 773; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_GetItem(__pyx_v_header->dims, __pyx_t_1); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 773; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 773; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(((PyObject *)((PyObject*)&PyTuple_Type)), __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 773; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_tupdims); + __pyx_v_tupdims = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":774 + * # Account for fortran indexing of cells + * tupdims = tuple(header.dims[::-1]) + * cdef size_t length = self.size_from_header(header) # <<<<<<<<<<<<<< + * result = np.empty(length, dtype=object) + * for i in range(length): + */ + __pyx_v_length = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->size_from_header(__pyx_v_self, __pyx_v_header); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":775 + * tupdims = tuple(header.dims[::-1]) + * cdef size_t length = self.size_from_header(header) + * result = np.empty(length, dtype=object) # <<<<<<<<<<<<<< + * for i in range(length): + * result[i] = self.read_mi_matrix() + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__empty); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = __Pyx_PyInt_FromSize_t(__pyx_v_length); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__dtype), __pyx_builtin_object) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_2, ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = ((PyArrayObject *)__pyx_t_4); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_bstruct_result); + __pyx_t_6 = __Pyx_GetBufferAndValidate(&__pyx_bstruct_result, (PyObject*)__pyx_t_5, &__Pyx_TypeInfo_object, PyBUF_FORMAT| PyBUF_STRIDES| PyBUF_WRITABLE, 1, 0, __pyx_stack); + if (unlikely(__pyx_t_6 < 0)) { + PyErr_Fetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_bstruct_result, (PyObject*)__pyx_v_result, &__Pyx_TypeInfo_object, PyBUF_FORMAT| PyBUF_STRIDES| PyBUF_WRITABLE, 1, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_7); Py_XDECREF(__pyx_t_8); Py_XDECREF(__pyx_t_9); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_7, __pyx_t_8, __pyx_t_9); + } + } + __pyx_bstride_0_result = __pyx_bstruct_result.strides[0]; + __pyx_bshape_0_result = __pyx_bstruct_result.shape[0]; + if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_t_5 = 0; + __Pyx_DECREF(((PyObject *)__pyx_v_result)); + __pyx_v_result = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":776 + * cdef size_t length = self.size_from_header(header) + * result = np.empty(length, dtype=object) + * for i in range(length): # <<<<<<<<<<<<<< + * result[i] = self.read_mi_matrix() + * return result.reshape(tupdims).T + */ + __pyx_t_10 = __pyx_v_length; + for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_10; __pyx_t_11+=1) { + __pyx_v_i = __pyx_t_11; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":777 + * result = np.empty(length, dtype=object) + * for i in range(length): + * result[i] = self.read_mi_matrix() # <<<<<<<<<<<<<< + * return result.reshape(tupdims).T + * + */ + __pyx_t_4 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_mi_matrix(__pyx_v_self, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 777; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_12 = __pyx_v_i; + __pyx_t_6 = -1; + if (unlikely(__pyx_t_12 >= __pyx_bshape_0_result)) __pyx_t_6 = 0; + if 
(unlikely(__pyx_t_6 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_6); + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 777; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_t_13 = __Pyx_BufPtrStrided1d(PyObject **, __pyx_bstruct_result.buf, __pyx_t_12, __pyx_bstride_0_result); + __Pyx_GOTREF(*__pyx_t_13); + __Pyx_DECREF(*__pyx_t_13); __Pyx_INCREF(__pyx_t_4); + *__pyx_t_13 = __pyx_t_4; + __Pyx_GIVEREF(*__pyx_t_13); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":778 + * for i in range(length): + * result[i] = self.read_mi_matrix() + * return result.reshape(tupdims).T # <<<<<<<<<<<<<< + * + * def read_fieldnames(self): + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_4 = PyObject_GetAttr(((PyObject *)__pyx_v_result), __pyx_n_s__reshape); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 778; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 778; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_tupdims); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_tupdims); + __Pyx_GIVEREF(__pyx_v_tupdims); + __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 778; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__T); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 778; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 778; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_bstruct_result); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_cells"); + __pyx_r = 0; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_bstruct_result); + __pyx_L2:; + __Pyx_DECREF((PyObject *)__pyx_v_result); + __Pyx_DECREF(__pyx_v_tupdims); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":767 + * order='F') + * + * cpdef cnp.ndarray read_cells(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read cell array from stream ''' + * cdef: + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells[] = " Read cell array from stream "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells(PyObject 
*__pyx_v_self, PyObject *__pyx_v_header) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("read_cells"); + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_header), __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5, 1, "header", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 767; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->read_cells(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_v_header), 1)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 767; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_cells"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":780 + * return result.reshape(tupdims).T + * + * def read_fieldnames(self): # <<<<<<<<<<<<<< + * ''' Read fieldnames for struct-like matrix ' + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_fieldnames(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_fieldnames[] = " Read fieldnames for struct-like matrix '\n\n Python wrapper for cdef'ed method\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_fieldnames(PyObject *__pyx_v_self, PyObject *unused) { + int __pyx_v_n_names; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("read_fieldnames"); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":786 + * ''' + * cdef int n_names + * return self.cread_fieldnames(&n_names) # <<<<<<<<<<<<<< + * + * cdef inline object cread_fieldnames(self, int *n_names_ptr): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->cread_fieldnames(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), (&__pyx_v_n_names)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_fieldnames"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":788 + * return self.cread_fieldnames(&n_names) + * + * cdef inline object cread_fieldnames(self, int *n_names_ptr): # <<<<<<<<<<<<<< + * cdef: + * cnp.int32_t namelength + */ + +static CYTHON_INLINE PyObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_fieldnames(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 
*__pyx_v_self, int *__pyx_v_n_names_ptr) { + __pyx_t_5numpy_int32_t __pyx_v_namelength; + int __pyx_v_i; + int __pyx_v_n_names; + PyObject *__pyx_v_name; + PyObject *__pyx_v_field_names; + int __pyx_v_res; + PyObject *__pyx_v_names = 0; + char *__pyx_v_n_ptr; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + Py_ssize_t __pyx_t_5; + char *__pyx_t_6; + int __pyx_t_7; + __Pyx_RefNannySetupContext("cread_fieldnames"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_name = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_field_names = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":794 + * object name, field_names + * # Read field names into list + * cdef int res = self.read_into_int32s(&namelength) # <<<<<<<<<<<<<< + * if res != 1: + * raise ValueError('Only one value for namelength') + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_into_int32s(__pyx_v_self, (&__pyx_v_namelength)); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 794; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_res = __pyx_t_1; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":795 + * # Read field names into list + * cdef int res = self.read_into_int32s(&namelength) + * if res != 1: # <<<<<<<<<<<<<< + * raise ValueError('Only one value for namelength') + * cdef object names = self.read_int8_string() + */ + __pyx_t_2 = (__pyx_v_res != 1); + if (__pyx_t_2) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":796 + * cdef int res = self.read_into_int32s(&namelength) + * if res != 1: + * raise ValueError('Only one value for namelength') # <<<<<<<<<<<<<< + * cdef object names = self.read_int8_string() + * field_names = [] + */ + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 796; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_10)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_10)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_10)); + __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 796; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_4, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 796; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L3; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":797 + * if res != 1: + * raise ValueError('Only one value for namelength') + * cdef object names = self.read_int8_string() # <<<<<<<<<<<<<< + * field_names = [] + * n_names = PyString_Size(names) // namelength + */ + __pyx_t_4 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_int8_string(__pyx_v_self); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 797; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_v_names = __pyx_t_4; + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":798 + * raise ValueError('Only one value for namelength') + * cdef object names = self.read_int8_string() + * field_names = [] # <<<<<<<<<<<<<< + * n_names = 
PyString_Size(names) // namelength + * cdef char *n_ptr = names + */ + __pyx_t_4 = PyList_New(0); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 798; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_4)); + __Pyx_DECREF(__pyx_v_field_names); + __pyx_v_field_names = ((PyObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":799 + * cdef object names = self.read_int8_string() + * field_names = [] + * n_names = PyString_Size(names) // namelength # <<<<<<<<<<<<<< + * cdef char *n_ptr = names + * for i in range(n_names): + */ + __pyx_t_5 = PyString_Size(__pyx_v_names); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 799; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (unlikely(__pyx_v_namelength == 0)) { + PyErr_Format(PyExc_ZeroDivisionError, "integer division or modulo by zero"); + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 799; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + else if (sizeof(Py_ssize_t) == sizeof(long) && unlikely(__pyx_v_namelength == -1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_t_5))) { + PyErr_Format(PyExc_OverflowError, "value too large to perform division"); + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 799; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_v_n_names = __Pyx_div_Py_ssize_t(__pyx_t_5, __pyx_v_namelength); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":800 + * field_names = [] + * n_names = PyString_Size(names) // namelength + * cdef char *n_ptr = names # <<<<<<<<<<<<<< + * for i in range(n_names): + * name = PyString_FromString(n_ptr) + */ + __pyx_t_6 = __Pyx_PyBytes_AsString(__pyx_v_names); if (unlikely((!__pyx_t_6) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 800; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_n_ptr = __pyx_t_6; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":801 + * n_names = PyString_Size(names) // namelength + * cdef char *n_ptr = names + * for i in range(n_names): # <<<<<<<<<<<<<< + * name = PyString_FromString(n_ptr) + * field_names.append(name) + */ + __pyx_t_1 = __pyx_v_n_names; + for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_1; __pyx_t_7+=1) { + __pyx_v_i = __pyx_t_7; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":802 + * cdef char *n_ptr = names + * for i in range(n_names): + * name = PyString_FromString(n_ptr) # <<<<<<<<<<<<<< + * field_names.append(name) + * n_ptr += namelength + */ + __pyx_t_4 = PyString_FromString(__pyx_v_n_ptr); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 802; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_v_name); + __pyx_v_name = __pyx_t_4; + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":803 + * for i in range(n_names): + * name = PyString_FromString(n_ptr) + * field_names.append(name) # <<<<<<<<<<<<<< + * n_ptr += namelength + * n_names_ptr[0] = n_names + */ + __pyx_t_4 = __Pyx_PyObject_Append(__pyx_v_field_names, __pyx_v_name); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 803; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":804 + * name = PyString_FromString(n_ptr) + * field_names.append(name) + * n_ptr += namelength # <<<<<<<<<<<<<< + * n_names_ptr[0] = n_names + * return 
field_names + */ + __pyx_v_n_ptr += __pyx_v_namelength; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":805 + * field_names.append(name) + * n_ptr += namelength + * n_names_ptr[0] = n_names # <<<<<<<<<<<<<< + * return field_names + * + */ + (__pyx_v_n_names_ptr[0]) = __pyx_v_n_names; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":806 + * n_ptr += namelength + * n_names_ptr[0] = n_names + * return field_names # <<<<<<<<<<<<<< + * + * cpdef cnp.ndarray read_struct(self, VarHeader5 header): + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_field_names); + __pyx_r = __pyx_v_field_names; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.cread_fieldnames"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_name); + __Pyx_DECREF(__pyx_v_field_names); + __Pyx_XDECREF(__pyx_v_names); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":808 + * return field_names + * + * cpdef cnp.ndarray read_struct(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read struct or object array from stream + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static PyArrayObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_header, int __pyx_skip_dispatch) { + int __pyx_v_i; + int __pyx_v_n_names; + PyArrayObject *__pyx_v_rec_res; + PyArrayObject *__pyx_v_result; + PyObject *__pyx_v_dt; + PyObject *__pyx_v_tupdims; + PyObject *__pyx_v_field_names = 0; + size_t __pyx_v_length; + PyObject *__pyx_v_field_name; + PyObject *__pyx_v_obj_template; + PyObject *__pyx_v_item; + PyObject *__pyx_v_name; + Py_buffer __pyx_bstruct_result; + Py_ssize_t __pyx_bstride_0_result = 0; + Py_ssize_t __pyx_bshape_0_result = 0; + PyArrayObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + PyObject *__pyx_t_6 = NULL; + Py_ssize_t __pyx_t_7; + size_t __pyx_t_8; + PyArrayObject *__pyx_t_9 = NULL; + PyObject *__pyx_t_10 = NULL; + PyObject *__pyx_t_11 = NULL; + PyObject *__pyx_t_12 = NULL; + int __pyx_t_13; + int __pyx_t_14; + PyObject **__pyx_t_15; + __Pyx_RefNannySetupContext("read_struct"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __Pyx_INCREF((PyObject *)__pyx_v_header); + __pyx_v_rec_res = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_result = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_dt = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_tupdims = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_field_name = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_obj_template = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_item = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_name = Py_None; __Pyx_INCREF(Py_None); + __pyx_bstruct_result.buf = NULL; + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__read_struct); if 
(unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct)) { + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_v_header)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_header)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_header)); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":821 + * object dt, tupdims + * # Read field names into list + * cdef object field_names = self.cread_fieldnames(&n_names) # <<<<<<<<<<<<<< + * # Prepare struct array + * tupdims = tuple(header.dims[::-1]) + */ + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->cread_fieldnames(__pyx_v_self, (&__pyx_v_n_names)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_v_field_names = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":823 + * cdef object field_names = self.cread_fieldnames(&n_names) + * # Prepare struct array + * tupdims = tuple(header.dims[::-1]) # <<<<<<<<<<<<<< + * cdef size_t length = self.size_from_header(header) + * if self.struct_as_record: # to record arrays + */ + __pyx_t_1 = PySlice_New(Py_None, Py_None, __pyx_int_neg_1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_GetItem(__pyx_v_header->dims, __pyx_t_1); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(((PyObject *)((PyObject*)&PyTuple_Type)), __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_tupdims); + __pyx_v_tupdims = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":824 + * # Prepare struct array + * tupdims = tuple(header.dims[::-1]) + * cdef size_t length = 
self.size_from_header(header) # <<<<<<<<<<<<<< + * if self.struct_as_record: # to record arrays + * if not n_names: + */ + __pyx_v_length = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->size_from_header(__pyx_v_self, __pyx_v_header); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":825 + * tupdims = tuple(header.dims[::-1]) + * cdef size_t length = self.size_from_header(header) + * if self.struct_as_record: # to record arrays # <<<<<<<<<<<<<< + * if not n_names: + * # If there are no field names, there is no dtype + */ + __pyx_t_4 = __pyx_v_self->struct_as_record; + if (__pyx_t_4) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":826 + * cdef size_t length = self.size_from_header(header) + * if self.struct_as_record: # to record arrays + * if not n_names: # <<<<<<<<<<<<<< + * # If there are no field names, there is no dtype + * # representation we can use, falling back to empty + */ + __pyx_t_5 = (!__pyx_v_n_names); + if (__pyx_t_5) { + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":830 + * # representation we can use, falling back to empty + * # object + * return np.empty(tupdims, dtype=object).T # <<<<<<<<<<<<<< + * dt = [(field_name, object) for field_name in field_names] + * rec_res = np.empty(length, dtype=dt) + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__empty); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_tupdims); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_tupdims); + __Pyx_GIVEREF(__pyx_v_tupdims); + __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + if (PyDict_SetItem(__pyx_t_2, ((PyObject *)__pyx_n_s__dtype), __pyx_builtin_object) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_6 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_3, ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_t_6, __pyx_n_s__T); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 830; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + goto __pyx_L0; + goto __pyx_L4; + } + __pyx_L4:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":831 
+ * # object + * return np.empty(tupdims, dtype=object).T + * dt = [(field_name, object) for field_name in field_names] # <<<<<<<<<<<<<< + * rec_res = np.empty(length, dtype=dt) + * for i in range(length): + */ + __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + if (PyList_CheckExact(__pyx_v_field_names) || PyTuple_CheckExact(__pyx_v_field_names)) { + __pyx_t_7 = 0; __pyx_t_6 = __pyx_v_field_names; __Pyx_INCREF(__pyx_t_6); + } else { + __pyx_t_7 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_v_field_names); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + } + for (;;) { + if (likely(PyList_CheckExact(__pyx_t_6))) { + if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_6)) break; + __pyx_t_3 = PyList_GET_ITEM(__pyx_t_6, __pyx_t_7); __Pyx_INCREF(__pyx_t_3); __pyx_t_7++; + } else if (likely(PyTuple_CheckExact(__pyx_t_6))) { + if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_6)) break; + __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_7); __Pyx_INCREF(__pyx_t_3); __pyx_t_7++; + } else { + __pyx_t_3 = PyIter_Next(__pyx_t_6); + if (!__pyx_t_3) { + if (unlikely(PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + break; + } + __Pyx_GOTREF(__pyx_t_3); + } + __Pyx_DECREF(__pyx_v_field_name); + __pyx_v_field_name = __pyx_t_3; + __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_field_name); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_field_name); + __Pyx_GIVEREF(__pyx_v_field_name); + __Pyx_INCREF(__pyx_builtin_object); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_builtin_object); + __Pyx_GIVEREF(__pyx_builtin_object); + __pyx_t_4 = PyList_Append(__pyx_t_2, (PyObject*)__pyx_t_3); if (unlikely(__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + } + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_INCREF(((PyObject *)__pyx_t_2)); + __Pyx_DECREF(__pyx_v_dt); + __pyx_v_dt = ((PyObject *)__pyx_t_2); + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":832 + * return np.empty(tupdims, dtype=object).T + * dt = [(field_name, object) for field_name in field_names] + * rec_res = np.empty(length, dtype=dt) # <<<<<<<<<<<<<< + * for i in range(length): + * for field_name in field_names: + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_6 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_v_length); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + 
__Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + if (PyDict_SetItem(__pyx_t_2, ((PyObject *)__pyx_n_s__dtype), __pyx_v_dt) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_1 = PyEval_CallObjectWithKeywords(__pyx_t_6, __pyx_t_3, ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 832; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_rec_res)); + __pyx_v_rec_res = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":833 + * dt = [(field_name, object) for field_name in field_names] + * rec_res = np.empty(length, dtype=dt) + * for i in range(length): # <<<<<<<<<<<<<< + * for field_name in field_names: + * rec_res[i][field_name] = self.read_mi_matrix() + */ + __pyx_t_8 = __pyx_v_length; + for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_8; __pyx_t_4+=1) { + __pyx_v_i = __pyx_t_4; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":834 + * rec_res = np.empty(length, dtype=dt) + * for i in range(length): + * for field_name in field_names: # <<<<<<<<<<<<<< + * rec_res[i][field_name] = self.read_mi_matrix() + * return rec_res.reshape(tupdims).T + */ + if (PyList_CheckExact(__pyx_v_field_names) || PyTuple_CheckExact(__pyx_v_field_names)) { + __pyx_t_7 = 0; __pyx_t_1 = __pyx_v_field_names; __Pyx_INCREF(__pyx_t_1); + } else { + __pyx_t_7 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_field_names); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 834; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + } + for (;;) { + if (likely(PyList_CheckExact(__pyx_t_1))) { + if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_1)) break; + __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_7); __Pyx_INCREF(__pyx_t_2); __pyx_t_7++; + } else if (likely(PyTuple_CheckExact(__pyx_t_1))) { + if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_1)) break; + __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_7); __Pyx_INCREF(__pyx_t_2); __pyx_t_7++; + } else { + __pyx_t_2 = PyIter_Next(__pyx_t_1); + if (!__pyx_t_2) { + if (unlikely(PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 834; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + break; + } + __Pyx_GOTREF(__pyx_t_2); + } + __Pyx_DECREF(__pyx_v_field_name); + __pyx_v_field_name = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":835 + * for i in range(length): + * for field_name in field_names: + * rec_res[i][field_name] = self.read_mi_matrix() # <<<<<<<<<<<<<< + * return rec_res.reshape(tupdims).T + * # Backward compatibility with previous format + */ + __pyx_t_2 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_mi_matrix(__pyx_v_self, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 835; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetItemInt(((PyObject *)__pyx_v_rec_res), __pyx_v_i, sizeof(int), PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 835; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + if (PyObject_SetItem(__pyx_t_3, __pyx_v_field_name, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 835; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":836 + * for field_name in field_names: + * rec_res[i][field_name] = self.read_mi_matrix() + * return rec_res.reshape(tupdims).T # <<<<<<<<<<<<<< + * # Backward compatibility with previous format + * obj_template = mio5p.mat_struct() + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_rec_res), __pyx_n_s__reshape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 836; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 836; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_tupdims); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_tupdims); + __Pyx_GIVEREF(__pyx_v_tupdims); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 836; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__T); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 836; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 836; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":838 + * return rec_res.reshape(tupdims).T + * # Backward compatibility with previous format + * obj_template = mio5p.mat_struct() # <<<<<<<<<<<<<< + * obj_template._fieldnames = field_names + * result = np.empty(length, dtype=object) + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__mio5p); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 838; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__mat_struct); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 838; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_3, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 838; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_obj_template); + __pyx_v_obj_template = __pyx_t_2; + __pyx_t_2 = 0; + + /* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":839 + * # Backward compatibility with previous format + * obj_template = mio5p.mat_struct() + * obj_template._fieldnames = field_names # <<<<<<<<<<<<<< + * result = np.empty(length, dtype=object) + * for i in range(length): + */ + if (PyObject_SetAttr(__pyx_v_obj_template, __pyx_n_s___fieldnames, __pyx_v_field_names) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 839; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":840 + * obj_template = mio5p.mat_struct() + * obj_template._fieldnames = field_names + * result = np.empty(length, dtype=object) # <<<<<<<<<<<<<< + * for i in range(length): + * item = pycopy(obj_template) + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_v_length); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + if (PyDict_SetItem(__pyx_t_2, ((PyObject *)__pyx_n_s__dtype), __pyx_builtin_object) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_6 = PyEval_CallObjectWithKeywords(__pyx_t_3, __pyx_t_1, ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_6) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_6, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_9 = ((PyArrayObject *)__pyx_t_6); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_bstruct_result); + __pyx_t_4 = __Pyx_GetBufferAndValidate(&__pyx_bstruct_result, (PyObject*)__pyx_t_9, &__Pyx_TypeInfo_object, PyBUF_FORMAT| PyBUF_STRIDES| PyBUF_WRITABLE, 1, 0, __pyx_stack); + if (unlikely(__pyx_t_4 < 0)) { + PyErr_Fetch(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_bstruct_result, (PyObject*)__pyx_v_result, &__Pyx_TypeInfo_object, PyBUF_FORMAT| PyBUF_STRIDES| PyBUF_WRITABLE, 1, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_11); Py_XDECREF(__pyx_t_12); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_10, __pyx_t_11, __pyx_t_12); + } + } + __pyx_bstride_0_result = __pyx_bstruct_result.strides[0]; + __pyx_bshape_0_result = 
__pyx_bstruct_result.shape[0]; + if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 840; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_t_9 = 0; + __Pyx_DECREF(((PyObject *)__pyx_v_result)); + __pyx_v_result = ((PyArrayObject *)__pyx_t_6); + __pyx_t_6 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":841 + * obj_template._fieldnames = field_names + * result = np.empty(length, dtype=object) + * for i in range(length): # <<<<<<<<<<<<<< + * item = pycopy(obj_template) + * for name in field_names: + */ + __pyx_t_8 = __pyx_v_length; + for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_8; __pyx_t_4+=1) { + __pyx_v_i = __pyx_t_4; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":842 + * result = np.empty(length, dtype=object) + * for i in range(length): + * item = pycopy(obj_template) # <<<<<<<<<<<<<< + * for name in field_names: + * item.__dict__[name] = self.read_mi_matrix() + */ + __pyx_t_6 = __Pyx_GetName(__pyx_m, __pyx_n_s__pycopy); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 842; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 842; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_obj_template); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_obj_template); + __Pyx_GIVEREF(__pyx_v_obj_template); + __pyx_t_1 = PyObject_Call(__pyx_t_6, __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 842; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_item); + __pyx_v_item = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":843 + * for i in range(length): + * item = pycopy(obj_template) + * for name in field_names: # <<<<<<<<<<<<<< + * item.__dict__[name] = self.read_mi_matrix() + * result[i] = item + */ + if (PyList_CheckExact(__pyx_v_field_names) || PyTuple_CheckExact(__pyx_v_field_names)) { + __pyx_t_7 = 0; __pyx_t_1 = __pyx_v_field_names; __Pyx_INCREF(__pyx_t_1); + } else { + __pyx_t_7 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_field_names); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 843; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + } + for (;;) { + if (likely(PyList_CheckExact(__pyx_t_1))) { + if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_1)) break; + __pyx_t_2 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_7); __Pyx_INCREF(__pyx_t_2); __pyx_t_7++; + } else if (likely(PyTuple_CheckExact(__pyx_t_1))) { + if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_1)) break; + __pyx_t_2 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_7); __Pyx_INCREF(__pyx_t_2); __pyx_t_7++; + } else { + __pyx_t_2 = PyIter_Next(__pyx_t_1); + if (!__pyx_t_2) { + if (unlikely(PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 843; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + break; + } + __Pyx_GOTREF(__pyx_t_2); + } + __Pyx_DECREF(__pyx_v_name); + __pyx_v_name = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":844 + * item = pycopy(obj_template) + * for name in field_names: + * item.__dict__[name] = self.read_mi_matrix() # <<<<<<<<<<<<<< + * result[i] = item + * return result.reshape(tupdims).T + */ + __pyx_t_2 = ((struct 
__pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_mi_matrix(__pyx_v_self, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 844; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_6 = PyObject_GetAttr(__pyx_v_item, __pyx_n_s____dict__); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 844; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + if (PyObject_SetItem(__pyx_t_6, __pyx_v_name, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 844; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":845 + * for name in field_names: + * item.__dict__[name] = self.read_mi_matrix() + * result[i] = item # <<<<<<<<<<<<<< + * return result.reshape(tupdims).T + * + */ + __pyx_t_13 = __pyx_v_i; + __pyx_t_14 = -1; + if (__pyx_t_13 < 0) { + __pyx_t_13 += __pyx_bshape_0_result; + if (unlikely(__pyx_t_13 < 0)) __pyx_t_14 = 0; + } else if (unlikely(__pyx_t_13 >= __pyx_bshape_0_result)) __pyx_t_14 = 0; + if (unlikely(__pyx_t_14 != -1)) { + __Pyx_RaiseBufferIndexError(__pyx_t_14); + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 845; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_t_15 = __Pyx_BufPtrStrided1d(PyObject **, __pyx_bstruct_result.buf, __pyx_t_13, __pyx_bstride_0_result); + __Pyx_GOTREF(*__pyx_t_15); + __Pyx_DECREF(*__pyx_t_15); __Pyx_INCREF(__pyx_v_item); + *__pyx_t_15 = __pyx_v_item; + __Pyx_GIVEREF(*__pyx_t_15); + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":846 + * item.__dict__[name] = self.read_mi_matrix() + * result[i] = item + * return result.reshape(tupdims).T # <<<<<<<<<<<<<< + * + * cpdef cnp.ndarray read_opaque(self, VarHeader5 hdr): + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_result), __pyx_n_s__reshape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 846; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 846; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_tupdims); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_tupdims); + __Pyx_GIVEREF(__pyx_v_tupdims); + __pyx_t_6 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 846; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_t_6, __pyx_n_s__T); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 846; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 846; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + goto __pyx_L0; + + __pyx_r = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + 
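The generated C above was produced from the Cython visible in the interleaved mio5_utils.pyx comments: cread_fieldnames slices a NUL-padded int8 name buffer into one string per field, and read_struct then either builds a record array (the struct_as_record path, one object-typed dtype field per name) or falls back to an object array of mat_struct instances for backward compatibility, reshaping to the reversed dims and transposing in both cases. The following is a rough pure-Python rendering of that flow, reconstructed only from those comments; the read_mi_matrix stub, the MatStructStub class, the latin-1 decoding and the use of prod(dims) in place of size_from_header are illustrative assumptions, not code from the package.

    # Pure-Python sketch of the cread_fieldnames / read_struct flow shown in the
    # .pyx comments above; stubs and dummy data are assumptions for illustration.
    import numpy as np
    from copy import copy as pycopy

    def read_fieldnames(names, namelength):
        # PyString_Size(names) // namelength, then PyString_FromString(n_ptr) per
        # slot: keep bytes up to the first NUL, advance the pointer by namelength.
        n_names = len(names) // namelength
        return [names[i * namelength:(i + 1) * namelength].split(b'\0', 1)[0].decode('latin-1')
                for i in range(n_names)]

    def read_struct_sketch(read_mi_matrix, field_names, dims, struct_as_record=True):
        tupdims = tuple(dims[::-1])
        length = int(np.prod(dims))          # stands in for size_from_header()
        if struct_as_record:                 # record-array path
            if not field_names:              # no names -> no dtype to build
                return np.empty(tupdims, dtype=object).T
            dt = [(name, object) for name in field_names]
            rec_res = np.empty(length, dtype=dt)
            for i in range(length):
                for name in field_names:
                    rec_res[i][name] = read_mi_matrix()
            return rec_res.reshape(tupdims).T
        # backward-compatible path: object array of mat_struct-like items
        class MatStructStub(object):         # stands in for mio5p.mat_struct
            pass
        template = MatStructStub()
        template._fieldnames = field_names
        result = np.empty(length, dtype=object)
        for i in range(length):
            item = pycopy(template)
            for name in field_names:
                item.__dict__[name] = read_mi_matrix()
            result[i] = item
        return result.reshape(tupdims).T

    # toy usage: two fields padded to namelength=32, a 2x3 struct array
    names = b'alpha'.ljust(32, b'\0') + b'beta'.ljust(32, b'\0')
    fields = read_fieldnames(names, 32)      # -> ['alpha', 'beta']
    arr = read_struct_sketch(lambda: np.zeros((1, 1)), fields, [2, 3])
    print(fields, arr.shape, arr.dtype.names)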
__Pyx_XDECREF(__pyx_t_6); + { PyObject *__pyx_type, *__pyx_value, *__pyx_tb; + __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb); + __Pyx_SafeReleaseBuffer(&__pyx_bstruct_result); + __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);} + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_struct"); + __pyx_r = 0; + goto __pyx_L2; + __pyx_L0:; + __Pyx_SafeReleaseBuffer(&__pyx_bstruct_result); + __pyx_L2:; + __Pyx_DECREF((PyObject *)__pyx_v_rec_res); + __Pyx_DECREF((PyObject *)__pyx_v_result); + __Pyx_DECREF(__pyx_v_dt); + __Pyx_DECREF(__pyx_v_tupdims); + __Pyx_XDECREF(__pyx_v_field_names); + __Pyx_DECREF(__pyx_v_field_name); + __Pyx_DECREF(__pyx_v_obj_template); + __Pyx_DECREF(__pyx_v_item); + __Pyx_DECREF(__pyx_v_name); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_DECREF((PyObject *)__pyx_v_header); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":808 + * return field_names + * + * cpdef cnp.ndarray read_struct(self, VarHeader5 header): # <<<<<<<<<<<<<< + * ''' Read struct or object array from stream + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct(PyObject *__pyx_v_self, PyObject *__pyx_v_header); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct[] = " Read struct or object array from stream\n\n Objects are just structs with an extra field *classname*,\n defined before (this here) struct format structure\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct(PyObject *__pyx_v_self, PyObject *__pyx_v_header) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("read_struct"); + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_header), __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5, 1, "header", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->read_struct(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_v_header), 1)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_struct"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":848 + * return result.reshape(tupdims).T + * + * cpdef cnp.ndarray read_opaque(self, VarHeader5 hdr): # <<<<<<<<<<<<<< + * ''' Read opaque (function workspace) type + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque(PyObject *__pyx_v_self, PyObject *__pyx_v_hdr); /*proto*/ +static PyArrayObject *__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *__pyx_v_self, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *__pyx_v_hdr, int 
__pyx_skip_dispatch) { + PyArrayObject *__pyx_v_res = 0; + PyArrayObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + __Pyx_RefNannySetupContext("read_opaque"); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__read_opaque); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque)) { + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_v_hdr)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_v_hdr)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_hdr)); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":864 + * See the comments at the beginning of ``mio5.py`` + * ''' + * cdef cnp.ndarray res = np.empty((1,), dtype=OPAQUE_DTYPE) # <<<<<<<<<<<<<< + * res[0]['s0'] = self.read_int8_string() + * res[0]['s1'] = self.read_int8_string() + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(__pyx_int_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_int_1); + __Pyx_GIVEREF(__pyx_int_1); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__dtype), ((PyObject *)__pyx_v_5scipy_2io_6matlab_10mio5_utils_OPAQUE_DTYPE)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = 
__LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyEval_CallObjectWithKeywords(__pyx_t_3, __pyx_t_2, ((PyObject *)__pyx_t_1)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 864; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_res = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":865 + * ''' + * cdef cnp.ndarray res = np.empty((1,), dtype=OPAQUE_DTYPE) + * res[0]['s0'] = self.read_int8_string() # <<<<<<<<<<<<<< + * res[0]['s1'] = self.read_int8_string() + * res[0]['s2'] = self.read_int8_string() + */ + __pyx_t_4 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_int8_string(__pyx_v_self); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 865; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = __Pyx_GetItemInt(((PyObject *)__pyx_v_res), 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 865; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__s0), __pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 865; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":866 + * cdef cnp.ndarray res = np.empty((1,), dtype=OPAQUE_DTYPE) + * res[0]['s0'] = self.read_int8_string() + * res[0]['s1'] = self.read_int8_string() # <<<<<<<<<<<<<< + * res[0]['s2'] = self.read_int8_string() + * res[0]['arr'] = self.read_mi_matrix() + */ + __pyx_t_4 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_int8_string(__pyx_v_self); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 866; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = __Pyx_GetItemInt(((PyObject *)__pyx_v_res), 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 866; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__s1), __pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 866; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":867 + * res[0]['s0'] = self.read_int8_string() + * res[0]['s1'] = self.read_int8_string() + * res[0]['s2'] = self.read_int8_string() # <<<<<<<<<<<<<< + * res[0]['arr'] = self.read_mi_matrix() + * return res + */ + __pyx_t_4 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_int8_string(__pyx_v_self); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 867; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = __Pyx_GetItemInt(((PyObject *)__pyx_v_res), 0, sizeof(long), PyInt_FromLong); if 
(!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 867; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__s2), __pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 867; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":868 + * res[0]['s1'] = self.read_int8_string() + * res[0]['s2'] = self.read_int8_string() + * res[0]['arr'] = self.read_mi_matrix() # <<<<<<<<<<<<<< + * return res + */ + __pyx_t_4 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self->__pyx_vtab)->read_mi_matrix(__pyx_v_self, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 868; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = __Pyx_GetItemInt(((PyObject *)__pyx_v_res), 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 868; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__arr), __pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 868; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":869 + * res[0]['s2'] = self.read_int8_string() + * res[0]['arr'] = self.read_mi_matrix() + * return res # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __Pyx_INCREF(((PyObject *)__pyx_v_res)); + __pyx_r = __pyx_v_res; + goto __pyx_L0; + + __pyx_r = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_opaque"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XDECREF((PyObject *)__pyx_v_res); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":848 + * return result.reshape(tupdims).T + * + * cpdef cnp.ndarray read_opaque(self, VarHeader5 hdr): # <<<<<<<<<<<<<< + * ''' Read opaque (function workspace) type + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque(PyObject *__pyx_v_self, PyObject *__pyx_v_hdr); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque[] = " Read opaque (function workspace) type\n\n Looking at some mat files, the structure of this type seems to\n be:\n\n * array flags as usual (already read into `hdr`)\n * 3 int8 strings\n * a matrix\n\n Then there's a matrix at the end of the mat file that seems have\n the anonymous founction workspaces - we load it as\n ``__function_workspace__``\n\n See the comments at the beginning of ``mio5.py``\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque(PyObject *__pyx_v_self, PyObject *__pyx_v_hdr) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("read_opaque"); + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_hdr), __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5, 1, "hdr", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + 
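The read_opaque path above, again per the interleaved .pyx comments, fills a one-element structured array: three int8 strings go into the 's0', 's1' and 's2' fields and a nested matrix into 'arr'. Below is a small stand-alone imitation of that shape; OPAQUE_DTYPE_STANDIN merely mirrors the field names assigned in the code (the real OPAQUE_DTYPE is defined elsewhere in mio5_utils), and the byte strings and zero matrix are invented inputs.

    # Sketch of the read_opaque flow shown above, with a stand-in for OPAQUE_DTYPE.
    import numpy as np

    OPAQUE_DTYPE_STANDIN = np.dtype([('s0', object), ('s1', object),
                                     ('s2', object), ('arr', object)])

    def read_opaque_sketch(read_int8_string, read_mi_matrix):
        res = np.empty((1,), dtype=OPAQUE_DTYPE_STANDIN)
        res[0]['s0'] = read_int8_string()    # three int8 strings ...
        res[0]['s1'] = read_int8_string()
        res[0]['s2'] = read_int8_string()
        res[0]['arr'] = read_mi_matrix()     # ... followed by one matrix
        return res

    strings = iter([b'one', b'two', b'three'])      # invented example data
    out = read_opaque_sketch(lambda: next(strings), lambda: np.zeros((2, 2)))
    print(out[0]['s1'], out[0]['arr'].shape)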
__Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)((struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self)->__pyx_vtab)->read_opaque(((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)__pyx_v_self), ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)__pyx_v_hdr), 1)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio5_utils.VarReader5.read_opaque"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":187 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. + */ + +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_v_copy_shape; + int __pyx_v_i; + int __pyx_v_ndim; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + int __pyx_v_t; + char *__pyx_v_f; + PyArray_Descr *__pyx_v_descr = 0; + int __pyx_v_offset; + int __pyx_v_hasfields; + int __pyx_r; + int __pyx_t_1; + int __pyx_t_2; + int __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + int __pyx_t_6; + int __pyx_t_7; + int __pyx_t_8; + char *__pyx_t_9; + __Pyx_RefNannySetupContext("__getbuffer__"); + if (__pyx_v_info == NULL) return 0; + __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(__pyx_v_info->obj); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":193 + * # of flags + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + */ + __pyx_v_endian_detector = 1; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":194 + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * + * ndim = PyArray_NDIM(self) + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":196 + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<< + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_v_ndim = PyArray_NDIM(((PyArrayObject *)__pyx_v_self)); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":198 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); + if (__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":199 + * 
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * copy_shape = 1 # <<<<<<<<<<<<<< + * else: + * copy_shape = 0 + */ + __pyx_v_copy_shape = 1; + goto __pyx_L5; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":201 + * copy_shape = 1 + * else: + * copy_shape = 0 # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + */ + __pyx_v_copy_shape = 0; + } + __pyx_L5:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":203 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + __pyx_t_1 = ((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS); + if (__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":204 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not C contiguous") + * + */ + __pyx_t_2 = (!PyArray_CHKFLAGS(((PyArrayObject *)__pyx_v_self), NPY_C_CONTIGUOUS)); + __pyx_t_3 = __pyx_t_2; + } else { + __pyx_t_3 = __pyx_t_1; + } + if (__pyx_t_3) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":205 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_11)); + PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_u_11)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_11)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L6; + } + __pyx_L6:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":207 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + __pyx_t_3 = ((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS); + if (__pyx_t_3) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":208 + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not Fortran contiguous") + * + */ + __pyx_t_1 = (!PyArray_CHKFLAGS(((PyArrayObject *)__pyx_v_self), NPY_F_CONTIGUOUS)); + __pyx_t_2 = __pyx_t_1; + } else { + __pyx_t_2 = __pyx_t_3; + } + if (__pyx_t_2) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":209 + * if ((flags & 
pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 209; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_12)); + PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_u_12)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_12)); + __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 209; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_4, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 209; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L7; + } + __pyx_L7:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":211 + * raise ValueError(u"ndarray is not Fortran contiguous") + * + * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<< + * info.ndim = ndim + * if copy_shape: + */ + __pyx_v_info->buf = PyArray_DATA(((PyArrayObject *)__pyx_v_self)); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":212 + * + * info.buf = PyArray_DATA(self) + * info.ndim = ndim # <<<<<<<<<<<<<< + * if copy_shape: + * # Allocate new buffer for strides and shape info. This is allocated + */ + __pyx_v_info->ndim = __pyx_v_ndim; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":213 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. This is allocated + * # as one block, strides first. + */ + __pyx_t_6 = __pyx_v_copy_shape; + if (__pyx_t_6) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":216 + * # Allocate new buffer for strides and shape info. This is allocated + * # as one block, strides first. + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) # <<<<<<<<<<<<<< + * info.shape = info.strides + ndim + * for i in range(ndim): + */ + __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * __pyx_v_ndim) * 2))); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":217 + * # as one block, strides first. 
+ * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim # <<<<<<<<<<<<<< + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + */ + __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":218 + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim + * for i in range(ndim): # <<<<<<<<<<<<<< + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] + */ + __pyx_t_6 = __pyx_v_ndim; + for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) { + __pyx_v_i = __pyx_t_7; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":219 + * info.shape = info.strides + ndim + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<< + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + */ + (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))[__pyx_v_i]); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":220 + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<< + * else: + * info.strides = PyArray_STRIDES(self) + */ + (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(((PyArrayObject *)__pyx_v_self))[__pyx_v_i]); + } + goto __pyx_L8; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":222 + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<< + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + */ + __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":223 + * else: + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<< + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + */ + __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(((PyArrayObject *)__pyx_v_self))); + } + __pyx_L8:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":224 + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL # <<<<<<<<<<<<<< + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) + */ + __pyx_v_info->suboffsets = NULL; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":225 + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<< + * info.readonly = not PyArray_ISWRITEABLE(self) + * + */ + __pyx_v_info->itemsize = PyArray_ITEMSIZE(((PyArrayObject *)__pyx_v_self)); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":226 + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<< + * + * cdef int t + */ + __pyx_v_info->readonly = (!PyArray_ISWRITEABLE(((PyArrayObject *)__pyx_v_self))); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":229 + * + * cdef int t + * cdef char* f = NULL # <<<<<<<<<<<<<< + * cdef dtype descr = self.descr + * cdef list stack + */ + __pyx_v_f = NULL; + + /* 
"/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":230 + * cdef int t + * cdef char* f = NULL + * cdef dtype descr = self.descr # <<<<<<<<<<<<<< + * cdef list stack + * cdef int offset + */ + __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_v_self)->descr)); + __pyx_v_descr = ((PyArrayObject *)__pyx_v_self)->descr; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":234 + * cdef int offset + * + * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<< + * + * if not hasfields and not copy_shape: + */ + __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":236 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + __pyx_t_2 = (!__pyx_v_hasfields); + if (__pyx_t_2) { + __pyx_t_3 = (!__pyx_v_copy_shape); + __pyx_t_1 = __pyx_t_3; + } else { + __pyx_t_1 = __pyx_t_2; + } + if (__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":238 + * if not hasfields and not copy_shape: + * # do not call releasebuffer + * info.obj = None # <<<<<<<<<<<<<< + * else: + * # need to call releasebuffer + */ + __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(Py_None); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = Py_None; + goto __pyx_L11; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":241 + * else: + * # need to call releasebuffer + * info.obj = self # <<<<<<<<<<<<<< + * + * if not hasfields: + */ + __Pyx_INCREF(__pyx_v_self); + __Pyx_GIVEREF(__pyx_v_self); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = __pyx_v_self; + } + __pyx_L11:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":243 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == '>' and little_endian) or + */ + __pyx_t_1 = (!__pyx_v_hasfields); + if (__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":244 + * + * if not hasfields: + * t = descr.type_num # <<<<<<<<<<<<<< + * if ((descr.byteorder == '>' and little_endian) or + * (descr.byteorder == '<' and not little_endian)): + */ + __pyx_v_t = __pyx_v_descr->type_num; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":245 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_1 = (__pyx_v_descr->byteorder == '>'); + if (__pyx_t_1) { + __pyx_t_2 = __pyx_v_little_endian; + } else { + __pyx_t_2 = __pyx_t_1; + } + if (!__pyx_t_2) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":246 + * t = descr.type_num + * if ((descr.byteorder == '>' and little_endian) or + * (descr.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + */ + __pyx_t_1 = (__pyx_v_descr->byteorder == '<'); + if (__pyx_t_1) { + __pyx_t_3 = (!__pyx_v_little_endian); + __pyx_t_8 = __pyx_t_3; + } else { + __pyx_t_8 = __pyx_t_1; + } + __pyx_t_1 = __pyx_t_8; + } else { + __pyx_t_1 = __pyx_t_2; + } + if 
(__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":247 + * if ((descr.byteorder == '>' and little_endian) or + * (descr.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_13)); + PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_u_13)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_13)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L13; + } + __pyx_L13:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":248 + * (descr.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + */ + __pyx_t_1 = (__pyx_v_t == NPY_BYTE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__b; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":249 + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + */ + __pyx_t_1 = (__pyx_v_t == NPY_UBYTE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__B; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":250 + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + */ + __pyx_t_1 = (__pyx_v_t == NPY_SHORT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__h; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":251 + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + */ + __pyx_t_1 = (__pyx_v_t == NPY_USHORT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__H; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":252 + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + */ + __pyx_t_1 = (__pyx_v_t == NPY_INT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__i; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":253 + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + */ + __pyx_t_1 = (__pyx_v_t == NPY_UINT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__I; + goto __pyx_L14; + } + + /* 
"/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":254 + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + */ + __pyx_t_1 = (__pyx_v_t == NPY_LONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__l; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":255 + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + */ + __pyx_t_1 = (__pyx_v_t == NPY_ULONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__L; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":256 + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + */ + __pyx_t_1 = (__pyx_v_t == NPY_LONGLONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__q; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":257 + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + */ + __pyx_t_1 = (__pyx_v_t == NPY_ULONGLONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Q; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":258 + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + */ + __pyx_t_1 = (__pyx_v_t == NPY_FLOAT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__f; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":259 + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + */ + __pyx_t_1 = (__pyx_v_t == NPY_DOUBLE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__d; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":260 + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + */ + __pyx_t_1 = (__pyx_v_t == NPY_LONGDOUBLE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__g; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":261 + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + */ + __pyx_t_1 = (__pyx_v_t == NPY_CFLOAT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Zf; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":262 + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" + */ + __pyx_t_1 = (__pyx_v_t == NPY_CDOUBLE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Zd; + goto __pyx_L14; + } + + /* 
"/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":263 + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f = "O" + * else: + */ + __pyx_t_1 = (__pyx_v_t == NPY_CLONGDOUBLE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Zg; + goto __pyx_L14; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":264 + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + __pyx_t_1 = (__pyx_v_t == NPY_OBJECT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__O; + goto __pyx_L14; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":266 + * elif t == NPY_OBJECT: f = "O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * info.format = f + * return + */ + __pyx_t_5 = PyInt_FromLong(__pyx_v_t); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_4 = PyNumber_Remainder(((PyObject *)__pyx_kp_u_14), __pyx_t_5); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __pyx_t_4 = 0; + __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_4, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_L14:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":267 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f # <<<<<<<<<<<<<< + * return + * else: + */ + __pyx_v_info->format = __pyx_v_f; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":268 + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f + * return # <<<<<<<<<<<<<< + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + */ + __pyx_r = 0; + goto __pyx_L0; + goto __pyx_L12; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":270 + * return + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) # <<<<<<<<<<<<<< + * info.format[0] = '^' # Native data types, manual alignment + * offset = 0 + */ + __pyx_v_info->format = ((char *)malloc(255)); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":271 + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = '^' # Native data types, manual alignment # <<<<<<<<<<<<<< + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, + */ + (__pyx_v_info->format[0]) = '^'; + + /* 
"/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":272 + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = '^' # Native data types, manual alignment + * offset = 0 # <<<<<<<<<<<<<< + * f = _util_dtypestring(descr, info.format + 1, + * info.format + _buffer_format_string_len, + */ + __pyx_v_offset = 0; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":275 + * f = _util_dtypestring(descr, info.format + 1, + * info.format + _buffer_format_string_len, + * &offset) # <<<<<<<<<<<<<< + * f[0] = 0 # Terminate format string + * + */ + __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 255), (&__pyx_v_offset)); if (unlikely(__pyx_t_9 == NULL)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_f = __pyx_t_9; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":276 + * info.format + _buffer_format_string_len, + * &offset) + * f[0] = 0 # Terminate format string # <<<<<<<<<<<<<< + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + */ + (__pyx_v_f[0]) = 0; + } + __pyx_L12:; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("numpy.ndarray.__getbuffer__"); + __pyx_r = -1; + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL; + goto __pyx_L2; + __pyx_L0:; + if (__pyx_v_info->obj == Py_None) { + __Pyx_GOTREF(Py_None); + __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL; + } + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_descr); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":278 + * f[0] = 0 # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + +static void __pyx_pf_5numpy_7ndarray___releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/ +static void __pyx_pf_5numpy_7ndarray___releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) { + int __pyx_t_1; + __Pyx_RefNannySetupContext("__releasebuffer__"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":279 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_t_1 = PyArray_HASFIELDS(((PyArrayObject *)__pyx_v_self)); + if (__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":280 + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) # <<<<<<<<<<<<<< + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) + */ + free(__pyx_v_info->format); + goto __pyx_L5; + } + __pyx_L5:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":281 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); + if (__pyx_t_1) { + + /* 
"/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":282 + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) # <<<<<<<<<<<<<< + * # info.shape was stored after info.strides in the same block + * + */ + free(__pyx_v_info->strides); + goto __pyx_L6; + } + __pyx_L6:; + + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":755 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew1"); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":756 + * + * cdef inline object PyArray_MultiIterNew1(a): + * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew2(a, b): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 756; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew1"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":758 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew2"); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":759 + * + * cdef inline object PyArray_MultiIterNew2(a, b): + * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 759; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":761 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew3"); + + /* 
"/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":762 + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew3"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":764 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew4"); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":765 + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 765; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew4"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":767 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew5"); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":768 + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 768; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r 
= __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew5"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":770 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. The new location in the format string is returned. + */ + +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) { + PyArray_Descr *__pyx_v_child; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + PyObject *__pyx_v_fields; + PyObject *__pyx_v_childname; + PyObject *__pyx_v_new_offset; + PyObject *__pyx_v_t; + char *__pyx_r; + Py_ssize_t __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + int __pyx_t_6; + int __pyx_t_7; + int __pyx_t_8; + int __pyx_t_9; + char *__pyx_t_10; + __Pyx_RefNannySetupContext("_util_dtypestring"); + __Pyx_INCREF((PyObject *)__pyx_v_descr); + __pyx_v_child = ((PyArray_Descr *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_fields = ((PyObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_childname = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_new_offset = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_t = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":777 + * cdef int delta_offset + * cdef tuple i + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * cdef tuple fields + */ + __pyx_v_endian_detector = 1; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":778 + * cdef tuple i + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * cdef tuple fields + * + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":781 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + if (likely(((PyObject *)__pyx_v_descr->names) != Py_None)) { + __pyx_t_1 = 0; __pyx_t_2 = ((PyObject *)__pyx_v_descr->names); __Pyx_INCREF(__pyx_t_2); + } else { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); {__pyx_filename = __pyx_f[1]; __pyx_lineno = 781; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + for (;;) { + if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_2)) break; + __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_3); __pyx_t_1++; + __Pyx_DECREF(__pyx_v_childname); + __pyx_v_childname = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":782 + * + * for childname in descr.names: + * fields = descr.fields[childname] # <<<<<<<<<<<<<< + * child, new_offset = fields + * + */ + __pyx_t_3 = PyObject_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (!__pyx_t_3) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 782; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected tuple, got %.200s", Py_TYPE(__pyx_t_3)->tp_name), 0))) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 782; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_fields)); + __pyx_v_fields = ((PyObject *)__pyx_t_3); + __pyx_t_3 = 0; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":783 + * for childname in descr.names: + * fields = descr.fields[childname] + * child, new_offset = fields # <<<<<<<<<<<<<< + * + * if (end - f) - (new_offset - offset[0]) < 15: + */ + if (likely(((PyObject *)__pyx_v_fields) != Py_None) && likely(PyTuple_GET_SIZE(((PyObject *)__pyx_v_fields)) == 2)) { + PyObject* tuple = ((PyObject *)__pyx_v_fields); + __pyx_t_3 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_3); + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_4); + __Pyx_DECREF(((PyObject *)__pyx_v_child)); + __pyx_v_child = ((PyArray_Descr *)__pyx_t_3); + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_new_offset); + __pyx_v_new_offset = __pyx_t_4; + __pyx_t_4 = 0; + } else { + __Pyx_UnpackTupleError(((PyObject *)__pyx_v_fields), 2); + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":785 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + __pyx_t_4 = PyInt_FromLong((__pyx_v_end - __pyx_v_f)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyInt_FromLong((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_3); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Subtract(__pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyObject_RichCompare(__pyx_t_3, __pyx_int_15, Py_LT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":786 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + 
* + * if ((child.byteorder == '>' and little_endian) or + */ + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_15)); + PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_u_15)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_15)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_RuntimeError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L5; + } + __pyx_L5:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":788 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_6 = (__pyx_v_child->byteorder == '>'); + if (__pyx_t_6) { + __pyx_t_7 = __pyx_v_little_endian; + } else { + __pyx_t_7 = __pyx_t_6; + } + if (!__pyx_t_7) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":789 + * + * if ((child.byteorder == '>' and little_endian) or + * (child.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * # One could encode it in the format string and have Cython + */ + __pyx_t_6 = (__pyx_v_child->byteorder == '<'); + if (__pyx_t_6) { + __pyx_t_8 = (!__pyx_v_little_endian); + __pyx_t_9 = __pyx_t_8; + } else { + __pyx_t_9 = __pyx_t_6; + } + __pyx_t_6 = __pyx_t_9; + } else { + __pyx_t_6 = __pyx_t_7; + } + if (__pyx_t_6) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":790 + * if ((child.byteorder == '>' and little_endian) or + * (child.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 790; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_13)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_u_13)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_13)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 790; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 790; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L6; + } + __pyx_L6:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":800 + * + * # Output padding bytes + * while offset[0] < new_offset: # <<<<<<<<<<<<<< + * f[0] = 120 # "x"; pad byte + * f += 1 + */ + while (1) { + __pyx_t_5 = PyInt_FromLong((__pyx_v_offset[0])); if (unlikely(!__pyx_t_5)) {__pyx_filename = 
__pyx_f[1]; __pyx_lineno = 800; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_t_5, __pyx_v_new_offset, Py_LT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 800; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 800; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (!__pyx_t_6) break; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":801 + * # Output padding bytes + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<< + * f += 1 + * offset[0] += 1 + */ + (__pyx_v_f[0]) = 120; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":802 + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte + * f += 1 # <<<<<<<<<<<<<< + * offset[0] += 1 + * + */ + __pyx_v_f += 1; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":803 + * f[0] = 120 # "x"; pad byte + * f += 1 + * offset[0] += 1 # <<<<<<<<<<<<<< + * + * offset[0] += child.itemsize + */ + (__pyx_v_offset[0]) += 1; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":805 + * offset[0] += 1 + * + * offset[0] += child.itemsize # <<<<<<<<<<<<<< + * + * if not PyDataType_HASFIELDS(child): + */ + (__pyx_v_offset[0]) += __pyx_v_child->elsize; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":807 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + __pyx_t_6 = (!PyDataType_HASFIELDS(__pyx_v_child)); + if (__pyx_t_6) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":808 + * + * if not PyDataType_HASFIELDS(child): + * t = child.type_num # <<<<<<<<<<<<<< + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") + */ + __pyx_t_3 = PyInt_FromLong(__pyx_v_child->type_num); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_t); + __pyx_v_t = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":809 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + __pyx_t_6 = ((__pyx_v_end - __pyx_v_f) < 5); + if (__pyx_t_6) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":810 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 810; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_16)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_u_16)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_16)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_RuntimeError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 810; __pyx_clineno = 
__LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 810; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L10; + } + __pyx_L10:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":813 + * + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + */ + __pyx_t_5 = PyInt_FromLong(NPY_BYTE); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 813; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 813; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 813; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 98; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":814 + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + */ + __pyx_t_3 = PyInt_FromLong(NPY_UBYTE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 814; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 814; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 814; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 66; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":815 + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + */ + __pyx_t_5 = PyInt_FromLong(NPY_SHORT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 815; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 815; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 815; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 104; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":816 + * elif t == 
NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + */ + __pyx_t_3 = PyInt_FromLong(NPY_USHORT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 816; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 816; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 816; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 72; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":817 + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + */ + __pyx_t_5 = PyInt_FromLong(NPY_INT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 817; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 817; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 817; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 105; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":818 + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + */ + __pyx_t_3 = PyInt_FromLong(NPY_UINT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 818; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 818; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 818; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 73; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":819 + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + */ + __pyx_t_5 = PyInt_FromLong(NPY_LONG); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 819; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = 
PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 819; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 819; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 108; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":820 + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + */ + __pyx_t_3 = PyInt_FromLong(NPY_ULONG); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 820; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 820; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 820; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 76; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":821 + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + */ + __pyx_t_5 = PyInt_FromLong(NPY_LONGLONG); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 113; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":822 + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + */ + __pyx_t_3 = PyInt_FromLong(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 822; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 822; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 822; __pyx_clineno = 
__LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 81; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":823 + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + */ + __pyx_t_5 = PyInt_FromLong(NPY_FLOAT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 102; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":824 + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + */ + __pyx_t_3 = PyInt_FromLong(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 824; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 824; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 824; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 100; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":825 + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + */ + __pyx_t_5 = PyInt_FromLong(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 103; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":826 + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + 
* elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + */ + __pyx_t_3 = PyInt_FromLong(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 102; + __pyx_v_f += 1; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":827 + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" + */ + __pyx_t_5 = PyInt_FromLong(NPY_CDOUBLE); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 100; + __pyx_v_f += 1; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":828 + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + */ + __pyx_t_3 = PyInt_FromLong(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 103; + __pyx_v_f += 1; + goto __pyx_L11; + } + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":829 + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<< + * 
else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + __pyx_t_5 = PyInt_FromLong(NPY_OBJECT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 829; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 829; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 829; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 79; + goto __pyx_L11; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":831 + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * f += 1 + * else: + */ + __pyx_t_3 = PyNumber_Remainder(((PyObject *)__pyx_kp_u_14), __pyx_v_t); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_L11:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":832 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * f += 1 # <<<<<<<<<<<<<< + * else: + * # Cython ignores struct boundary information ("T{...}"), + */ + __pyx_v_f += 1; + goto __pyx_L9; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":836 + * # Cython ignores struct boundary information ("T{...}"), + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<< + * return f + * + */ + __pyx_t_10 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_10 == NULL)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 836; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_f = __pyx_t_10; + } + __pyx_L9:; + } + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":837 + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) + * return f # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = __pyx_v_f; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("numpy._util_dtypestring"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_child); + __Pyx_DECREF(__pyx_v_fields); + __Pyx_DECREF(__pyx_v_childname); + 
__Pyx_DECREF(__pyx_v_new_offset); + __Pyx_DECREF(__pyx_v_t); + __Pyx_DECREF((PyObject *)__pyx_v_descr); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":952 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) { + PyObject *__pyx_v_baseptr; + int __pyx_t_1; + __Pyx_RefNannySetupContext("set_array_base"); + __Pyx_INCREF((PyObject *)__pyx_v_arr); + __Pyx_INCREF(__pyx_v_base); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":954 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + __pyx_t_1 = (__pyx_v_base == Py_None); + if (__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":955 + * cdef PyObject* baseptr + * if base is None: + * baseptr = NULL # <<<<<<<<<<<<<< + * else: + * Py_INCREF(base) # important to do this before decref below! + */ + __pyx_v_baseptr = NULL; + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":957 + * baseptr = NULL + * else: + * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<< + * baseptr = base + * Py_XDECREF(arr.base) + */ + Py_INCREF(__pyx_v_base); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":958 + * else: + * Py_INCREF(base) # important to do this before decref below! + * baseptr = base # <<<<<<<<<<<<<< + * Py_XDECREF(arr.base) + * arr.base = baseptr + */ + __pyx_v_baseptr = ((PyObject *)__pyx_v_base); + } + __pyx_L3:; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":959 + * Py_INCREF(base) # important to do this before decref below! 
+ * baseptr = base + * Py_XDECREF(arr.base) # <<<<<<<<<<<<<< + * arr.base = baseptr + * + */ + Py_XDECREF(__pyx_v_arr->base); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":960 + * baseptr = base + * Py_XDECREF(arr.base) + * arr.base = baseptr # <<<<<<<<<<<<<< + * + * cdef inline object get_array_base(ndarray arr): + */ + __pyx_v_arr->base = __pyx_v_baseptr; + + __Pyx_DECREF((PyObject *)__pyx_v_arr); + __Pyx_DECREF(__pyx_v_base); + __Pyx_RefNannyFinishContext(); +} + +/* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":962 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) { + PyObject *__pyx_r = NULL; + int __pyx_t_1; + __Pyx_RefNannySetupContext("get_array_base"); + __Pyx_INCREF((PyObject *)__pyx_v_arr); + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":963 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + __pyx_t_1 = (__pyx_v_arr->base == NULL); + if (__pyx_t_1) { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":964 + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: + * return None # <<<<<<<<<<<<<< + * else: + * return arr.base + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(Py_None); + __pyx_r = Py_None; + goto __pyx_L0; + goto __pyx_L3; + } + /*else*/ { + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":966 + * return None + * else: + * return arr.base # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_arr->base)); + __pyx_r = ((PyObject *)__pyx_v_arr->base); + goto __pyx_L0; + } + __pyx_L3:; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_arr); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static PyObject *__pyx_tp_new_5scipy_2io_6matlab_10mio5_utils_VarHeader5(PyTypeObject *t, PyObject *a, PyObject *k) { + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *p; + PyObject *o = (*t->tp_alloc)(t, 0); + if (!o) return 0; + p = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)o); + p->name = Py_None; Py_INCREF(Py_None); + p->dims = Py_None; Py_INCREF(Py_None); + return o; +} + +static void __pyx_tp_dealloc_5scipy_2io_6matlab_10mio5_utils_VarHeader5(PyObject *o) { + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *p = (struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)o; + Py_XDECREF(p->name); + Py_XDECREF(p->dims); + (*Py_TYPE(o)->tp_free)(o); +} + +static int __pyx_tp_traverse_5scipy_2io_6matlab_10mio5_utils_VarHeader5(PyObject *o, visitproc v, void *a) { + int e; + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *p = (struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)o; + if (p->name) { + e = (*v)(p->name, a); if (e) return e; + } + if (p->dims) { + e = (*v)(p->dims, a); if (e) return e; + } + return 0; +} + +static int __pyx_tp_clear_5scipy_2io_6matlab_10mio5_utils_VarHeader5(PyObject *o) { + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *p = (struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *)o; + PyObject* tmp; + tmp = ((PyObject*)p->name); + p->name = Py_None; Py_INCREF(Py_None); + Py_XDECREF(tmp); + tmp = 
((PyObject*)p->dims); + p->dims = Py_None; Py_INCREF(Py_None); + Py_XDECREF(tmp); + return 0; +} + +static struct PyMethodDef __pyx_methods_5scipy_2io_6matlab_10mio5_utils_VarHeader5[] = { + {0, 0, 0, 0} +}; + +static struct PyMemberDef __pyx_members_5scipy_2io_6matlab_10mio5_utils_VarHeader5[] = { + {(char *)"name", T_OBJECT, offsetof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5, name), READONLY, 0}, + {(char *)"mclass", T_INT, offsetof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5, mclass), READONLY, 0}, + {(char *)"dims", T_OBJECT, offsetof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5, dims), READONLY, 0}, + {(char *)"is_global", T_INT, offsetof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5, is_global), 0, 0}, + {0, 0, 0, 0, 0} +}; + +static PyNumberMethods __pyx_tp_as_number_VarHeader5 = { + 0, /*nb_add*/ + 0, /*nb_subtract*/ + 0, /*nb_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_divide*/ + #endif + 0, /*nb_remainder*/ + 0, /*nb_divmod*/ + 0, /*nb_power*/ + 0, /*nb_negative*/ + 0, /*nb_positive*/ + 0, /*nb_absolute*/ + 0, /*nb_nonzero*/ + 0, /*nb_invert*/ + 0, /*nb_lshift*/ + 0, /*nb_rshift*/ + 0, /*nb_and*/ + 0, /*nb_xor*/ + 0, /*nb_or*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_coerce*/ + #endif + 0, /*nb_int*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*reserved*/ + #else + 0, /*nb_long*/ + #endif + 0, /*nb_float*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_oct*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*nb_hex*/ + #endif + 0, /*nb_inplace_add*/ + 0, /*nb_inplace_subtract*/ + 0, /*nb_inplace_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_inplace_divide*/ + #endif + 0, /*nb_inplace_remainder*/ + 0, /*nb_inplace_power*/ + 0, /*nb_inplace_lshift*/ + 0, /*nb_inplace_rshift*/ + 0, /*nb_inplace_and*/ + 0, /*nb_inplace_xor*/ + 0, /*nb_inplace_or*/ + 0, /*nb_floor_divide*/ + 0, /*nb_true_divide*/ + 0, /*nb_inplace_floor_divide*/ + 0, /*nb_inplace_true_divide*/ + #if (PY_MAJOR_VERSION >= 3) || (Py_TPFLAGS_DEFAULT & Py_TPFLAGS_HAVE_INDEX) + 0, /*nb_index*/ + #endif +}; + +static PySequenceMethods __pyx_tp_as_sequence_VarHeader5 = { + 0, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /*sq_ass_item*/ + 0, /*sq_ass_slice*/ + 0, /*sq_contains*/ + 0, /*sq_inplace_concat*/ + 0, /*sq_inplace_repeat*/ +}; + +static PyMappingMethods __pyx_tp_as_mapping_VarHeader5 = { + 0, /*mp_length*/ + 0, /*mp_subscript*/ + 0, /*mp_ass_subscript*/ +}; + +static PyBufferProcs __pyx_tp_as_buffer_VarHeader5 = { + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getreadbuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getwritebuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getsegcount*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getcharbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_getbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_releasebuffer*/ + #endif +}; + +PyTypeObject __pyx_type_5scipy_2io_6matlab_10mio5_utils_VarHeader5 = { + PyVarObject_HEAD_INIT(0, 0) + __Pyx_NAMESTR("scipy.io.matlab.mio5_utils.VarHeader5"), /*tp_name*/ + sizeof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5), /*tp_basicsize*/ + 0, /*tp_itemsize*/ + __pyx_tp_dealloc_5scipy_2io_6matlab_10mio5_utils_VarHeader5, /*tp_dealloc*/ + 0, /*tp_print*/ + 0, /*tp_getattr*/ + 0, /*tp_setattr*/ + 0, /*tp_compare*/ + 0, /*tp_repr*/ + &__pyx_tp_as_number_VarHeader5, /*tp_as_number*/ + &__pyx_tp_as_sequence_VarHeader5, /*tp_as_sequence*/ + &__pyx_tp_as_mapping_VarHeader5, /*tp_as_mapping*/ + 0, /*tp_hash*/ + 0, /*tp_call*/ + 0, 
/*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + &__pyx_tp_as_buffer_VarHeader5, /*tp_as_buffer*/ + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ + 0, /*tp_doc*/ + __pyx_tp_traverse_5scipy_2io_6matlab_10mio5_utils_VarHeader5, /*tp_traverse*/ + __pyx_tp_clear_5scipy_2io_6matlab_10mio5_utils_VarHeader5, /*tp_clear*/ + 0, /*tp_richcompare*/ + 0, /*tp_weaklistoffset*/ + 0, /*tp_iter*/ + 0, /*tp_iternext*/ + __pyx_methods_5scipy_2io_6matlab_10mio5_utils_VarHeader5, /*tp_methods*/ + __pyx_members_5scipy_2io_6matlab_10mio5_utils_VarHeader5, /*tp_members*/ + 0, /*tp_getset*/ + 0, /*tp_base*/ + 0, /*tp_dict*/ + 0, /*tp_descr_get*/ + 0, /*tp_descr_set*/ + 0, /*tp_dictoffset*/ + 0, /*tp_init*/ + 0, /*tp_alloc*/ + __pyx_tp_new_5scipy_2io_6matlab_10mio5_utils_VarHeader5, /*tp_new*/ + 0, /*tp_free*/ + 0, /*tp_is_gc*/ + 0, /*tp_bases*/ + 0, /*tp_mro*/ + 0, /*tp_cache*/ + 0, /*tp_subclasses*/ + 0, /*tp_weaklist*/ + 0, /*tp_del*/ + #if PY_VERSION_HEX >= 0x02060000 + 0, /*tp_version_tag*/ + #endif +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_10mio5_utils_VarReader5 __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5; + +static PyObject *__pyx_tp_new_5scipy_2io_6matlab_10mio5_utils_VarReader5(PyTypeObject *t, PyObject *a, PyObject *k) { + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *p; + PyObject *o = (*t->tp_alloc)(t, 0); + if (!o) return 0; + p = ((struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)o); + p->__pyx_vtab = __pyx_vtabptr_5scipy_2io_6matlab_10mio5_utils_VarReader5; + p->codecs = Py_None; Py_INCREF(Py_None); + p->uint16_codec = Py_None; Py_INCREF(Py_None); + p->cstream = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)Py_None); Py_INCREF(Py_None); + p->preader = Py_None; Py_INCREF(Py_None); + p->U1_dtype = ((PyArray_Descr *)Py_None); Py_INCREF(Py_None); + p->bool_dtype = ((PyArray_Descr *)Py_None); Py_INCREF(Py_None); + if (__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5___new__(o, a, k) < 0) { + Py_DECREF(o); o = 0; + } + return o; +} + +static void __pyx_tp_dealloc_5scipy_2io_6matlab_10mio5_utils_VarReader5(PyObject *o) { + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *p = (struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)o; + Py_XDECREF(p->codecs); + Py_XDECREF(p->uint16_codec); + Py_XDECREF(((PyObject *)p->cstream)); + Py_XDECREF(p->preader); + Py_XDECREF(((PyObject *)p->U1_dtype)); + Py_XDECREF(((PyObject *)p->bool_dtype)); + (*Py_TYPE(o)->tp_free)(o); +} + +static int __pyx_tp_traverse_5scipy_2io_6matlab_10mio5_utils_VarReader5(PyObject *o, visitproc v, void *a) { + int e; + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *p = (struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)o; + if (p->codecs) { + e = (*v)(p->codecs, a); if (e) return e; + } + if (p->uint16_codec) { + e = (*v)(p->uint16_codec, a); if (e) return e; + } + if (p->cstream) { + e = (*v)(((PyObject*)p->cstream), a); if (e) return e; + } + if (p->preader) { + e = (*v)(p->preader, a); if (e) return e; + } + if (p->U1_dtype) { + e = (*v)(((PyObject*)p->U1_dtype), a); if (e) return e; + } + if (p->bool_dtype) { + e = (*v)(((PyObject*)p->bool_dtype), a); if (e) return e; + } + return 0; +} + +static int __pyx_tp_clear_5scipy_2io_6matlab_10mio5_utils_VarReader5(PyObject *o) { + struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *p = (struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *)o; + PyObject* tmp; + tmp = 
((PyObject*)p->codecs); + p->codecs = Py_None; Py_INCREF(Py_None); + Py_XDECREF(tmp); + tmp = ((PyObject*)p->uint16_codec); + p->uint16_codec = Py_None; Py_INCREF(Py_None); + Py_XDECREF(tmp); + tmp = ((PyObject*)p->cstream); + p->cstream = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)Py_None); Py_INCREF(Py_None); + Py_XDECREF(tmp); + tmp = ((PyObject*)p->preader); + p->preader = Py_None; Py_INCREF(Py_None); + Py_XDECREF(tmp); + tmp = ((PyObject*)p->U1_dtype); + p->U1_dtype = ((PyArray_Descr *)Py_None); Py_INCREF(Py_None); + Py_XDECREF(tmp); + tmp = ((PyObject*)p->bool_dtype); + p->bool_dtype = ((PyArray_Descr *)Py_None); Py_INCREF(Py_None); + Py_XDECREF(tmp); + return 0; +} + +static struct PyMethodDef __pyx_methods_5scipy_2io_6matlab_10mio5_utils_VarReader5[] = { + {__Pyx_NAMESTR("set_stream"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_set_stream, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_set_stream)}, + {__Pyx_NAMESTR("read_tag"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_tag, METH_NOARGS, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_tag)}, + {__Pyx_NAMESTR("read_numeric"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric)}, + {__Pyx_NAMESTR("read_full_tag"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_full_tag, METH_NOARGS, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_full_tag)}, + {__Pyx_NAMESTR("read_header"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header, METH_NOARGS, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header)}, + {__Pyx_NAMESTR("array_from_header"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header)}, + {__Pyx_NAMESTR("read_real_complex"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex)}, + {__Pyx_NAMESTR("read_char"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char)}, + {__Pyx_NAMESTR("read_cells"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells)}, + {__Pyx_NAMESTR("read_fieldnames"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_fieldnames, METH_NOARGS, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_fieldnames)}, + {__Pyx_NAMESTR("read_struct"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct)}, + {__Pyx_NAMESTR("read_opaque"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque)}, + {0, 0, 0, 0} +}; + +static struct PyMemberDef __pyx_members_5scipy_2io_6matlab_10mio5_utils_VarReader5[] = { + {(char *)"is_swapped", T_INT, offsetof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5, 
is_swapped), 0, 0}, + {(char *)"little_endian", T_INT, offsetof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5, little_endian), 0, 0}, + {0, 0, 0, 0, 0} +}; + +static PyNumberMethods __pyx_tp_as_number_VarReader5 = { + 0, /*nb_add*/ + 0, /*nb_subtract*/ + 0, /*nb_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_divide*/ + #endif + 0, /*nb_remainder*/ + 0, /*nb_divmod*/ + 0, /*nb_power*/ + 0, /*nb_negative*/ + 0, /*nb_positive*/ + 0, /*nb_absolute*/ + 0, /*nb_nonzero*/ + 0, /*nb_invert*/ + 0, /*nb_lshift*/ + 0, /*nb_rshift*/ + 0, /*nb_and*/ + 0, /*nb_xor*/ + 0, /*nb_or*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_coerce*/ + #endif + 0, /*nb_int*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*reserved*/ + #else + 0, /*nb_long*/ + #endif + 0, /*nb_float*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_oct*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*nb_hex*/ + #endif + 0, /*nb_inplace_add*/ + 0, /*nb_inplace_subtract*/ + 0, /*nb_inplace_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_inplace_divide*/ + #endif + 0, /*nb_inplace_remainder*/ + 0, /*nb_inplace_power*/ + 0, /*nb_inplace_lshift*/ + 0, /*nb_inplace_rshift*/ + 0, /*nb_inplace_and*/ + 0, /*nb_inplace_xor*/ + 0, /*nb_inplace_or*/ + 0, /*nb_floor_divide*/ + 0, /*nb_true_divide*/ + 0, /*nb_inplace_floor_divide*/ + 0, /*nb_inplace_true_divide*/ + #if (PY_MAJOR_VERSION >= 3) || (Py_TPFLAGS_DEFAULT & Py_TPFLAGS_HAVE_INDEX) + 0, /*nb_index*/ + #endif +}; + +static PySequenceMethods __pyx_tp_as_sequence_VarReader5 = { + 0, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /*sq_ass_item*/ + 0, /*sq_ass_slice*/ + 0, /*sq_contains*/ + 0, /*sq_inplace_concat*/ + 0, /*sq_inplace_repeat*/ +}; + +static PyMappingMethods __pyx_tp_as_mapping_VarReader5 = { + 0, /*mp_length*/ + 0, /*mp_subscript*/ + 0, /*mp_ass_subscript*/ +}; + +static PyBufferProcs __pyx_tp_as_buffer_VarReader5 = { + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getreadbuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getwritebuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getsegcount*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getcharbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_getbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_releasebuffer*/ + #endif +}; + +PyTypeObject __pyx_type_5scipy_2io_6matlab_10mio5_utils_VarReader5 = { + PyVarObject_HEAD_INIT(0, 0) + __Pyx_NAMESTR("scipy.io.matlab.mio5_utils.VarReader5"), /*tp_name*/ + sizeof(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5), /*tp_basicsize*/ + 0, /*tp_itemsize*/ + __pyx_tp_dealloc_5scipy_2io_6matlab_10mio5_utils_VarReader5, /*tp_dealloc*/ + 0, /*tp_print*/ + 0, /*tp_getattr*/ + 0, /*tp_setattr*/ + 0, /*tp_compare*/ + 0, /*tp_repr*/ + &__pyx_tp_as_number_VarReader5, /*tp_as_number*/ + &__pyx_tp_as_sequence_VarReader5, /*tp_as_sequence*/ + &__pyx_tp_as_mapping_VarReader5, /*tp_as_mapping*/ + 0, /*tp_hash*/ + 0, /*tp_call*/ + 0, /*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + &__pyx_tp_as_buffer_VarReader5, /*tp_as_buffer*/ + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ + 0, /*tp_doc*/ + __pyx_tp_traverse_5scipy_2io_6matlab_10mio5_utils_VarReader5, /*tp_traverse*/ + __pyx_tp_clear_5scipy_2io_6matlab_10mio5_utils_VarReader5, /*tp_clear*/ + 0, /*tp_richcompare*/ + 0, /*tp_weaklistoffset*/ + 0, /*tp_iter*/ + 0, /*tp_iternext*/ + __pyx_methods_5scipy_2io_6matlab_10mio5_utils_VarReader5, /*tp_methods*/ + __pyx_members_5scipy_2io_6matlab_10mio5_utils_VarReader5, 
/*tp_members*/ + 0, /*tp_getset*/ + 0, /*tp_base*/ + 0, /*tp_dict*/ + 0, /*tp_descr_get*/ + 0, /*tp_descr_set*/ + 0, /*tp_dictoffset*/ + 0, /*tp_init*/ + 0, /*tp_alloc*/ + __pyx_tp_new_5scipy_2io_6matlab_10mio5_utils_VarReader5, /*tp_new*/ + 0, /*tp_free*/ + 0, /*tp_is_gc*/ + 0, /*tp_bases*/ + 0, /*tp_mro*/ + 0, /*tp_cache*/ + 0, /*tp_subclasses*/ + 0, /*tp_weaklist*/ + 0, /*tp_del*/ + #if PY_VERSION_HEX >= 0x02060000 + 0, /*tp_version_tag*/ + #endif +}; + +static struct PyMethodDef __pyx_methods[] = { + {__Pyx_NAMESTR("byteswap_u4"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_10mio5_utils_byteswap_u4, METH_O, __Pyx_DOCSTR(0)}, + {0, 0, 0, 0} +}; + +static void __pyx_init_filenames(void); /*proto*/ + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + PyModuleDef_HEAD_INIT, + __Pyx_NAMESTR("mio5_utils"), + __Pyx_DOCSTR(__pyx_k_17), /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_s_1, __pyx_k_1, sizeof(__pyx_k_1), 0, 0, 1, 0}, + {&__pyx_kp_s_10, __pyx_k_10, sizeof(__pyx_k_10), 0, 0, 1, 0}, + {&__pyx_kp_u_11, __pyx_k_11, sizeof(__pyx_k_11), 0, 1, 0, 0}, + {&__pyx_kp_u_12, __pyx_k_12, sizeof(__pyx_k_12), 0, 1, 0, 0}, + {&__pyx_kp_u_13, __pyx_k_13, sizeof(__pyx_k_13), 0, 1, 0, 0}, + {&__pyx_kp_u_14, __pyx_k_14, sizeof(__pyx_k_14), 0, 1, 0, 0}, + {&__pyx_kp_u_15, __pyx_k_15, sizeof(__pyx_k_15), 0, 1, 0, 0}, + {&__pyx_kp_u_16, __pyx_k_16, sizeof(__pyx_k_16), 0, 1, 0, 0}, + {&__pyx_n_s_18, __pyx_k_18, sizeof(__pyx_k_18), 0, 0, 1, 1}, + {&__pyx_n_s_19, __pyx_k_19, sizeof(__pyx_k_19), 0, 0, 1, 1}, + {&__pyx_kp_s_2, __pyx_k_2, sizeof(__pyx_k_2), 0, 0, 1, 0}, + {&__pyx_n_s_20, __pyx_k_20, sizeof(__pyx_k_20), 0, 0, 1, 1}, + {&__pyx_n_s_21, __pyx_k_21, sizeof(__pyx_k_21), 0, 0, 1, 1}, + {&__pyx_n_s_22, __pyx_k_22, sizeof(__pyx_k_22), 0, 0, 1, 1}, + {&__pyx_kp_s_23, __pyx_k_23, sizeof(__pyx_k_23), 0, 0, 1, 0}, + {&__pyx_kp_s_24, __pyx_k_24, sizeof(__pyx_k_24), 0, 0, 1, 0}, + {&__pyx_kp_u_25, __pyx_k_25, sizeof(__pyx_k_25), 0, 1, 0, 0}, + {&__pyx_kp_u_26, __pyx_k_26, sizeof(__pyx_k_26), 0, 1, 0, 0}, + {&__pyx_kp_u_27, __pyx_k_27, sizeof(__pyx_k_27), 0, 1, 0, 0}, + {&__pyx_kp_u_28, __pyx_k_28, sizeof(__pyx_k_28), 0, 1, 0, 0}, + {&__pyx_kp_u_29, __pyx_k_29, sizeof(__pyx_k_29), 0, 1, 0, 0}, + {&__pyx_kp_s_3, __pyx_k_3, sizeof(__pyx_k_3), 0, 0, 1, 0}, + {&__pyx_kp_u_30, __pyx_k_30, sizeof(__pyx_k_30), 0, 1, 0, 0}, + {&__pyx_kp_u_31, __pyx_k_31, sizeof(__pyx_k_31), 0, 1, 0, 0}, + {&__pyx_kp_u_32, __pyx_k_32, sizeof(__pyx_k_32), 0, 1, 0, 0}, + {&__pyx_kp_u_33, __pyx_k_33, sizeof(__pyx_k_33), 0, 1, 0, 0}, + {&__pyx_kp_u_34, __pyx_k_34, sizeof(__pyx_k_34), 0, 1, 0, 0}, + {&__pyx_kp_u_35, __pyx_k_35, sizeof(__pyx_k_35), 0, 1, 0, 0}, + {&__pyx_kp_u_36, __pyx_k_36, sizeof(__pyx_k_36), 0, 1, 0, 0}, + {&__pyx_kp_s_4, __pyx_k_4, sizeof(__pyx_k_4), 0, 0, 1, 0}, + {&__pyx_kp_s_5, __pyx_k_5, sizeof(__pyx_k_5), 0, 0, 1, 0}, + {&__pyx_kp_s_6, __pyx_k_6, sizeof(__pyx_k_6), 0, 0, 1, 0}, + {&__pyx_kp_s_7, __pyx_k_7, sizeof(__pyx_k_7), 0, 0, 1, 0}, + {&__pyx_kp_s_8, __pyx_k_8, sizeof(__pyx_k_8), 0, 0, 1, 0}, + {&__pyx_kp_s_9, __pyx_k_9, sizeof(__pyx_k_9), 0, 0, 1, 0}, + {&__pyx_n_s__F, __pyx_k__F, sizeof(__pyx_k__F), 0, 0, 1, 1}, + {&__pyx_n_s__MatlabFunction, __pyx_k__MatlabFunction, sizeof(__pyx_k__MatlabFunction), 0, 0, 1, 1}, + {&__pyx_n_s__MatlabObject, __pyx_k__MatlabObject, sizeof(__pyx_k__MatlabObject), 0, 0, 1, 1}, 
+ {&__pyx_n_s__MatlabOpaque, __pyx_k__MatlabOpaque, sizeof(__pyx_k__MatlabOpaque), 0, 0, 1, 1}, + {&__pyx_n_s__OPAQUE_DTYPE, __pyx_k__OPAQUE_DTYPE, sizeof(__pyx_k__OPAQUE_DTYPE), 0, 0, 1, 1}, + {&__pyx_n_s__RuntimeError, __pyx_k__RuntimeError, sizeof(__pyx_k__RuntimeError), 0, 0, 1, 1}, + {&__pyx_n_s__T, __pyx_k__T, sizeof(__pyx_k__T), 0, 0, 1, 1}, + {&__pyx_n_s__TypeError, __pyx_k__TypeError, sizeof(__pyx_k__TypeError), 0, 0, 1, 1}, + {&__pyx_n_s__U1_dtype, __pyx_k__U1_dtype, sizeof(__pyx_k__U1_dtype), 0, 0, 1, 1}, + {&__pyx_n_s__ValueError, __pyx_k__ValueError, sizeof(__pyx_k__ValueError), 0, 0, 1, 1}, + {&__pyx_n_s__VarReader5, __pyx_k__VarReader5, sizeof(__pyx_k__VarReader5), 0, 0, 1, 1}, + {&__pyx_n_s____dict__, __pyx_k____dict__, sizeof(__pyx_k____dict__), 0, 0, 1, 1}, + {&__pyx_n_s____main__, __pyx_k____main__, sizeof(__pyx_k____main__), 0, 0, 1, 1}, + {&__pyx_n_s____test__, __pyx_k____test__, sizeof(__pyx_k____test__), 0, 0, 1, 1}, + {&__pyx_n_s___fieldnames, __pyx_k___fieldnames, sizeof(__pyx_k___fieldnames), 0, 0, 1, 1}, + {&__pyx_n_s__arr, __pyx_k__arr, sizeof(__pyx_k__arr), 0, 0, 1, 1}, + {&__pyx_n_s__array, __pyx_k__array, sizeof(__pyx_k__array), 0, 0, 1, 1}, + {&__pyx_n_s__array_from_header, __pyx_k__array_from_header, sizeof(__pyx_k__array_from_header), 0, 0, 1, 1}, + {&__pyx_n_s__ascii, __pyx_k__ascii, sizeof(__pyx_k__ascii), 0, 0, 1, 1}, + {&__pyx_n_s__astype, __pyx_k__astype, sizeof(__pyx_k__astype), 0, 0, 1, 1}, + {&__pyx_n_s__base, __pyx_k__base, sizeof(__pyx_k__base), 0, 0, 1, 1}, + {&__pyx_n_s__basestring, __pyx_k__basestring, sizeof(__pyx_k__basestring), 0, 0, 1, 1}, + {&__pyx_n_s__bool, __pyx_k__bool, sizeof(__pyx_k__bool), 0, 0, 1, 1}, + {&__pyx_n_s__bool_dtype, __pyx_k__bool_dtype, sizeof(__pyx_k__bool_dtype), 0, 0, 1, 1}, + {&__pyx_n_s__buf, __pyx_k__buf, sizeof(__pyx_k__buf), 0, 0, 1, 1}, + {&__pyx_n_s__buffer, __pyx_k__buffer, sizeof(__pyx_k__buffer), 0, 0, 1, 1}, + {&__pyx_n_s__byte_order, __pyx_k__byte_order, sizeof(__pyx_k__byte_order), 0, 0, 1, 1}, + {&__pyx_n_s__byteorder, __pyx_k__byteorder, sizeof(__pyx_k__byteorder), 0, 0, 1, 1}, + {&__pyx_n_s__chars_as_strings, __pyx_k__chars_as_strings, sizeof(__pyx_k__chars_as_strings), 0, 0, 1, 1}, + {&__pyx_n_s__chars_to_strings, __pyx_k__chars_to_strings, sizeof(__pyx_k__chars_to_strings), 0, 0, 1, 1}, + {&__pyx_n_s__class_dtypes, __pyx_k__class_dtypes, sizeof(__pyx_k__class_dtypes), 0, 0, 1, 1}, + {&__pyx_n_s__codecs, __pyx_k__codecs, sizeof(__pyx_k__codecs), 0, 0, 1, 1}, + {&__pyx_n_s__copy, __pyx_k__copy, sizeof(__pyx_k__copy), 0, 0, 1, 1}, + {&__pyx_n_s__cread_fieldnames, __pyx_k__cread_fieldnames, sizeof(__pyx_k__cread_fieldnames), 0, 0, 1, 1}, + {&__pyx_n_s__cread_full_tag, __pyx_k__cread_full_tag, sizeof(__pyx_k__cread_full_tag), 0, 0, 1, 1}, + {&__pyx_n_s__cread_tag, __pyx_k__cread_tag, sizeof(__pyx_k__cread_tag), 0, 0, 1, 1}, + {&__pyx_n_s__csc_matrix, __pyx_k__csc_matrix, sizeof(__pyx_k__csc_matrix), 0, 0, 1, 1}, + {&__pyx_n_s__cstream, __pyx_k__cstream, sizeof(__pyx_k__cstream), 0, 0, 1, 1}, + {&__pyx_n_s__decode, __pyx_k__decode, sizeof(__pyx_k__decode), 0, 0, 1, 1}, + {&__pyx_n_s__descr, __pyx_k__descr, sizeof(__pyx_k__descr), 0, 0, 1, 1}, + {&__pyx_n_s__dims, __pyx_k__dims, sizeof(__pyx_k__dims), 0, 0, 1, 1}, + {&__pyx_n_s__dims_ptr, __pyx_k__dims_ptr, sizeof(__pyx_k__dims_ptr), 0, 0, 1, 1}, + {&__pyx_n_s__dtype, __pyx_k__dtype, sizeof(__pyx_k__dtype), 0, 0, 1, 1}, + {&__pyx_n_s__dtypes, __pyx_k__dtypes, sizeof(__pyx_k__dtypes), 0, 0, 1, 1}, + {&__pyx_n_s__empty, __pyx_k__empty, 
sizeof(__pyx_k__empty), 0, 0, 1, 1}, + {&__pyx_n_s__fields, __pyx_k__fields, sizeof(__pyx_k__fields), 0, 0, 1, 1}, + {&__pyx_n_s__format, __pyx_k__format, sizeof(__pyx_k__format), 0, 0, 1, 1}, + {&__pyx_n_s__header, __pyx_k__header, sizeof(__pyx_k__header), 0, 0, 1, 1}, + {&__pyx_n_s__is_complex, __pyx_k__is_complex, sizeof(__pyx_k__is_complex), 0, 0, 1, 1}, + {&__pyx_n_s__is_global, __pyx_k__is_global, sizeof(__pyx_k__is_global), 0, 0, 1, 1}, + {&__pyx_n_s__is_logical, __pyx_k__is_logical, sizeof(__pyx_k__is_logical), 0, 0, 1, 1}, + {&__pyx_n_s__is_swapped, __pyx_k__is_swapped, sizeof(__pyx_k__is_swapped), 0, 0, 1, 1}, + {&__pyx_n_s__items, __pyx_k__items, sizeof(__pyx_k__items), 0, 0, 1, 1}, + {&__pyx_n_s__itemsize, __pyx_k__itemsize, sizeof(__pyx_k__itemsize), 0, 0, 1, 1}, + {&__pyx_n_s__little, __pyx_k__little, sizeof(__pyx_k__little), 0, 0, 1, 1}, + {&__pyx_n_s__little_endian, __pyx_k__little_endian, sizeof(__pyx_k__little_endian), 0, 0, 1, 1}, + {&__pyx_n_s__mat_dtype, __pyx_k__mat_dtype, sizeof(__pyx_k__mat_dtype), 0, 0, 1, 1}, + {&__pyx_n_s__mat_stream, __pyx_k__mat_stream, sizeof(__pyx_k__mat_stream), 0, 0, 1, 1}, + {&__pyx_n_s__mat_struct, __pyx_k__mat_struct, sizeof(__pyx_k__mat_struct), 0, 0, 1, 1}, + {&__pyx_n_s__mclass, __pyx_k__mclass, sizeof(__pyx_k__mclass), 0, 0, 1, 1}, + {&__pyx_n_s__mio5p, __pyx_k__mio5p, sizeof(__pyx_k__mio5p), 0, 0, 1, 1}, + {&__pyx_n_s__miob, __pyx_k__miob, sizeof(__pyx_k__miob), 0, 0, 1, 1}, + {&__pyx_n_s__n_dims, __pyx_k__n_dims, sizeof(__pyx_k__n_dims), 0, 0, 1, 1}, + {&__pyx_n_s__name, __pyx_k__name, sizeof(__pyx_k__name), 0, 0, 1, 1}, + {&__pyx_n_s__names, __pyx_k__names, sizeof(__pyx_k__names), 0, 0, 1, 1}, + {&__pyx_n_s__native_code, __pyx_k__native_code, sizeof(__pyx_k__native_code), 0, 0, 1, 1}, + {&__pyx_n_s__ndarray, __pyx_k__ndarray, sizeof(__pyx_k__ndarray), 0, 0, 1, 1}, + {&__pyx_n_s__ndim, __pyx_k__ndim, sizeof(__pyx_k__ndim), 0, 0, 1, 1}, + {&__pyx_n_s__np, __pyx_k__np, sizeof(__pyx_k__np), 0, 0, 1, 1}, + {&__pyx_n_s__numpy, __pyx_k__numpy, sizeof(__pyx_k__numpy), 0, 0, 1, 1}, + {&__pyx_n_s__nzmax, __pyx_k__nzmax, sizeof(__pyx_k__nzmax), 0, 0, 1, 1}, + {&__pyx_n_s__obj, __pyx_k__obj, sizeof(__pyx_k__obj), 0, 0, 1, 1}, + {&__pyx_n_s__object, __pyx_k__object, sizeof(__pyx_k__object), 0, 0, 1, 1}, + {&__pyx_n_s__order, __pyx_k__order, sizeof(__pyx_k__order), 0, 0, 1, 1}, + {&__pyx_n_s__preader, __pyx_k__preader, sizeof(__pyx_k__preader), 0, 0, 1, 1}, + {&__pyx_n_s__process, __pyx_k__process, sizeof(__pyx_k__process), 0, 0, 1, 1}, + {&__pyx_n_s__pycopy, __pyx_k__pycopy, sizeof(__pyx_k__pycopy), 0, 0, 1, 1}, + {&__pyx_n_s__range, __pyx_k__range, sizeof(__pyx_k__range), 0, 0, 1, 1}, + {&__pyx_n_s__read_cells, __pyx_k__read_cells, sizeof(__pyx_k__read_cells), 0, 0, 1, 1}, + {&__pyx_n_s__read_char, __pyx_k__read_char, sizeof(__pyx_k__read_char), 0, 0, 1, 1}, + {&__pyx_n_s__read_element, __pyx_k__read_element, sizeof(__pyx_k__read_element), 0, 0, 1, 1}, + {&__pyx_n_s__read_element_into, __pyx_k__read_element_into, sizeof(__pyx_k__read_element_into), 0, 0, 1, 1}, + {&__pyx_n_s__read_fieldnames, __pyx_k__read_fieldnames, sizeof(__pyx_k__read_fieldnames), 0, 0, 1, 1}, + {&__pyx_n_s__read_full_tag, __pyx_k__read_full_tag, sizeof(__pyx_k__read_full_tag), 0, 0, 1, 1}, + {&__pyx_n_s__read_header, __pyx_k__read_header, sizeof(__pyx_k__read_header), 0, 0, 1, 1}, + {&__pyx_n_s__read_int8_string, __pyx_k__read_int8_string, sizeof(__pyx_k__read_int8_string), 0, 0, 1, 1}, + {&__pyx_n_s__read_into, __pyx_k__read_into, sizeof(__pyx_k__read_into), 0, 0, 1, 
1}, + {&__pyx_n_s__read_into_int32s, __pyx_k__read_into_int32s, sizeof(__pyx_k__read_into_int32s), 0, 0, 1, 1}, + {&__pyx_n_s__read_mi_matrix, __pyx_k__read_mi_matrix, sizeof(__pyx_k__read_mi_matrix), 0, 0, 1, 1}, + {&__pyx_n_s__read_numeric, __pyx_k__read_numeric, sizeof(__pyx_k__read_numeric), 0, 0, 1, 1}, + {&__pyx_n_s__read_opaque, __pyx_k__read_opaque, sizeof(__pyx_k__read_opaque), 0, 0, 1, 1}, + {&__pyx_n_s__read_real_complex, __pyx_k__read_real_complex, sizeof(__pyx_k__read_real_complex), 0, 0, 1, 1}, + {&__pyx_n_s__read_sparse, __pyx_k__read_sparse, sizeof(__pyx_k__read_sparse), 0, 0, 1, 1}, + {&__pyx_n_s__read_string, __pyx_k__read_string, sizeof(__pyx_k__read_string), 0, 0, 1, 1}, + {&__pyx_n_s__read_struct, __pyx_k__read_struct, sizeof(__pyx_k__read_struct), 0, 0, 1, 1}, + {&__pyx_n_s__read_tag, __pyx_k__read_tag, sizeof(__pyx_k__read_tag), 0, 0, 1, 1}, + {&__pyx_n_s__readonly, __pyx_k__readonly, sizeof(__pyx_k__readonly), 0, 0, 1, 1}, + {&__pyx_n_s__reshape, __pyx_k__reshape, sizeof(__pyx_k__reshape), 0, 0, 1, 1}, + {&__pyx_n_s__s0, __pyx_k__s0, sizeof(__pyx_k__s0), 0, 0, 1, 1}, + {&__pyx_n_s__s1, __pyx_k__s1, sizeof(__pyx_k__s1), 0, 0, 1, 1}, + {&__pyx_n_s__s2, __pyx_k__s2, sizeof(__pyx_k__s2), 0, 0, 1, 1}, + {&__pyx_n_s__scipy, __pyx_k__scipy, sizeof(__pyx_k__scipy), 0, 0, 1, 1}, + {&__pyx_n_s__seek, __pyx_k__seek, sizeof(__pyx_k__seek), 0, 0, 1, 1}, + {&__pyx_n_s__set_stream, __pyx_k__set_stream, sizeof(__pyx_k__set_stream), 0, 0, 1, 1}, + {&__pyx_n_s__shape, __pyx_k__shape, sizeof(__pyx_k__shape), 0, 0, 1, 1}, + {&__pyx_n_s__size_from_header, __pyx_k__size_from_header, sizeof(__pyx_k__size_from_header), 0, 0, 1, 1}, + {&__pyx_n_s__sparse, __pyx_k__sparse, sizeof(__pyx_k__sparse), 0, 0, 1, 1}, + {&__pyx_n_s__squeeze_element, __pyx_k__squeeze_element, sizeof(__pyx_k__squeeze_element), 0, 0, 1, 1}, + {&__pyx_n_s__squeeze_me, __pyx_k__squeeze_me, sizeof(__pyx_k__squeeze_me), 0, 0, 1, 1}, + {&__pyx_n_s__strides, __pyx_k__strides, sizeof(__pyx_k__strides), 0, 0, 1, 1}, + {&__pyx_n_s__struct_as_record, __pyx_k__struct_as_record, sizeof(__pyx_k__struct_as_record), 0, 0, 1, 1}, + {&__pyx_n_s__suboffsets, __pyx_k__suboffsets, sizeof(__pyx_k__suboffsets), 0, 0, 1, 1}, + {&__pyx_n_s__swapped_code, __pyx_k__swapped_code, sizeof(__pyx_k__swapped_code), 0, 0, 1, 1}, + {&__pyx_n_s__sys, __pyx_k__sys, sizeof(__pyx_k__sys), 0, 0, 1, 1}, + {&__pyx_n_s__sys_is_le, __pyx_k__sys_is_le, sizeof(__pyx_k__sys_is_le), 0, 0, 1, 1}, + {&__pyx_n_s__tostring, __pyx_k__tostring, sizeof(__pyx_k__tostring), 0, 0, 1, 1}, + {&__pyx_n_s__type_num, __pyx_k__type_num, sizeof(__pyx_k__type_num), 0, 0, 1, 1}, + {&__pyx_n_s__uint16_codec, __pyx_k__uint16_codec, sizeof(__pyx_k__uint16_codec), 0, 0, 1, 1}, + {&__pyx_n_s__uint16_len, __pyx_k__uint16_len, sizeof(__pyx_k__uint16_len), 0, 0, 1, 1}, + {&__pyx_n_s__uint8, __pyx_k__uint8, sizeof(__pyx_k__uint8), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_basestring = __Pyx_GetName(__pyx_b, __pyx_n_s__basestring); if (!__pyx_builtin_basestring) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 170; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_builtin_ValueError = __Pyx_GetName(__pyx_b, __pyx_n_s__ValueError); if (!__pyx_builtin_ValueError) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_builtin_TypeError = __Pyx_GetName(__pyx_b, __pyx_n_s__TypeError); if (!__pyx_builtin_TypeError) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 429; __pyx_clineno = 
__LINE__; goto __pyx_L1_error;} + __pyx_builtin_range = __Pyx_GetName(__pyx_b, __pyx_n_s__range); if (!__pyx_builtin_range) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 455; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_builtin_object = __Pyx_GetName(__pyx_b, __pyx_n_s__object); if (!__pyx_builtin_object) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 775; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_builtin_RuntimeError = __Pyx_GetName(__pyx_b, __pyx_n_s__RuntimeError); if (!__pyx_builtin_RuntimeError) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_15 = PyInt_FromLong(15); if (unlikely(!__pyx_int_15)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initmio5_utils(void); /*proto*/ +PyMODINIT_FUNC initmio5_utils(void) +#else +PyMODINIT_FUNC PyInit_mio5_utils(void); /*proto*/ +PyMODINIT_FUNC PyInit_mio5_utils(void) +#endif +{ + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + PyObject *__pyx_t_5 = NULL; + #if CYTHON_REFNANNY + void* __pyx_refnanny = NULL; + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + __pyx_refnanny = __Pyx_RefNanny->SetupContext("PyMODINIT_FUNC PyInit_mio5_utils(void)", __LINE__, __FILE__); + #endif + __pyx_init_filenames(); + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #if PY_MAJOR_VERSION < 3 + __pyx_empty_bytes = PyString_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #else + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? 
*/ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4(__Pyx_NAMESTR("mio5_utils"), __pyx_methods, __Pyx_DOCSTR(__pyx_k_17), 0, PYTHON_API_VERSION); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (!__pyx_m) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + #if PY_MAJOR_VERSION < 3 + Py_INCREF(__pyx_m); + #endif + __pyx_b = PyImport_AddModule(__Pyx_NAMESTR(__Pyx_BUILTIN_MODULE_NAME)); + if (!__pyx_b) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + if (__Pyx_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + /*--- Initialize various global constants etc. ---*/ + if (unlikely(__Pyx_InitGlobals() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_module_is_main_scipy__io__matlab__mio5_utils) { + if (__Pyx_SetAttrString(__pyx_m, "__name__", __pyx_n_s____main__) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + } + /*--- Builtin init code ---*/ + if (unlikely(__Pyx_InitCachedBuiltins() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Global init code ---*/ + __pyx_v_5scipy_2io_6matlab_10mio5_utils_OPAQUE_DTYPE = ((PyArray_Descr *)Py_None); Py_INCREF(Py_None); + /*--- Function export code ---*/ + /*--- Type init code ---*/ + if (PyType_Ready(&__pyx_type_5scipy_2io_6matlab_10mio5_utils_VarHeader5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 115; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetAttrString(__pyx_m, "VarHeader5", (PyObject *)&__pyx_type_5scipy_2io_6matlab_10mio5_utils_VarHeader5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 115; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarHeader5 = &__pyx_type_5scipy_2io_6matlab_10mio5_utils_VarHeader5; + __pyx_vtabptr_5scipy_2io_6matlab_10mio5_utils_VarReader5 = &__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5; + #if PY_MAJOR_VERSION >= 3 + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.cread_tag = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *, char *))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_tag; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_element = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *, void **, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_element_into = (void (*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *, void *))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element_into; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_numeric = (PyArrayObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric 
*__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_int8_string = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_int8_string; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_into_int32s = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_int32_t *))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_into_int32s; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.cread_full_tag = (void (*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, __pyx_t_5numpy_uint32_t *, __pyx_t_5numpy_uint32_t *))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_full_tag; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_header = (struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.size_from_header = (size_t (*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_size_from_header; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_mi_matrix = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_mi_matrix *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_mi_matrix; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.array_from_header = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_real_complex = (PyArrayObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_sparse = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_sparse; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_char = (PyArrayObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_cells = (PyArrayObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.cread_fieldnames = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, int 
*))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_fieldnames; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_struct = (PyArrayObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct; + __pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_opaque = (PyArrayObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarReader5 *, struct __pyx_obj_5scipy_2io_6matlab_10mio5_utils_VarHeader5 *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque; + #else + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.cread_tag = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_tag; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_element = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_element_into = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_element_into; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_numeric = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_numeric; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_int8_string = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_int8_string; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_into_int32s = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_into_int32s; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.cread_full_tag = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_full_tag; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_header = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_header; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.size_from_header = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_size_from_header; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_mi_matrix = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_mi_matrix; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.array_from_header = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_array_from_header; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_real_complex = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_real_complex; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_sparse = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_sparse; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_char = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_char; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_cells = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_cells; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.cread_fieldnames = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_cread_fieldnames; + 
*(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_struct = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_struct; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_10mio5_utils_VarReader5.read_opaque = (void(*)(void))__pyx_f_5scipy_2io_6matlab_10mio5_utils_10VarReader5_read_opaque; + #endif + if (PyType_Ready(&__pyx_type_5scipy_2io_6matlab_10mio5_utils_VarReader5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 127; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetVtable(__pyx_type_5scipy_2io_6matlab_10mio5_utils_VarReader5.tp_dict, __pyx_vtabptr_5scipy_2io_6matlab_10mio5_utils_VarReader5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 127; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetAttrString(__pyx_m, "VarReader5", (PyObject *)&__pyx_type_5scipy_2io_6matlab_10mio5_utils_VarReader5) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 127; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5scipy_2io_6matlab_10mio5_utils_VarReader5 = &__pyx_type_5scipy_2io_6matlab_10mio5_utils_VarReader5; + /*--- Type import code ---*/ + __pyx_ptype_5numpy_dtype = __Pyx_ImportType("numpy", "dtype", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 148; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_flatiter = __Pyx_ImportType("numpy", "flatiter", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_broadcast = __Pyx_ImportType("numpy", "broadcast", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 162; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_ndarray = __Pyx_ImportType("numpy", "ndarray", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 171; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_ufunc = __Pyx_ImportType("numpy", "ufunc", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream = __Pyx_ImportType("scipy.io.matlab.streams", "GenericStream", sizeof(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream), 1); if (unlikely(!__pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream)) {__pyx_filename = __pyx_f[2]; __pyx_lineno = 3; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_GetVtable(__pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream->tp_dict, &__pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream) < 0) {__pyx_filename = __pyx_f[2]; __pyx_lineno = 3; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Function import code ---*/ + __pyx_t_1 = __Pyx_ImportModule("scipy.io.matlab.streams"); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_ImportFunction(__pyx_t_1, "make_stream", (void (**)(void))&__pyx_f_5scipy_2io_6matlab_7streams_make_stream, "struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *(PyObject *, int __pyx_skip_dispatch)") < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + Py_DECREF(__pyx_t_1); __pyx_t_1 = 0; + /*--- Execution code ---*/ + + /* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":15 + * ''' + * + * import sys # <<<<<<<<<<<<<< + * + * from copy import copy as pycopy + */ + __pyx_t_2 = __Pyx_Import(((PyObject *)__pyx_n_s__sys), 0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 15; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__sys, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 15; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":17 + * import sys + * + * from copy import copy as pycopy # <<<<<<<<<<<<<< + * + * from python cimport Py_INCREF, Py_DECREF + */ + __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__copy)); + PyList_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_n_s__copy)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__copy)); + __pyx_t_3 = __Pyx_Import(((PyObject *)__pyx_n_s__copy), ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__copy); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__pycopy, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":29 + * PyString_FromStringAndSize + * + * import numpy as np # <<<<<<<<<<<<<< + * cimport numpy as cnp + * + */ + __pyx_t_3 = __Pyx_Import(((PyObject *)__pyx_n_s__numpy), 0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 29; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__np, __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 29; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":48 + * # Numpy must be initialized before any code using the numpy C-API + * # directly + * cnp.import_array() # <<<<<<<<<<<<<< + * + * # Constant from numpy - max number of array dimensions + */ + import_array(); + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":58 + * + * cimport streams + * import scipy.io.matlab.miobase as miob # <<<<<<<<<<<<<< + * from scipy.io.matlab.mio_utils import squeeze_element, chars_to_strings + * import scipy.io.matlab.mio5_params as mio5p + */ + __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 58; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __Pyx_INCREF(((PyObject *)__pyx_n_s_19)); + PyList_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_n_s_19)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s_19)); + __pyx_t_2 = __Pyx_Import(((PyObject *)__pyx_n_s_18), ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 58; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__miob, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 58; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":59 + * cimport streams + * import scipy.io.matlab.miobase as miob + * from scipy.io.matlab.mio_utils import squeeze_element, chars_to_strings # <<<<<<<<<<<<<< + * import scipy.io.matlab.mio5_params as mio5p + * import scipy.sparse + */ + __pyx_t_2 = PyList_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 59; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__squeeze_element)); + PyList_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_n_s__squeeze_element)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__squeeze_element)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__chars_to_strings)); + PyList_SET_ITEM(__pyx_t_2, 1, ((PyObject *)__pyx_n_s__chars_to_strings)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__chars_to_strings)); + __pyx_t_3 = __Pyx_Import(((PyObject *)__pyx_n_s_20), ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 59; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__squeeze_element); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 59; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__squeeze_element, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 59; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__chars_to_strings); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 59; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__chars_to_strings, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 59; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":60 + * import scipy.io.matlab.miobase as miob + * from scipy.io.matlab.mio_utils import squeeze_element, chars_to_strings + * import scipy.io.matlab.mio5_params as mio5p # <<<<<<<<<<<<<< + * import scipy.sparse + * + */ + __pyx_t_3 = PyList_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 60; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __Pyx_INCREF(((PyObject *)__pyx_n_s_19)); + PyList_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_n_s_19)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s_19)); + __pyx_t_2 = __Pyx_Import(((PyObject *)__pyx_n_s_21), ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 60; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__mio5p, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 60; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* 
"/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":61 + * from scipy.io.matlab.mio_utils import squeeze_element, chars_to_strings + * import scipy.io.matlab.mio5_params as mio5p + * import scipy.sparse # <<<<<<<<<<<<<< + * + * + */ + __pyx_t_2 = __Pyx_Import(((PyObject *)__pyx_n_s_22), 0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 61; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__scipy, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 61; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":101 + * mxOBJECT_CLASS_FROM_MATRIX_H = 18 + * + * sys_is_le = sys.byteorder == 'little' # <<<<<<<<<<<<<< + * native_code = sys_is_le and '<' or '>' + * swapped_code = sys_is_le and '>' or '<' + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__sys); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 101; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__byteorder); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 101; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_RichCompare(__pyx_t_3, ((PyObject *)__pyx_n_s__little), Py_EQ); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 101; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__sys_is_le, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 101; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":102 + * + * sys_is_le = sys.byteorder == 'little' + * native_code = sys_is_le and '<' or '>' # <<<<<<<<<<<<<< + * swapped_code = sys_is_le and '>' or '<' + * + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__sys_is_le); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_t_4) { + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_INCREF(((PyObject *)__pyx_kp_s_23)); + __pyx_t_3 = __pyx_kp_s_23; + } else { + __pyx_t_3 = __pyx_t_2; + __pyx_t_2 = 0; + } + __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (!__pyx_t_4) { + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_INCREF(((PyObject *)__pyx_kp_s_24)); + __pyx_t_2 = __pyx_kp_s_24; + } else { + __pyx_t_2 = __pyx_t_3; + __pyx_t_3 = 0; + } + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__native_code, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":103 + * sys_is_le = sys.byteorder == 'little' + * native_code = sys_is_le and '<' or '>' + * swapped_code = sys_is_le and '>' or '<' # <<<<<<<<<<<<<< + * + * cdef cnp.dtype OPAQUE_DTYPE = mio5p.OPAQUE_DTYPE + */ + 
__pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__sys_is_le); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 103; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 103; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_t_4) { + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_INCREF(((PyObject *)__pyx_kp_s_24)); + __pyx_t_3 = __pyx_kp_s_24; + } else { + __pyx_t_3 = __pyx_t_2; + __pyx_t_2 = 0; + } + __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 103; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (!__pyx_t_4) { + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_INCREF(((PyObject *)__pyx_kp_s_23)); + __pyx_t_2 = __pyx_kp_s_23; + } else { + __pyx_t_2 = __pyx_t_3; + __pyx_t_3 = 0; + } + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__swapped_code, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 103; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":105 + * swapped_code = sys_is_le and '>' or '<' + * + * cdef cnp.dtype OPAQUE_DTYPE = mio5p.OPAQUE_DTYPE # <<<<<<<<<<<<<< + * + * + */ + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__mio5p); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 105; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__OPAQUE_DTYPE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 105; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 105; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_v_5scipy_2io_6matlab_10mio5_utils_OPAQUE_DTYPE)); + __Pyx_DECREF(((PyObject *)__pyx_v_5scipy_2io_6matlab_10mio5_utils_OPAQUE_DTYPE)); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_v_5scipy_2io_6matlab_10mio5_utils_OPAQUE_DTYPE = ((PyArray_Descr *)__pyx_t_3); + __pyx_t_3 = 0; + + /* "/home/mb312/scipybuild/scipy/scipy/io/matlab/mio5_utils.pyx":1 + * ''' Cython mio5 utility routines (-*- python -*- like) # <<<<<<<<<<<<<< + * + * ''' + */ + __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__set_stream); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_25), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, 
__pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_tag); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_26), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_numeric); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_27), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_full_tag); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_28), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_header); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_29), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__array_from_header); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_30), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_real_complex); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_31), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_char); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_32), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_cells); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_33), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_fieldnames); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, 
((PyObject *)__pyx_kp_u_34), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_struct); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_35), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__VarReader5); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__read_opaque); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_5, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_kp_u_36), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyObject_SetAttr(__pyx_m, __pyx_n_s____test__, ((PyObject *)__pyx_t_3)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + + /* "/home/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/python_type.pxd":2 + * + * cdef extern from "Python.h": # <<<<<<<<<<<<<< + * # The C structure of the objects used to describe built-in types. 
+ * + */ + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_5); + if (__pyx_m) { + __Pyx_AddTraceback("init scipy.io.matlab.mio5_utils"); + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init scipy.io.matlab.mio5_utils"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +static const char *__pyx_filenames[] = { + "mio5_utils.pyx", + "numpy.pxd", + "streams.pxd", +}; + +/* Runtime support code */ + +static void __pyx_init_filenames(void) { + __pyx_f = __pyx_filenames; +} + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword argument '%U'", func_name, kw_name); + #else + "%s() got multiple values for keyword argument '%s'", func_name, + PyString_AS_STRING(kw_name)); + #endif +} + +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *number, *more_or_less; + + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + number = (num_expected == 1) ? "" : "s"; + PyErr_Format(PyExc_TypeError, + #if PY_VERSION_HEX < 0x02050000 + "%s() takes %s %d positional argument%s (%d given)", + #else + "%s() takes %s %zd positional argument%s (%zd given)", + #endif + func_name, more_or_less, num_expected, number, num_found); +} + +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + } else { + #if PY_MAJOR_VERSION < 3 + if (unlikely(!PyString_CheckExact(key)) && unlikely(!PyString_Check(key))) { + #else + if (unlikely(!PyUnicode_CheckExact(key)) && unlikely(!PyUnicode_Check(key))) { + #endif + goto invalid_keyword_type; + } else { + for (name = first_kw_arg; *name; name++) { + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) break; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) break; + #endif + } + if (*name) { + values[name-argnames] = value; + } else { + /* unexpected keyword found */ + for (name=argnames; name != first_kw_arg; name++) { + if (**name == key) goto arg_passed_twice; + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) goto arg_passed_twice; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) goto arg_passed_twice; + #endif + } + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + } + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, **name); + goto bad; +invalid_keyword_type: + 
PyErr_Format(PyExc_TypeError, + "%s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%s() got an unexpected keyword argument '%s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { + PyErr_Format(PyExc_ValueError, + #if PY_VERSION_HEX < 0x02050000 + "need more than %d value%s to unpack", (int)index, + #else + "need more than %zd value%s to unpack", index, + #endif + (index == 1) ? "" : "s"); +} + +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void) { + PyErr_SetString(PyExc_ValueError, "too many values to unpack"); +} + +static PyObject *__Pyx_UnpackItem(PyObject *iter, Py_ssize_t index) { + PyObject *item; + if (!(item = PyIter_Next(iter))) { + if (!PyErr_Occurred()) { + __Pyx_RaiseNeedMoreValuesError(index); + } + } + return item; +} + +static int __Pyx_EndUnpack(PyObject *iter) { + PyObject *item; + if ((item = PyIter_Next(iter))) { + Py_DECREF(item); + __Pyx_RaiseTooManyValuesError(); + return -1; + } + else if (!PyErr_Occurred()) + return 0; + else + return -1; +} + +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { + if (unlikely(!type)) { + PyErr_Format(PyExc_SystemError, "Missing type object"); + return 0; + } + if (likely(PyObject_TypeCheck(obj, type))) + return 1; + PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", + Py_TYPE(obj)->tp_name, type->tp_name); + return 0; +} + + +static CYTHON_INLINE int __Pyx_IsLittleEndian(void) { + unsigned int n = 1; + return *(unsigned char*)(&n) != 0; +} + +typedef struct { + __Pyx_StructField root; + __Pyx_BufFmt_StackElem* head; + size_t fmt_offset; + int new_count, enc_count; + int is_complex; + char enc_type; + char packmode; +} __Pyx_BufFmt_Context; + +static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, + __Pyx_BufFmt_StackElem* stack, + __Pyx_TypeInfo* type) { + stack[0].field = &ctx->root; + stack[0].parent_offset = 0; + ctx->root.type = type; + ctx->root.name = "buffer dtype"; + ctx->root.offset = 0; + ctx->head = stack; + ctx->head->field = &ctx->root; + ctx->fmt_offset = 0; + ctx->head->parent_offset = 0; + ctx->packmode = '@'; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ctx->is_complex = 0; + while (type->typegroup == 'S') { + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = 0; + type = type->fields->type; + } +} + +static int __Pyx_BufFmt_ParseNumber(const char** ts) { + int count; + const char* t = *ts; + if (*t < '0' || *t > '9') { + return -1; + } else { + count = *t++ - '0'; + while (*t >= '0' && *t < '9') { + count *= 10; + count += *t++ - '0'; + } + } + *ts = t; + return count; +} + +static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { + char msg[] = {ch, 0}; + PyErr_Format(PyExc_ValueError, "Unexpected format string character: '%s'", msg); +} + +static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { + switch (ch) { + case 'b': return "'char'"; + case 'B': return "'unsigned char'"; + case 'h': return "'short'"; + case 'H': return "'unsigned short'"; + case 'i': return "'int'"; + case 'I': return "'unsigned int'"; + case 'l': return "'long'"; + case 'L': return "'unsigned long'"; + case 'q': return "'long long'"; + case 'Q': return "'unsigned long long'"; + case 'f': return (is_complex ? 
"'complex float'" : "'float'"); + case 'd': return (is_complex ? "'complex double'" : "'double'"); + case 'g': return (is_complex ? "'complex long double'" : "'long double'"); + case 'T': return "a struct"; + case 'O': return "Python object"; + case 'P': return "a pointer"; + case 0: return "end"; + default: return "unparseable format string"; + } +} + +static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': return 1; + case 'h': case 'H': return 2; + case 'i': case 'I': case 'l': case 'L': return 4; + case 'q': case 'Q': return 8; + case 'f': return (is_complex ? 8 : 4); + case 'd': return (is_complex ? 16 : 8); + case 'g': { + PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); + return 0; + } + case 'O': case 'P': return sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} + +static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { + switch (ch) { + case 'c': case 'b': case 'B': return 1; + case 'h': case 'H': return sizeof(short); + case 'i': case 'I': return sizeof(int); + case 'l': case 'L': return sizeof(long); + #ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(PY_LONG_LONG); + #endif + case 'f': return sizeof(float) * (is_complex ? 2 : 1); + case 'd': return sizeof(double) * (is_complex ? 2 : 1); + case 'g': return sizeof(long double) * (is_complex ? 2 : 1); + case 'O': case 'P': return sizeof(void*); + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} + +typedef struct { char c; short x; } __Pyx_st_short; +typedef struct { char c; int x; } __Pyx_st_int; +typedef struct { char c; long x; } __Pyx_st_long; +typedef struct { char c; float x; } __Pyx_st_float; +typedef struct { char c; double x; } __Pyx_st_double; +typedef struct { char c; long double x; } __Pyx_st_longdouble; +typedef struct { char c; void *x; } __Pyx_st_void_p; +#ifdef HAVE_LONG_LONG +typedef struct { char c; PY_LONG_LONG x; } __Pyx_s_long_long; +#endif + +static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, int is_complex) { + switch (ch) { + case '?': case 'c': case 'b': case 'B': return 1; + case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); + case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); + case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); +#ifdef HAVE_LONG_LONG + case 'q': case 'Q': return sizeof(__Pyx_s_long_long) - sizeof(PY_LONG_LONG); +#endif + case 'f': return sizeof(__Pyx_st_float) - sizeof(float); + case 'd': return sizeof(__Pyx_st_double) - sizeof(double); + case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); + case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); + default: + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } +} + +static size_t __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { + switch (ch) { + case 'c': case 'b': case 'h': case 'i': case 'l': case 'q': return 'I'; + case 'B': case 'H': case 'I': case 'L': case 'Q': return 'U'; + case 'f': case 'd': case 'g': return (is_complex ? 
'C' : 'R'); + case 'O': return 'O'; + case 'P': return 'P'; + default: { + __Pyx_BufFmt_RaiseUnexpectedChar(ch); + return 0; + } + } +} + +static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { + if (ctx->head == NULL || ctx->head->field == &ctx->root) { + const char* expected; + const char* quote; + if (ctx->head == NULL) { + expected = "end"; + quote = ""; + } else { + expected = ctx->head->field->type->name; + quote = "'"; + } + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected %s%s%s but got %s", + quote, expected, quote, + __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); + } else { + __Pyx_StructField* field = ctx->head->field; + __Pyx_StructField* parent = (ctx->head - 1)->field; + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", + field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), + parent->type->name, field->name); + } +} + +static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { + char group; + size_t size, offset; + if (ctx->enc_type == 0) return 0; + group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); + do { + __Pyx_StructField* field = ctx->head->field; + __Pyx_TypeInfo* type = field->type; + + if (ctx->packmode == '@' || ctx->packmode == '^') { + size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); + } else { + size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); + } + if (ctx->packmode == '@') { + int align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); + int align_mod_offset; + if (align_at == 0) return -1; + align_mod_offset = ctx->fmt_offset % align_at; + if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; + } + + if (type->size != size || type->typegroup != group) { + if (type->typegroup == 'C' && type->fields != NULL) { + /* special case -- treat as struct rather than complex number */ + size_t parent_offset = ctx->head->parent_offset + field->offset; + ++ctx->head; + ctx->head->field = type->fields; + ctx->head->parent_offset = parent_offset; + continue; + } + + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + + offset = ctx->head->parent_offset + field->offset; + if (ctx->fmt_offset != offset) { + PyErr_Format(PyExc_ValueError, + "Buffer dtype mismatch; next field is at offset %"PY_FORMAT_SIZE_T"d " + "but %"PY_FORMAT_SIZE_T"d expected", ctx->fmt_offset, offset); + return -1; + } + + ctx->fmt_offset += size; + + --ctx->enc_count; /* Consume from buffer string */ + + /* Done checking, move to next field, pushing or popping struct stack if needed */ + while (1) { + if (field == &ctx->root) { + ctx->head = NULL; + if (ctx->enc_count != 0) { + __Pyx_BufFmt_RaiseExpected(ctx); + return -1; + } + break; /* breaks both loops as ctx->enc_count == 0 */ + } + ctx->head->field = ++field; + if (field->type == NULL) { + --ctx->head; + field = ctx->head->field; + continue; + } else if (field->type->typegroup == 'S') { + size_t parent_offset = ctx->head->parent_offset + field->offset; + if (field->type->fields->type == NULL) continue; /* empty struct */ + field = field->type->fields; + ++ctx->head; + ctx->head->field = field; + ctx->head->parent_offset = parent_offset; + break; + } else { + break; + } + } + } while (ctx->enc_count); + ctx->enc_type = 0; + ctx->is_complex = 0; + return 0; +} + +static int __Pyx_BufFmt_FirstPack(__Pyx_BufFmt_Context* ctx) { + if (ctx->enc_type != 0 || ctx->packmode != '@') { + PyErr_SetString(PyExc_ValueError, 
"Buffer packing mode currently only allowed at beginning of format string (this is a defect)"); + return -1; + } + return 0; +} + +static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { + int got_Z = 0; + while (1) { + switch(*ts) { + case 0: + if (ctx->enc_type != 0 && ctx->head == NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + if (ctx->head != NULL) { + __Pyx_BufFmt_RaiseExpected(ctx); + return NULL; + } + return ts; + case ' ': + case 10: + case 13: + ++ts; + break; + case '<': + if (!__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); + return NULL; + } + if (__Pyx_BufFmt_FirstPack(ctx) == -1) return NULL; + ctx->packmode = '='; + ++ts; + break; + case '>': + case '!': + if (__Pyx_IsLittleEndian()) { + PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); + return NULL; + } + if (__Pyx_BufFmt_FirstPack(ctx) == -1) return NULL; + ctx->packmode = '='; + ++ts; + break; + case '=': + case '@': + case '^': + if (__Pyx_BufFmt_FirstPack(ctx) == -1) return NULL; + ctx->packmode = *ts++; + break; + case 'T': /* substruct */ + { + int i; + const char* ts_after_sub; + int struct_count = ctx->new_count; + ctx->new_count = 1; + ++ts; + if (*ts != '{') { + PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); + return NULL; + } + ++ts; + ts_after_sub = ts; + for (i = 0; i != struct_count; ++i) { + ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); + if (!ts_after_sub) return NULL; + } + ts = ts_after_sub; + } + break; + case '}': /* end of substruct; either repeat or move on */ + ++ts; + return ts; + case 'x': + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->fmt_offset += ctx->new_count; + ctx->new_count = 1; + ctx->enc_count = 0; + ctx->enc_type = 0; + ++ts; + break; + case 'Z': + got_Z = 1; + ++ts; + if (*ts != 'f' && *ts != 'd' && *ts != 'g') { + __Pyx_BufFmt_RaiseUnexpectedChar('Z'); + return NULL; + } /* fall through */ + case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': + case 'l': case 'L': case 'q': case 'Q': + case 'f': case 'd': case 'g': + case 'O': + if (ctx->enc_type == *ts && got_Z == ctx->is_complex) { + /* Continue pooling same type */ + ctx->enc_count += ctx->new_count; + } else { + /* New type */ + if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; + ctx->enc_count = ctx->new_count; + ctx->enc_type = *ts; + ctx->is_complex = got_Z; + } + ++ts; + ctx->new_count = 1; + got_Z = 0; + break; + default: + { + ctx->new_count = __Pyx_BufFmt_ParseNumber(&ts); + if (ctx->new_count == -1) { /* First char was not a digit */ + char msg[2] = { *ts, 0 }; + PyErr_Format(PyExc_ValueError, + "Does not understand character buffer dtype format string ('%s')", msg); + return NULL; + } + } + + } + } +} + +static CYTHON_INLINE void __Pyx_ZeroBuffer(Py_buffer* buf) { + buf->buf = NULL; + buf->obj = NULL; + buf->strides = __Pyx_zeros; + buf->shape = __Pyx_zeros; + buf->suboffsets = __Pyx_minusones; +} + +static int __Pyx_GetBufferAndValidate(Py_buffer* buf, PyObject* obj, __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack) { + if (obj == Py_None) { + __Pyx_ZeroBuffer(buf); + return 0; + } + buf->buf = NULL; + if (__Pyx_GetBuffer(obj, buf, flags) == -1) goto fail; + if (buf->ndim != nd) { + PyErr_Format(PyExc_ValueError, + "Buffer has wrong number of dimensions (expected %d, got 
%d)", + nd, buf->ndim); + goto fail; + } + if (!cast) { + __Pyx_BufFmt_Context ctx; + __Pyx_BufFmt_Init(&ctx, stack, dtype); + if (!__Pyx_BufFmt_CheckString(&ctx, buf->format)) goto fail; + } + if ((unsigned)buf->itemsize != dtype->size) { + PyErr_Format(PyExc_ValueError, + "Item size of buffer (%"PY_FORMAT_SIZE_T"d byte%s) does not match size of '%s' (%"PY_FORMAT_SIZE_T"d byte%s)", + buf->itemsize, (buf->itemsize > 1) ? "s" : "", + dtype->name, + dtype->size, (dtype->size > 1) ? "s" : ""); + goto fail; + } + if (buf->suboffsets == NULL) buf->suboffsets = __Pyx_minusones; + return 0; +fail:; + __Pyx_ZeroBuffer(buf); + return -1; +} + +static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info) { + if (info->buf == NULL) return; + if (info->suboffsets == __Pyx_minusones) info->suboffsets = NULL; + __Pyx_ReleaseBuffer(info); +} + +static void __Pyx_RaiseBufferFallbackError(void) { + PyErr_Format(PyExc_ValueError, + "Buffer acquisition failed on assignment; and then reacquiring the old buffer failed too!"); +} + +static void __Pyx_RaiseBufferIndexError(int axis) { + PyErr_Format(PyExc_IndexError, + "Out of bounds on buffer access (axis %d)", axis); +} + + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyThreadState *tstate = PyThreadState_GET(); + + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} + +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb) { + PyThreadState *tstate = PyThreadState_GET(); + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} + + +static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { + Py_ssize_t q = a / b; + Py_ssize_t r = a - q*b; + q -= ((r != 0) & ((r ^ b) < 0)); + return q; +} + +static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); +} + +static void __Pyx_UnpackTupleError(PyObject *t, Py_ssize_t index) { + if (t == Py_None) { + __Pyx_RaiseNoneNotIterableError(); + } else if (PyTuple_GET_SIZE(t) < index) { + __Pyx_RaiseNeedMoreValuesError(PyTuple_GET_SIZE(t)); + } else { + __Pyx_RaiseTooManyValuesError(); + } +} + +static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact) +{ + if (!type) { + PyErr_Format(PyExc_SystemError, "Missing type object"); + return 0; + } + if (none_allowed && obj == Py_None) return 1; + else if (exact) { + if (Py_TYPE(obj) == type) return 1; + } + else { + if (PyObject_TypeCheck(obj, type)) return 1; + } + PyErr_Format(PyExc_TypeError, + "Argument '%s' has incorrect type (expected %s, got %s)", + name, type->tp_name, Py_TYPE(obj)->tp_name); + return 0; +} + +#if PY_MAJOR_VERSION < 3 +static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { + #if PY_VERSION_HEX >= 0x02060000 + if (Py_TYPE(obj)->tp_flags & Py_TPFLAGS_HAVE_NEWBUFFER) + return PyObject_GetBuffer(obj, view, flags); + #endif + if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) return __pyx_pf_5numpy_7ndarray___getbuffer__(obj, view, flags); + else { + PyErr_Format(PyExc_TypeError, "'%100s' does not 
have the buffer interface", Py_TYPE(obj)->tp_name); + return -1; + } +} + +static void __Pyx_ReleaseBuffer(Py_buffer *view) { + PyObject* obj = view->obj; + if (obj) { +if (PyObject_TypeCheck(obj, __pyx_ptype_5numpy_ndarray)) __pyx_pf_5numpy_7ndarray___releasebuffer__(obj, view); + Py_DECREF(obj); + view->obj = NULL; + } +} + +#endif + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list) { + PyObject *__import__ = 0; + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + __import__ = __Pyx_GetAttrString(__pyx_b, "__import__"); + if (!__import__) + goto bad; + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if (!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + module = PyObject_CallFunctionObjArgs(__import__, + name, global_dict, empty_dict, list, NULL); +bad: + Py_XDECREF(empty_list); + Py_XDECREF(__import__); + Py_XDECREF(empty_dict); + return module; +} + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name) { + PyObject *result; + result = PyObject_GetAttr(dict, name); + if (!result) + PyErr_SetObject(PyExc_NameError, name); + return result; +} + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_uint32(npy_uint32 val) { + const npy_uint32 neg_one = (npy_uint32)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(npy_uint32) < sizeof(long)) { + return PyInt_FromLong((long)val); + } else if (sizeof(npy_uint32) == sizeof(long)) { + if (is_unsigned) + return PyLong_FromUnsignedLong((unsigned long)val); + else + return PyInt_FromLong((long)val); + } else { /* (sizeof(npy_uint32) > sizeof(long)) */ + if (is_unsigned) + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG)val); + else + return PyLong_FromLongLong((PY_LONG_LONG)val); + } +} + +#if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + Py_XINCREF(type); + Py_XINCREF(value); + Py_XINCREF(tb); + /* First, check the traceback argument, replacing None with NULL. */ + if (tb == Py_None) { + Py_DECREF(tb); + tb = 0; + } + else if (tb != NULL && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + /* Next, replace a missing value with None */ + if (value == NULL) { + value = Py_None; + Py_INCREF(value); + } + #if PY_VERSION_HEX < 0x02050000 + if (!PyClass_Check(type)) + #else + if (!PyType_Check(type)) + #endif + { + /* Raising an instance. The value should be a dummy. 
*/ + if (value != Py_None) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + /* Normalize to raise , */ + Py_DECREF(value); + value = type; + #if PY_VERSION_HEX < 0x02050000 + if (PyInstance_Check(type)) { + type = (PyObject*) ((PyInstanceObject*)type)->in_class; + Py_INCREF(type); + } + else { + type = 0; + PyErr_SetString(PyExc_TypeError, + "raise: exception must be an old-style class or instance"); + goto raise_error; + } + #else + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + #endif + } + + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} + +#else /* Python 3+ */ + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = type; + type = (PyObject*) Py_TYPE(value); + } else if (!PyExceptionClass_Check(type)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } + + PyErr_SetObject(type, value); + + if (tb) { + PyThreadState *tstate = PyThreadState_GET(); + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } + } + +bad: + return; +} +#endif + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_int32(npy_int32 val) { + const npy_int32 neg_one = (npy_int32)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(npy_int32) < sizeof(long)) { + return PyInt_FromLong((long)val); + } else if (sizeof(npy_int32) == sizeof(long)) { + if (is_unsigned) + return PyLong_FromUnsignedLong((unsigned long)val); + else + return PyInt_FromLong((long)val); + } else { /* (sizeof(npy_int32) > sizeof(long)) */ + if (is_unsigned) + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG)val); + else + return PyLong_FromLongLong((PY_LONG_LONG)val); + } +} + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return ::std::complex< double >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return x + y*(__pyx_t_double_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + __pyx_t_double_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +#if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex a, __pyx_t_double_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex a, 
__pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double denom = b.real * b.real + b.imag * b.imag; + z.real = (a.real * b.real + a.imag * b.imag) / denom; + z.imag = (a.imag * b.real - a.real * b.imag) / denom; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } +/* + static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex z) { +#if HAVE_HYPOT + return hypot(z.real, z.imag); +#else + return sqrt(z.real*z.real + z.imag*z.imag); +#endif + } +*/ +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return ::std::complex< float >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return x + y*(__pyx_t_float_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + __pyx_t_float_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +#if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eqf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sumf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_difff(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prodf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quotf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + float denom = b.real * b.real + b.imag * b.imag; + z.real = (a.real * b.real + a.imag * b.imag) / denom; + z.imag = (a.imag * b.real - a.real * b.imag) / denom; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_negf(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zerof(__pyx_t_float_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conjf(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } +/* + static CYTHON_INLINE float 
__Pyx_c_absf(__pyx_t_float_complex z) { +#if HAVE_HYPOT + return hypotf(z.real, z.imag); +#else + return sqrtf(z.real*z.real + z.imag*z.imag); +#endif + } +*/ +#endif + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject* x) { + const unsigned char neg_one = (unsigned char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned char" : + "value too large to convert to unsigned char"); + } + return (unsigned char)-1; + } + return (unsigned char)val; + } + return (unsigned char)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject* x) { + const unsigned short neg_one = (unsigned short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned short" : + "value too large to convert to unsigned short"); + } + return (unsigned short)-1; + } + return (unsigned short)val; + } + return (unsigned short)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject* x) { + const unsigned int neg_one = (unsigned int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned int" : + "value too large to convert to unsigned int"); + } + return (unsigned int)-1; + } + return (unsigned int)val; + } + return (unsigned int)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject* x) { + const char neg_one = (char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to char" : + "value too large to convert to char"); + } + return (char)-1; + } + return (char)val; + } + return (char)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject* x) { + const short neg_one = (short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to short" : + "value too large to convert to short"); + } + return (short)-1; + } + return (short)val; + } + return (short)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject* x) { + const int neg_one = (int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to int" : + "value too large to convert to int"); + } + return (int)-1; + } + return (int)val; + } + return (int)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject* x) { + const signed char neg_one = (signed char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed char" : + "value too large to convert to signed char"); + } + return (signed char)-1; + } + return (signed char)val; + } + return (signed char)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject* x) { + const signed short neg_one = (signed short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed short" : + "value too large to convert to signed short"); + } + return (signed short)-1; + } + return (signed short)val; + } + return (signed short)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject* x) { + const signed int neg_one = (signed int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to signed int" : + "value too large to convert to signed int"); + } + return (signed int)-1; + } + return (signed int)val; + } + return (signed int)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject* x) { + const unsigned long neg_one = (unsigned long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return (unsigned long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + unsigned long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned long)-1; + val = __Pyx_PyInt_AsUnsignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject* x) { + const unsigned PY_LONG_LONG neg_one = (unsigned PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return (unsigned PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + unsigned PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsUnsignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject* x) { + const long neg_one = (long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return (long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (long)-1; + val = __Pyx_PyInt_AsLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject* x) { + const PY_LONG_LONG neg_one = (PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to 
PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return (PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject* x) { + const signed long neg_one = (signed long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return (signed long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + signed long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed long)-1; + val = __Pyx_PyInt_AsSignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject* x) { + const signed PY_LONG_LONG neg_one = (signed PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return (signed PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + signed PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsSignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE npy_uint32 __Pyx_PyInt_from_py_npy_uint32(PyObject* x) { + const npy_uint32 neg_one = (npy_uint32)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(npy_uint32) == sizeof(char)) { + if (is_unsigned) + return (npy_uint32)__Pyx_PyInt_AsUnsignedChar(x); + else + return (npy_uint32)__Pyx_PyInt_AsSignedChar(x); + } else if (sizeof(npy_uint32) == sizeof(short)) { + if (is_unsigned) + return (npy_uint32)__Pyx_PyInt_AsUnsignedShort(x); + else + return (npy_uint32)__Pyx_PyInt_AsSignedShort(x); + } else if (sizeof(npy_uint32) == sizeof(int)) { + if (is_unsigned) + return (npy_uint32)__Pyx_PyInt_AsUnsignedInt(x); + else + return (npy_uint32)__Pyx_PyInt_AsSignedInt(x); + } else if (sizeof(npy_uint32) == sizeof(long)) { + if (is_unsigned) + return (npy_uint32)__Pyx_PyInt_AsUnsignedLong(x); + else + return (npy_uint32)__Pyx_PyInt_AsSignedLong(x); + } else if (sizeof(npy_uint32) == sizeof(PY_LONG_LONG)) { + if 
(is_unsigned) + return (npy_uint32)__Pyx_PyInt_AsUnsignedLongLong(x); + else + return (npy_uint32)__Pyx_PyInt_AsSignedLongLong(x); +#if 0 + } else if (sizeof(npy_uint32) > sizeof(short) && + sizeof(npy_uint32) < sizeof(int)) { /* __int32 ILP64 ? */ + if (is_unsigned) + return (npy_uint32)__Pyx_PyInt_AsUnsignedInt(x); + else + return (npy_uint32)__Pyx_PyInt_AsSignedInt(x); +#endif + } + PyErr_SetString(PyExc_TypeError, "npy_uint32"); + return (npy_uint32)-1; +} + +static void __Pyx_WriteUnraisable(const char *name) { + PyObject *old_exc, *old_val, *old_tb; + PyObject *ctx; + __Pyx_ErrFetch(&old_exc, &old_val, &old_tb); + #if PY_MAJOR_VERSION < 3 + ctx = PyString_FromString(name); + #else + ctx = PyUnicode_FromString(name); + #endif + __Pyx_ErrRestore(old_exc, old_val, old_tb); + if (!ctx) { + PyErr_WriteUnraisable(Py_None); + } else { + PyErr_WriteUnraisable(ctx); + Py_DECREF(ctx); + } +} + +static int __Pyx_SetVtable(PyObject *dict, void *vtable) { +#if PY_VERSION_HEX < 0x03010000 + PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); +#else + PyObject *ob = PyCapsule_New(vtable, 0, 0); +#endif + if (!ob) + goto bad; + if (PyDict_SetItemString(dict, "__pyx_vtable__", ob) < 0) + goto bad; + Py_DECREF(ob); + return 0; +bad: + Py_XDECREF(ob); + return -1; +} + +#ifndef __PYX_HAVE_RT_ImportType +#define __PYX_HAVE_RT_ImportType +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, + long size, int strict) +{ + PyObject *py_module = 0; + PyObject *result = 0; + PyObject *py_name = 0; + char warning[200]; + + py_module = __Pyx_ImportModule(module_name); + if (!py_module) + goto bad; + #if PY_MAJOR_VERSION < 3 + py_name = PyString_FromString(class_name); + #else + py_name = PyUnicode_FromString(class_name); + #endif + if (!py_name) + goto bad; + result = PyObject_GetAttr(py_module, py_name); + Py_DECREF(py_name); + py_name = 0; + Py_DECREF(py_module); + py_module = 0; + if (!result) + goto bad; + if (!PyType_Check(result)) { + PyErr_Format(PyExc_TypeError, + "%s.%s is not a type object", + module_name, class_name); + goto bad; + } + if (!strict && ((PyTypeObject *)result)->tp_basicsize > size) { + PyOS_snprintf(warning, sizeof(warning), + "%s.%s size changed, may indicate binary incompatibility", + module_name, class_name); + PyErr_WarnEx(NULL, warning, 0); + } + else if (((PyTypeObject *)result)->tp_basicsize != size) { + PyErr_Format(PyExc_ValueError, + "%s.%s has the wrong size, try recompiling", + module_name, class_name); + goto bad; + } + return (PyTypeObject *)result; +bad: + Py_XDECREF(py_module); + Py_XDECREF(result); + return 0; +} +#endif + +#ifndef __PYX_HAVE_RT_ImportModule +#define __PYX_HAVE_RT_ImportModule +static PyObject *__Pyx_ImportModule(const char *name) { + PyObject *py_name = 0; + PyObject *py_module = 0; + + #if PY_MAJOR_VERSION < 3 + py_name = PyString_FromString(name); + #else + py_name = PyUnicode_FromString(name); + #endif + if (!py_name) + goto bad; + py_module = PyImport_Import(py_name); + Py_DECREF(py_name); + return py_module; +bad: + Py_XDECREF(py_name); + return 0; +} +#endif + +static int __Pyx_GetVtable(PyObject *dict, void *vtabptr) { + PyObject *ob = PyMapping_GetItemString(dict, (char *)"__pyx_vtable__"); + if (!ob) + goto bad; +#if PY_VERSION_HEX < 0x03010000 + *(void **)vtabptr = PyCObject_AsVoidPtr(ob); +#else + *(void **)vtabptr = PyCapsule_GetPointer(ob, 0); +#endif + if (!*(void **)vtabptr) + goto bad; + Py_DECREF(ob); + return 0; +bad: + Py_XDECREF(ob); + return -1; +} + +#ifndef __PYX_HAVE_RT_ImportFunction +#define 
__PYX_HAVE_RT_ImportFunction +static int __Pyx_ImportFunction(PyObject *module, const char *funcname, void (**f)(void), const char *sig) { + PyObject *d = 0; + PyObject *cobj = 0; + union { + void (*fp)(void); + void *p; + } tmp; +#if PY_VERSION_HEX < 0x03010000 + const char *desc, *s1, *s2; +#endif + + d = PyObject_GetAttrString(module, (char *)"__pyx_capi__"); + if (!d) + goto bad; + cobj = PyDict_GetItemString(d, funcname); + if (!cobj) { + PyErr_Format(PyExc_ImportError, + "%s does not export expected C function %s", + PyModule_GetName(module), funcname); + goto bad; + } +#if PY_VERSION_HEX < 0x03010000 + desc = (const char *)PyCObject_GetDesc(cobj); + if (!desc) + goto bad; + s1 = desc; s2 = sig; + while (*s1 != '\0' && *s1 == *s2) { s1++; s2++; } + if (*s1 != *s2) { + PyErr_Format(PyExc_TypeError, + "C function %s.%s has wrong signature (expected %s, got %s)", + PyModule_GetName(module), funcname, sig, desc); + goto bad; + } + tmp.p = PyCObject_AsVoidPtr(cobj); +#else + if (!PyCapsule_IsValid(cobj, sig)) { + PyErr_Format(PyExc_TypeError, + "C function %s.%s has wrong signature (expected %s, got %s)", + PyModule_GetName(module), funcname, sig, PyCapsule_GetName(cobj)); + goto bad; + } + tmp.p = PyCapsule_GetPointer(cobj, sig); +#endif + *f = tmp.fp; + if (!(*f)) + goto bad; + Py_DECREF(d); + return 0; +bad: + Py_XDECREF(d); + return -1; +} +#endif + +#include "compile.h" +#include "frameobject.h" +#include "traceback.h" + +static void __Pyx_AddTraceback(const char *funcname) { + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + PyObject *py_globals = 0; + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(__pyx_filename); + #else + py_srcfile = PyUnicode_FromString(__pyx_filename); + #endif + if (!py_srcfile) goto bad; + if (__pyx_clineno) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_globals = PyModule_GetDict(__pyx_m); + if (!py_globals) goto bad; + py_code = PyCode_New( + 0, /*int argcount,*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*int kwonlyargcount,*/ + #endif + 0, /*int nlocals,*/ + 0, /*int stacksize,*/ + 0, /*int flags,*/ + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + __pyx_lineno, /*int firstlineno,*/ + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + if (!py_code) goto bad; + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + py_globals, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + py_frame->f_lineno = __pyx_lineno; + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + 
*t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else /* Python 3+ has unicode identifiers */ + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +/* Type Conversion Functions */ + +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + if (x == Py_True) return 1; + else if ((x == Py_False) | (x == Py_None)) return 0; + else return PyObject_IsTrue(x); +} + +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x) { + PyNumberMethods *m; + const char *name = NULL; + PyObject *res = NULL; +#if PY_VERSION_HEX < 0x03000000 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if (PyLong_Check(x)) +#endif + return Py_INCREF(x), x; + m = Py_TYPE(x)->tp_as_number; +#if PY_VERSION_HEX < 0x03000000 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } +#else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } +#endif + if (res) { +#if PY_VERSION_HEX < 0x03000000 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%s__ returned non-%s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject* x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} + +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { +#if PY_VERSION_HEX < 0x02050000 + if (ival <= LONG_MAX) + return PyInt_FromLong((long)ival); + else { + unsigned char *bytes = (unsigned char *) &ival; + int one = 1; int little = (int)*(unsigned char*)&one; + return _PyLong_FromByteArray(bytes, sizeof(size_t), little, 0); + } +#else + return PyInt_FromSize_t(ival); +#endif +} + +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject* x) { + unsigned PY_LONG_LONG val = __Pyx_PyInt_AsUnsignedLongLong(x); + if (unlikely(val == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred())) { + return (size_t)-1; + } else if (unlikely(val != (unsigned PY_LONG_LONG)(size_t)val)) { + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to size_t"); + return (size_t)-1; + } + return (size_t)val; +} + + +#endif /* Py_PYTHON_H */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/miobase.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/miobase.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/miobase.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/miobase.py 2010-07-26 15:48:31.000000000 +0100 @@ -3,33 +3,25 @@ """ Base classes for matlab (TM) file stream reading """ -import warnings - import numpy as np -from scipy.ndimage import doccer +from scipy.misc import doccer import byteordercodes as boc class MatReadError(Exception): pass +class MatWriteError(Exception): pass + doc_dict = \ {'file_arg': '''file_name : string Name of the mat file (do not need .mat extension if - appendmat==True) If name not a full path name, 
search for the - file on the sys.path list and use the first one found (the - current directory is searched first). Can also pass open - file-like object''', + appendmat==True) Can also pass open file-like object''', 'append_arg': '''appendmat : {True, False} optional True to append the .mat extension to the end of the given filename, if not already present''', - 'basename_arg': - '''base_name : string, optional, unused - base name for unnamed variables. The code no longer uses - this. We deprecate for this version of scipy, and will remove - it in future versions''', 'load_args': '''byte_order : {None, string}, optional None by default, implying byte order guessed from mat @@ -47,17 +39,12 @@ squeeze_me=False, chars_as_strings=False, mat_dtype=True, struct_as_record=True)''', 'struct_arg': - '''struct_as_record : {False, True} optional + '''struct_as_record : {True, False} optional Whether to load matlab structs as numpy record arrays, or as old-style numpy arrays with dtype=object. Setting this flag to - False replicates the behaviour of scipy version 0.6 (returning - numpy object arrays). The preferred setting is True, because it - allows easier round-trip load and save of matlab files. In a - future version of scipy, we will change the default setting to - True, and following versions may remove this flag entirely. For - now, we set the default to False, for backwards compatibility, but - issue a warning. Note that non-record arrays cannot be exported - via savemat.''', + False replicates the behaviour of scipy version 0.7.x (returning + numpy object arrays). The default setting is True, because it + allows easier round-trip load and save of matlab files.''', 'matstream_arg': '''mat_stream : file-like object with file API, open for reading''', @@ -80,13 +67,98 @@ docfiller = doccer.filldoc(doc_dict) +''' + + Note on architecture +====================== + +There are three sets of parameters relevant for reading files. The +first are *file read parameters* - containing options that are common +for reading the whole file, and therefore every variable within that +file. At the moment these are: + +* mat_stream +* dtypes (derived from byte code) +* byte_order +* chars_as_strings +* squeeze_me +* struct_as_record (matlab 5 files) +* class_dtypes (derived from order code, matlab 5 files) +* codecs (matlab 5 files) +* uint16_codec (matlab 5 files) + +Another set of parameters are those that apply only the the current +variable being read - the header**: + +* header related variables (different for v4 and v5 mat files) +* is_complex +* mclass +* var_stream + +With the header, we need ``next_position`` to tell us where the next +variable in the stream is. + +Then, there can be, for each element in a matrix, *element read +parameters*. An element is, for example, one element in a Matlab cell +array. At the moment these are: + +* mat_dtype + +The file-reading object contains the *file read parameters*. The +*header* is passed around as a data object, or may be read and discarded +in a single function. The *element read parameters* - the mat_dtype in +this instance, is passed into a general post-processing function - see +``mio_utils`` for details. 
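As a rough illustration of the "file read parameter" idea above (a mat_stream plus byte-order-corrected dtypes), the following self-contained NumPy sketch mirrors the pattern used by the read_dtype helper defined just below; the stream contents and the dtype are made up for the example and are not part of the patch:

    import io
    import numpy as np

    a_dtype = np.dtype('<i4')   # a dtype already in the file's byte order
    mat_stream = io.BytesIO(np.array(42, dtype=a_dtype).tobytes())

    # read exactly itemsize bytes and view them as a 0-d array, Fortran order
    arr = np.ndarray(shape=(), dtype=a_dtype,
                     buffer=mat_stream.read(a_dtype.itemsize), order='F')
    print(arr)   # 42
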
+''' + + +def convert_dtypes(dtype_template, order_code): + ''' Convert dtypes in mapping to given order + + Parameters + ---------- + dtype_template : mapping + mapping with values returning numpy dtype from ``np.dtype(val)`` + order_code : str + an order code suitable for using in ``dtype.newbyteorder()`` + + Returns + ------- + dtypes : mapping + mapping where values have been replaced by + ``np.dtype(val).newbyteorder(order_code)`` + + ''' + dtypes = dtype_template.copy() + for k in dtypes: + dtypes[k] = np.dtype(dtypes[k]).newbyteorder(order_code) + return dtypes + + +def read_dtype(mat_stream, a_dtype): + """ + Generic get of byte stream data of known type + + Parameters + ---------- + mat_stream : file-like object + Matlam (TM) stream + a_dtype : dtype + dtype of array to read. `a_dtype` is assumed to be correct + endianness + + Returns + ------- + arr : array + Array of given datatype obtained from stream. -def small_product(arr): - ''' Faster than product for small arrays ''' - res = 1 - for e in arr: - res *= e - return res + """ + num_bytes = a_dtype.itemsize + arr = np.ndarray(shape=(), + dtype=a_dtype, + buffer=mat_stream.read(num_bytes), + order='F') + return arr def get_matfile_version(fileobj): @@ -140,20 +212,32 @@ % ret) -class MatReadError(Exception): pass - - def matdims(arr, oned_as='column'): - ''' Determine equivalent matlab dimensions for given array - + """ + Determine equivalent matlab dimensions for given array + Parameters ---------- arr : ndarray - oned_as : {'column', 'row'} string, optional + Input array. + oned_as : {'column', 'row'}, optional + Whether 1-D arrays are returned as Matlab row or column matrices. + Default is 'column'. Returns ------- - dims : shape as matlab expects + dims : tuple + Shape tuple, in the form Matlab expects it. + + Notes + ----- + We had to decide what shape a 1 dimensional array would be by + default. ``np.atleast_2d`` thinks it is a row vector. The + default for a vector in matlab (e.g. ``>> 1:12``) is a row vector. + + Versions of scipy up to and including 0.7 resulted (accidentally) + in 1-D arrays being read as column vectors. For the moment, we + maintain the same tradition here. Examples -------- @@ -176,7 +260,7 @@ >>> matdims(np.array([[[]]])) # empty 3d (0, 0, 0) - Optional argument flips 1d shape behavior + Optional argument flips 1-D shape behavior. >>> matdims(np.array([1,2]), 'row') # 1d array, 2 elements (1, 2) @@ -188,16 +272,7 @@ ... ValueError: 1D option "bizarre" is strange - Notes - ----- - We had to decide what shape a 1 dimensional array would be by - default. ``np.atleast_2d`` thinks it is a row vector. The - default for a vector in matlab (e.g. ``>> 1:12``) is a row vector. - - Versions of scipy up to and including 0.7 resulted (accidentally) - in 1d arrays being read as column vectors. For the moment, we - maintain the same tradition here. - ''' + """ if arr.size == 0: # empty return (0,) * np.max([arr.ndim, 2]) shape = arr.shape @@ -214,64 +289,26 @@ return shape -class ByteOrder(object): - ''' Namespace for byte ordering ''' - little_endian = boc.sys_is_le - native_code = boc.native_code - swapped_code = boc.swapped_code - to_numpy_code = boc.to_numpy_code - -ByteOrder = np.deprecate_with_doc(""" -We no longer use the ByteOrder class, and deprecate it; we will remove -it in future versions of scipy. Please use the -scipy.io.matlab.byteordercodes module instead. 
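The deprecation text above points callers at the scipy.io.matlab.byteordercodes module; a minimal sketch of the replacement usage, using only names that appear elsewhere in this patch (sys_is_le, native_code, swapped_code, to_numpy_code), might read:

    import scipy.io.matlab.byteordercodes as boc

    print(boc.sys_is_le)       # True on a little-endian system
    print(boc.native_code)     # '<' or '>'
    print(boc.swapped_code)

    # MatFileReader.__init__ now normalises an explicit byte_order argument via
    # boc.to_numpy_code(), e.g.:
    print(boc.to_numpy_code(boc.native_code))
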
-""")(ByteOrder) - - -class MatStreamAgent(object): - ''' Base object for readers / getters from mat file streams - - Attaches to initialized stream - - Base class for "getters" - which do store state of what they are - reading on initialization, and therefore need to be initialized - before each read, and "readers" which do not store state, and only - need to be initialized once on object creation - - Implements common array reading functions - - Inputs mat_steam - MatFileReader object - ''' - - def __init__(self, mat_stream): - self.mat_stream = mat_stream - - def read_dtype(self, a_dtype): - ''' Generic get of byte stream data of known type - - Inputs - a_dtype - dtype of array +class MatVarReader(object): + ''' Abstract class defining required interface for var readers''' + def __init__(self, file_reader): + pass - a_dtype is assumed to be correct endianness - ''' - num_bytes = a_dtype.itemsize - arr = np.ndarray(shape=(), - dtype=a_dtype, - buffer=self.mat_stream.read(num_bytes), - order='F') - return arr + def read_header(self): + ''' Returns header ''' + pass - def read_ztstring(self, num_bytes): - return self.mat_stream.read(num_bytes).strip('\x00') + def array_from_header(self, header): + ''' Reads array given header ''' + pass -class MatFileReader(MatStreamAgent): +class MatFileReader(object): """ Base object for reading mat files To make this class functional, you will need to override the following methods: - set_dtypes - sets data types defs from byte order matrix_getter_factory - gives object to fetch next matrix from stream guess_byte_order - guesses file byte order from file """ @@ -283,7 +320,7 @@ squeeze_me=False, chars_as_strings=True, matlab_compatible=False, - struct_as_record=None + struct_as_record=True ): ''' Initializer for mat file reader @@ -297,174 +334,26 @@ self.dtypes = {} if not byte_order: byte_order = self.guess_byte_order() - self.order_code = byte_order # sets dtypes and other things too + else: + byte_order = boc.to_numpy_code(byte_order) + self.byte_order = byte_order + self.struct_as_record = struct_as_record if matlab_compatible: self.set_matlab_compatible() else: - self._squeeze_me = squeeze_me - self._chars_as_strings = chars_as_strings - self._mat_dtype = mat_dtype - self.processor_func = self.get_processor_func() + self.squeeze_me = squeeze_me + self.chars_as_strings = chars_as_strings + self.mat_dtype = mat_dtype def set_matlab_compatible(self): ''' Sets options to return arrays as matlab (tm) loads them ''' - self._mat_dtype = True - self._squeeze_me = False - self._chars_as_strings = False - self.processor_func = self.get_processor_func() - - def get_mat_dtype(self): - return self._mat_dtype - def set_mat_dtype(self, mat_dtype): - self._mat_dtype = mat_dtype - self.processor_func = self.get_processor_func() - mat_dtype = property(get_mat_dtype, - set_mat_dtype, - None, - 'get/set mat_dtype property') - - def get_squeeze_me(self): - return self._squeeze_me - def set_squeeze_me(self, squeeze_me): - self._squeeze_me = squeeze_me - self.processor_func = self.get_processor_func() - squeeze_me = property(get_squeeze_me, - set_squeeze_me, - None, - 'get/set squeeze me property') - - def get_chars_as_strings(self): - return self._chars_as_strings - def set_chars_as_strings(self, chars_as_strings): - self._chars_as_strings = chars_as_strings - self.processor_func = self.get_processor_func() - chars_as_strings = property(get_chars_as_strings, - set_chars_as_strings, - None, - 'get/set chars_as_strings property') - - def get_order_code(self): - return 
self._order_code - def set_order_code(self, order_code): - order_code = boc.to_numpy_code(order_code) - self._order_code = order_code - self.set_dtypes() - order_code = property(get_order_code, - set_order_code, - None, - 'get/set order code') - - def set_dtypes(self): - ''' Set dtype endianness. In this case we have no dtypes ''' - pass - - def convert_dtypes(self, dtype_template): - dtypes = dtype_template.copy() - for k in dtypes: - dtypes[k] = np.dtype(dtypes[k]).newbyteorder(self.order_code) - return dtypes - - def matrix_getter_factory(self): - assert False, 'Not implemented' - - def file_header(self): - return {} + self.mat_dtype = True + self.squeeze_me = False + self.chars_as_strings = False def guess_byte_order(self): ''' As we do not know what file type we have, assume native ''' - return ByteOrder.native_code - - def get_processor_func(self): - ''' Processing to apply to read matrices - - Function applies options to matrices. We have to pass this - function into the reader routines because Mat5 matrices - occur as submatrices - in cell arrays, structs and objects - - so we will not see these in the main variable getting routine - here. - - The read array is the first argument. - The getter, passed as second argument to the function, must - define properties, iff mat_dtype option is True: - - mat_dtype - data type when loaded into matlab (tm) - (None for no conversion) - - func returns the processed array - ''' - - def func(arr, getter): - if arr.dtype.kind == 'U' and self.chars_as_strings: - # Convert char array to string or array of strings - dims = arr.shape - if len(dims) >= 2: # return array of strings - n_dims = dims[:-1] - last_dim = dims[-1] - str_arr = arr.reshape( - (small_product(n_dims), - last_dim)) - dtstr = 'U%d' % (last_dim and last_dim or 1) - arr = np.empty(n_dims, dtype=dtstr) - for i in range(0, n_dims[-1]): - arr[...,i] = self.chars_to_str(str_arr[i]) - else: # return string - arr = self.chars_to_str(arr) - if self.mat_dtype: - # Apply options to replicate matlab's (TM) - # load into workspace - if getter.mat_dtype is not None: - arr = arr.astype(getter.mat_dtype) - if self.squeeze_me: - arr = np.squeeze(arr) - if not arr.size: - arr = np.array([]) - elif not arr.shape and arr.dtype.isbuiltin: # 0d coverted to scalar - arr = arr.item() - return arr - return func - - def chars_to_str(self, str_arr): - ''' Convert string array to string ''' - dt = np.dtype('U' + str(small_product(str_arr.shape))) - return np.ndarray(shape=(), - dtype = dt, - buffer = str_arr.copy()).item() - - def get_variables(self, variable_names=None): - ''' get variables from stream as dictionary - - variable_names - optional list of variable names to get - - If variable_names is None, then get all variables in file - ''' - if isinstance(variable_names, basestring): - variable_names = [variable_names] - self.mat_stream.seek(0) - mdict = self.file_header() - mdict['__globals__'] = [] - while not self.end_of_stream(): - getter = self.matrix_getter_factory() - name = getter.name - if variable_names and name not in variable_names: - getter.to_next() - continue - try: - res = getter.get_array() - except MatReadError, err: - warnings.warn( - 'Unreadable variable "%s", because "%s"' % \ - (name, err), - Warning, stacklevel=2) - res = "Read error: %s" % err - getter.to_next() - mdict[name] = res - if getter.is_global: - mdict['__globals__'].append(name) - if variable_names: - variable_names.remove(name) - if not variable_names: - break - return mdict + return boc.native_code def 
end_of_stream(self): b = self.mat_stream.read(1) @@ -473,94 +362,23 @@ return len(b) == 0 -class MatMatrixGetter(MatStreamAgent): - """ Base class for matrix getters - - Getters are stateful versions of agents, and record state of - current read on initialization, so need to be created for each - read - one-shot objects. - - MatrixGetters are initialized with the content of the matrix - header - - Accepts - array_reader - array reading object (see below) - header - header dictionary for matrix being read - """ - - def __init__(self, array_reader, header): - super(MatMatrixGetter, self).__init__(array_reader.mat_stream) - self.array_reader = array_reader - self.dtypes = array_reader.dtypes - self.header = header - self.name = header['name'] - - def get_array(self): - ''' Gets an array from matrix, and applies any necessary processing ''' - arr = self.get_raw_array() - return self.array_reader.processor_func(arr, self) - - def get_raw_array(self): - assert False, 'Not implemented' - - def to_next(self): - self.mat_stream.seek(self.next_position) - - -class MatArrayReader(MatStreamAgent): - ''' Base class for array readers - - The array_reader contains information about the current reading - process, such as byte ordered dtypes and the processing function - to apply to matrices as they are read, as well as routines for - reading matrix compenents. - ''' - - def __init__(self, mat_stream, dtypes, processor_func): - self.mat_stream = mat_stream - self.dtypes = dtypes - self.processor_func = processor_func - - def matrix_getter_factory(self): - assert False, 'Not implemented' - - -class MatStreamWriter(object): - ''' Base object for writing to mat files ''' - def __init__(self, file_stream, arr, name, oned_as): - self.file_stream = file_stream - self.arr = arr - dt = self.arr.dtype - if not dt.isnative: - self.arr = self.arr.astype(dt.newbyteorder('=')) - self.name = name - self.oned_as = oned_as - - def rewind(self): - self.file_stream.seek(0) - - def arr_dtype_number(self, num): - ''' Return dtype for given number of items per element''' - return np.dtype(self.arr.dtype.str[:2] + str(num)) - - def arr_to_chars(self): - ''' Convert string array to char array ''' - dims = list(self.arr.shape) - if not dims: - dims = [1] - dims.append(int(self.arr.dtype.str[2:])) - self.arr = np.ndarray(shape=dims, - dtype=self.arr_dtype_number(1), - buffer=self.arr) - - def write_bytes(self, arr): - self.file_stream.write(arr.tostring(order='F')) - - def write_string(self, s): - self.file_stream.write(s) - - -class MatFileWriter(object): - ''' Base class for Mat file writers ''' - def __init__(self, file_stream): - self.file_stream = file_stream +def arr_dtype_number(arr, num): + ''' Return dtype for given number of items per element''' + return np.dtype(arr.dtype.str[:2] + str(num)) + + +def arr_to_chars(arr): + ''' Convert string array to char array ''' + dims = list(arr.shape) + if not dims: + dims = [1] + dims.append(int(arr.dtype.str[2:])) + arr = np.ndarray(shape=dims, + dtype=arr_dtype_number(arr, 1), + buffer=arr) + empties = [arr == ''] + if not np.any(empties): + return arr + arr = arr.copy() + arr[empties] = ' ' + return arr diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio.py 2010-07-26 15:48:31.000000000 +0100 @@ -18,13 +18,20 @@ def find_mat_file(file_name, appendmat=True): ''' Try to find .mat file on system 
path + Parameters + ---------- file_name : string file name for mat file %(append_arg)s + + Returns + ------- + full_name : string + possibly modified name after path search ''' warnings.warn('Searching for mat files on python system path will be ' + - 'removed in future versions of scipy', - FutureWarning, stacklevel=2) + 'removed in next version of scipy', + DeprecationWarning, stacklevel=2) if appendmat and file_name.endswith(".mat"): file_name = file_name[:-4] if os.sep in file_name: @@ -47,36 +54,51 @@ pass return full_name + +def _open_file(file_like, appendmat): + ''' Open `file_like` and return as file-like object ''' + if isinstance(file_like, basestring): + try: + return open(file_like, 'rb') + except IOError: + pass + if appendmat and not file_like.endswith('.mat'): + try: + return open(file_like + '.mat', 'rb') + except IOError: + pass + # search the python path - we'll remove this soon + full_name = find_mat_file(file_like, appendmat) + if full_name is None: + raise IOError("%s not found on the path." + % file_like) + return open(full_name, 'rb') + # not a string - maybe file-like object + try: + file_like.read(0) + except AttributeError: + raise IOError('Reader needs file name or open file-like object') + return file_like + + @docfiller def mat_reader_factory(file_name, appendmat=True, **kwargs): """Create reader for matlab .mat format files + Parameters + ---------- %(file_arg)s %(append_arg)s - %(basename_arg)s %(load_args)s %(struct_arg)s + + Returns + ------- + matreader : MatFileReader object + Initialized instance of MatFileReader class matching the mat file + type detected in `filename`. """ - if isinstance(file_name, basestring): - try: - byte_stream = open(file_name, 'rb') - except IOError: - full_name = find_mat_file(file_name, appendmat) - if full_name is None: - raise IOError, "%s not found on the path." % file_name - byte_stream = open(full_name, 'rb') - else: - try: - file_name.read(0) - except AttributeError: - raise IOError, 'Reader needs file name or open file-like object' - byte_stream = file_name - # Deal with deprecations - if kwargs.has_key('basename'): - warnings.warn( - 'basename argument will be removed in future scipy versions', - DeprecationWarning, stacklevel=2) - del kwargs['basename'] + byte_stream = _open_file(file_name, appendmat) mjv, mnv = get_matfile_version(byte_stream) if mjv == 0: return MatFile4Reader(byte_stream, **kwargs) @@ -91,14 +113,21 @@ def loadmat(file_name, mdict=None, appendmat=True, **kwargs): ''' Load Matlab(tm) file + Parameters + ---------- %(file_arg)s m_dict : dict, optional dictionary in which to insert matfile variables %(append_arg)s - %(basename_arg)s %(load_args)s %(struct_arg)s + Returns + ------- + mat_dict : dict + dictionary with variable names as keys, and loaded matrices as + values + Notes ----- v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported. @@ -127,6 +156,8 @@ This saves the arrayobjects in the given dictionary to a matlab style .mat file. 
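A hedged usage sketch of the loadmat/savemat round trip documented here (the file name and variable name are illustrative only; oned_as='row' matches the matdims() example earlier in this patch):

    import numpy as np
    from scipy.io import savemat, loadmat

    savemat('example.mat', {'x': np.array([1, 2])}, oned_as='row')  # stored as 1x2
    data = loadmat('example.mat')     # struct_as_record now defaults to True
    print(data['x'].shape)            # (1, 2)
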
+ Parameters + ---------- file_name : {string, file-like object} Name of the mat file (do not need .mat extension if appendmat==True) Can also pass open file-like object diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio_utils.c python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio_utils.c --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/mio_utils.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/mio_utils.c 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,4594 @@ +/* Generated by Cython 0.12.1 on Wed Jun 16 17:42:35 2010 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#include "structmember.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. +#else + +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#if PY_VERSION_HEX < 0x02040000 + #define METH_COEXIST 0 + #define PyDict_CheckExact(op) (Py_TYPE(op) == &PyDict_Type) + #define PyDict_Contains(d,o) PySequence_Contains(d,o) +#endif + +#if PY_VERSION_HEX < 0x02050000 + typedef int Py_ssize_t; + #define PY_SSIZE_T_MAX INT_MAX + #define PY_SSIZE_T_MIN INT_MIN + #define PY_FORMAT_SIZE_T "" + #define PyInt_FromSsize_t(z) PyInt_FromLong(z) + #define PyInt_AsSsize_t(o) PyInt_AsLong(o) + #define PyNumber_Index(o) PyNumber_Int(o) + #define PyIndex_Check(o) PyNumber_Check(o) + #define PyErr_WarnEx(category, message, stacklevel) PyErr_Warn(category, message) +#endif + +#if PY_VERSION_HEX < 0x02060000 + #define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt) + #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) + #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) + #define PyVarObject_HEAD_INIT(type, size) \ + PyObject_HEAD_INIT(type) size, + #define PyType_Modified(t) + + typedef struct { + void *buf; + PyObject *obj; + Py_ssize_t len; + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; + void *internal; + } Py_buffer; + + #define PyBUF_SIMPLE 0 + #define PyBUF_WRITABLE 0x0001 + #define PyBUF_FORMAT 0x0004 + #define PyBUF_ND 0x0008 + #define PyBUF_STRIDES (0x0010 | PyBUF_ND) + #define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) + #define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) + #define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) + #define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#endif + +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" +#endif + +#if PY_MAJOR_VERSION >= 3 + #define Py_TPFLAGS_CHECKTYPES 0 + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif + +#if (PY_VERSION_HEX < 0x02060000) || (PY_MAJOR_VERSION >= 3) + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyString_Type PyUnicode_Type + #define PyString_CheckExact PyUnicode_CheckExact +#else + #define PyBytes_Type PyString_Type + #define PyBytes_CheckExact PyString_CheckExact +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define 
PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) + +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyMethod_New(func, self, klass) PyInstanceMethod_New(func) +#endif + +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#else + #define _USE_MATH_DEFINES +#endif + +#if PY_VERSION_HEX < 0x02050000 + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),((char *)(n))) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),((char *)(n)),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),((char *)(n))) +#else + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),(n)) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),(n),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),(n)) +#endif + +#if PY_VERSION_HEX < 0x02050000 + #define __Pyx_NAMESTR(n) ((char *)(n)) + #define __Pyx_DOCSTR(n) ((char *)(n)) +#else + #define __Pyx_NAMESTR(n) (n) + #define __Pyx_DOCSTR(n) (n) +#endif +#ifdef __cplusplus +#define __PYX_EXTERN_C extern "C" +#else +#define __PYX_EXTERN_C extern +#endif +#include +#define __PYX_HAVE_API__scipy__io__matlab__mio_utils +#include "stdlib.h" +#include "stdio.h" +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" + +#ifndef CYTHON_INLINE + #if defined(__GNUC__) + #define CYTHON_INLINE __inline__ + #elif defined(_MSC_VER) + #define CYTHON_INLINE __inline + #else + #define CYTHON_INLINE + #endif +#endif + +typedef struct {PyObject **p; char *s; const long n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/ + + +/* Type Conversion Predeclarations */ + +#if PY_MAJOR_VERSION < 3 +#define __Pyx_PyBytes_FromString PyString_FromString +#define __Pyx_PyBytes_FromStringAndSize PyString_FromStringAndSize +#define __Pyx_PyBytes_AsString PyString_AsString +#else +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +#define __Pyx_PyBytes_AsString PyBytes_AsString +#endif + +#define __Pyx_PyBytes_FromUString(s) __Pyx_PyBytes_FromString((char*)s) +#define __Pyx_PyBytes_AsUString(s) ((unsigned char*) __Pyx_PyBytes_AsString(s)) + +#define __Pyx_PyBool_FromLong(b) ((b) ? (Py_INCREF(Py_True), Py_True) : (Py_INCREF(Py_False), Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x); + +#if !defined(T_PYSSIZET) +#if PY_VERSION_HEX < 0x02050000 +#define T_PYSSIZET T_INT +#elif !defined(T_LONGLONG) +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : -1)) +#else +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : \ + ((sizeof(Py_ssize_t) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))) +#endif +#endif + + +#if !defined(T_ULONGLONG) +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? 
T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : -1)))) +#else +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(x) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))))) +#endif +#if !defined(T_LONGLONG) +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : -1)))) +#else +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : \ + ((sizeof(x) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))))) +#endif + +#define __Pyx_T_FLOATING(x) \ + ((sizeof(x) == sizeof(float)) ? T_FLOAT : \ + ((sizeof(x) == sizeof(double)) ? T_DOUBLE : -1)) + +#if !defined(T_SIZET) +#if !defined(T_ULONGLONG) +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : -1)) +#else +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(size_t) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))) +#endif +#endif + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject*); + +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) + + +#ifdef __GNUC__ +/* Test for GCC > 2.95 */ +#if __GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)) +#define likely(x) __builtin_expect(!!(x), 1) +#define unlikely(x) __builtin_expect(!!(x), 0) +#else /* __GNUC__ > 2 ... */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ > 2 ... 
*/ +#else /* __GNUC__ */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; +static const char **__pyx_f; + + +#if !defined(CYTHON_CCOMPLEX) + #if defined(__cplusplus) + #define CYTHON_CCOMPLEX 1 + #elif defined(_Complex_I) + #define CYTHON_CCOMPLEX 1 + #else + #define CYTHON_CCOMPLEX 0 + #endif +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #include + #else + #include + #endif +#endif + +#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) + #undef _Complex_I + #define _Complex_I 1.0fj +#endif + +typedef npy_int8 __pyx_t_5numpy_int8_t; + +typedef npy_int16 __pyx_t_5numpy_int16_t; + +typedef npy_int32 __pyx_t_5numpy_int32_t; + +typedef npy_int64 __pyx_t_5numpy_int64_t; + +typedef npy_uint8 __pyx_t_5numpy_uint8_t; + +typedef npy_uint16 __pyx_t_5numpy_uint16_t; + +typedef npy_uint32 __pyx_t_5numpy_uint32_t; + +typedef npy_uint64 __pyx_t_5numpy_uint64_t; + +typedef npy_float32 __pyx_t_5numpy_float32_t; + +typedef npy_float64 __pyx_t_5numpy_float64_t; + +typedef npy_long __pyx_t_5numpy_int_t; + +typedef npy_longlong __pyx_t_5numpy_long_t; + +typedef npy_intp __pyx_t_5numpy_intp_t; + +typedef npy_uintp __pyx_t_5numpy_uintp_t; + +typedef npy_ulong __pyx_t_5numpy_uint_t; + +typedef npy_ulonglong __pyx_t_5numpy_ulong_t; + +typedef npy_double __pyx_t_5numpy_float_t; + +typedef npy_double __pyx_t_5numpy_double_t; + +typedef npy_longdouble __pyx_t_5numpy_longdouble_t; + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< float > __pyx_t_float_complex; + #else + typedef float _Complex __pyx_t_float_complex; + #endif +#else + typedef struct { float real, imag; } __pyx_t_float_complex; +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< double > __pyx_t_double_complex; + #else + typedef double _Complex __pyx_t_double_complex; + #endif +#else + typedef struct { double real, imag; } __pyx_t_double_complex; +#endif + +/* Type declarations */ + +typedef npy_cfloat __pyx_t_5numpy_cfloat_t; + +typedef npy_cdouble __pyx_t_5numpy_cdouble_t; + +typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t; + +typedef npy_cdouble __pyx_t_5numpy_complex_t; + +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif + +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct * __Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); + end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; + } + #define __Pyx_RefNannySetupContext(name) void *__pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) + #define __Pyx_RefNannyFinishContext() __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), 
__LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r);} } while(0) +#else + #define __Pyx_RefNannySetupContext(name) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) +#endif /* CYTHON_REFNANNY */ +#define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);} } while(0) +#define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r);} } while(0) + + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { + PyObject *r; + if (!j) return NULL; + r = PyObject_GetItem(o, j); + Py_DECREF(j); + return r; +} + + +#define __Pyx_GetItemInt_List(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ + __Pyx_GetItemInt_List_Fast(o, i, size <= sizeof(long)) : \ + __Pyx_GetItemInt_Generic(o, to_py_func(i))) + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, int fits_long) { + if (likely(o != Py_None)) { + if (likely((0 <= i) & (i < PyList_GET_SIZE(o)))) { + PyObject *r = PyList_GET_ITEM(o, i); + Py_INCREF(r); + return r; + } + else if ((-PyList_GET_SIZE(o) <= i) & (i < 0)) { + PyObject *r = PyList_GET_ITEM(o, PyList_GET_SIZE(o) + i); + Py_INCREF(r); + return r; + } + } + return __Pyx_GetItemInt_Generic(o, fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i)); +} + +#define __Pyx_GetItemInt_Tuple(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ + __Pyx_GetItemInt_Tuple_Fast(o, i, size <= sizeof(long)) : \ + __Pyx_GetItemInt_Generic(o, to_py_func(i))) + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, int fits_long) { + if (likely(o != Py_None)) { + if (likely((0 <= i) & (i < PyTuple_GET_SIZE(o)))) { + PyObject *r = PyTuple_GET_ITEM(o, i); + Py_INCREF(r); + return r; + } + else if ((-PyTuple_GET_SIZE(o) <= i) & (i < 0)) { + PyObject *r = PyTuple_GET_ITEM(o, PyTuple_GET_SIZE(o) + i); + Py_INCREF(r); + return r; + } + } + return __Pyx_GetItemInt_Generic(o, fits_long ? PyInt_FromLong(i) : PyLong_FromLongLong(i)); +} + + +#define __Pyx_GetItemInt(o, i, size, to_py_func) ((size <= sizeof(Py_ssize_t)) ? \ + __Pyx_GetItemInt_Fast(o, i, size <= sizeof(long)) : \ + __Pyx_GetItemInt_Generic(o, to_py_func(i))) + +static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int fits_long) { + PyObject *r; + if (PyList_CheckExact(o) && ((0 <= i) & (i < PyList_GET_SIZE(o)))) { + r = PyList_GET_ITEM(o, i); + Py_INCREF(r); + } + else if (PyTuple_CheckExact(o) && ((0 <= i) & (i < PyTuple_GET_SIZE(o)))) { + r = PyTuple_GET_ITEM(o, i); + Py_INCREF(r); + } + else if (Py_TYPE(o)->tp_as_sequence && Py_TYPE(o)->tp_as_sequence->sq_item && (likely(i >= 0))) { + r = PySequence_GetItem(o, i); + } + else { + r = __Pyx_GetItemInt_Generic(o, fits_long ? 
PyInt_FromLong(i) : PyLong_FromLongLong(i)); + } + return r; +} + +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); /*proto*/ + +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); + +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void); + +static PyObject *__Pyx_UnpackItem(PyObject *, Py_ssize_t index); /*proto*/ +static int __Pyx_EndUnpack(PyObject *); /*proto*/ + +static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); + +static void __Pyx_UnpackTupleError(PyObject *, Py_ssize_t index); /*proto*/ + +static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact); /*proto*/ + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list); /*proto*/ + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name); /*proto*/ + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_intp(npy_intp); + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #define __Pyx_CREAL(z) ((z).real()) + #define __Pyx_CIMAG(z) ((z).imag()) + #else + #define __Pyx_CREAL(z) (__real__(z)) + #define __Pyx_CIMAG(z) (__imag__(z)) + #endif +#else + #define __Pyx_CREAL(z) ((z).real) + #define __Pyx_CIMAG(z) ((z).imag) +#endif + +#if defined(_WIN32) && defined(__cplusplus) && CYTHON_CCOMPLEX + #define __Pyx_SET_CREAL(z,x) ((z).real(x)) + #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) +#else + #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) + #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) +#endif + +static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float); + +#if CYTHON_CCOMPLEX + #define __Pyx_c_eqf(a, b) ((a)==(b)) + #define __Pyx_c_sumf(a, b) ((a)+(b)) + #define __Pyx_c_difff(a, b) ((a)-(b)) + #define __Pyx_c_prodf(a, b) ((a)*(b)) + #define __Pyx_c_quotf(a, b) ((a)/(b)) + #define __Pyx_c_negf(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zerof(z) ((z)==(float)0) + #define __Pyx_c_conjf(z) (::std::conj(z)) + /*#define __Pyx_c_absf(z) (::std::abs(z))*/ + #else + #define __Pyx_c_is_zerof(z) ((z)==0) + #define __Pyx_c_conjf(z) (conjf(z)) + /*#define __Pyx_c_absf(z) (cabsf(z))*/ + #endif +#else + static CYTHON_INLINE int __Pyx_c_eqf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sumf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_difff(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prodf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quotf(__pyx_t_float_complex, __pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_negf(__pyx_t_float_complex); + static CYTHON_INLINE int __Pyx_c_is_zerof(__pyx_t_float_complex); + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conjf(__pyx_t_float_complex); + /*static CYTHON_INLINE float __Pyx_c_absf(__pyx_t_float_complex);*/ +#endif + +static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); + +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq(a, b) ((a)==(b)) + #define __Pyx_c_sum(a, b) ((a)+(b)) + #define __Pyx_c_diff(a, b) ((a)-(b)) + #define __Pyx_c_prod(a, b) ((a)*(b)) + #define __Pyx_c_quot(a, b) ((a)/(b)) + #define __Pyx_c_neg(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero(z) ((z)==(double)0) + #define __Pyx_c_conj(z) (::std::conj(z)) + /*#define __Pyx_c_abs(z) (::std::abs(z))*/ + #else + #define __Pyx_c_is_zero(z) ((z)==0) + #define __Pyx_c_conj(z) 
(conj(z)) + /*#define __Pyx_c_abs(z) (cabs(z))*/ + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex); + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex); + /*static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex);*/ +#endif + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject *); + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject *); + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject *); + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject *); + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject *); + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject *); + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject *); + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject *); + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject *); + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject *); + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject *); + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject *); + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject *); + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject *); + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject *); + +static void __Pyx_WriteUnraisable(const char *name); /*proto*/ + +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, long size, int strict); /*proto*/ + +static PyObject *__Pyx_ImportModule(const char *name); /*proto*/ + +static void __Pyx_AddTraceback(const char *funcname); /*proto*/ + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/ +/* Module declarations from python_buffer */ + +/* Module declarations from python_ref */ + +/* Module declarations from stdlib */ + +/* Module declarations from stdio */ + +/* Module declarations from numpy */ + +/* Module declarations from numpy */ + +static PyTypeObject *__pyx_ptype_5numpy_dtype = 0; +static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0; +static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0; +static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0; +static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0; +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *, PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject 
*__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *, PyObject *, PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *, PyObject *, PyObject *, PyObject *, PyObject *); /*proto*/ +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/ +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *, PyObject *); /*proto*/ +static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *); /*proto*/ +/* Module declarations from scipy.io.matlab.mio_utils */ + +static size_t __pyx_f_5scipy_2io_6matlab_9mio_utils_cproduct(PyObject *, int __pyx_skip_dispatch); /*proto*/ +static PyObject *__pyx_f_5scipy_2io_6matlab_9mio_utils_squeeze_element(PyArrayObject *, int __pyx_skip_dispatch); /*proto*/ +static PyArrayObject *__pyx_f_5scipy_2io_6matlab_9mio_utils_chars_to_strings(PyObject *, int __pyx_skip_dispatch); /*proto*/ +#define __Pyx_MODULE_NAME "scipy.io.matlab.mio_utils" +int __pyx_module_is_main_scipy__io__matlab__mio_utils = 0; + +/* Implementation of scipy.io.matlab.mio_utils */ +static PyObject *__pyx_builtin_range; +static PyObject *__pyx_builtin_ValueError; +static PyObject *__pyx_builtin_RuntimeError; +static char __pyx_k_1[] = "ndarray is not C contiguous"; +static char __pyx_k_2[] = "ndarray is not Fortran contiguous"; +static char __pyx_k_3[] = "Non-native byte order not supported"; +static char __pyx_k_4[] = "unknown dtype code in numpy.pxd (%d)"; +static char __pyx_k_5[] = "Format string allocated too short, see comment in numpy.pxd"; +static char __pyx_k_6[] = "Format string allocated too short."; +static char __pyx_k_7[] = " Utilities for generic processing of return arrays from read\n"; +static char __pyx_k_8[] = "squeeze_element (line 17)"; +static char __pyx_k_9[] = "chars_to_strings (line 30)"; +static char __pyx_k__B[] = "B"; +static char __pyx_k__H[] = "H"; +static char __pyx_k__I[] = "I"; +static char __pyx_k__L[] = "L"; +static char __pyx_k__O[] = "O"; +static char __pyx_k__Q[] = "Q"; +static char __pyx_k__b[] = "b"; +static char __pyx_k__d[] = "d"; +static char __pyx_k__f[] = "f"; +static char __pyx_k__g[] = "g"; +static char __pyx_k__h[] = "h"; +static char __pyx_k__i[] = "i"; +static char __pyx_k__l[] = "l"; +static char __pyx_k__q[] = "q"; +static char __pyx_k__Zd[] = "Zd"; +static char __pyx_k__Zf[] = "Zf"; +static char __pyx_k__Zg[] = "Zg"; +static char __pyx_k__np[] = "np"; +static char __pyx_k__buf[] = "buf"; +static char __pyx_k__obj[] = "obj"; +static char __pyx_k__str[] = "str"; +static char __pyx_k__base[] = "base"; +static char __pyx_k__item[] = "item"; +static char __pyx_k__ndim[] = "ndim"; +static char __pyx_k__size[] = "size"; +static char __pyx_k__view[] = "view"; +static char __pyx_k__array[] = "array"; +static char __pyx_k__descr[] = "descr"; +static char __pyx_k__dtype[] = "dtype"; +static char __pyx_k__names[] = "names"; +static char __pyx_k__numpy[] = "numpy"; +static char __pyx_k__range[] = "range"; +static char __pyx_k__shape[] = "shape"; +static char __pyx_k__fields[] = "fields"; +static char __pyx_k__format[] = "format"; +static char __pyx_k__reshape[] = "reshape"; +static char __pyx_k__squeeze[] = "squeeze"; +static char __pyx_k__strides[] = "strides"; +static char __pyx_k____main__[] = "__main__"; +static char __pyx_k____test__[] = "__test__"; +static char __pyx_k__itemsize[] = "itemsize"; +static char __pyx_k__readonly[] = "readonly"; +static char __pyx_k__type_num[] = "type_num"; +static char __pyx_k__byteorder[] 
= "byteorder"; +static char __pyx_k__isbuiltin[] = "isbuiltin"; +static char __pyx_k__ValueError[] = "ValueError"; +static char __pyx_k__suboffsets[] = "suboffsets"; +static char __pyx_k__RuntimeError[] = "RuntimeError"; +static char __pyx_k__squeeze_element[] = "squeeze_element"; +static char __pyx_k__chars_to_strings[] = "chars_to_strings"; +static char __pyx_k__ascontiguousarray[] = "ascontiguousarray"; +static PyObject *__pyx_kp_u_1; +static PyObject *__pyx_kp_u_2; +static PyObject *__pyx_kp_u_3; +static PyObject *__pyx_kp_u_4; +static PyObject *__pyx_kp_u_5; +static PyObject *__pyx_kp_u_6; +static PyObject *__pyx_kp_u_8; +static PyObject *__pyx_kp_u_9; +static PyObject *__pyx_n_s__RuntimeError; +static PyObject *__pyx_n_s__ValueError; +static PyObject *__pyx_n_s____main__; +static PyObject *__pyx_n_s____test__; +static PyObject *__pyx_n_s__array; +static PyObject *__pyx_n_s__ascontiguousarray; +static PyObject *__pyx_n_s__base; +static PyObject *__pyx_n_s__buf; +static PyObject *__pyx_n_s__byteorder; +static PyObject *__pyx_n_s__chars_to_strings; +static PyObject *__pyx_n_s__descr; +static PyObject *__pyx_n_s__dtype; +static PyObject *__pyx_n_s__fields; +static PyObject *__pyx_n_s__format; +static PyObject *__pyx_n_s__isbuiltin; +static PyObject *__pyx_n_s__item; +static PyObject *__pyx_n_s__itemsize; +static PyObject *__pyx_n_s__names; +static PyObject *__pyx_n_s__ndim; +static PyObject *__pyx_n_s__np; +static PyObject *__pyx_n_s__numpy; +static PyObject *__pyx_n_s__obj; +static PyObject *__pyx_n_s__range; +static PyObject *__pyx_n_s__readonly; +static PyObject *__pyx_n_s__reshape; +static PyObject *__pyx_n_s__shape; +static PyObject *__pyx_n_s__size; +static PyObject *__pyx_n_s__squeeze; +static PyObject *__pyx_n_s__squeeze_element; +static PyObject *__pyx_n_s__str; +static PyObject *__pyx_n_s__strides; +static PyObject *__pyx_n_s__suboffsets; +static PyObject *__pyx_n_s__type_num; +static PyObject *__pyx_n_s__view; +static PyObject *__pyx_int_15; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":9 + * + * + * cpdef size_t cproduct(tup): # <<<<<<<<<<<<<< + * cdef size_t res = 1 + * cdef int i + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_cproduct(PyObject *__pyx_self, PyObject *__pyx_v_tup); /*proto*/ +static size_t __pyx_f_5scipy_2io_6matlab_9mio_utils_cproduct(PyObject *__pyx_v_tup, int __pyx_skip_dispatch) { + size_t __pyx_v_res; + int __pyx_v_i; + size_t __pyx_r; + Py_ssize_t __pyx_t_1; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + size_t __pyx_t_4; + __Pyx_RefNannySetupContext("cproduct"); + __Pyx_INCREF(__pyx_v_tup); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":10 + * + * cpdef size_t cproduct(tup): + * cdef size_t res = 1 # <<<<<<<<<<<<<< + * cdef int i + * for i in range(len(tup)): + */ + __pyx_v_res = 1; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":12 + * cdef size_t res = 1 + * cdef int i + * for i in range(len(tup)): # <<<<<<<<<<<<<< + * res *= tup[i] + * return res + */ + __pyx_t_1 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 12; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_1; __pyx_t_2+=1) { + __pyx_v_i = __pyx_t_2; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":13 + * cdef int i + * for i in range(len(tup)): + * res *= tup[i] # <<<<<<<<<<<<<< + * return res + * + */ + __pyx_t_3 = __Pyx_GetItemInt(__pyx_v_tup, __pyx_v_i, sizeof(int), 
PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 13; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = __Pyx_PyInt_AsSize_t(__pyx_t_3); if (unlikely((__pyx_t_4 == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 13; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_v_res *= __pyx_t_4; + } + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":14 + * for i in range(len(tup)): + * res *= tup[i] + * return res # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = __pyx_v_res; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_WriteUnraisable("scipy.io.matlab.mio_utils.cproduct"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_tup); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":9 + * + * + * cpdef size_t cproduct(tup): # <<<<<<<<<<<<<< + * cdef size_t res = 1 + * cdef int i + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_cproduct(PyObject *__pyx_self, PyObject *__pyx_v_tup); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_cproduct(PyObject *__pyx_self, PyObject *__pyx_v_tup) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("cproduct"); + __pyx_self = __pyx_self; + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_PyInt_FromSize_t(__pyx_f_5scipy_2io_6matlab_9mio_utils_cproduct(__pyx_v_tup, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 9; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio_utils.cproduct"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":17 + * + * + * cpdef object squeeze_element(cnp.ndarray arr): # <<<<<<<<<<<<<< + * ''' Return squeezed element + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_squeeze_element(PyObject *__pyx_self, PyObject *__pyx_v_arr); /*proto*/ +static PyObject *__pyx_f_5scipy_2io_6matlab_9mio_utils_squeeze_element(PyArrayObject *__pyx_v_arr, int __pyx_skip_dispatch) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + int __pyx_t_2; + int __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + int __pyx_t_6; + __Pyx_RefNannySetupContext("squeeze_element"); + __Pyx_INCREF((PyObject *)__pyx_v_arr); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":22 + * The returned object may not be an ndarray - for example if we do + * ``arr.item`` to return a ``mat_struct`` object from a struct array ''' + * if not arr.size: # <<<<<<<<<<<<<< + * return np.array([]) + * arr = np.squeeze(arr) + */ + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__size); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 22; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 22; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = (!__pyx_t_2); + if 
(__pyx_t_3) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":23 + * ``arr.item`` to return a ``mat_struct`` object from a struct array ''' + * if not arr.size: + * return np.array([]) # <<<<<<<<<<<<<< + * arr = np.squeeze(arr) + * if not arr.shape and arr.dtype.isbuiltin: # 0d coverted to scalar + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 23; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__array); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 23; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 23; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 23; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_t_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_t_1)); + __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 23; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":24 + * if not arr.size: + * return np.array([]) + * arr = np.squeeze(arr) # <<<<<<<<<<<<<< + * if not arr.shape and arr.dtype.isbuiltin: # 0d coverted to scalar + * return arr.item() + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__squeeze); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_v_arr)); + PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_v_arr)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_arr)); + __pyx_t_4 = PyObject_Call(__pyx_t_5, __pyx_t_1, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_arr)); + __pyx_v_arr = ((PyArrayObject *)__pyx_t_4); + __pyx_t_4 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":25 + * return np.array([]) + * arr = np.squeeze(arr) + * if not arr.shape and arr.dtype.isbuiltin: # 0d coverted to scalar # <<<<<<<<<<<<<< + * 
return arr.item() + * return arr + */ + __pyx_t_3 = (!(__pyx_v_arr->dimensions != 0)); + if (__pyx_t_3) { + __pyx_t_4 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__dtype); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 25; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__isbuiltin); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 25; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 25; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_6 = __pyx_t_2; + } else { + __pyx_t_6 = __pyx_t_3; + } + if (__pyx_t_6) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":26 + * arr = np.squeeze(arr) + * if not arr.shape and arr.dtype.isbuiltin: # 0d coverted to scalar + * return arr.item() # <<<<<<<<<<<<<< + * return arr + * + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__item); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 26; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = PyObject_Call(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 26; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_r = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L0; + goto __pyx_L4; + } + __pyx_L4:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":27 + * if not arr.shape and arr.dtype.isbuiltin: # 0d coverted to scalar + * return arr.item() + * return arr # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_arr)); + __pyx_r = ((PyObject *)__pyx_v_arr); + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("scipy.io.matlab.mio_utils.squeeze_element"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_arr); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":17 + * + * + * cpdef object squeeze_element(cnp.ndarray arr): # <<<<<<<<<<<<<< + * ''' Return squeezed element + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_squeeze_element(PyObject *__pyx_self, PyObject *__pyx_v_arr); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_9mio_utils_squeeze_element[] = " Return squeezed element\n\n The returned object may not be an ndarray - for example if we do\n ``arr.item`` to return a ``mat_struct`` object from a struct array "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_squeeze_element(PyObject *__pyx_self, PyObject *__pyx_v_arr) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("squeeze_element"); + __pyx_self = __pyx_self; + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_arr), __pyx_ptype_5numpy_ndarray, 1, "arr", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = 
__pyx_f_5scipy_2io_6matlab_9mio_utils_squeeze_element(((PyArrayObject *)__pyx_v_arr), 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 17; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio_utils.squeeze_element"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":30 + * + * + * cpdef cnp.ndarray chars_to_strings(in_arr): # <<<<<<<<<<<<<< + * ''' Convert final axis of char array to strings + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_chars_to_strings(PyObject *__pyx_self, PyObject *__pyx_v_in_arr); /*proto*/ +static PyArrayObject *__pyx_f_5scipy_2io_6matlab_9mio_utils_chars_to_strings(PyObject *__pyx_v_in_arr, int __pyx_skip_dispatch) { + PyArrayObject *__pyx_v_arr = 0; + int __pyx_v_ndim; + npy_intp *__pyx_v_dims; + npy_intp __pyx_v_last_dim; + PyObject *__pyx_v_new_dt_str; + PyArrayObject *__pyx_r = NULL; + int __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + __Pyx_RefNannySetupContext("chars_to_strings"); + __Pyx_INCREF(__pyx_v_in_arr); + __pyx_v_new_dt_str = Py_None; __Pyx_INCREF(Py_None); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":44 + * ``arr`` + * ''' + * cdef cnp.ndarray arr = in_arr # <<<<<<<<<<<<<< + * cdef int ndim = arr.ndim + * cdef cnp.npy_intp *dims = arr.shape + */ + if (!(likely(((__pyx_v_in_arr) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_in_arr, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 44; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_INCREF(__pyx_v_in_arr); + __pyx_v_arr = ((PyArrayObject *)__pyx_v_in_arr); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":45 + * ''' + * cdef cnp.ndarray arr = in_arr + * cdef int ndim = arr.ndim # <<<<<<<<<<<<<< + * cdef cnp.npy_intp *dims = arr.shape + * cdef cnp.npy_intp last_dim = dims[ndim-1] + */ + __pyx_v_ndim = __pyx_v_arr->nd; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":46 + * cdef cnp.ndarray arr = in_arr + * cdef int ndim = arr.ndim + * cdef cnp.npy_intp *dims = arr.shape # <<<<<<<<<<<<<< + * cdef cnp.npy_intp last_dim = dims[ndim-1] + * cdef object new_dt_str + */ + __pyx_v_dims = __pyx_v_arr->dimensions; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":47 + * cdef int ndim = arr.ndim + * cdef cnp.npy_intp *dims = arr.shape + * cdef cnp.npy_intp last_dim = dims[ndim-1] # <<<<<<<<<<<<<< + * cdef object new_dt_str + * if last_dim == 0: # deal with empty array case + */ + __pyx_v_last_dim = (__pyx_v_dims[(__pyx_v_ndim - 1)]); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":49 + * cdef cnp.npy_intp last_dim = dims[ndim-1] + * cdef object new_dt_str + * if last_dim == 0: # deal with empty array case # <<<<<<<<<<<<<< + * new_dt_str = arr.dtype.str + * else: # make new dtype string with N appended + */ + __pyx_t_1 = (__pyx_v_last_dim == 0); + if (__pyx_t_1) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":50 + * cdef object new_dt_str + * if last_dim == 0: # deal with empty array case + * new_dt_str = arr.dtype.str # <<<<<<<<<<<<<< + * else: # make new dtype string with N 
appended + * new_dt_str = arr.dtype.str[:-1] + str(last_dim) + */ + __pyx_t_2 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__dtype); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 50; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__str); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 50; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_new_dt_str); + __pyx_v_new_dt_str = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L3; + } + /*else*/ { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":52 + * new_dt_str = arr.dtype.str + * else: # make new dtype string with N appended + * new_dt_str = arr.dtype.str[:-1] + str(last_dim) # <<<<<<<<<<<<<< + * # Copy to deal with F ordered arrays + * arr = np.ascontiguousarray(arr) + */ + __pyx_t_3 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__dtype); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__str); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PySequence_GetSlice(__pyx_t_2, 0, -1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyInt_to_py_npy_intp(__pyx_v_last_dim); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(((PyObject *)((PyObject*)&PyString_Type)), __pyx_t_4, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyNumber_Add(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_new_dt_str); + __pyx_v_new_dt_str = __pyx_t_4; + __pyx_t_4 = 0; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":54 + * new_dt_str = arr.dtype.str[:-1] + str(last_dim) + * # Copy to deal with F ordered arrays + * arr = np.ascontiguousarray(arr) # <<<<<<<<<<<<<< + * arr = arr.view(new_dt_str) + * return arr.reshape(in_arr.shape[:-1]) + */ + __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 54; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_4, __pyx_n_s__ascontiguousarray); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 54; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + 
__Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 54; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(((PyObject *)__pyx_v_arr)); + PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_v_arr)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_arr)); + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 54; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 54; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_arr)); + __pyx_v_arr = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":55 + * # Copy to deal with F ordered arrays + * arr = np.ascontiguousarray(arr) + * arr = arr.view(new_dt_str) # <<<<<<<<<<<<<< + * return arr.reshape(in_arr.shape[:-1]) + */ + __pyx_t_3 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__view); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 55; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 55; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(__pyx_v_new_dt_str); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v_new_dt_str); + __Pyx_GIVEREF(__pyx_v_new_dt_str); + __pyx_t_2 = PyObject_Call(__pyx_t_3, __pyx_t_4, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 55; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!(likely(((__pyx_t_2) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_2, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 55; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_arr)); + __pyx_v_arr = ((PyArrayObject *)__pyx_t_2); + __pyx_t_2 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":56 + * arr = np.ascontiguousarray(arr) + * arr = arr.view(new_dt_str) + * return arr.reshape(in_arr.shape[:-1]) # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyObject_GetAttr(((PyObject *)__pyx_v_arr), __pyx_n_s__reshape); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = PyObject_GetAttr(__pyx_v_in_arr, __pyx_n_s__shape); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PySequence_GetSlice(__pyx_t_4, 0, -1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + 
__Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = ((PyArrayObject *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.io.matlab.mio_utils.chars_to_strings"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XDECREF((PyObject *)__pyx_v_arr); + __Pyx_DECREF(__pyx_v_new_dt_str); + __Pyx_DECREF(__pyx_v_in_arr); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":30 + * + * + * cpdef cnp.ndarray chars_to_strings(in_arr): # <<<<<<<<<<<<<< + * ''' Convert final axis of char array to strings + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_chars_to_strings(PyObject *__pyx_self, PyObject *__pyx_v_in_arr); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_9mio_utils_chars_to_strings[] = " Convert final axis of char array to strings\n\n Parameters\n ----------\n in_arr : array\n dtype of 'U1'\n \n Returns\n -------\n str_arr : array\n dtype of 'UN' where N is the length of the last dimension of\n ``arr``\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_9mio_utils_chars_to_strings(PyObject *__pyx_self, PyObject *__pyx_v_in_arr) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("chars_to_strings"); + __pyx_self = __pyx_self; + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)__pyx_f_5scipy_2io_6matlab_9mio_utils_chars_to_strings(__pyx_v_in_arr, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 30; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.mio_utils.chars_to_strings"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":187 + * # experimental exception made for __getbuffer__ and __releasebuffer__ + * # -- the details of this may change. + * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< + * # This implementation of getbuffer is geared towards Cython + * # requirements, and does not yet fullfill the PEP. 
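The generated C above implements the three small helpers from scipy/io/matlab/mio_utils.pyx whose Cython source is quoted in the /* ... */ comments: cproduct (product of the entries of a shape tuple), squeeze_element (squeeze a read result, returning an empty array for empty input and a Python scalar for 0-d results with a builtin dtype) and chars_to_strings (view the last axis of a char array as fixed-width strings). A rough pure-Python sketch of the same behaviour, for orientation only (not the shipped implementation, just a transcription of the quoted .pyx source):

import numpy as np

def cproduct(tup):
    # product of the entries of a shape tuple
    res = 1
    for i in range(len(tup)):
        res *= tup[i]
    return res

def squeeze_element(arr):
    # empty input -> empty float array; 0-d result with a builtin
    # dtype -> Python scalar; anything else -> squeezed ndarray
    if not arr.size:
        return np.array([])
    arr = np.squeeze(arr)
    if not arr.shape and arr.dtype.isbuiltin:
        return arr.item()
    return arr

def chars_to_strings(in_arr):
    # build a new dtype string with the length of the last axis
    # appended ('<U1' -> '<U3'), then view and drop that axis
    arr = np.asarray(in_arr)
    in_shape = arr.shape
    last_dim = in_shape[-1]
    if last_dim == 0:                   # empty array: keep dtype as is
        new_dt_str = arr.dtype.str
    else:
        new_dt_str = arr.dtype.str[:-1] + str(last_dim)
    arr = np.ascontiguousarray(arr)     # copy to deal with F-ordered input
    arr = arr.view(new_dt_str)
    return arr.reshape(in_shape[:-1])

For example, chars_to_strings(np.array([['a', 'b', 'c']])) should give array(['abc'], dtype='<U3').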
+ */ + +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ +static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { + int __pyx_v_copy_shape; + int __pyx_v_i; + int __pyx_v_ndim; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + int __pyx_v_t; + char *__pyx_v_f; + PyArray_Descr *__pyx_v_descr = 0; + int __pyx_v_offset; + int __pyx_v_hasfields; + int __pyx_r; + int __pyx_t_1; + int __pyx_t_2; + int __pyx_t_3; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + int __pyx_t_6; + int __pyx_t_7; + int __pyx_t_8; + char *__pyx_t_9; + __Pyx_RefNannySetupContext("__getbuffer__"); + if (__pyx_v_info == NULL) return 0; + __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(__pyx_v_info->obj); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":193 + * # of flags + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + */ + __pyx_v_endian_detector = 1; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":194 + * cdef int copy_shape, i, ndim + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * + * ndim = PyArray_NDIM(self) + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":196 + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * + * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<< + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_v_ndim = PyArray_NDIM(((PyArrayObject *)__pyx_v_self)); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":198 + * ndim = PyArray_NDIM(self) + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * copy_shape = 1 + * else: + */ + __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":199 + * + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * copy_shape = 1 # <<<<<<<<<<<<<< + * else: + * copy_shape = 0 + */ + __pyx_v_copy_shape = 1; + goto __pyx_L5; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":201 + * copy_shape = 1 + * else: + * copy_shape = 0 # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + */ + __pyx_v_copy_shape = 0; + } + __pyx_L5:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":203 + * copy_shape = 0 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") + */ + __pyx_t_1 = ((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS); + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":204 + * + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not C contiguous") + * + */ + __pyx_t_2 = (!PyArray_CHKFLAGS(((PyArrayObject *)__pyx_v_self), NPY_C_CONTIGUOUS)); + __pyx_t_3 = __pyx_t_2; + } else { + 
__pyx_t_3 = __pyx_t_1; + } + if (__pyx_t_3) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":205 + * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): + * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + */ + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_1)); + PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_u_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_1)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L6; + } + __pyx_L6:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":207 + * raise ValueError(u"ndarray is not C contiguous") + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") + */ + __pyx_t_3 = ((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS); + if (__pyx_t_3) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":208 + * + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<< + * raise ValueError(u"ndarray is not Fortran contiguous") + * + */ + __pyx_t_1 = (!PyArray_CHKFLAGS(((PyArrayObject *)__pyx_v_self), NPY_F_CONTIGUOUS)); + __pyx_t_2 = __pyx_t_1; + } else { + __pyx_t_2 = __pyx_t_3; + } + if (__pyx_t_2) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":209 + * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) + * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): + * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< + * + * info.buf = PyArray_DATA(self) + */ + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 209; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_2)); + PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_u_2)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_2)); + __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 209; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_4, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 209; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L7; + } + __pyx_L7:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":211 + * raise ValueError(u"ndarray is not Fortran contiguous") + * + * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<< + * info.ndim = ndim + * if copy_shape: + */ + __pyx_v_info->buf = 
PyArray_DATA(((PyArrayObject *)__pyx_v_self)); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":212 + * + * info.buf = PyArray_DATA(self) + * info.ndim = ndim # <<<<<<<<<<<<<< + * if copy_shape: + * # Allocate new buffer for strides and shape info. This is allocated + */ + __pyx_v_info->ndim = __pyx_v_ndim; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":213 + * info.buf = PyArray_DATA(self) + * info.ndim = ndim + * if copy_shape: # <<<<<<<<<<<<<< + * # Allocate new buffer for strides and shape info. This is allocated + * # as one block, strides first. + */ + __pyx_t_6 = __pyx_v_copy_shape; + if (__pyx_t_6) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":216 + * # Allocate new buffer for strides and shape info. This is allocated + * # as one block, strides first. + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) # <<<<<<<<<<<<<< + * info.shape = info.strides + ndim + * for i in range(ndim): + */ + __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * __pyx_v_ndim) * 2))); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":217 + * # as one block, strides first. + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim # <<<<<<<<<<<<<< + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + */ + __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":218 + * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) + * info.shape = info.strides + ndim + * for i in range(ndim): # <<<<<<<<<<<<<< + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] + */ + __pyx_t_6 = __pyx_v_ndim; + for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) { + __pyx_v_i = __pyx_t_7; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":219 + * info.shape = info.strides + ndim + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<< + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + */ + (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))[__pyx_v_i]); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":220 + * for i in range(ndim): + * info.strides[i] = PyArray_STRIDES(self)[i] + * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<< + * else: + * info.strides = PyArray_STRIDES(self) + */ + (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(((PyArrayObject *)__pyx_v_self))[__pyx_v_i]); + } + goto __pyx_L8; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":222 + * info.shape[i] = PyArray_DIMS(self)[i] + * else: + * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<< + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + */ + __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":223 + * else: + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<< + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + */ + __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(((PyArrayObject *)__pyx_v_self))); + } + __pyx_L8:; + + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":224 + * info.strides = PyArray_STRIDES(self) + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL # <<<<<<<<<<<<<< + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) + */ + __pyx_v_info->suboffsets = NULL; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":225 + * info.shape = PyArray_DIMS(self) + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<< + * info.readonly = not PyArray_ISWRITEABLE(self) + * + */ + __pyx_v_info->itemsize = PyArray_ITEMSIZE(((PyArrayObject *)__pyx_v_self)); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":226 + * info.suboffsets = NULL + * info.itemsize = PyArray_ITEMSIZE(self) + * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<< + * + * cdef int t + */ + __pyx_v_info->readonly = (!PyArray_ISWRITEABLE(((PyArrayObject *)__pyx_v_self))); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":229 + * + * cdef int t + * cdef char* f = NULL # <<<<<<<<<<<<<< + * cdef dtype descr = self.descr + * cdef list stack + */ + __pyx_v_f = NULL; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":230 + * cdef int t + * cdef char* f = NULL + * cdef dtype descr = self.descr # <<<<<<<<<<<<<< + * cdef list stack + * cdef int offset + */ + __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_v_self)->descr)); + __pyx_v_descr = ((PyArrayObject *)__pyx_v_self)->descr; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":234 + * cdef int offset + * + * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<< + * + * if not hasfields and not copy_shape: + */ + __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":236 + * cdef bint hasfields = PyDataType_HASFIELDS(descr) + * + * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< + * # do not call releasebuffer + * info.obj = None + */ + __pyx_t_2 = (!__pyx_v_hasfields); + if (__pyx_t_2) { + __pyx_t_3 = (!__pyx_v_copy_shape); + __pyx_t_1 = __pyx_t_3; + } else { + __pyx_t_1 = __pyx_t_2; + } + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":238 + * if not hasfields and not copy_shape: + * # do not call releasebuffer + * info.obj = None # <<<<<<<<<<<<<< + * else: + * # need to call releasebuffer + */ + __Pyx_INCREF(Py_None); + __Pyx_GIVEREF(Py_None); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = Py_None; + goto __pyx_L11; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":241 + * else: + * # need to call releasebuffer + * info.obj = self # <<<<<<<<<<<<<< + * + * if not hasfields: + */ + __Pyx_INCREF(__pyx_v_self); + __Pyx_GIVEREF(__pyx_v_self); + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); + __pyx_v_info->obj = __pyx_v_self; + } + __pyx_L11:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":243 + * info.obj = self + * + * if not hasfields: # <<<<<<<<<<<<<< + * t = descr.type_num + * if ((descr.byteorder == '>' and little_endian) or + */ + __pyx_t_1 = (!__pyx_v_hasfields); + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":244 + * + * if not 
hasfields: + * t = descr.type_num # <<<<<<<<<<<<<< + * if ((descr.byteorder == '>' and little_endian) or + * (descr.byteorder == '<' and not little_endian)): + */ + __pyx_v_t = __pyx_v_descr->type_num; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":245 + * if not hasfields: + * t = descr.type_num + * if ((descr.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< + * (descr.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_1 = (__pyx_v_descr->byteorder == '>'); + if (__pyx_t_1) { + __pyx_t_2 = __pyx_v_little_endian; + } else { + __pyx_t_2 = __pyx_t_1; + } + if (!__pyx_t_2) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":246 + * t = descr.type_num + * if ((descr.byteorder == '>' and little_endian) or + * (descr.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + */ + __pyx_t_1 = (__pyx_v_descr->byteorder == '<'); + if (__pyx_t_1) { + __pyx_t_3 = (!__pyx_v_little_endian); + __pyx_t_8 = __pyx_t_3; + } else { + __pyx_t_8 = __pyx_t_1; + } + __pyx_t_1 = __pyx_t_8; + } else { + __pyx_t_1 = __pyx_t_2; + } + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":247 + * if ((descr.byteorder == '>' and little_endian) or + * (descr.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + */ + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_3)); + PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)__pyx_kp_u_3)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_3)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 247; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L13; + } + __pyx_L13:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":248 + * (descr.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + */ + __pyx_t_1 = (__pyx_v_t == NPY_BYTE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__b; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":249 + * raise ValueError(u"Non-native byte order not supported") + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + */ + __pyx_t_1 = (__pyx_v_t == NPY_UBYTE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__B; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":250 + * if t == NPY_BYTE: f = "b" + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + */ + __pyx_t_1 
= (__pyx_v_t == NPY_SHORT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__h; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":251 + * elif t == NPY_UBYTE: f = "B" + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + */ + __pyx_t_1 = (__pyx_v_t == NPY_USHORT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__H; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":252 + * elif t == NPY_SHORT: f = "h" + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + */ + __pyx_t_1 = (__pyx_v_t == NPY_INT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__i; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":253 + * elif t == NPY_USHORT: f = "H" + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + */ + __pyx_t_1 = (__pyx_v_t == NPY_UINT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__I; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":254 + * elif t == NPY_INT: f = "i" + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + */ + __pyx_t_1 = (__pyx_v_t == NPY_LONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__l; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":255 + * elif t == NPY_UINT: f = "I" + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + */ + __pyx_t_1 = (__pyx_v_t == NPY_ULONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__L; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":256 + * elif t == NPY_LONG: f = "l" + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + */ + __pyx_t_1 = (__pyx_v_t == NPY_LONGLONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__q; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":257 + * elif t == NPY_ULONG: f = "L" + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + */ + __pyx_t_1 = (__pyx_v_t == NPY_ULONGLONG); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Q; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":258 + * elif t == NPY_LONGLONG: f = "q" + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + */ + __pyx_t_1 = (__pyx_v_t == NPY_FLOAT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__f; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":259 + * elif t == NPY_ULONGLONG: f = "Q" + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + */ + __pyx_t_1 = (__pyx_v_t == NPY_DOUBLE); + if (__pyx_t_1) { + __pyx_v_f 
= __pyx_k__d; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":260 + * elif t == NPY_FLOAT: f = "f" + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + */ + __pyx_t_1 = (__pyx_v_t == NPY_LONGDOUBLE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__g; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":261 + * elif t == NPY_DOUBLE: f = "d" + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + */ + __pyx_t_1 = (__pyx_v_t == NPY_CFLOAT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Zf; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":262 + * elif t == NPY_LONGDOUBLE: f = "g" + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" + */ + __pyx_t_1 = (__pyx_v_t == NPY_CDOUBLE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Zd; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":263 + * elif t == NPY_CFLOAT: f = "Zf" + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f = "O" + * else: + */ + __pyx_t_1 = (__pyx_v_t == NPY_CLONGDOUBLE); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__Zg; + goto __pyx_L14; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":264 + * elif t == NPY_CDOUBLE: f = "Zd" + * elif t == NPY_CLONGDOUBLE: f = "Zg" + * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + __pyx_t_1 = (__pyx_v_t == NPY_OBJECT); + if (__pyx_t_1) { + __pyx_v_f = __pyx_k__O; + goto __pyx_L14; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":266 + * elif t == NPY_OBJECT: f = "O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * info.format = f + * return + */ + __pyx_t_5 = PyInt_FromLong(__pyx_v_t); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_4 = PyNumber_Remainder(((PyObject *)__pyx_kp_u_4), __pyx_t_5); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __pyx_t_4 = 0; + __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_4, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 266; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_L14:; + + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":267 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f # <<<<<<<<<<<<<< + * return + * else: + */ + __pyx_v_info->format = __pyx_v_f; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":268 + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * info.format = f + * return # <<<<<<<<<<<<<< + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + */ + __pyx_r = 0; + goto __pyx_L0; + goto __pyx_L12; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":270 + * return + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) # <<<<<<<<<<<<<< + * info.format[0] = '^' # Native data types, manual alignment + * offset = 0 + */ + __pyx_v_info->format = ((char *)malloc(255)); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":271 + * else: + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = '^' # Native data types, manual alignment # <<<<<<<<<<<<<< + * offset = 0 + * f = _util_dtypestring(descr, info.format + 1, + */ + (__pyx_v_info->format[0]) = '^'; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":272 + * info.format = stdlib.malloc(_buffer_format_string_len) + * info.format[0] = '^' # Native data types, manual alignment + * offset = 0 # <<<<<<<<<<<<<< + * f = _util_dtypestring(descr, info.format + 1, + * info.format + _buffer_format_string_len, + */ + __pyx_v_offset = 0; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":275 + * f = _util_dtypestring(descr, info.format + 1, + * info.format + _buffer_format_string_len, + * &offset) # <<<<<<<<<<<<<< + * f[0] = 0 # Terminate format string + * + */ + __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 255), (&__pyx_v_offset)); if (unlikely(__pyx_t_9 == NULL)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_f = __pyx_t_9; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":276 + * info.format + _buffer_format_string_len, + * &offset) + * f[0] = 0 # Terminate format string # <<<<<<<<<<<<<< + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + */ + (__pyx_v_f[0]) = 0; + } + __pyx_L12:; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("numpy.ndarray.__getbuffer__"); + __pyx_r = -1; + __Pyx_GOTREF(__pyx_v_info->obj); + __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL; + goto __pyx_L2; + __pyx_L0:; + if (__pyx_v_info->obj == Py_None) { + __Pyx_GOTREF(Py_None); + __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL; + } + __pyx_L2:; + __Pyx_XDECREF((PyObject *)__pyx_v_descr); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":278 + * f[0] = 0 # Terminate format string + * + * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + */ + +static void __pyx_pf_5numpy_7ndarray___releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/ +static void __pyx_pf_5numpy_7ndarray___releasebuffer__(PyObject *__pyx_v_self, 
Py_buffer *__pyx_v_info) { + int __pyx_t_1; + __Pyx_RefNannySetupContext("__releasebuffer__"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":279 + * + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + */ + __pyx_t_1 = PyArray_HASFIELDS(((PyArrayObject *)__pyx_v_self)); + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":280 + * def __releasebuffer__(ndarray self, Py_buffer* info): + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) # <<<<<<<<<<<<<< + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) + */ + free(__pyx_v_info->format); + goto __pyx_L5; + } + __pyx_L5:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":281 + * if PyArray_HASFIELDS(self): + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< + * stdlib.free(info.strides) + * # info.shape was stored after info.strides in the same block + */ + __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":282 + * stdlib.free(info.format) + * if sizeof(npy_intp) != sizeof(Py_ssize_t): + * stdlib.free(info.strides) # <<<<<<<<<<<<<< + * # info.shape was stored after info.strides in the same block + * + */ + free(__pyx_v_info->strides); + goto __pyx_L6; + } + __pyx_L6:; + + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":755 + * ctypedef npy_cdouble complex_t + * + * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(1, a) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew1"); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":756 + * + * cdef inline object PyArray_MultiIterNew1(a): + * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew2(a, b): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 756; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew1"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":758 + * return PyArray_MultiIterNew(1, a) + * + * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(2, a, b) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew2"); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":759 + * + * 
cdef inline object PyArray_MultiIterNew2(a, b): + * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 759; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":761 + * return PyArray_MultiIterNew(2, a, b) + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(3, a, b, c) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew3"); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":762 + * + * cdef inline object PyArray_MultiIterNew3(a, b, c): + * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 762; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew3"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":764 + * return PyArray_MultiIterNew(3, a, b, c) + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(4, a, b, c, d) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew4"); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":765 + * + * cdef inline object PyArray_MultiIterNew4(a, b, c, d): + * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 765; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew4"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + 
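The PyArray_MultiIterNew1 through PyArray_MultiIterNew5 inline wrappers in this block come from Cython's numpy.pxd and simply forward to numpy's PyArray_MultiIterNew C API with one to five operands; that API is numpy's broadcasting iterator, and np.broadcast is its Python-level counterpart. A minimal sketch of the equivalent call from Python (illustrative only):

import numpy as np

# np.broadcast iterates over the broadcast of its arguments in C order,
# which is what PyArray_MultiIterNew(n, ...) provides at the C level.
x = np.arange(3)             # shape (3,)
y = np.arange(3)[:, None]    # shape (3, 1)
b = np.broadcast(x, y)
print(b.shape)               # (3, 3)
print(next(b))               # the first (x, y) pair of the broadcast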
__Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":767 + * return PyArray_MultiIterNew(4, a, b, c, d) + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("PyArray_MultiIterNew5"); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":768 + * + * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): + * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 768; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("numpy.PyArray_MultiIterNew5"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":770 + * return PyArray_MultiIterNew(5, a, b, c, d, e) + * + * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< + * # Recursive utility function used in __getbuffer__ to get format + * # string. The new location in the format string is returned. 
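(Illustration only, assuming plain public numpy; not taken from the generated sources above.) The PyArray_MultiIterNew1..5 wrappers emitted here construct numpy's broadcasting multi-iterator, which is reachable from Python as np.broadcast:

    import numpy as np

    # np.broadcast is backed by the same multi-iterator object that
    # PyArray_MultiIterNew creates: it walks several arrays together
    # under broadcasting rules.
    b = np.broadcast(np.arange(3)[:, None], np.arange(4))
    print(b.shape)    # (3, 4)
    print(b.numiter)  # 2 arrays being broadcast together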
+ */ + +static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) { + PyArray_Descr *__pyx_v_child; + int __pyx_v_endian_detector; + int __pyx_v_little_endian; + PyObject *__pyx_v_fields; + PyObject *__pyx_v_childname; + PyObject *__pyx_v_new_offset; + PyObject *__pyx_v_t; + char *__pyx_r; + Py_ssize_t __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + int __pyx_t_6; + int __pyx_t_7; + int __pyx_t_8; + int __pyx_t_9; + char *__pyx_t_10; + __Pyx_RefNannySetupContext("_util_dtypestring"); + __Pyx_INCREF((PyObject *)__pyx_v_descr); + __pyx_v_child = ((PyArray_Descr *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_fields = ((PyObject *)Py_None); __Pyx_INCREF(Py_None); + __pyx_v_childname = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_new_offset = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_t = Py_None; __Pyx_INCREF(Py_None); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":777 + * cdef int delta_offset + * cdef tuple i + * cdef int endian_detector = 1 # <<<<<<<<<<<<<< + * cdef bint little_endian = ((&endian_detector)[0] != 0) + * cdef tuple fields + */ + __pyx_v_endian_detector = 1; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":778 + * cdef tuple i + * cdef int endian_detector = 1 + * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< + * cdef tuple fields + * + */ + __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":781 + * cdef tuple fields + * + * for childname in descr.names: # <<<<<<<<<<<<<< + * fields = descr.fields[childname] + * child, new_offset = fields + */ + if (likely(((PyObject *)__pyx_v_descr->names) != Py_None)) { + __pyx_t_1 = 0; __pyx_t_2 = ((PyObject *)__pyx_v_descr->names); __Pyx_INCREF(__pyx_t_2); + } else { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); {__pyx_filename = __pyx_f[1]; __pyx_lineno = 781; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + for (;;) { + if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_2)) break; + __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_1); __Pyx_INCREF(__pyx_t_3); __pyx_t_1++; + __Pyx_DECREF(__pyx_v_childname); + __pyx_v_childname = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":782 + * + * for childname in descr.names: + * fields = descr.fields[childname] # <<<<<<<<<<<<<< + * child, new_offset = fields + * + */ + __pyx_t_3 = PyObject_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (!__pyx_t_3) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 782; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected tuple, got %.200s", Py_TYPE(__pyx_t_3)->tp_name), 0))) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 782; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_v_fields)); + __pyx_v_fields = ((PyObject *)__pyx_t_3); + __pyx_t_3 = 0; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":783 + * for childname in descr.names: + * fields = descr.fields[childname] + * child, new_offset = fields # <<<<<<<<<<<<<< + * + * if (end - f) - (new_offset - offset[0]) < 15: + */ + if (likely(((PyObject 
*)__pyx_v_fields) != Py_None) && likely(PyTuple_GET_SIZE(((PyObject *)__pyx_v_fields)) == 2)) { + PyObject* tuple = ((PyObject *)__pyx_v_fields); + __pyx_t_3 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_3); + if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_4); + __Pyx_DECREF(((PyObject *)__pyx_v_child)); + __pyx_v_child = ((PyArray_Descr *)__pyx_t_3); + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_new_offset); + __pyx_v_new_offset = __pyx_t_4; + __pyx_t_4 = 0; + } else { + __Pyx_UnpackTupleError(((PyObject *)__pyx_v_fields), 2); + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":785 + * child, new_offset = fields + * + * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + */ + __pyx_t_4 = PyInt_FromLong((__pyx_v_end - __pyx_v_f)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyInt_FromLong((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_3); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Subtract(__pyx_t_4, __pyx_t_5); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyObject_RichCompare(__pyx_t_3, __pyx_int_15, Py_LT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 785; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":786 + * + * if (end - f) - (new_offset - offset[0]) < 15: + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< + * + * if ((child.byteorder == '>' and little_endian) or + */ + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_5)); + PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_kp_u_5)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_5)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_RuntimeError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L5; + } + __pyx_L5:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":788 + * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") + * + * if ((child.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< + * (child.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") + */ + __pyx_t_6 = (__pyx_v_child->byteorder == '>'); + if (__pyx_t_6) { + __pyx_t_7 = __pyx_v_little_endian; + } else { + __pyx_t_7 = __pyx_t_6; + } + if (!__pyx_t_7) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":789 + * + * if ((child.byteorder == '>' and little_endian) or + * (child.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< + * raise ValueError(u"Non-native byte order not supported") + * # One could encode it in the format string and have Cython + */ + __pyx_t_6 = (__pyx_v_child->byteorder == '<'); + if (__pyx_t_6) { + __pyx_t_8 = (!__pyx_v_little_endian); + __pyx_t_9 = __pyx_t_8; + } else { + __pyx_t_9 = __pyx_t_6; + } + __pyx_t_6 = __pyx_t_9; + } else { + __pyx_t_6 = __pyx_t_7; + } + if (__pyx_t_6) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":790 + * if ((child.byteorder == '>' and little_endian) or + * (child.byteorder == '<' and not little_endian)): + * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< + * # One could encode it in the format string and have Cython + * # complain instead, BUT: < and > in format strings also imply + */ + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 790; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_3)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_u_3)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_3)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 790; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 790; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L6; + } + __pyx_L6:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":800 + * + * # Output padding bytes + * while offset[0] < new_offset: # <<<<<<<<<<<<<< + * f[0] = 120 # "x"; pad byte + * f += 1 + */ + while (1) { + __pyx_t_5 = PyInt_FromLong((__pyx_v_offset[0])); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 800; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_t_5, __pyx_v_new_offset, Py_LT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 800; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 800; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (!__pyx_t_6) break; + + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":801 + * # Output padding bytes + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<< + * f += 1 + * offset[0] += 1 + */ + (__pyx_v_f[0]) = 120; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":802 + * while offset[0] < new_offset: + * f[0] = 120 # "x"; pad byte + * f += 1 # <<<<<<<<<<<<<< + * offset[0] += 1 + * + */ + __pyx_v_f += 1; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":803 + * f[0] = 120 # "x"; pad byte + * f += 1 + * offset[0] += 1 # <<<<<<<<<<<<<< + * + * offset[0] += child.itemsize + */ + (__pyx_v_offset[0]) += 1; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":805 + * offset[0] += 1 + * + * offset[0] += child.itemsize # <<<<<<<<<<<<<< + * + * if not PyDataType_HASFIELDS(child): + */ + (__pyx_v_offset[0]) += __pyx_v_child->elsize; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":807 + * offset[0] += child.itemsize + * + * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< + * t = child.type_num + * if end - f < 5: + */ + __pyx_t_6 = (!PyDataType_HASFIELDS(__pyx_v_child)); + if (__pyx_t_6) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":808 + * + * if not PyDataType_HASFIELDS(child): + * t = child.type_num # <<<<<<<<<<<<<< + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") + */ + __pyx_t_3 = PyInt_FromLong(__pyx_v_child->type_num); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 808; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_t); + __pyx_v_t = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":809 + * if not PyDataType_HASFIELDS(child): + * t = child.type_num + * if end - f < 5: # <<<<<<<<<<<<<< + * raise RuntimeError(u"Format string allocated too short.") + * + */ + __pyx_t_6 = ((__pyx_v_end - __pyx_v_f) < 5); + if (__pyx_t_6) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":810 + * t = child.type_num + * if end - f < 5: + * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< + * + * # Until ticket #99 is fixed, use integers to avoid warnings + */ + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 810; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_kp_u_6)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_u_6)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_u_6)); + __pyx_t_5 = PyObject_Call(__pyx_builtin_RuntimeError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 810; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_5, 0, 0); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 810; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L10; + } + __pyx_L10:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":813 + * + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<< + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + */ + __pyx_t_5 = 
PyInt_FromLong(NPY_BYTE); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 813; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 813; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 813; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 98; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":814 + * # Until ticket #99 is fixed, use integers to avoid warnings + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<< + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + */ + __pyx_t_3 = PyInt_FromLong(NPY_UBYTE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 814; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 814; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 814; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 66; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":815 + * if t == NPY_BYTE: f[0] = 98 #"b" + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<< + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + */ + __pyx_t_5 = PyInt_FromLong(NPY_SHORT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 815; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 815; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 815; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 104; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":816 + * elif t == NPY_UBYTE: f[0] = 66 #"B" + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<< + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + */ + __pyx_t_3 = PyInt_FromLong(NPY_USHORT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 816; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 816; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 816; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 72; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":817 + * elif t == NPY_SHORT: f[0] = 104 #"h" + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<< + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + */ + __pyx_t_5 = PyInt_FromLong(NPY_INT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 817; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 817; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 817; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 105; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":818 + * elif t == NPY_USHORT: f[0] = 72 #"H" + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<< + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + */ + __pyx_t_3 = PyInt_FromLong(NPY_UINT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 818; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 818; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 818; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 73; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":819 + * elif t == NPY_INT: f[0] = 105 #"i" + * elif t == NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<< + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + */ + __pyx_t_5 = PyInt_FromLong(NPY_LONG); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 819; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 819; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 819; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 108; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":820 + * elif t == 
NPY_UINT: f[0] = 73 #"I" + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<< + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + */ + __pyx_t_3 = PyInt_FromLong(NPY_ULONG); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 820; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 820; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 820; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 76; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":821 + * elif t == NPY_LONG: f[0] = 108 #"l" + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<< + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + */ + __pyx_t_5 = PyInt_FromLong(NPY_LONGLONG); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 821; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 113; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":822 + * elif t == NPY_ULONG: f[0] = 76 #"L" + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<< + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + */ + __pyx_t_3 = PyInt_FromLong(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 822; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 822; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 822; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 81; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":823 + * elif t == NPY_LONGLONG: f[0] = 113 #"q" + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<< + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + */ + __pyx_t_5 = PyInt_FromLong(NPY_FLOAT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} 
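(Illustration only, using public numpy from Python; nothing below comes from the patch itself.) The chain of comparisons being generated writes single-character buffer format codes into the Py_buffer format string: 98 is 'b', 66 is 'B', 104 is 'h', and so on, with 'Zf'/'Zd'/'Zg' for the complex types. The same codes are visible at the Python level:

    import numpy as np

    # dtype.char uses the one-character codes that the generated
    # _util_dtypestring writes into the buffer format string.
    print(np.dtype(np.int8).char)    # 'b' (98)
    print(np.dtype(np.uint8).char)   # 'B' (66)
    print(np.dtype(np.int16).char)   # 'h' (104)

    # An exported ndarray buffer carries the same code:
    print(memoryview(np.zeros(3, dtype=np.float64)).format)  # 'd'

    # The earlier byte-order guard inspects dtype.byteorder:
    print(np.dtype('>i4').byteorder)     # '>' (non-native on little-endian hosts)
    print(np.dtype(np.int32).byteorder)  # '=' (native)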
+ __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 823; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 102; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":824 + * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<< + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + */ + __pyx_t_3 = PyInt_FromLong(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 824; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 824; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 824; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 100; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":825 + * elif t == NPY_FLOAT: f[0] = 102 #"f" + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<< + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + */ + __pyx_t_5 = PyInt_FromLong(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 825; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 103; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":826 + * elif t == NPY_DOUBLE: f[0] = 100 #"d" + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<< + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + */ + __pyx_t_3 = PyInt_FromLong(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + 
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 826; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 102; + __pyx_v_f += 1; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":827 + * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<< + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" + */ + __pyx_t_5 = PyInt_FromLong(NPY_CDOUBLE); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 827; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 100; + __pyx_v_f += 1; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":828 + * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<< + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + */ + __pyx_t_3 = PyInt_FromLong(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_6 < 0)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 828; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 90; + (__pyx_v_f[1]) = 103; + __pyx_v_f += 1; + goto __pyx_L11; + } + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":829 + * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd + * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg + * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<< + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + */ + __pyx_t_5 = PyInt_FromLong(NPY_OBJECT); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 829; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_5, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 829; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) 
{__pyx_filename = __pyx_f[1]; __pyx_lineno = 829; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_6) { + (__pyx_v_f[0]) = 79; + goto __pyx_L11; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":831 + * elif t == NPY_OBJECT: f[0] = 79 #"O" + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< + * f += 1 + * else: + */ + __pyx_t_3 = PyNumber_Remainder(((PyObject *)__pyx_kp_u_4), __pyx_v_t); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_5, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[1]; __pyx_lineno = 831; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_L11:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":832 + * else: + * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) + * f += 1 # <<<<<<<<<<<<<< + * else: + * # Cython ignores struct boundary information ("T{...}"), + */ + __pyx_v_f += 1; + goto __pyx_L9; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":836 + * # Cython ignores struct boundary information ("T{...}"), + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<< + * return f + * + */ + __pyx_t_10 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_10 == NULL)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 836; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_f = __pyx_t_10; + } + __pyx_L9:; + } + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":837 + * # so don't output it + * f = _util_dtypestring(child, f, end, offset) + * return f # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = __pyx_v_f; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("numpy._util_dtypestring"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_child); + __Pyx_DECREF(__pyx_v_fields); + __Pyx_DECREF(__pyx_v_childname); + __Pyx_DECREF(__pyx_v_new_offset); + __Pyx_DECREF(__pyx_v_t); + __Pyx_DECREF((PyObject *)__pyx_v_descr); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":952 + * + * + * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< + * cdef PyObject* baseptr + * if base is None: + */ + +static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) { + PyObject *__pyx_v_baseptr; + int __pyx_t_1; + __Pyx_RefNannySetupContext("set_array_base"); + __Pyx_INCREF((PyObject 
*)__pyx_v_arr); + __Pyx_INCREF(__pyx_v_base); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":954 + * cdef inline void set_array_base(ndarray arr, object base): + * cdef PyObject* baseptr + * if base is None: # <<<<<<<<<<<<<< + * baseptr = NULL + * else: + */ + __pyx_t_1 = (__pyx_v_base == Py_None); + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":955 + * cdef PyObject* baseptr + * if base is None: + * baseptr = NULL # <<<<<<<<<<<<<< + * else: + * Py_INCREF(base) # important to do this before decref below! + */ + __pyx_v_baseptr = NULL; + goto __pyx_L3; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":957 + * baseptr = NULL + * else: + * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<< + * baseptr = base + * Py_XDECREF(arr.base) + */ + Py_INCREF(__pyx_v_base); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":958 + * else: + * Py_INCREF(base) # important to do this before decref below! + * baseptr = base # <<<<<<<<<<<<<< + * Py_XDECREF(arr.base) + * arr.base = baseptr + */ + __pyx_v_baseptr = ((PyObject *)__pyx_v_base); + } + __pyx_L3:; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":959 + * Py_INCREF(base) # important to do this before decref below! + * baseptr = base + * Py_XDECREF(arr.base) # <<<<<<<<<<<<<< + * arr.base = baseptr + * + */ + Py_XDECREF(__pyx_v_arr->base); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":960 + * baseptr = base + * Py_XDECREF(arr.base) + * arr.base = baseptr # <<<<<<<<<<<<<< + * + * cdef inline object get_array_base(ndarray arr): + */ + __pyx_v_arr->base = __pyx_v_baseptr; + + __Pyx_DECREF((PyObject *)__pyx_v_arr); + __Pyx_DECREF(__pyx_v_base); + __Pyx_RefNannyFinishContext(); +} + +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":962 + * arr.base = baseptr + * + * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< + * if arr.base is NULL: + * return None + */ + +static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) { + PyObject *__pyx_r = NULL; + int __pyx_t_1; + __Pyx_RefNannySetupContext("get_array_base"); + __Pyx_INCREF((PyObject *)__pyx_v_arr); + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":963 + * + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: # <<<<<<<<<<<<<< + * return None + * else: + */ + __pyx_t_1 = (__pyx_v_arr->base == NULL); + if (__pyx_t_1) { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":964 + * cdef inline object get_array_base(ndarray arr): + * if arr.base is NULL: + * return None # <<<<<<<<<<<<<< + * else: + * return arr.base + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(Py_None); + __pyx_r = Py_None; + goto __pyx_L0; + goto __pyx_L3; + } + /*else*/ { + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":966 + * return None + * else: + * return arr.base # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(((PyObject *)__pyx_v_arr->base)); + __pyx_r = ((PyObject *)__pyx_v_arr->base); + goto __pyx_L0; + } + __pyx_L3:; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_arr); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static struct PyMethodDef 
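(Illustration only; plain numpy behaviour, not part of this generated module.) set_array_base and get_array_base above manipulate ndarray.base, the object that keeps a view's memory alive. The same ownership chain is visible from Python:

    import numpy as np

    a = np.arange(10)
    v = a[2:5]              # a view into a's buffer
    print(a.base is None)   # True: a owns its own data
    print(v.base is a)      # True: v keeps a alive through .base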
__pyx_methods[] = { + {__Pyx_NAMESTR("cproduct"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_9mio_utils_cproduct, METH_O, __Pyx_DOCSTR(0)}, + {__Pyx_NAMESTR("squeeze_element"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_9mio_utils_squeeze_element, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_9mio_utils_squeeze_element)}, + {__Pyx_NAMESTR("chars_to_strings"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_9mio_utils_chars_to_strings, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_9mio_utils_chars_to_strings)}, + {0, 0, 0, 0} +}; + +static void __pyx_init_filenames(void); /*proto*/ + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + PyModuleDef_HEAD_INIT, + __Pyx_NAMESTR("mio_utils"), + __Pyx_DOCSTR(__pyx_k_7), /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_u_1, __pyx_k_1, sizeof(__pyx_k_1), 0, 1, 0, 0}, + {&__pyx_kp_u_2, __pyx_k_2, sizeof(__pyx_k_2), 0, 1, 0, 0}, + {&__pyx_kp_u_3, __pyx_k_3, sizeof(__pyx_k_3), 0, 1, 0, 0}, + {&__pyx_kp_u_4, __pyx_k_4, sizeof(__pyx_k_4), 0, 1, 0, 0}, + {&__pyx_kp_u_5, __pyx_k_5, sizeof(__pyx_k_5), 0, 1, 0, 0}, + {&__pyx_kp_u_6, __pyx_k_6, sizeof(__pyx_k_6), 0, 1, 0, 0}, + {&__pyx_kp_u_8, __pyx_k_8, sizeof(__pyx_k_8), 0, 1, 0, 0}, + {&__pyx_kp_u_9, __pyx_k_9, sizeof(__pyx_k_9), 0, 1, 0, 0}, + {&__pyx_n_s__RuntimeError, __pyx_k__RuntimeError, sizeof(__pyx_k__RuntimeError), 0, 0, 1, 1}, + {&__pyx_n_s__ValueError, __pyx_k__ValueError, sizeof(__pyx_k__ValueError), 0, 0, 1, 1}, + {&__pyx_n_s____main__, __pyx_k____main__, sizeof(__pyx_k____main__), 0, 0, 1, 1}, + {&__pyx_n_s____test__, __pyx_k____test__, sizeof(__pyx_k____test__), 0, 0, 1, 1}, + {&__pyx_n_s__array, __pyx_k__array, sizeof(__pyx_k__array), 0, 0, 1, 1}, + {&__pyx_n_s__ascontiguousarray, __pyx_k__ascontiguousarray, sizeof(__pyx_k__ascontiguousarray), 0, 0, 1, 1}, + {&__pyx_n_s__base, __pyx_k__base, sizeof(__pyx_k__base), 0, 0, 1, 1}, + {&__pyx_n_s__buf, __pyx_k__buf, sizeof(__pyx_k__buf), 0, 0, 1, 1}, + {&__pyx_n_s__byteorder, __pyx_k__byteorder, sizeof(__pyx_k__byteorder), 0, 0, 1, 1}, + {&__pyx_n_s__chars_to_strings, __pyx_k__chars_to_strings, sizeof(__pyx_k__chars_to_strings), 0, 0, 1, 1}, + {&__pyx_n_s__descr, __pyx_k__descr, sizeof(__pyx_k__descr), 0, 0, 1, 1}, + {&__pyx_n_s__dtype, __pyx_k__dtype, sizeof(__pyx_k__dtype), 0, 0, 1, 1}, + {&__pyx_n_s__fields, __pyx_k__fields, sizeof(__pyx_k__fields), 0, 0, 1, 1}, + {&__pyx_n_s__format, __pyx_k__format, sizeof(__pyx_k__format), 0, 0, 1, 1}, + {&__pyx_n_s__isbuiltin, __pyx_k__isbuiltin, sizeof(__pyx_k__isbuiltin), 0, 0, 1, 1}, + {&__pyx_n_s__item, __pyx_k__item, sizeof(__pyx_k__item), 0, 0, 1, 1}, + {&__pyx_n_s__itemsize, __pyx_k__itemsize, sizeof(__pyx_k__itemsize), 0, 0, 1, 1}, + {&__pyx_n_s__names, __pyx_k__names, sizeof(__pyx_k__names), 0, 0, 1, 1}, + {&__pyx_n_s__ndim, __pyx_k__ndim, sizeof(__pyx_k__ndim), 0, 0, 1, 1}, + {&__pyx_n_s__np, __pyx_k__np, sizeof(__pyx_k__np), 0, 0, 1, 1}, + {&__pyx_n_s__numpy, __pyx_k__numpy, sizeof(__pyx_k__numpy), 0, 0, 1, 1}, + {&__pyx_n_s__obj, __pyx_k__obj, sizeof(__pyx_k__obj), 0, 0, 1, 1}, + {&__pyx_n_s__range, __pyx_k__range, sizeof(__pyx_k__range), 0, 0, 1, 1}, + {&__pyx_n_s__readonly, __pyx_k__readonly, sizeof(__pyx_k__readonly), 0, 0, 1, 1}, + {&__pyx_n_s__reshape, __pyx_k__reshape, sizeof(__pyx_k__reshape), 0, 0, 1, 1}, + {&__pyx_n_s__shape, __pyx_k__shape, sizeof(__pyx_k__shape), 0, 0, 1, 1}, + 
{&__pyx_n_s__size, __pyx_k__size, sizeof(__pyx_k__size), 0, 0, 1, 1}, + {&__pyx_n_s__squeeze, __pyx_k__squeeze, sizeof(__pyx_k__squeeze), 0, 0, 1, 1}, + {&__pyx_n_s__squeeze_element, __pyx_k__squeeze_element, sizeof(__pyx_k__squeeze_element), 0, 0, 1, 1}, + {&__pyx_n_s__str, __pyx_k__str, sizeof(__pyx_k__str), 0, 0, 1, 1}, + {&__pyx_n_s__strides, __pyx_k__strides, sizeof(__pyx_k__strides), 0, 0, 1, 1}, + {&__pyx_n_s__suboffsets, __pyx_k__suboffsets, sizeof(__pyx_k__suboffsets), 0, 0, 1, 1}, + {&__pyx_n_s__type_num, __pyx_k__type_num, sizeof(__pyx_k__type_num), 0, 0, 1, 1}, + {&__pyx_n_s__view, __pyx_k__view, sizeof(__pyx_k__view), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_range = __Pyx_GetName(__pyx_b, __pyx_n_s__range); if (!__pyx_builtin_range) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 12; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_builtin_ValueError = __Pyx_GetName(__pyx_b, __pyx_n_s__ValueError); if (!__pyx_builtin_ValueError) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_builtin_RuntimeError = __Pyx_GetName(__pyx_b, __pyx_n_s__RuntimeError); if (!__pyx_builtin_RuntimeError) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 786; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_15 = PyInt_FromLong(15); if (unlikely(!__pyx_int_15)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initmio_utils(void); /*proto*/ +PyMODINIT_FUNC initmio_utils(void) +#else +PyMODINIT_FUNC PyInit_mio_utils(void); /*proto*/ +PyMODINIT_FUNC PyInit_mio_utils(void) +#endif +{ + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + #if CYTHON_REFNANNY + void* __pyx_refnanny = NULL; + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + __pyx_refnanny = __Pyx_RefNanny->SetupContext("PyMODINIT_FUNC PyInit_mio_utils(void)", __LINE__, __FILE__); + #endif + __pyx_init_filenames(); + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #if PY_MAJOR_VERSION < 3 + __pyx_empty_bytes = PyString_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #else + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? 
*/ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4(__Pyx_NAMESTR("mio_utils"), __pyx_methods, __Pyx_DOCSTR(__pyx_k_7), 0, PYTHON_API_VERSION); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (!__pyx_m) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + #if PY_MAJOR_VERSION < 3 + Py_INCREF(__pyx_m); + #endif + __pyx_b = PyImport_AddModule(__Pyx_NAMESTR(__Pyx_BUILTIN_MODULE_NAME)); + if (!__pyx_b) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + if (__Pyx_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + /*--- Initialize various global constants etc. ---*/ + if (unlikely(__Pyx_InitGlobals() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_module_is_main_scipy__io__matlab__mio_utils) { + if (__Pyx_SetAttrString(__pyx_m, "__name__", __pyx_n_s____main__) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + } + /*--- Builtin init code ---*/ + if (unlikely(__Pyx_InitCachedBuiltins() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Global init code ---*/ + /*--- Function export code ---*/ + /*--- Type init code ---*/ + /*--- Type import code ---*/ + __pyx_ptype_5numpy_dtype = __Pyx_ImportType("numpy", "dtype", sizeof(PyArray_Descr), 0); if (unlikely(!__pyx_ptype_5numpy_dtype)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 148; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_flatiter = __Pyx_ImportType("numpy", "flatiter", sizeof(PyArrayIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_flatiter)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_broadcast = __Pyx_ImportType("numpy", "broadcast", sizeof(PyArrayMultiIterObject), 0); if (unlikely(!__pyx_ptype_5numpy_broadcast)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 162; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_ndarray = __Pyx_ImportType("numpy", "ndarray", sizeof(PyArrayObject), 0); if (unlikely(!__pyx_ptype_5numpy_ndarray)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 171; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5numpy_ufunc = __Pyx_ImportType("numpy", "ufunc", sizeof(PyUFuncObject), 0); if (unlikely(!__pyx_ptype_5numpy_ufunc)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 848; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Function import code ---*/ + /*--- Execution code ---*/ + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":5 + * ''' + * + * import numpy as np # <<<<<<<<<<<<<< + * cimport numpy as cnp + * + */ + __pyx_t_1 = __Pyx_Import(((PyObject *)__pyx_n_s__numpy), 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 5; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__np, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 5; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/mio_utils.pyx":1 + * # -*- python -*- like file # <<<<<<<<<<<<<< + * ''' Utilities for generic processing of return arrays from 
read + * ''' + */ + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__squeeze_element); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_8), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__chars_to_strings); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_9), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyObject_SetAttr(__pyx_m, __pyx_n_s____test__, ((PyObject *)__pyx_t_1)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; + + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/stdlib.pxd":2 + * + * cdef extern from "stdlib.h" nogil: # <<<<<<<<<<<<<< + * void free(void *ptr) + * void *malloc(size_t size) + */ + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + if (__pyx_m) { + __Pyx_AddTraceback("init scipy.io.matlab.mio_utils"); + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init scipy.io.matlab.mio_utils"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +static const char *__pyx_filenames[] = { + "mio_utils.pyx", + "numpy.pxd", +}; + +/* Runtime support code */ + +static void __pyx_init_filenames(void) { + __pyx_f = __pyx_filenames; +} + + +static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { + if (unlikely(!type)) { + PyErr_Format(PyExc_SystemError, "Missing type object"); + return 0; + } + if (likely(PyObject_TypeCheck(obj, type))) + return 1; + PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", + Py_TYPE(obj)->tp_name, type->tp_name); + return 0; +} + +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { + PyErr_Format(PyExc_ValueError, + #if PY_VERSION_HEX < 0x02050000 + "need more than %d value%s to unpack", (int)index, + #else + "need more than %zd value%s to unpack", index, + #endif + (index == 1) ? 
"" : "s"); +} + +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void) { + PyErr_SetString(PyExc_ValueError, "too many values to unpack"); +} + +static PyObject *__Pyx_UnpackItem(PyObject *iter, Py_ssize_t index) { + PyObject *item; + if (!(item = PyIter_Next(iter))) { + if (!PyErr_Occurred()) { + __Pyx_RaiseNeedMoreValuesError(index); + } + } + return item; +} + +static int __Pyx_EndUnpack(PyObject *iter) { + PyObject *item; + if ((item = PyIter_Next(iter))) { + Py_DECREF(item); + __Pyx_RaiseTooManyValuesError(); + return -1; + } + else if (!PyErr_Occurred()) + return 0; + else + return -1; +} + +static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { + PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); +} + +static void __Pyx_UnpackTupleError(PyObject *t, Py_ssize_t index) { + if (t == Py_None) { + __Pyx_RaiseNoneNotIterableError(); + } else if (PyTuple_GET_SIZE(t) < index) { + __Pyx_RaiseNeedMoreValuesError(PyTuple_GET_SIZE(t)); + } else { + __Pyx_RaiseTooManyValuesError(); + } +} + +static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact) +{ + if (!type) { + PyErr_Format(PyExc_SystemError, "Missing type object"); + return 0; + } + if (none_allowed && obj == Py_None) return 1; + else if (exact) { + if (Py_TYPE(obj) == type) return 1; + } + else { + if (PyObject_TypeCheck(obj, type)) return 1; + } + PyErr_Format(PyExc_TypeError, + "Argument '%s' has incorrect type (expected %s, got %s)", + name, type->tp_name, Py_TYPE(obj)->tp_name); + return 0; +} + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list) { + PyObject *__import__ = 0; + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + __import__ = __Pyx_GetAttrString(__pyx_b, "__import__"); + if (!__import__) + goto bad; + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if (!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + module = PyObject_CallFunctionObjArgs(__import__, + name, global_dict, empty_dict, list, NULL); +bad: + Py_XDECREF(empty_list); + Py_XDECREF(__import__); + Py_XDECREF(empty_dict); + return module; +} + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name) { + PyObject *result; + result = PyObject_GetAttr(dict, name); + if (!result) + PyErr_SetObject(PyExc_NameError, name); + return result; +} + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_intp(npy_intp val) { + const npy_intp neg_one = (npy_intp)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(npy_intp) < sizeof(long)) { + return PyInt_FromLong((long)val); + } else if (sizeof(npy_intp) == sizeof(long)) { + if (is_unsigned) + return PyLong_FromUnsignedLong((unsigned long)val); + else + return PyInt_FromLong((long)val); + } else { /* (sizeof(npy_intp) > sizeof(long)) */ + if (is_unsigned) + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG)val); + else + return PyLong_FromLongLong((PY_LONG_LONG)val); + } +} + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return ::std::complex< float >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + return x + y*(__pyx_t_float_complex)_Complex_I; + } + #endif +#else + static 
CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float x, float y) { + __pyx_t_float_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +#if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eqf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sumf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_difff(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prodf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quotf(__pyx_t_float_complex a, __pyx_t_float_complex b) { + __pyx_t_float_complex z; + float denom = b.real * b.real + b.imag * b.imag; + z.real = (a.real * b.real + a.imag * b.imag) / denom; + z.imag = (a.imag * b.real - a.real * b.imag) / denom; + return z; + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_negf(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zerof(__pyx_t_float_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conjf(__pyx_t_float_complex a) { + __pyx_t_float_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } +/* + static CYTHON_INLINE float __Pyx_c_absf(__pyx_t_float_complex z) { +#if HAVE_HYPOT + return hypotf(z.real, z.imag); +#else + return sqrtf(z.real*z.real + z.imag*z.imag); +#endif + } +*/ +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return ::std::complex< double >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return x + y*(__pyx_t_double_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + __pyx_t_double_complex z; + z.real = x; + z.imag = y; + return z; + } +#endif + +#if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex a, __pyx_t_double_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double denom = b.real * 
b.real + b.imag * b.imag; + z.real = (a.real * b.real + a.imag * b.imag) / denom; + z.imag = (a.imag * b.real - a.real * b.imag) / denom; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } +/* + static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex z) { +#if HAVE_HYPOT + return hypot(z.real, z.imag); +#else + return sqrt(z.real*z.real + z.imag*z.imag); +#endif + } +*/ +#endif + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyThreadState *tstate = PyThreadState_GET(); + + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} + +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb) { + PyThreadState *tstate = PyThreadState_GET(); + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} + + +#if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + Py_XINCREF(type); + Py_XINCREF(value); + Py_XINCREF(tb); + /* First, check the traceback argument, replacing None with NULL. */ + if (tb == Py_None) { + Py_DECREF(tb); + tb = 0; + } + else if (tb != NULL && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + /* Next, replace a missing value with None */ + if (value == NULL) { + value = Py_None; + Py_INCREF(value); + } + #if PY_VERSION_HEX < 0x02050000 + if (!PyClass_Check(type)) + #else + if (!PyType_Check(type)) + #endif + { + /* Raising an instance. The value should be a dummy. 
*/ + if (value != Py_None) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + /* Normalize to raise , */ + Py_DECREF(value); + value = type; + #if PY_VERSION_HEX < 0x02050000 + if (PyInstance_Check(type)) { + type = (PyObject*) ((PyInstanceObject*)type)->in_class; + Py_INCREF(type); + } + else { + type = 0; + PyErr_SetString(PyExc_TypeError, + "raise: exception must be an old-style class or instance"); + goto raise_error; + } + #else + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + #endif + } + + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} + +#else /* Python 3+ */ + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = type; + type = (PyObject*) Py_TYPE(value); + } else if (!PyExceptionClass_Check(type)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } + + PyErr_SetObject(type, value); + + if (tb) { + PyThreadState *tstate = PyThreadState_GET(); + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } + } + +bad: + return; +} +#endif + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject* x) { + const unsigned char neg_one = (unsigned char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned char" : + "value too large to convert to unsigned char"); + } + return (unsigned char)-1; + } + return (unsigned char)val; + } + return (unsigned char)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject* x) { + const unsigned short neg_one = (unsigned short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to unsigned short" : + "value too large to convert to unsigned short"); + } + return (unsigned short)-1; + } + return (unsigned short)val; + } + return (unsigned short)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject* x) { + const unsigned int neg_one = (unsigned int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned int" : + "value too large to convert to unsigned int"); + } + return (unsigned int)-1; + } + return (unsigned int)val; + } + return (unsigned int)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject* x) { + const char neg_one = (char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to char" : + "value too large to convert to char"); + } + return (char)-1; + } + return (char)val; + } + return (char)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject* x) { + const short neg_one = (short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to short" : + "value too large to convert to short"); + } + return (short)-1; + } + return (short)val; + } + return (short)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject* x) { + const int neg_one = (int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to int" : + "value too large to convert to int"); + } + return (int)-1; + } + return (int)val; + } + return (int)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject* x) { + const signed char neg_one = (signed char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to signed char" : + "value too large to convert to signed char"); + } + return (signed char)-1; + } + return (signed char)val; + } + return (signed char)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject* x) { + const signed short neg_one = (signed short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed short" : + "value too large to convert to signed short"); + } + return (signed short)-1; + } + return (signed short)val; + } + return (signed short)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject* x) { + const signed int neg_one = (signed int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed int" : + "value too large to convert to signed int"); + } + return (signed int)-1; + } + return (signed int)val; + } + return (signed int)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject* x) { + const unsigned long neg_one = (unsigned long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return (unsigned long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + unsigned long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned long)-1; + val = __Pyx_PyInt_AsUnsignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject* x) { + const unsigned PY_LONG_LONG neg_one = (unsigned PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return (unsigned PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + unsigned PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned PY_LONG_LONG)-1; + val = 
__Pyx_PyInt_AsUnsignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject* x) { + const long neg_one = (long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return (long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (long)-1; + val = __Pyx_PyInt_AsLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject* x) { + const PY_LONG_LONG neg_one = (PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return (PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject* x) { + const signed long neg_one = (signed long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return (signed long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + signed long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed long)-1; + val = __Pyx_PyInt_AsSignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject* x) { + const signed PY_LONG_LONG neg_one = (signed PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return (signed PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't 
convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + signed PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsSignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static void __Pyx_WriteUnraisable(const char *name) { + PyObject *old_exc, *old_val, *old_tb; + PyObject *ctx; + __Pyx_ErrFetch(&old_exc, &old_val, &old_tb); + #if PY_MAJOR_VERSION < 3 + ctx = PyString_FromString(name); + #else + ctx = PyUnicode_FromString(name); + #endif + __Pyx_ErrRestore(old_exc, old_val, old_tb); + if (!ctx) { + PyErr_WriteUnraisable(Py_None); + } else { + PyErr_WriteUnraisable(ctx); + Py_DECREF(ctx); + } +} + +#ifndef __PYX_HAVE_RT_ImportType +#define __PYX_HAVE_RT_ImportType +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, + long size, int strict) +{ + PyObject *py_module = 0; + PyObject *result = 0; + PyObject *py_name = 0; + char warning[200]; + + py_module = __Pyx_ImportModule(module_name); + if (!py_module) + goto bad; + #if PY_MAJOR_VERSION < 3 + py_name = PyString_FromString(class_name); + #else + py_name = PyUnicode_FromString(class_name); + #endif + if (!py_name) + goto bad; + result = PyObject_GetAttr(py_module, py_name); + Py_DECREF(py_name); + py_name = 0; + Py_DECREF(py_module); + py_module = 0; + if (!result) + goto bad; + if (!PyType_Check(result)) { + PyErr_Format(PyExc_TypeError, + "%s.%s is not a type object", + module_name, class_name); + goto bad; + } + if (!strict && ((PyTypeObject *)result)->tp_basicsize > size) { + PyOS_snprintf(warning, sizeof(warning), + "%s.%s size changed, may indicate binary incompatibility", + module_name, class_name); + PyErr_WarnEx(NULL, warning, 0); + } + else if (((PyTypeObject *)result)->tp_basicsize != size) { + PyErr_Format(PyExc_ValueError, + "%s.%s has the wrong size, try recompiling", + module_name, class_name); + goto bad; + } + return (PyTypeObject *)result; +bad: + Py_XDECREF(py_module); + Py_XDECREF(result); + return 0; +} +#endif + +#ifndef __PYX_HAVE_RT_ImportModule +#define __PYX_HAVE_RT_ImportModule +static PyObject *__Pyx_ImportModule(const char *name) { + PyObject *py_name = 0; + PyObject *py_module = 0; + + #if PY_MAJOR_VERSION < 3 + py_name = PyString_FromString(name); + #else + py_name = PyUnicode_FromString(name); + #endif + if (!py_name) + goto bad; + py_module = PyImport_Import(py_name); + Py_DECREF(py_name); + return py_module; +bad: + Py_XDECREF(py_name); + return 0; +} +#endif + +#include "compile.h" +#include "frameobject.h" +#include "traceback.h" + +static void __Pyx_AddTraceback(const char *funcname) { + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + PyObject *py_globals = 0; + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(__pyx_filename); + #else + py_srcfile = PyUnicode_FromString(__pyx_filename); + #endif + if (!py_srcfile) goto bad; + if (__pyx_clineno) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_globals = 
PyModule_GetDict(__pyx_m); + if (!py_globals) goto bad; + py_code = PyCode_New( + 0, /*int argcount,*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*int kwonlyargcount,*/ + #endif + 0, /*int nlocals,*/ + 0, /*int stacksize,*/ + 0, /*int flags,*/ + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + __pyx_lineno, /*int firstlineno,*/ + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + if (!py_code) goto bad; + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + py_globals, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + py_frame->f_lineno = __pyx_lineno; + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else /* Python 3+ has unicode identifiers */ + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +/* Type Conversion Functions */ + +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + if (x == Py_True) return 1; + else if ((x == Py_False) | (x == Py_None)) return 0; + else return PyObject_IsTrue(x); +} + +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x) { + PyNumberMethods *m; + const char *name = NULL; + PyObject *res = NULL; +#if PY_VERSION_HEX < 0x03000000 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if (PyLong_Check(x)) +#endif + return Py_INCREF(x), x; + m = Py_TYPE(x)->tp_as_number; +#if PY_VERSION_HEX < 0x03000000 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } +#else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } +#endif + if (res) { +#if PY_VERSION_HEX < 0x03000000 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%s__ returned non-%s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject* x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} + +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { +#if PY_VERSION_HEX < 0x02050000 + if (ival <= LONG_MAX) + return PyInt_FromLong((long)ival); + else { + unsigned char *bytes = (unsigned char *) &ival; + int one = 1; int little = (int)*(unsigned char*)&one; + return _PyLong_FromByteArray(bytes, 
sizeof(size_t), little, 0);
+    }
+#else
+   return PyInt_FromSize_t(ival);
+#endif
+}
+
+static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject* x) {
+   unsigned PY_LONG_LONG val = __Pyx_PyInt_AsUnsignedLongLong(x);
+   if (unlikely(val == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred())) {
+       return (size_t)-1;
+   } else if (unlikely(val != (unsigned PY_LONG_LONG)(size_t)val)) {
+       PyErr_SetString(PyExc_OverflowError,
+           "value too large to convert to size_t");
+       return (size_t)-1;
+   }
+   return (size_t)val;
+}
+
+
+#endif /* Py_PYTHON_H */
diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/numpy_rephrasing.h python-scipy-0.8.0+dfsg1/scipy/io/matlab/numpy_rephrasing.h
--- python-scipy-0.7.2+dfsg1/scipy/io/matlab/numpy_rephrasing.h 1970-01-01 01:00:00.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/numpy_rephrasing.h 2010-07-26 15:48:31.000000000 +0100
@@ -0,0 +1,5 @@
+#include <numpy/arrayobject.h>
+#define PyArray_Set_BASE(arr, obj) PyArray_BASE(arr) = obj
+#define PyArray_PyANewFromDescr(descr, nd, dims, data, parent) \
+        PyArray_NewFromDescr(&PyArray_Type, descr, nd, dims, \
+                             NULL, data, 0, parent)
diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/SConscript python-scipy-0.8.0+dfsg1/scipy/io/matlab/SConscript
--- python-scipy-0.7.2+dfsg1/scipy/io/matlab/SConscript 1970-01-01 01:00:00.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/SConscript 2010-07-26 15:48:31.000000000 +0100
@@ -0,0 +1,6 @@
+from numscons import GetNumpyEnvironment
+
+env = GetNumpyEnvironment(ARGUMENTS)
+env.NumpyPythonExtension('streams', source='streams.c')
+env.NumpyPythonExtension('mio_utils', source='mio_utils.c')
+env.NumpyPythonExtension('mio5_utils', source='mio5_utils.c')
diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/SConstruct python-scipy-0.8.0+dfsg1/scipy/io/matlab/SConstruct
--- python-scipy-0.7.2+dfsg1/scipy/io/matlab/SConstruct 1970-01-01 01:00:00.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/SConstruct 2010-07-26 15:48:31.000000000 +0100
@@ -0,0 +1,2 @@
+from numscons import GetInitEnvironment
+GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript')
diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/setup.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/setup.py
--- python-scipy-0.7.2+dfsg1/scipy/io/matlab/setup.py 2010-04-18 11:02:45.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/setup.py 2010-07-26 15:48:31.000000000 +0100
@@ -3,7 +3,11 @@
 def configuration(parent_package='io',top_path=None):
     from numpy.distutils.misc_util import Configuration
     config = Configuration('matlab', parent_package, top_path)
+    config.add_extension('streams', sources=['streams.c'])
+    config.add_extension('mio_utils', sources=['mio_utils.c'])
+    config.add_extension('mio5_utils', sources=['mio5_utils.c'])
     config.add_data_dir('tests')
+    config.add_data_dir('benchmarks')
     return config

 if __name__ == '__main__':
diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/setupscons.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/setupscons.py
--- python-scipy-0.7.2+dfsg1/scipy/io/matlab/setupscons.py 1970-01-01 01:00:00.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/setupscons.py 2010-07-26 15:48:31.000000000 +0100
@@ -0,0 +1,13 @@
+#!/usr/bin/env python
+
+def configuration(parent_package='io',top_path=None):
+    from numpy.distutils.misc_util import Configuration
+    config = Configuration('matlab', parent_package, top_path)
+    config.add_sconscript('SConstruct')
+    config.add_data_dir('tests')
+    config.add_data_dir('benchmarks')
+    return config
+
+if __name__ == '__main__':
+    from
numpy.distutils.core import setup + setup(**configuration(top_path='').todict()) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/streams.c python-scipy-0.8.0+dfsg1/scipy/io/matlab/streams.c --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/streams.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/streams.c 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,4516 @@ +/* Generated by Cython 0.12.1 on Wed Jun 16 17:42:35 2010 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#include "structmember.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. +#else + +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#if PY_VERSION_HEX < 0x02040000 + #define METH_COEXIST 0 + #define PyDict_CheckExact(op) (Py_TYPE(op) == &PyDict_Type) + #define PyDict_Contains(d,o) PySequence_Contains(d,o) +#endif + +#if PY_VERSION_HEX < 0x02050000 + typedef int Py_ssize_t; + #define PY_SSIZE_T_MAX INT_MAX + #define PY_SSIZE_T_MIN INT_MIN + #define PY_FORMAT_SIZE_T "" + #define PyInt_FromSsize_t(z) PyInt_FromLong(z) + #define PyInt_AsSsize_t(o) PyInt_AsLong(o) + #define PyNumber_Index(o) PyNumber_Int(o) + #define PyIndex_Check(o) PyNumber_Check(o) + #define PyErr_WarnEx(category, message, stacklevel) PyErr_Warn(category, message) +#endif + +#if PY_VERSION_HEX < 0x02060000 + #define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt) + #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) + #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) + #define PyVarObject_HEAD_INIT(type, size) \ + PyObject_HEAD_INIT(type) size, + #define PyType_Modified(t) + + typedef struct { + void *buf; + PyObject *obj; + Py_ssize_t len; + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; + void *internal; + } Py_buffer; + + #define PyBUF_SIMPLE 0 + #define PyBUF_WRITABLE 0x0001 + #define PyBUF_FORMAT 0x0004 + #define PyBUF_ND 0x0008 + #define PyBUF_STRIDES (0x0010 | PyBUF_ND) + #define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) + #define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) + #define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) + #define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#endif + +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" +#endif + +#if PY_MAJOR_VERSION >= 3 + #define Py_TPFLAGS_CHECKTYPES 0 + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif + +#if (PY_VERSION_HEX < 0x02060000) || (PY_MAJOR_VERSION >= 3) + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyString_Type PyUnicode_Type + #define PyString_CheckExact PyUnicode_CheckExact +#else + #define PyBytes_Type PyString_Type + #define PyBytes_CheckExact PyString_CheckExact +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask 
PyLong_AsUnsignedLongLongMask + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) + +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyMethod_New(func, self, klass) PyInstanceMethod_New(func) +#endif + +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#else + #define _USE_MATH_DEFINES +#endif + +#if PY_VERSION_HEX < 0x02050000 + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),((char *)(n))) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),((char *)(n)),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),((char *)(n))) +#else + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),(n)) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),(n),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),(n)) +#endif + +#if PY_VERSION_HEX < 0x02050000 + #define __Pyx_NAMESTR(n) ((char *)(n)) + #define __Pyx_DOCSTR(n) ((char *)(n)) +#else + #define __Pyx_NAMESTR(n) (n) + #define __Pyx_DOCSTR(n) (n) +#endif +#ifdef __cplusplus +#define __PYX_EXTERN_C extern "C" +#else +#define __PYX_EXTERN_C extern +#endif +#include +#define __PYX_HAVE_API__scipy__io__matlab__streams +#include "stdlib.h" +#include "fileobject.h" +#include "cStringIO.h" + +#ifndef CYTHON_INLINE + #if defined(__GNUC__) + #define CYTHON_INLINE __inline__ + #elif defined(_MSC_VER) + #define CYTHON_INLINE __inline + #else + #define CYTHON_INLINE + #endif +#endif + +typedef struct {PyObject **p; char *s; const long n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/ + + +/* Type Conversion Predeclarations */ + +#if PY_MAJOR_VERSION < 3 +#define __Pyx_PyBytes_FromString PyString_FromString +#define __Pyx_PyBytes_FromStringAndSize PyString_FromStringAndSize +#define __Pyx_PyBytes_AsString PyString_AsString +#else +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +#define __Pyx_PyBytes_AsString PyBytes_AsString +#endif + +#define __Pyx_PyBytes_FromUString(s) __Pyx_PyBytes_FromString((char*)s) +#define __Pyx_PyBytes_AsUString(s) ((unsigned char*) __Pyx_PyBytes_AsString(s)) + +#define __Pyx_PyBool_FromLong(b) ((b) ? (Py_INCREF(Py_True), Py_True) : (Py_INCREF(Py_False), Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x); + +#if !defined(T_PYSSIZET) +#if PY_VERSION_HEX < 0x02050000 +#define T_PYSSIZET T_INT +#elif !defined(T_LONGLONG) +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : -1)) +#else +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : \ + ((sizeof(Py_ssize_t) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))) +#endif +#endif + + +#if !defined(T_ULONGLONG) +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? 
T_ULONG : -1)))) +#else +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(x) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))))) +#endif +#if !defined(T_LONGLONG) +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : -1)))) +#else +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : \ + ((sizeof(x) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))))) +#endif + +#define __Pyx_T_FLOATING(x) \ + ((sizeof(x) == sizeof(float)) ? T_FLOAT : \ + ((sizeof(x) == sizeof(double)) ? T_DOUBLE : -1)) + +#if !defined(T_SIZET) +#if !defined(T_ULONGLONG) +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : -1)) +#else +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(size_t) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))) +#endif +#endif + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject*); + +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) + + +#ifdef __GNUC__ +/* Test for GCC > 2.95 */ +#if __GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)) +#define likely(x) __builtin_expect(!!(x), 1) +#define unlikely(x) __builtin_expect(!!(x), 0) +#else /* __GNUC__ > 2 ... */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ > 2 ... 
*/ +#else /* __GNUC__ */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; +static const char **__pyx_f; + + +/* Type declarations */ + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pxd":6 + * cdef object fobj + * + * cpdef int seek(self, long int offset, int whence=*) except -1 # <<<<<<<<<<<<<< + * cpdef long int tell(self) except -1 + * cdef int read_into(self, void *buf, size_t n) except -1 + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek { + int __pyx_n; + int whence; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pxd":9 + * cpdef long int tell(self) except -1 + * cdef int read_into(self, void *buf, size_t n) except -1 + * cdef object read_string(self, size_t n, void **pp, int copy=*) # <<<<<<<<<<<<<< + * + * cpdef GenericStream make_stream(object fobj) + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string { + int __pyx_n; + int copy; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":89 + * cdef class cStringStream(GenericStream): + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * cdef char *ptr + * if whence == 1 and offset >=0: # forward, from here + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13cStringStream_seek { + int __pyx_n; + int whence; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":109 + * return 0 + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): # <<<<<<<<<<<<<< + * ''' Make new memory, wrap with object + * + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13cStringStream_read_string { + int __pyx_n; + int copy; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":132 + * self.file = PyFile_AsFile(fobj) + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * cdef int ret + * ''' move `offset` bytes in stream + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_10FileStream_seek { + int __pyx_n; + int whence; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":173 + * return 0 + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): # <<<<<<<<<<<<<< + * ''' Make new memory, wrap with object ''' + * cdef object obj = pyalloc_v(n, pp) + */ + +struct __pyx_opt_args_5scipy_2io_6matlab_7streams_10FileStream_read_string { + int __pyx_n; + int copy; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pxd":3 + * # -*- python -*- or rather like + * + * cdef class GenericStream: # <<<<<<<<<<<<<< + * cdef object fobj + * + */ + +struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream { + PyObject_HEAD + struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *__pyx_vtab; + PyObject *fobj; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":87 + * + * + * cdef class cStringStream(GenericStream): # <<<<<<<<<<<<<< + * + * cpdef int seek(self, long int offset, int whence=0) except -1: + */ + +struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream __pyx_base; +}; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":125 + * + * + * cdef 
class FileStream(GenericStream): # <<<<<<<<<<<<<< + * cdef FILE* file + * + */ + +struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream __pyx_base; + FILE *file; +}; + + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":47 + * + * + * cdef class GenericStream: # <<<<<<<<<<<<<< + * + * def __init__(self, fobj): + */ + +struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream { + int (*seek)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, long, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek *__pyx_optional_args); + long (*tell)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, int __pyx_skip_dispatch); + int (*read_into)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, void *, size_t); + PyObject *(*read_string)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, size_t, void **, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string *__pyx_optional_args); +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *__pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream; + + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":125 + * + * + * cdef class FileStream(GenericStream): # <<<<<<<<<<<<<< + * cdef FILE* file + * + */ + +struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_FileStream { + struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream __pyx_base; +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_FileStream *__pyx_vtabptr_5scipy_2io_6matlab_7streams_FileStream; + + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":87 + * + * + * cdef class cStringStream(GenericStream): # <<<<<<<<<<<<<< + * + * cpdef int seek(self, long int offset, int whence=0) except -1: + */ + +struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_cStringStream { + struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream __pyx_base; +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_cStringStream *__pyx_vtabptr_5scipy_2io_6matlab_7streams_cStringStream; + +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif + +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct * __Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); + end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; + } + #define __Pyx_RefNannySetupContext(name) void *__pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) + #define __Pyx_RefNannyFinishContext() __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) 
__Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r);} } while(0) +#else + #define __Pyx_RefNannySetupContext(name) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) +#endif /* CYTHON_REFNANNY */ +#define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);} } while(0) +#define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r);} } while(0) + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, PyObject* kw_name); /*proto*/ + +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); /*proto*/ + +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[], PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, const char* function_name); /*proto*/ + +static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact); /*proto*/ + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name); /*proto*/ + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject *); + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject *); + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject *); + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject *); + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject *); + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject *); + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject *); + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject *); + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject *); + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject *); + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject *); + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject *); + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject *); + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject *); + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject *); + +static int __Pyx_ExportFunction(const char *name, void (*f)(void), const char *sig); /*proto*/ + +static int __Pyx_SetVtable(PyObject *dict, void *vtable); /*proto*/ + +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, long size, int strict); /*proto*/ + +static PyObject *__Pyx_ImportModule(const char *name); /*proto*/ + +static void __Pyx_AddTraceback(const char *funcname); /*proto*/ + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/ +/* Module declarations from python_ref */ + +/* Module declarations from python_string */ + +/* Module declarations from scipy.io.matlab.pyalloc */ + +static CYTHON_INLINE PyObject *__pyx_f_5scipy_2io_6matlab_7pyalloc_pyalloc_v(Py_ssize_t, void **); /*proto*/ +/* Module declarations from __builtin__ */ + +/* Module declarations from scipy.io.matlab.streams */ + +static PyTypeObject 
*__pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream = 0; +static PyTypeObject *__pyx_ptype_5scipy_2io_6matlab_7streams_file = 0; +static PyTypeObject *__pyx_ptype_5scipy_2io_6matlab_7streams_cStringStream = 0; +static PyTypeObject *__pyx_ptype_5scipy_2io_6matlab_7streams_FileStream = 0; +static struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_f_5scipy_2io_6matlab_7streams_make_stream(PyObject *, int __pyx_skip_dispatch); /*proto*/ +#define __Pyx_MODULE_NAME "scipy.io.matlab.streams" +int __pyx_module_is_main_scipy__io__matlab__streams = 0; + +/* Implementation of scipy.io.matlab.streams */ +static PyObject *__pyx_builtin_IOError; +static char __pyx_k_1[] = "could not read bytes"; +static char __pyx_k_2[] = "Failed seek"; +static char __pyx_k_3[] = "Could not read bytes"; +static char __pyx_k_4[] = " "; +static char __pyx_k__A[] = "A"; +static char __pyx_k__n[] = "n"; +static char __pyx_k__st[] = "st"; +static char __pyx_k__file[] = "file"; +static char __pyx_k__fobj[] = "fobj"; +static char __pyx_k__read[] = "read"; +static char __pyx_k__seek[] = "seek"; +static char __pyx_k__tell[] = "tell"; +static char __pyx_k__offset[] = "offset"; +static char __pyx_k__whence[] = "whence"; +static char __pyx_k__IOError[] = "IOError"; +static char __pyx_k____main__[] = "__main__"; +static char __pyx_k__read_into[] = "read_into"; +static char __pyx_k__read_string[] = "read_string"; +static PyObject *__pyx_kp_s_1; +static PyObject *__pyx_kp_s_2; +static PyObject *__pyx_kp_s_3; +static PyObject *__pyx_kp_s_4; +static PyObject *__pyx_n_s__A; +static PyObject *__pyx_n_s__IOError; +static PyObject *__pyx_n_s____main__; +static PyObject *__pyx_n_s__file; +static PyObject *__pyx_n_s__fobj; +static PyObject *__pyx_n_s__n; +static PyObject *__pyx_n_s__offset; +static PyObject *__pyx_n_s__read; +static PyObject *__pyx_n_s__read_into; +static PyObject *__pyx_n_s__read_string; +static PyObject *__pyx_n_s__seek; +static PyObject *__pyx_n_s__st; +static PyObject *__pyx_n_s__tell; +static PyObject *__pyx_n_s__whence; + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":49 + * cdef class GenericStream: + * + * def __init__(self, fobj): # <<<<<<<<<<<<<< + * self.fobj = fobj + * + */ + +static int __pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static int __pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_fobj = 0; + int __pyx_r; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__fobj,0}; + __Pyx_RefNannySetupContext("__init__"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[1] = {0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__fobj); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "__init__") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_fobj = values[0]; + } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { + goto __pyx_L5_argtuple_error; + } else { + __pyx_v_fobj = 
PyTuple_GET_ITEM(__pyx_args, 0); + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 49; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.__init__"); + return -1; + __pyx_L4_argument_unpacking_done:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":50 + * + * def __init__(self, fobj): + * self.fobj = fobj # <<<<<<<<<<<<<< + * + * cpdef int seek(self, long int offset, int whence=0) except -1: + */ + __Pyx_INCREF(__pyx_v_fobj); + __Pyx_GIVEREF(__pyx_v_fobj); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self)->fobj); + __Pyx_DECREF(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self)->fobj); + ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self)->fobj = __pyx_v_fobj; + + __pyx_r = 0; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":52 + * self.fobj = fobj + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * self.fobj.seek(offset, whence) + * return 0 + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static int __pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_seek(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_v_self, long __pyx_v_offset, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek *__pyx_optional_args) { + int __pyx_v_whence = ((int)0); + int __pyx_r; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + int __pyx_t_5; + __Pyx_RefNannySetupContext("seek"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_whence = __pyx_optional_args->whence; + } + } + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__seek); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_seek)) { + __pyx_t_2 = PyInt_FromLong(__pyx_v_offset); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyInt_FromLong(__pyx_v_whence); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_2 = 0; + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 
52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_5 = __Pyx_PyInt_AsInt(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_r = __pyx_t_5; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":53 + * + * cpdef int seek(self, long int offset, int whence=0) except -1: + * self.fobj.seek(offset, whence) # <<<<<<<<<<<<<< + * return 0 + * + */ + __pyx_t_1 = PyObject_GetAttr(__pyx_v_self->fobj, __pyx_n_s__seek); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyInt_FromLong(__pyx_v_offset); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyInt_FromLong(__pyx_v_whence); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __pyx_t_3 = 0; + __pyx_t_4 = 0; + __pyx_t_4 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 53; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":54 + * cpdef int seek(self, long int offset, int whence=0) except -1: + * self.fobj.seek(offset, whence) + * return 0 # <<<<<<<<<<<<<< + * + * cpdef long int tell(self) except -1: + */ + __pyx_r = 0; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.seek"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":52 + * self.fobj = fobj + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * self.fobj.seek(offset, whence) + * return 0 + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + long __pyx_v_offset; + int __pyx_v_whence; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__offset,&__pyx_n_s__whence,0}; + __Pyx_RefNannySetupContext("seek"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = 
PyDict_Size(__pyx_kwds); + PyObject* values[2] = {0,0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__offset); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (kw_args > 1) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__whence); + if (unlikely(value)) { values[1] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "seek") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_offset = __Pyx_PyInt_AsLong(values[0]); if (unlikely((__pyx_v_offset == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + if (values[1]) { + __pyx_v_whence = __Pyx_PyInt_AsInt(values[1]); if (unlikely((__pyx_v_whence == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } else { + __pyx_v_whence = ((int)0); + } + } else { + __pyx_v_whence = ((int)0); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: __pyx_v_whence = __Pyx_PyInt_AsInt(PyTuple_GET_ITEM(__pyx_args, 1)); if (unlikely((__pyx_v_whence == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + case 1: __pyx_v_offset = __Pyx_PyInt_AsLong(PyTuple_GET_ITEM(__pyx_args, 0)); if (unlikely((__pyx_v_offset == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("seek", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.seek"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __Pyx_XDECREF(__pyx_r); + __pyx_t_2.__pyx_n = 1; + __pyx_t_2.whence = __pyx_v_whence; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self)->__pyx_vtab)->seek(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self), __pyx_v_offset, 1, &__pyx_t_2); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyInt_FromLong(__pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 52; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.seek"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":56 + * return 0 + * + * cpdef long int tell(self) except -1: # 
<<<<<<<<<<<<<< + * return self.fobj.tell() + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_tell(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static long __pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_tell(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_v_self, int __pyx_skip_dispatch) { + long __pyx_r; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + long __pyx_t_3; + __Pyx_RefNannySetupContext("tell"); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__tell); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_tell)) { + __pyx_t_2 = PyObject_Call(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_PyInt_AsLong(__pyx_t_2); if (unlikely((__pyx_t_3 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":57 + * + * cpdef long int tell(self) except -1: + * return self.fobj.tell() # <<<<<<<<<<<<<< + * + * def read(self, n_bytes): + */ + __pyx_t_1 = PyObject_GetAttr(__pyx_v_self->fobj, __pyx_n_s__tell); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 57; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_Call(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 57; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_PyInt_AsLong(__pyx_t_2); if (unlikely((__pyx_t_3 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 57; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.tell"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":56 + * return 0 + * + * cpdef long int tell(self) except -1: # <<<<<<<<<<<<<< + * return self.fobj.tell() + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_tell(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_tell(PyObject *__pyx_v_self, PyObject *unused) { + PyObject *__pyx_r = NULL; + long __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + __Pyx_RefNannySetupContext("tell"); + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream 
*)((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self)->__pyx_vtab)->tell(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self), 1); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_2 = PyInt_FromLong(__pyx_t_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 56; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.tell"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":59 + * return self.fobj.tell() + * + * def read(self, n_bytes): # <<<<<<<<<<<<<< + * return self.fobj.read(n_bytes) + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_read(PyObject *__pyx_v_self, PyObject *__pyx_v_n_bytes); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_read(PyObject *__pyx_v_self, PyObject *__pyx_v_n_bytes) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + __Pyx_RefNannySetupContext("read"); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":60 + * + * def read(self, n_bytes): + * return self.fobj.read(n_bytes) # <<<<<<<<<<<<<< + * + * cdef int read_into(self, void *buf, size_t n) except -1: + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = PyObject_GetAttr(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self)->fobj, __pyx_n_s__read); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 60; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 60; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n_bytes); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_n_bytes); + __Pyx_GIVEREF(__pyx_v_n_bytes); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 60; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.read"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":62 + * return self.fobj.read(n_bytes) + * + * cdef int read_into(self, void *buf, size_t n) except -1: # <<<<<<<<<<<<<< + * ''' Read n bytes from stream into pre-allocated buffer `buf` + * ''' + */ + +static int __pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_read_into(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_v_self, void *__pyx_v_buf, size_t __pyx_v_n) { + char *__pyx_v_d_ptr; + PyObject *__pyx_v_data; + int __pyx_r; + PyObject *__pyx_t_1 = NULL; + 
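/*
 * Overview, drawn from the embedded streams.pyx quotes and the generated code
 * below: GenericStream.read_into is the generic, Python-level path. It calls
 * self.fobj.read(n), checks with PyString_Size() that exactly n bytes came
 * back (raising IOError otherwise), and memcpy()s the bytes into the caller's
 * pre-allocated buffer. The subclasses further down in this file avoid the
 * intermediate Python string where they can: cStringStream pulls a pointer
 * straight out of the cStringIO buffer via PycStringIO->cread, and FileStream
 * reads with C stdio fread() on the FILE* obtained from PyFile_AsFile().
 */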
PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + Py_ssize_t __pyx_t_4; + int __pyx_t_5; + char *__pyx_t_6; + __Pyx_RefNannySetupContext("read_into"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_data = Py_None; __Pyx_INCREF(Py_None); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":66 + * ''' + * cdef char* d_ptr + * data = self.fobj.read(n) # <<<<<<<<<<<<<< + * if PyString_Size(data) != n: + * raise IOError('could not read bytes') + */ + __pyx_t_1 = PyObject_GetAttr(__pyx_v_self->fobj, __pyx_n_s__read); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 66; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_v_n); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 66; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 66; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 66; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_data); + __pyx_v_data = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":67 + * cdef char* d_ptr + * data = self.fobj.read(n) + * if PyString_Size(data) != n: # <<<<<<<<<<<<<< + * raise IOError('could not read bytes') + * return -1 + */ + __pyx_t_4 = PyString_Size(__pyx_v_data); if (unlikely(__pyx_t_4 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 67; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = (__pyx_t_4 != __pyx_v_n); + if (__pyx_t_5) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":68 + * data = self.fobj.read(n) + * if PyString_Size(data) != n: + * raise IOError('could not read bytes') # <<<<<<<<<<<<<< + * return -1 + * d_ptr = data + */ + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 68; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_IOError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 68; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 68; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":69 + * if PyString_Size(data) != n: + * raise IOError('could not read bytes') + * return -1 # <<<<<<<<<<<<<< + * d_ptr = data + * memcpy(buf, d_ptr, n) + */ + __pyx_r = -1; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":70 + * raise IOError('could not read bytes') + * return -1 + * d_ptr = data # <<<<<<<<<<<<<< + * memcpy(buf, d_ptr, n) + * return 0 + */ + __pyx_t_6 = __Pyx_PyBytes_AsString(__pyx_v_data); if 
(unlikely((!__pyx_t_6) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 70; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_d_ptr = __pyx_t_6; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":71 + * return -1 + * d_ptr = data + * memcpy(buf, d_ptr, n) # <<<<<<<<<<<<<< + * return 0 + * + */ + memcpy(__pyx_v_buf, __pyx_v_d_ptr, __pyx_v_n); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":72 + * d_ptr = data + * memcpy(buf, d_ptr, n) + * return 0 # <<<<<<<<<<<<<< + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): + */ + __pyx_r = 0; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.read_into"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_data); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":74 + * return 0 + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): # <<<<<<<<<<<<<< + * ''' Make new memory, wrap with object ''' + * data = self.fobj.read(n) + */ + +static PyObject *__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_read_string(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_v_self, size_t __pyx_v_n, void **__pyx_v_pp, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string *__pyx_optional_args) { + int __pyx_v_copy = ((int)1); + PyObject *__pyx_v_data; + PyObject *__pyx_v_d_copy = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + Py_ssize_t __pyx_t_4; + int __pyx_t_5; + __Pyx_RefNannySetupContext("read_string"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_copy = __pyx_optional_args->copy; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_data = Py_None; __Pyx_INCREF(Py_None); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":76 + * cdef object read_string(self, size_t n, void **pp, int copy=True): + * ''' Make new memory, wrap with object ''' + * data = self.fobj.read(n) # <<<<<<<<<<<<<< + * if PyString_Size(data) != n: + * raise IOError('could not read bytes') + */ + __pyx_t_1 = PyObject_GetAttr(__pyx_v_self->fobj, __pyx_n_s__read); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyInt_FromSize_t(__pyx_v_n); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_data); + __pyx_v_data = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":77 + * ''' Make new memory, wrap 
with object ''' + * data = self.fobj.read(n) + * if PyString_Size(data) != n: # <<<<<<<<<<<<<< + * raise IOError('could not read bytes') + * if copy != True: + */ + __pyx_t_4 = PyString_Size(__pyx_v_data); if (unlikely(__pyx_t_4 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 77; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = (__pyx_t_4 != __pyx_v_n); + if (__pyx_t_5) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":78 + * data = self.fobj.read(n) + * if PyString_Size(data) != n: + * raise IOError('could not read bytes') # <<<<<<<<<<<<<< + * if copy != True: + * pp[0] = PyString_AS_STRING(data) + */ + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_IOError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 78; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":79 + * if PyString_Size(data) != n: + * raise IOError('could not read bytes') + * if copy != True: # <<<<<<<<<<<<<< + * pp[0] = PyString_AS_STRING(data) + * return data + */ + __pyx_t_5 = (__pyx_v_copy != 1); + if (__pyx_t_5) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":80 + * raise IOError('could not read bytes') + * if copy != True: + * pp[0] = PyString_AS_STRING(data) # <<<<<<<<<<<<<< + * return data + * cdef object d_copy = pyalloc_v(n, pp) + */ + (__pyx_v_pp[0]) = ((void *)PyString_AS_STRING(__pyx_v_data)); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":81 + * if copy != True: + * pp[0] = PyString_AS_STRING(data) + * return data # <<<<<<<<<<<<<< + * cdef object d_copy = pyalloc_v(n, pp) + * memcpy(pp[0], PyString_AS_STRING(data), n) + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_data); + __pyx_r = __pyx_v_data; + goto __pyx_L0; + goto __pyx_L4; + } + __pyx_L4:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":82 + * pp[0] = PyString_AS_STRING(data) + * return data + * cdef object d_copy = pyalloc_v(n, pp) # <<<<<<<<<<<<<< + * memcpy(pp[0], PyString_AS_STRING(data), n) + * return d_copy + */ + __pyx_t_3 = __pyx_f_5scipy_2io_6matlab_7pyalloc_pyalloc_v(__pyx_v_n, __pyx_v_pp); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 82; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_v_d_copy = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":83 + * return data + * cdef object d_copy = pyalloc_v(n, pp) + * memcpy(pp[0], PyString_AS_STRING(data), n) # <<<<<<<<<<<<<< + * return d_copy + * + */ + memcpy((__pyx_v_pp[0]), PyString_AS_STRING(__pyx_v_data), __pyx_v_n); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":84 + * cdef object d_copy = pyalloc_v(n, pp) + * memcpy(pp[0], PyString_AS_STRING(data), n) + * return d_copy # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + 
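/*
 * Copy semantics of GenericStream.read_string, as quoted from streams.pyx
 * above: when copy != True, pp[0] is pointed at the internal buffer of the
 * string returned by self.fobj.read(n) via PyString_AS_STRING, and that
 * string object is returned, so the pointer stays valid only while the caller
 * keeps the returned object alive. With the default copy == True, pyalloc_v()
 * (from scipy.io.matlab.pyalloc, not shown in this hunk) supplies n bytes of
 * fresh memory through pp, the data is memcpy()d into it, and the wrapping
 * object d_copy built here is returned; how that wrapper manages the memory
 * is defined in the pyalloc module rather than in this file.
 */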
__Pyx_INCREF(__pyx_v_d_copy); + __pyx_r = __pyx_v_d_copy; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.GenericStream.read_string"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_data); + __Pyx_XDECREF(__pyx_v_d_copy); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":89 + * cdef class cStringStream(GenericStream): + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * cdef char *ptr + * if whence == 1 and offset >=0: # forward, from here + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13cStringStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static int __pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_seek(struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream *__pyx_v_self, long __pyx_v_offset, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13cStringStream_seek *__pyx_optional_args) { + int __pyx_v_whence = ((int)0); + char *__pyx_v_ptr; + int __pyx_r; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + int __pyx_t_5; + int __pyx_t_6; + int __pyx_t_7; + int __pyx_t_8; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek __pyx_t_9; + __Pyx_RefNannySetupContext("seek"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_whence = __pyx_optional_args->whence; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__seek); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_7streams_13cStringStream_seek)) { + __pyx_t_2 = PyInt_FromLong(__pyx_v_offset); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyInt_FromLong(__pyx_v_whence); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_2 = 0; + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_5 = __Pyx_PyInt_AsInt(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_r = __pyx_t_5; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":91 + * cpdef int seek(self, long int offset, int whence=0) except -1: + * cdef char *ptr + * if whence == 1 and offset >=0: # forward, from here # <<<<<<<<<<<<<< + * StringIO_cread(self.fobj, &ptr, offset) + * return 0 + */ + __pyx_t_6 = (__pyx_v_whence == 1); + if (__pyx_t_6) { + __pyx_t_7 = (__pyx_v_offset >= 0); + __pyx_t_8 = __pyx_t_7; + } else { + __pyx_t_8 = __pyx_t_6; + } + if (__pyx_t_8) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":92 + * cdef char *ptr + * if whence == 1 and offset >=0: # forward, from here + * StringIO_cread(self.fobj, &ptr, offset) # <<<<<<<<<<<<<< + * return 0 + * else: # use python interface + */ + PycStringIO->cread(__pyx_v_self->__pyx_base.fobj, (&__pyx_v_ptr), __pyx_v_offset); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":93 + * if whence == 1 and offset >=0: # forward, from here + * StringIO_cread(self.fobj, &ptr, offset) + * return 0 # <<<<<<<<<<<<<< + * else: # use python interface + * return GenericStream.seek(self, offset, whence) + */ + __pyx_r = 0; + goto __pyx_L0; + goto __pyx_L3; + } + /*else*/ { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":95 + * return 0 + * else: # use python interface + * return GenericStream.seek(self, offset, whence) # <<<<<<<<<<<<<< + * + * cdef int read_into(self, void *buf, size_t n) except -1: + */ + __pyx_t_9.__pyx_n = 1; + __pyx_t_9.whence = __pyx_v_whence; + __pyx_t_5 = __pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream->seek(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self), __pyx_v_offset, 1, &__pyx_t_9); if (unlikely(__pyx_t_5 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 95; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_r = __pyx_t_5; + goto __pyx_L0; + } + __pyx_L3:; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.io.matlab.streams.cStringStream.seek"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":89 + * cdef class cStringStream(GenericStream): + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * cdef char *ptr + * if whence == 1 and offset >=0: # forward, from here + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13cStringStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_13cStringStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + long __pyx_v_offset; + int __pyx_v_whence; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__offset,&__pyx_n_s__whence,0}; + __Pyx_RefNannySetupContext("seek"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[2] = {0,0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: 
values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__offset); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (kw_args > 1) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__whence); + if (unlikely(value)) { values[1] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "seek") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_offset = __Pyx_PyInt_AsLong(values[0]); if (unlikely((__pyx_v_offset == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + if (values[1]) { + __pyx_v_whence = __Pyx_PyInt_AsInt(values[1]); if (unlikely((__pyx_v_whence == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } else { + __pyx_v_whence = ((int)0); + } + } else { + __pyx_v_whence = ((int)0); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: __pyx_v_whence = __Pyx_PyInt_AsInt(PyTuple_GET_ITEM(__pyx_args, 1)); if (unlikely((__pyx_v_whence == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + case 1: __pyx_v_offset = __Pyx_PyInt_AsLong(PyTuple_GET_ITEM(__pyx_args, 0)); if (unlikely((__pyx_v_offset == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("seek", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.streams.cStringStream.seek"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __Pyx_XDECREF(__pyx_r); + __pyx_t_2.__pyx_n = 1; + __pyx_t_2.whence = __pyx_v_whence; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_cStringStream *)((struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream *)__pyx_v_self)->__pyx_base.__pyx_vtab)->__pyx_base.seek(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self), __pyx_v_offset, 1, &__pyx_t_2); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyInt_FromLong(__pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 89; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.cStringStream.seek"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":97 + * return GenericStream.seek(self, offset, whence) + * + * cdef int read_into(self, void *buf, size_t n) except -1: # <<<<<<<<<<<<<< + * ''' Read n bytes from stream into pre-allocated buffer `buf` + * ''' + */ + 
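/*
 * The cStringStream.read_into implementation that follows compiles from
 * roughly this streams.pyx source, pieced together from the quote comments in
 * this file (indentation and the cdef block layout are approximate):
 *
 *   cdef int read_into(self, void *buf, size_t n) except -1:
 *       cdef size_t n_red
 *       cdef char* d_ptr
 *       n_red = StringIO_cread(self.fobj, &d_ptr, n)
 *       if n_red != n:
 *           raise IOError('could not read bytes')
 *       memcpy(buf, d_ptr, n)
 *       return 0
 *
 * i.e. bytes are copied directly from the cStringIO internal buffer, skipping
 * the Python-level read() used by GenericStream. The read_string override a
 * little further down follows the same pattern, pairing StringIO_cread with
 * pyalloc_v() and memcpy().
 */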
+static int __pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_read_into(struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream *__pyx_v_self, void *__pyx_v_buf, size_t __pyx_v_n) { + size_t __pyx_v_n_red; + char *__pyx_v_d_ptr; + int __pyx_r; + int __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + __Pyx_RefNannySetupContext("read_into"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":103 + * size_t n_red + * char* d_ptr + * n_red = StringIO_cread(self.fobj, &d_ptr, n) # <<<<<<<<<<<<<< + * if n_red != n: + * raise IOError('could not read bytes') + */ + __pyx_v_n_red = PycStringIO->cread(__pyx_v_self->__pyx_base.fobj, (&__pyx_v_d_ptr), __pyx_v_n); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":104 + * char* d_ptr + * n_red = StringIO_cread(self.fobj, &d_ptr, n) + * if n_red != n: # <<<<<<<<<<<<<< + * raise IOError('could not read bytes') + * memcpy(buf, d_ptr, n) + */ + __pyx_t_1 = (__pyx_v_n_red != __pyx_v_n); + if (__pyx_t_1) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":105 + * n_red = StringIO_cread(self.fobj, &d_ptr, n) + * if n_red != n: + * raise IOError('could not read bytes') # <<<<<<<<<<<<<< + * memcpy(buf, d_ptr, n) + * return 0 + */ + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 105; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_IOError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 105; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 105; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":106 + * if n_red != n: + * raise IOError('could not read bytes') + * memcpy(buf, d_ptr, n) # <<<<<<<<<<<<<< + * return 0 + * + */ + memcpy(__pyx_v_buf, ((void *)__pyx_v_d_ptr), __pyx_v_n); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":107 + * raise IOError('could not read bytes') + * memcpy(buf, d_ptr, n) + * return 0 # <<<<<<<<<<<<<< + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): + */ + __pyx_r = 0; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.cStringStream.read_into"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":109 + * return 0 + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): # <<<<<<<<<<<<<< + * ''' Make new memory, wrap with object + * + */ + +static PyObject *__pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_read_string(struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream *__pyx_v_self, size_t __pyx_v_n, void **__pyx_v_pp, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13cStringStream_read_string *__pyx_optional_args) { + int __pyx_v_copy = ((int)1); + char *__pyx_v_d_ptr; + PyObject 
*__pyx_v_obj; + size_t __pyx_v_n_red; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + __Pyx_RefNannySetupContext("read_string"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_copy = __pyx_optional_args->copy; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + __pyx_v_obj = Py_None; __Pyx_INCREF(Py_None); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":117 + * char *d_ptr + * object obj + * cdef size_t n_red = StringIO_cread(self.fobj, &d_ptr, n) # <<<<<<<<<<<<<< + * if n_red != n: + * raise IOError('could not read bytes') + */ + __pyx_v_n_red = PycStringIO->cread(__pyx_v_self->__pyx_base.fobj, (&__pyx_v_d_ptr), __pyx_v_n); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":118 + * object obj + * cdef size_t n_red = StringIO_cread(self.fobj, &d_ptr, n) + * if n_red != n: # <<<<<<<<<<<<<< + * raise IOError('could not read bytes') + * obj = pyalloc_v(n, pp) + */ + __pyx_t_1 = (__pyx_v_n_red != __pyx_v_n); + if (__pyx_t_1) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":119 + * cdef size_t n_red = StringIO_cread(self.fobj, &d_ptr, n) + * if n_red != n: + * raise IOError('could not read bytes') # <<<<<<<<<<<<<< + * obj = pyalloc_v(n, pp) + * memcpy(pp[0], d_ptr, n) + */ + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 119; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_IOError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 119; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 119; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":120 + * if n_red != n: + * raise IOError('could not read bytes') + * obj = pyalloc_v(n, pp) # <<<<<<<<<<<<<< + * memcpy(pp[0], d_ptr, n) + * return obj + */ + __pyx_t_3 = __pyx_f_5scipy_2io_6matlab_7pyalloc_pyalloc_v(__pyx_v_n, __pyx_v_pp); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 120; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_v_obj); + __pyx_v_obj = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":121 + * raise IOError('could not read bytes') + * obj = pyalloc_v(n, pp) + * memcpy(pp[0], d_ptr, n) # <<<<<<<<<<<<<< + * return obj + * + */ + memcpy((__pyx_v_pp[0]), __pyx_v_d_ptr, __pyx_v_n); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":122 + * obj = pyalloc_v(n, pp) + * memcpy(pp[0], d_ptr, n) + * return obj # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_obj); + __pyx_r = __pyx_v_obj; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.cStringStream.read_string"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_obj); + __Pyx_DECREF((PyObject *)__pyx_v_self); + 
__Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":128 + * cdef FILE* file + * + * def __init__(self, fobj): # <<<<<<<<<<<<<< + * self.fobj = fobj + * self.file = PyFile_AsFile(fobj) + */ + +static int __pyx_pf_5scipy_2io_6matlab_7streams_10FileStream___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static int __pyx_pf_5scipy_2io_6matlab_7streams_10FileStream___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_fobj = 0; + int __pyx_r; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__fobj,0}; + __Pyx_RefNannySetupContext("__init__"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[1] = {0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__fobj); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "__init__") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 128; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_fobj = values[0]; + } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { + goto __pyx_L5_argtuple_error; + } else { + __pyx_v_fobj = PyTuple_GET_ITEM(__pyx_args, 0); + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 128; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.__init__"); + return -1; + __pyx_L4_argument_unpacking_done:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":129 + * + * def __init__(self, fobj): + * self.fobj = fobj # <<<<<<<<<<<<<< + * self.file = PyFile_AsFile(fobj) + * + */ + __Pyx_INCREF(__pyx_v_fobj); + __Pyx_GIVEREF(__pyx_v_fobj); + __Pyx_GOTREF(((struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *)__pyx_v_self)->__pyx_base.fobj); + __Pyx_DECREF(((struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *)__pyx_v_self)->__pyx_base.fobj); + ((struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *)__pyx_v_self)->__pyx_base.fobj = __pyx_v_fobj; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":130 + * def __init__(self, fobj): + * self.fobj = fobj + * self.file = PyFile_AsFile(fobj) # <<<<<<<<<<<<<< + * + * cpdef int seek(self, long int offset, int whence=0) except -1: + */ + ((struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *)__pyx_v_self)->file = PyFile_AsFile(__pyx_v_fobj); + + __pyx_r = 0; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":132 + * self.file = PyFile_AsFile(fobj) + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * cdef int ret + * ''' move `offset` bytes in stream + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static int __pyx_f_5scipy_2io_6matlab_7streams_10FileStream_seek(struct 
__pyx_obj_5scipy_2io_6matlab_7streams_FileStream *__pyx_v_self, long __pyx_v_offset, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_10FileStream_seek *__pyx_optional_args) { + int __pyx_v_whence = ((int)0); + int __pyx_v_ret; + int __pyx_r; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + int __pyx_t_5; + __Pyx_RefNannySetupContext("seek"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_whence = __pyx_optional_args->whence; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__seek); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_seek)) { + __pyx_t_2 = PyInt_FromLong(__pyx_v_offset); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyInt_FromLong(__pyx_v_whence); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_2 = 0; + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_5 = __Pyx_PyInt_AsInt(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_r = __pyx_t_5; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":152 + * ret : int + * ''' + * ret = fseek(self.file, offset, whence) # <<<<<<<<<<<<<< + * if ret: + * raise IOError('Failed seek') + */ + __pyx_v_ret = fseek(__pyx_v_self->file, __pyx_v_offset, __pyx_v_whence); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":153 + * ''' + * ret = fseek(self.file, offset, whence) + * if ret: # <<<<<<<<<<<<<< + * raise IOError('Failed seek') + * return -1 + */ + __pyx_t_5 = __pyx_v_ret; + if (__pyx_t_5) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":154 + * ret = fseek(self.file, offset, whence) + * if ret: + * raise IOError('Failed seek') # <<<<<<<<<<<<<< + * return -1 + * return ret + */ + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_2)); + 
PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_kp_s_2)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_2)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_IOError, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 154; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":155 + * if ret: + * raise IOError('Failed seek') + * return -1 # <<<<<<<<<<<<<< + * return ret + * + */ + __pyx_r = -1; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":156 + * raise IOError('Failed seek') + * return -1 + * return ret # <<<<<<<<<<<<<< + * + * cpdef long int tell(self): + */ + __pyx_r = __pyx_v_ret; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.seek"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":132 + * self.file = PyFile_AsFile(fobj) + * + * cpdef int seek(self, long int offset, int whence=0) except -1: # <<<<<<<<<<<<<< + * cdef int ret + * ''' move `offset` bytes in stream + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_seek(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + long __pyx_v_offset; + int __pyx_v_whence; + PyObject *__pyx_r = NULL; + int __pyx_t_1; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__offset,&__pyx_n_s__whence,0}; + __Pyx_RefNannySetupContext("seek"); + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[2] = {0,0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__offset); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (kw_args > 1) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__whence); + if (unlikely(value)) { values[1] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "seek") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_offset = __Pyx_PyInt_AsLong(values[0]); if (unlikely((__pyx_v_offset == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + if (values[1]) { + __pyx_v_whence = __Pyx_PyInt_AsInt(values[1]); if (unlikely((__pyx_v_whence == (int)-1) && PyErr_Occurred())) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } else { + __pyx_v_whence = ((int)0); + } + } else { + __pyx_v_whence = ((int)0); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: __pyx_v_whence = __Pyx_PyInt_AsInt(PyTuple_GET_ITEM(__pyx_args, 1)); if (unlikely((__pyx_v_whence == (int)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + case 1: __pyx_v_offset = __Pyx_PyInt_AsLong(PyTuple_GET_ITEM(__pyx_args, 0)); if (unlikely((__pyx_v_offset == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("seek", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.seek"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __Pyx_XDECREF(__pyx_r); + __pyx_t_2.__pyx_n = 1; + __pyx_t_2.whence = __pyx_v_whence; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_FileStream *)((struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *)__pyx_v_self)->__pyx_base.__pyx_vtab)->__pyx_base.seek(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self), __pyx_v_offset, 1, &__pyx_t_2); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyInt_FromLong(__pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.seek"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":158 + * return ret + * + * cpdef long int tell(self): # <<<<<<<<<<<<<< + * return ftell(self.file) + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_tell(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static long __pyx_f_5scipy_2io_6matlab_7streams_10FileStream_tell(struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *__pyx_v_self, int __pyx_skip_dispatch) { + long __pyx_r; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + long __pyx_t_3; + __Pyx_RefNannySetupContext("tell"); + /* Check if called by wrapper */ + if (unlikely(__pyx_skip_dispatch)) ; + /* Check if overriden in Python */ + else if (unlikely(Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0)) { + __pyx_t_1 = PyObject_GetAttr(((PyObject *)__pyx_v_self), __pyx_n_s__tell); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (void *)&__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_tell)) { + __pyx_t_2 = PyObject_Call(__pyx_t_1, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_PyInt_AsLong(__pyx_t_2); if (unlikely((__pyx_t_3 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + goto __pyx_L0; + } + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + } + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":159 + * + * cpdef long int tell(self): + * return ftell(self.file) # <<<<<<<<<<<<<< + * + * cdef int read_into(self, void *buf, size_t n) except -1: + */ + __pyx_r = ftell(__pyx_v_self->file); + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.tell"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":158 + * return ret + * + * cpdef long int tell(self): # <<<<<<<<<<<<<< + * return ftell(self.file) + * + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_tell(PyObject *__pyx_v_self, PyObject *unused); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_tell(PyObject *__pyx_v_self, PyObject *unused) { + PyObject *__pyx_r = NULL; + long __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + __Pyx_RefNannySetupContext("tell"); + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_FileStream *)((struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *)__pyx_v_self)->__pyx_base.__pyx_vtab)->__pyx_base.tell(((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_self), 1); if (unlikely(__pyx_t_1 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_2 = PyInt_FromLong(__pyx_t_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 158; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.tell"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":161 + * return ftell(self.file) + * + * cdef int read_into(self, void *buf, size_t n) except -1: # <<<<<<<<<<<<<< + * ''' Read n bytes from stream into pre-allocated buffer `buf` + * ''' + */ + +static int __pyx_f_5scipy_2io_6matlab_7streams_10FileStream_read_into(struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *__pyx_v_self, void *__pyx_v_buf, size_t __pyx_v_n) { + size_t __pyx_v_n_red; + int __pyx_r; + int __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + __Pyx_RefNannySetupContext("read_into"); + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":167 + * size_t n_red + * char* d_ptr + * n_red = fread(buf, 1, n, self.file) # <<<<<<<<<<<<<< + * if n_red != n: + * raise IOError('Could not read bytes') + */ + __pyx_v_n_red = fread(__pyx_v_buf, 1, __pyx_v_n, __pyx_v_self->file); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":168 + * char* d_ptr + * n_red = fread(buf, 1, n, self.file) + * if 
n_red != n: # <<<<<<<<<<<<<< + * raise IOError('Could not read bytes') + * return -1 + */ + __pyx_t_1 = (__pyx_v_n_red != __pyx_v_n); + if (__pyx_t_1) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":169 + * n_red = fread(buf, 1, n, self.file) + * if n_red != n: + * raise IOError('Could not read bytes') # <<<<<<<<<<<<<< + * return -1 + * return 0 + */ + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_3)); + PyTuple_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_kp_s_3)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_3)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_IOError, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 169; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":170 + * if n_red != n: + * raise IOError('Could not read bytes') + * return -1 # <<<<<<<<<<<<<< + * return 0 + * + */ + __pyx_r = -1; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":171 + * raise IOError('Could not read bytes') + * return -1 + * return 0 # <<<<<<<<<<<<<< + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): + */ + __pyx_r = 0; + goto __pyx_L0; + + __pyx_r = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.read_into"); + __pyx_r = -1; + __pyx_L0:; + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":173 + * return 0 + * + * cdef object read_string(self, size_t n, void **pp, int copy=True): # <<<<<<<<<<<<<< + * ''' Make new memory, wrap with object ''' + * cdef object obj = pyalloc_v(n, pp) + */ + +static PyObject *__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_read_string(struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *__pyx_v_self, size_t __pyx_v_n, void **__pyx_v_pp, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_10FileStream_read_string *__pyx_optional_args) { + int __pyx_v_copy = ((int)1); + PyObject *__pyx_v_obj = 0; + size_t __pyx_v_n_red; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + int __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + __Pyx_RefNannySetupContext("read_string"); + if (__pyx_optional_args) { + if (__pyx_optional_args->__pyx_n > 0) { + __pyx_v_copy = __pyx_optional_args->copy; + } + } + __Pyx_INCREF((PyObject *)__pyx_v_self); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":175 + * cdef object read_string(self, size_t n, void **pp, int copy=True): + * ''' Make new memory, wrap with object ''' + * cdef object obj = pyalloc_v(n, pp) # <<<<<<<<<<<<<< + * cdef size_t n_red = fread(pp[0], 1, n, self.file) + * if n_red != n: + */ + __pyx_t_1 = __pyx_f_5scipy_2io_6matlab_7pyalloc_pyalloc_v(__pyx_v_n, __pyx_v_pp); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 175; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_v_obj = __pyx_t_1; + __pyx_t_1 = 0; + + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":176 + * ''' Make new memory, wrap with object ''' + * cdef object obj = pyalloc_v(n, pp) + * cdef size_t n_red = fread(pp[0], 1, n, self.file) # <<<<<<<<<<<<<< + * if n_red != n: + * raise IOError('could not read bytes') + */ + __pyx_v_n_red = fread((__pyx_v_pp[0]), 1, __pyx_v_n, __pyx_v_self->file); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":177 + * cdef object obj = pyalloc_v(n, pp) + * cdef size_t n_red = fread(pp[0], 1, n, self.file) + * if n_red != n: # <<<<<<<<<<<<<< + * raise IOError('could not read bytes') + * return obj + */ + __pyx_t_2 = (__pyx_v_n_red != __pyx_v_n); + if (__pyx_t_2) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":178 + * cdef size_t n_red = fread(pp[0], 1, n, self.file) + * if n_red != n: + * raise IOError('could not read bytes') # <<<<<<<<<<<<<< + * return obj + * + */ + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 178; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); + PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_kp_s_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); + __pyx_t_3 = PyObject_Call(__pyx_builtin_IOError, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 178; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_Raise(__pyx_t_3, 0, 0); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 178; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":179 + * if n_red != n: + * raise IOError('could not read bytes') + * return obj # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_obj); + __pyx_r = __pyx_v_obj; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.FileStream.read_string"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XDECREF(__pyx_v_obj); + __Pyx_DECREF((PyObject *)__pyx_v_self); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":182 + * + * + * def _read_into(GenericStream st, size_t n): # <<<<<<<<<<<<<< + * # for testing only. 
Use st.read instead + * cdef char * d_ptr + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams__read_into(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams__read_into(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_v_st = 0; + size_t __pyx_v_n; + char *__pyx_v_d_ptr; + PyObject *__pyx_v_my_str; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + char *__pyx_t_3; + int __pyx_t_4; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__st,&__pyx_n_s__n,0}; + __Pyx_RefNannySetupContext("_read_into"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[2] = {0,0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__st); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("_read_into", 1, 2, 2, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "_read_into") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_st = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)values[0]); + __pyx_v_n = __Pyx_PyInt_AsSize_t(values[1]); if (unlikely((__pyx_v_n == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { + goto __pyx_L5_argtuple_error; + } else { + __pyx_v_st = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)PyTuple_GET_ITEM(__pyx_args, 0)); + __pyx_v_n = __Pyx_PyInt_AsSize_t(PyTuple_GET_ITEM(__pyx_args, 1)); if (unlikely((__pyx_v_n == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("_read_into", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.streams._read_into"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_my_str = Py_None; __Pyx_INCREF(Py_None); + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_st), __pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream, 1, "st", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":185 + * # for testing only. 
Use st.read instead + * cdef char * d_ptr + * my_str = ' ' * n # <<<<<<<<<<<<<< + * d_ptr = my_str + * st.read_into(d_ptr, n) + */ + __pyx_t_1 = __Pyx_PyInt_FromSize_t(__pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Multiply(((PyObject *)__pyx_kp_s_4), __pyx_t_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_my_str); + __pyx_v_my_str = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":186 + * cdef char * d_ptr + * my_str = ' ' * n + * d_ptr = my_str # <<<<<<<<<<<<<< + * st.read_into(d_ptr, n) + * return my_str + */ + __pyx_t_3 = __Pyx_PyBytes_AsString(__pyx_v_my_str); if (unlikely((!__pyx_t_3) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 186; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_d_ptr = __pyx_t_3; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":187 + * my_str = ' ' * n + * d_ptr = my_str + * st.read_into(d_ptr, n) # <<<<<<<<<<<<<< + * return my_str + * + */ + __pyx_t_4 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_st->__pyx_vtab)->read_into(__pyx_v_st, __pyx_v_d_ptr, __pyx_v_n); if (unlikely(__pyx_t_4 == -1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 187; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":188 + * d_ptr = my_str + * st.read_into(d_ptr, n) + * return my_str # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_my_str); + __pyx_r = __pyx_v_my_str; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_AddTraceback("scipy.io.matlab.streams._read_into"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_my_str); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":191 + * + * + * def _read_string(GenericStream st, size_t n): # <<<<<<<<<<<<<< + * # for testing only. 
Use st.read instead + * cdef char *d_ptr + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams__read_string(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams__read_string(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_v_st = 0; + size_t __pyx_v_n; + char *__pyx_v_d_ptr; + PyObject *__pyx_v_obj = 0; + PyObject *__pyx_v_my_str; + char *__pyx_v_mys_ptr; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string __pyx_t_2; + PyObject *__pyx_t_3 = NULL; + char *__pyx_t_4; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__st,&__pyx_n_s__n,0}; + __Pyx_RefNannySetupContext("_read_string"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[2] = {0,0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__st); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("_read_string", 1, 2, 2, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 191; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "_read_string") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 191; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_st = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)values[0]); + __pyx_v_n = __Pyx_PyInt_AsSize_t(values[1]); if (unlikely((__pyx_v_n == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 191; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { + goto __pyx_L5_argtuple_error; + } else { + __pyx_v_st = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)PyTuple_GET_ITEM(__pyx_args, 0)); + __pyx_v_n = __Pyx_PyInt_AsSize_t(PyTuple_GET_ITEM(__pyx_args, 1)); if (unlikely((__pyx_v_n == (size_t)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 191; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("_read_string", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 191; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.io.matlab.streams._read_string"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_my_str = Py_None; __Pyx_INCREF(Py_None); + if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_st), __pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream, 1, "st", 0))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 191; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":194 + * # for testing only. 
Use st.read instead + * cdef char *d_ptr + * cdef object obj = st.read_string(n, &d_ptr, True) # <<<<<<<<<<<<<< + * my_str = 'A' * n + * cdef char *mys_ptr = my_str + */ + __pyx_t_2.__pyx_n = 1; + __pyx_t_2.copy = 1; + __pyx_t_1 = ((struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_v_st->__pyx_vtab)->read_string(__pyx_v_st, __pyx_v_n, ((void **)(&__pyx_v_d_ptr)), &__pyx_t_2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_v_obj = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":195 + * cdef char *d_ptr + * cdef object obj = st.read_string(n, &d_ptr, True) + * my_str = 'A' * n # <<<<<<<<<<<<<< + * cdef char *mys_ptr = my_str + * memcpy(mys_ptr, d_ptr, n) + */ + __pyx_t_1 = __Pyx_PyInt_FromSize_t(__pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Multiply(((PyObject *)__pyx_n_s__A), __pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_my_str); + __pyx_v_my_str = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":196 + * cdef object obj = st.read_string(n, &d_ptr, True) + * my_str = 'A' * n + * cdef char *mys_ptr = my_str # <<<<<<<<<<<<<< + * memcpy(mys_ptr, d_ptr, n) + * return my_str + */ + __pyx_t_4 = __Pyx_PyBytes_AsString(__pyx_v_my_str); if (unlikely((!__pyx_t_4) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_v_mys_ptr = __pyx_t_4; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":197 + * my_str = 'A' * n + * cdef char *mys_ptr = my_str + * memcpy(mys_ptr, d_ptr, n) # <<<<<<<<<<<<<< + * return my_str + * + */ + memcpy(__pyx_v_mys_ptr, __pyx_v_d_ptr, __pyx_v_n); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":198 + * cdef char *mys_ptr = my_str + * memcpy(mys_ptr, d_ptr, n) + * return my_str # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_my_str); + __pyx_r = __pyx_v_my_str; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams._read_string"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XDECREF(__pyx_v_obj); + __Pyx_DECREF(__pyx_v_my_str); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":201 + * + * + * cpdef GenericStream make_stream(object fobj): # <<<<<<<<<<<<<< + * ''' Make stream of correct type for file-like `fobj` + * ''' + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_make_stream(PyObject *__pyx_self, PyObject *__pyx_v_fobj); /*proto*/ +static struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_f_5scipy_2io_6matlab_7streams_make_stream(PyObject *__pyx_v_fobj, int __pyx_skip_dispatch) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *__pyx_r = NULL; + int __pyx_t_1; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + int __pyx_t_4; + int __pyx_t_5; + __Pyx_RefNannySetupContext("make_stream"); + 
__Pyx_INCREF(__pyx_v_fobj); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":204 + * ''' Make stream of correct type for file-like `fobj` + * ''' + * if isinstance(fobj, file): # <<<<<<<<<<<<<< + * return FileStream(fobj) + * elif PycStringIO_InputCheck(fobj) or PycStringIO_OutputCheck(fobj): + */ + __pyx_t_1 = PyObject_TypeCheck(__pyx_v_fobj, ((PyTypeObject *)((PyObject*)__pyx_ptype_5scipy_2io_6matlab_7streams_file))); + if (__pyx_t_1) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":205 + * ''' + * if isinstance(fobj, file): + * return FileStream(fobj) # <<<<<<<<<<<<<< + * elif PycStringIO_InputCheck(fobj) or PycStringIO_OutputCheck(fobj): + * return cStringStream(fobj) + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_fobj); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_fobj); + __Pyx_GIVEREF(__pyx_v_fobj); + __pyx_t_3 = PyObject_Call(((PyObject *)((PyObject*)__pyx_ptype_5scipy_2io_6matlab_7streams_FileStream)), __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 205; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L0; + goto __pyx_L3; + } + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":206 + * if isinstance(fobj, file): + * return FileStream(fobj) + * elif PycStringIO_InputCheck(fobj) or PycStringIO_OutputCheck(fobj): # <<<<<<<<<<<<<< + * return cStringStream(fobj) + * return GenericStream(fobj) + */ + __pyx_t_1 = PycStringIO_InputCheck(__pyx_v_fobj); + if (!__pyx_t_1) { + __pyx_t_4 = PycStringIO_OutputCheck(__pyx_v_fobj); + __pyx_t_5 = __pyx_t_4; + } else { + __pyx_t_5 = __pyx_t_1; + } + if (__pyx_t_5) { + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":207 + * return FileStream(fobj) + * elif PycStringIO_InputCheck(fobj) or PycStringIO_OutputCheck(fobj): + * return cStringStream(fobj) # <<<<<<<<<<<<<< + * return GenericStream(fobj) + * + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 207; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_fobj); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_fobj); + __Pyx_GIVEREF(__pyx_v_fobj); + __pyx_t_2 = PyObject_Call(((PyObject *)((PyObject*)__pyx_ptype_5scipy_2io_6matlab_7streams_cStringStream)), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 207; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_r = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_t_2); + __pyx_t_2 = 0; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":208 + * elif PycStringIO_InputCheck(fobj) or PycStringIO_OutputCheck(fobj): + * return cStringStream(fobj) + * return GenericStream(fobj) # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(((PyObject *)__pyx_r)); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 208; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + 
__Pyx_INCREF(__pyx_v_fobj); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_fobj); + __Pyx_GIVEREF(__pyx_v_fobj); + __pyx_t_3 = PyObject_Call(((PyObject *)((PyObject*)__pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream)), __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 208; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)__pyx_t_3); + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)Py_None); __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.io.matlab.streams.make_stream"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_fobj); + __Pyx_XGIVEREF((PyObject *)__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":201 + * + * + * cpdef GenericStream make_stream(object fobj): # <<<<<<<<<<<<<< + * ''' Make stream of correct type for file-like `fobj` + * ''' + */ + +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_make_stream(PyObject *__pyx_self, PyObject *__pyx_v_fobj); /*proto*/ +static char __pyx_doc_5scipy_2io_6matlab_7streams_make_stream[] = " Make stream of correct type for file-like `fobj`\n "; +static PyObject *__pyx_pf_5scipy_2io_6matlab_7streams_make_stream(PyObject *__pyx_self, PyObject *__pyx_v_fobj) { + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("make_stream"); + __pyx_self = __pyx_self; + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = ((PyObject *)__pyx_f_5scipy_2io_6matlab_7streams_make_stream(__pyx_v_fobj, 0)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 201; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.streams.make_stream"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/pyalloc.pxd":8 + * + * # Function to allocate, wrap memory via Python string creation + * cdef inline object pyalloc_v(Py_ssize_t n, void **pp): # <<<<<<<<<<<<<< + * cdef object ob = PyString_FromStringAndSize(NULL, n) + * pp[0] = PyString_AS_STRING(ob) + */ + +static CYTHON_INLINE PyObject *__pyx_f_5scipy_2io_6matlab_7pyalloc_pyalloc_v(Py_ssize_t __pyx_v_n, void **__pyx_v_pp) { + PyObject *__pyx_v_ob = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + __Pyx_RefNannySetupContext("pyalloc_v"); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/pyalloc.pxd":9 + * # Function to allocate, wrap memory via Python string creation + * cdef inline object pyalloc_v(Py_ssize_t n, void **pp): + * cdef object ob = PyString_FromStringAndSize(NULL, n) # <<<<<<<<<<<<<< + * pp[0] = PyString_AS_STRING(ob) + * return ob + */ + __pyx_t_1 = PyString_FromStringAndSize(NULL, __pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 9; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_v_ob = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/pyalloc.pxd":10 + * cdef inline object pyalloc_v(Py_ssize_t n, 
void **pp): + * cdef object ob = PyString_FromStringAndSize(NULL, n) + * pp[0] = PyString_AS_STRING(ob) # <<<<<<<<<<<<<< + * return ob + * + */ + (__pyx_v_pp[0]) = ((void *)PyString_AS_STRING(__pyx_v_ob)); + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/pyalloc.pxd":11 + * cdef object ob = PyString_FromStringAndSize(NULL, n) + * pp[0] = PyString_AS_STRING(ob) + * return ob # <<<<<<<<<<<<<< + * + * + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_ob); + __pyx_r = __pyx_v_ob; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_AddTraceback("scipy.io.matlab.pyalloc.pyalloc_v"); + __pyx_r = 0; + __pyx_L0:; + __Pyx_XDECREF(__pyx_v_ob); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} +static struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream __pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream; + +static PyObject *__pyx_tp_new_5scipy_2io_6matlab_7streams_GenericStream(PyTypeObject *t, PyObject *a, PyObject *k) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *p; + PyObject *o = (*t->tp_alloc)(t, 0); + if (!o) return 0; + p = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)o); + p->__pyx_vtab = __pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream; + p->fobj = Py_None; Py_INCREF(Py_None); + return o; +} + +static void __pyx_tp_dealloc_5scipy_2io_6matlab_7streams_GenericStream(PyObject *o) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *p = (struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)o; + Py_XDECREF(p->fobj); + (*Py_TYPE(o)->tp_free)(o); +} + +static int __pyx_tp_traverse_5scipy_2io_6matlab_7streams_GenericStream(PyObject *o, visitproc v, void *a) { + int e; + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *p = (struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)o; + if (p->fobj) { + e = (*v)(p->fobj, a); if (e) return e; + } + return 0; +} + +static int __pyx_tp_clear_5scipy_2io_6matlab_7streams_GenericStream(PyObject *o) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *p = (struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *)o; + PyObject* tmp; + tmp = ((PyObject*)p->fobj); + p->fobj = Py_None; Py_INCREF(Py_None); + Py_XDECREF(tmp); + return 0; +} + +static struct PyMethodDef __pyx_methods_5scipy_2io_6matlab_7streams_GenericStream[] = { + {__Pyx_NAMESTR("seek"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_seek, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(0)}, + {__Pyx_NAMESTR("tell"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_tell, METH_NOARGS, __Pyx_DOCSTR(0)}, + {__Pyx_NAMESTR("read"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream_read, METH_O, __Pyx_DOCSTR(0)}, + {0, 0, 0, 0} +}; + +static PyNumberMethods __pyx_tp_as_number_GenericStream = { + 0, /*nb_add*/ + 0, /*nb_subtract*/ + 0, /*nb_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_divide*/ + #endif + 0, /*nb_remainder*/ + 0, /*nb_divmod*/ + 0, /*nb_power*/ + 0, /*nb_negative*/ + 0, /*nb_positive*/ + 0, /*nb_absolute*/ + 0, /*nb_nonzero*/ + 0, /*nb_invert*/ + 0, /*nb_lshift*/ + 0, /*nb_rshift*/ + 0, /*nb_and*/ + 0, /*nb_xor*/ + 0, /*nb_or*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_coerce*/ + #endif + 0, /*nb_int*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*reserved*/ + #else + 0, /*nb_long*/ + #endif + 0, /*nb_float*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_oct*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*nb_hex*/ + #endif + 0, 
/*nb_inplace_add*/ + 0, /*nb_inplace_subtract*/ + 0, /*nb_inplace_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_inplace_divide*/ + #endif + 0, /*nb_inplace_remainder*/ + 0, /*nb_inplace_power*/ + 0, /*nb_inplace_lshift*/ + 0, /*nb_inplace_rshift*/ + 0, /*nb_inplace_and*/ + 0, /*nb_inplace_xor*/ + 0, /*nb_inplace_or*/ + 0, /*nb_floor_divide*/ + 0, /*nb_true_divide*/ + 0, /*nb_inplace_floor_divide*/ + 0, /*nb_inplace_true_divide*/ + #if (PY_MAJOR_VERSION >= 3) || (Py_TPFLAGS_DEFAULT & Py_TPFLAGS_HAVE_INDEX) + 0, /*nb_index*/ + #endif +}; + +static PySequenceMethods __pyx_tp_as_sequence_GenericStream = { + 0, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /*sq_ass_item*/ + 0, /*sq_ass_slice*/ + 0, /*sq_contains*/ + 0, /*sq_inplace_concat*/ + 0, /*sq_inplace_repeat*/ +}; + +static PyMappingMethods __pyx_tp_as_mapping_GenericStream = { + 0, /*mp_length*/ + 0, /*mp_subscript*/ + 0, /*mp_ass_subscript*/ +}; + +static PyBufferProcs __pyx_tp_as_buffer_GenericStream = { + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getreadbuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getwritebuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getsegcount*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getcharbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_getbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_releasebuffer*/ + #endif +}; + +PyTypeObject __pyx_type_5scipy_2io_6matlab_7streams_GenericStream = { + PyVarObject_HEAD_INIT(0, 0) + __Pyx_NAMESTR("scipy.io.matlab.streams.GenericStream"), /*tp_name*/ + sizeof(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream), /*tp_basicsize*/ + 0, /*tp_itemsize*/ + __pyx_tp_dealloc_5scipy_2io_6matlab_7streams_GenericStream, /*tp_dealloc*/ + 0, /*tp_print*/ + 0, /*tp_getattr*/ + 0, /*tp_setattr*/ + 0, /*tp_compare*/ + 0, /*tp_repr*/ + &__pyx_tp_as_number_GenericStream, /*tp_as_number*/ + &__pyx_tp_as_sequence_GenericStream, /*tp_as_sequence*/ + &__pyx_tp_as_mapping_GenericStream, /*tp_as_mapping*/ + 0, /*tp_hash*/ + 0, /*tp_call*/ + 0, /*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + &__pyx_tp_as_buffer_GenericStream, /*tp_as_buffer*/ + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ + 0, /*tp_doc*/ + __pyx_tp_traverse_5scipy_2io_6matlab_7streams_GenericStream, /*tp_traverse*/ + __pyx_tp_clear_5scipy_2io_6matlab_7streams_GenericStream, /*tp_clear*/ + 0, /*tp_richcompare*/ + 0, /*tp_weaklistoffset*/ + 0, /*tp_iter*/ + 0, /*tp_iternext*/ + __pyx_methods_5scipy_2io_6matlab_7streams_GenericStream, /*tp_methods*/ + 0, /*tp_members*/ + 0, /*tp_getset*/ + 0, /*tp_base*/ + 0, /*tp_dict*/ + 0, /*tp_descr_get*/ + 0, /*tp_descr_set*/ + 0, /*tp_dictoffset*/ + __pyx_pf_5scipy_2io_6matlab_7streams_13GenericStream___init__, /*tp_init*/ + 0, /*tp_alloc*/ + __pyx_tp_new_5scipy_2io_6matlab_7streams_GenericStream, /*tp_new*/ + 0, /*tp_free*/ + 0, /*tp_is_gc*/ + 0, /*tp_bases*/ + 0, /*tp_mro*/ + 0, /*tp_cache*/ + 0, /*tp_subclasses*/ + 0, /*tp_weaklist*/ + 0, /*tp_del*/ + #if PY_VERSION_HEX >= 0x02060000 + 0, /*tp_version_tag*/ + #endif +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_cStringStream __pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream; + +static PyObject *__pyx_tp_new_5scipy_2io_6matlab_7streams_cStringStream(PyTypeObject *t, PyObject *a, PyObject *k) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream *p; + PyObject *o = __pyx_tp_new_5scipy_2io_6matlab_7streams_GenericStream(t, a, k); + if 
(!o) return 0; + p = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream *)o); + p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream*)__pyx_vtabptr_5scipy_2io_6matlab_7streams_cStringStream; + return o; +} + +static struct PyMethodDef __pyx_methods_5scipy_2io_6matlab_7streams_cStringStream[] = { + {__Pyx_NAMESTR("seek"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams_13cStringStream_seek, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(0)}, + {0, 0, 0, 0} +}; + +static PyNumberMethods __pyx_tp_as_number_cStringStream = { + 0, /*nb_add*/ + 0, /*nb_subtract*/ + 0, /*nb_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_divide*/ + #endif + 0, /*nb_remainder*/ + 0, /*nb_divmod*/ + 0, /*nb_power*/ + 0, /*nb_negative*/ + 0, /*nb_positive*/ + 0, /*nb_absolute*/ + 0, /*nb_nonzero*/ + 0, /*nb_invert*/ + 0, /*nb_lshift*/ + 0, /*nb_rshift*/ + 0, /*nb_and*/ + 0, /*nb_xor*/ + 0, /*nb_or*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_coerce*/ + #endif + 0, /*nb_int*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*reserved*/ + #else + 0, /*nb_long*/ + #endif + 0, /*nb_float*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_oct*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*nb_hex*/ + #endif + 0, /*nb_inplace_add*/ + 0, /*nb_inplace_subtract*/ + 0, /*nb_inplace_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_inplace_divide*/ + #endif + 0, /*nb_inplace_remainder*/ + 0, /*nb_inplace_power*/ + 0, /*nb_inplace_lshift*/ + 0, /*nb_inplace_rshift*/ + 0, /*nb_inplace_and*/ + 0, /*nb_inplace_xor*/ + 0, /*nb_inplace_or*/ + 0, /*nb_floor_divide*/ + 0, /*nb_true_divide*/ + 0, /*nb_inplace_floor_divide*/ + 0, /*nb_inplace_true_divide*/ + #if (PY_MAJOR_VERSION >= 3) || (Py_TPFLAGS_DEFAULT & Py_TPFLAGS_HAVE_INDEX) + 0, /*nb_index*/ + #endif +}; + +static PySequenceMethods __pyx_tp_as_sequence_cStringStream = { + 0, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /*sq_ass_item*/ + 0, /*sq_ass_slice*/ + 0, /*sq_contains*/ + 0, /*sq_inplace_concat*/ + 0, /*sq_inplace_repeat*/ +}; + +static PyMappingMethods __pyx_tp_as_mapping_cStringStream = { + 0, /*mp_length*/ + 0, /*mp_subscript*/ + 0, /*mp_ass_subscript*/ +}; + +static PyBufferProcs __pyx_tp_as_buffer_cStringStream = { + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getreadbuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getwritebuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getsegcount*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getcharbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_getbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_releasebuffer*/ + #endif +}; + +PyTypeObject __pyx_type_5scipy_2io_6matlab_7streams_cStringStream = { + PyVarObject_HEAD_INIT(0, 0) + __Pyx_NAMESTR("scipy.io.matlab.streams.cStringStream"), /*tp_name*/ + sizeof(struct __pyx_obj_5scipy_2io_6matlab_7streams_cStringStream), /*tp_basicsize*/ + 0, /*tp_itemsize*/ + __pyx_tp_dealloc_5scipy_2io_6matlab_7streams_GenericStream, /*tp_dealloc*/ + 0, /*tp_print*/ + 0, /*tp_getattr*/ + 0, /*tp_setattr*/ + 0, /*tp_compare*/ + 0, /*tp_repr*/ + &__pyx_tp_as_number_cStringStream, /*tp_as_number*/ + &__pyx_tp_as_sequence_cStringStream, /*tp_as_sequence*/ + &__pyx_tp_as_mapping_cStringStream, /*tp_as_mapping*/ + 0, /*tp_hash*/ + 0, /*tp_call*/ + 0, /*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + &__pyx_tp_as_buffer_cStringStream, /*tp_as_buffer*/ + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ + 0, /*tp_doc*/ + 
__pyx_tp_traverse_5scipy_2io_6matlab_7streams_GenericStream, /*tp_traverse*/ + __pyx_tp_clear_5scipy_2io_6matlab_7streams_GenericStream, /*tp_clear*/ + 0, /*tp_richcompare*/ + 0, /*tp_weaklistoffset*/ + 0, /*tp_iter*/ + 0, /*tp_iternext*/ + __pyx_methods_5scipy_2io_6matlab_7streams_cStringStream, /*tp_methods*/ + 0, /*tp_members*/ + 0, /*tp_getset*/ + 0, /*tp_base*/ + 0, /*tp_dict*/ + 0, /*tp_descr_get*/ + 0, /*tp_descr_set*/ + 0, /*tp_dictoffset*/ + 0, /*tp_init*/ + 0, /*tp_alloc*/ + __pyx_tp_new_5scipy_2io_6matlab_7streams_cStringStream, /*tp_new*/ + 0, /*tp_free*/ + 0, /*tp_is_gc*/ + 0, /*tp_bases*/ + 0, /*tp_mro*/ + 0, /*tp_cache*/ + 0, /*tp_subclasses*/ + 0, /*tp_weaklist*/ + 0, /*tp_del*/ + #if PY_VERSION_HEX >= 0x02060000 + 0, /*tp_version_tag*/ + #endif +}; +static struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_FileStream __pyx_vtable_5scipy_2io_6matlab_7streams_FileStream; + +static PyObject *__pyx_tp_new_5scipy_2io_6matlab_7streams_FileStream(PyTypeObject *t, PyObject *a, PyObject *k) { + struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *p; + PyObject *o = __pyx_tp_new_5scipy_2io_6matlab_7streams_GenericStream(t, a, k); + if (!o) return 0; + p = ((struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream *)o); + p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_5scipy_2io_6matlab_7streams_GenericStream*)__pyx_vtabptr_5scipy_2io_6matlab_7streams_FileStream; + return o; +} + +static struct PyMethodDef __pyx_methods_5scipy_2io_6matlab_7streams_FileStream[] = { + {__Pyx_NAMESTR("seek"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_seek, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(0)}, + {__Pyx_NAMESTR("tell"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams_10FileStream_tell, METH_NOARGS, __Pyx_DOCSTR(0)}, + {0, 0, 0, 0} +}; + +static PyNumberMethods __pyx_tp_as_number_FileStream = { + 0, /*nb_add*/ + 0, /*nb_subtract*/ + 0, /*nb_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_divide*/ + #endif + 0, /*nb_remainder*/ + 0, /*nb_divmod*/ + 0, /*nb_power*/ + 0, /*nb_negative*/ + 0, /*nb_positive*/ + 0, /*nb_absolute*/ + 0, /*nb_nonzero*/ + 0, /*nb_invert*/ + 0, /*nb_lshift*/ + 0, /*nb_rshift*/ + 0, /*nb_and*/ + 0, /*nb_xor*/ + 0, /*nb_or*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_coerce*/ + #endif + 0, /*nb_int*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*reserved*/ + #else + 0, /*nb_long*/ + #endif + 0, /*nb_float*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_oct*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*nb_hex*/ + #endif + 0, /*nb_inplace_add*/ + 0, /*nb_inplace_subtract*/ + 0, /*nb_inplace_multiply*/ + #if PY_MAJOR_VERSION < 3 + 0, /*nb_inplace_divide*/ + #endif + 0, /*nb_inplace_remainder*/ + 0, /*nb_inplace_power*/ + 0, /*nb_inplace_lshift*/ + 0, /*nb_inplace_rshift*/ + 0, /*nb_inplace_and*/ + 0, /*nb_inplace_xor*/ + 0, /*nb_inplace_or*/ + 0, /*nb_floor_divide*/ + 0, /*nb_true_divide*/ + 0, /*nb_inplace_floor_divide*/ + 0, /*nb_inplace_true_divide*/ + #if (PY_MAJOR_VERSION >= 3) || (Py_TPFLAGS_DEFAULT & Py_TPFLAGS_HAVE_INDEX) + 0, /*nb_index*/ + #endif +}; + +static PySequenceMethods __pyx_tp_as_sequence_FileStream = { + 0, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /*sq_ass_item*/ + 0, /*sq_ass_slice*/ + 0, /*sq_contains*/ + 0, /*sq_inplace_concat*/ + 0, /*sq_inplace_repeat*/ +}; + +static PyMappingMethods __pyx_tp_as_mapping_FileStream = { + 0, /*mp_length*/ + 0, /*mp_subscript*/ + 0, /*mp_ass_subscript*/ +}; + +static PyBufferProcs __pyx_tp_as_buffer_FileStream = { + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getreadbuffer*/ + 
#endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getwritebuffer*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getsegcount*/ + #endif + #if PY_MAJOR_VERSION < 3 + 0, /*bf_getcharbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_getbuffer*/ + #endif + #if PY_VERSION_HEX >= 0x02060000 + 0, /*bf_releasebuffer*/ + #endif +}; + +PyTypeObject __pyx_type_5scipy_2io_6matlab_7streams_FileStream = { + PyVarObject_HEAD_INIT(0, 0) + __Pyx_NAMESTR("scipy.io.matlab.streams.FileStream"), /*tp_name*/ + sizeof(struct __pyx_obj_5scipy_2io_6matlab_7streams_FileStream), /*tp_basicsize*/ + 0, /*tp_itemsize*/ + __pyx_tp_dealloc_5scipy_2io_6matlab_7streams_GenericStream, /*tp_dealloc*/ + 0, /*tp_print*/ + 0, /*tp_getattr*/ + 0, /*tp_setattr*/ + 0, /*tp_compare*/ + 0, /*tp_repr*/ + &__pyx_tp_as_number_FileStream, /*tp_as_number*/ + &__pyx_tp_as_sequence_FileStream, /*tp_as_sequence*/ + &__pyx_tp_as_mapping_FileStream, /*tp_as_mapping*/ + 0, /*tp_hash*/ + 0, /*tp_call*/ + 0, /*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + &__pyx_tp_as_buffer_FileStream, /*tp_as_buffer*/ + Py_TPFLAGS_DEFAULT|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ + 0, /*tp_doc*/ + __pyx_tp_traverse_5scipy_2io_6matlab_7streams_GenericStream, /*tp_traverse*/ + __pyx_tp_clear_5scipy_2io_6matlab_7streams_GenericStream, /*tp_clear*/ + 0, /*tp_richcompare*/ + 0, /*tp_weaklistoffset*/ + 0, /*tp_iter*/ + 0, /*tp_iternext*/ + __pyx_methods_5scipy_2io_6matlab_7streams_FileStream, /*tp_methods*/ + 0, /*tp_members*/ + 0, /*tp_getset*/ + 0, /*tp_base*/ + 0, /*tp_dict*/ + 0, /*tp_descr_get*/ + 0, /*tp_descr_set*/ + 0, /*tp_dictoffset*/ + __pyx_pf_5scipy_2io_6matlab_7streams_10FileStream___init__, /*tp_init*/ + 0, /*tp_alloc*/ + __pyx_tp_new_5scipy_2io_6matlab_7streams_FileStream, /*tp_new*/ + 0, /*tp_free*/ + 0, /*tp_is_gc*/ + 0, /*tp_bases*/ + 0, /*tp_mro*/ + 0, /*tp_cache*/ + 0, /*tp_subclasses*/ + 0, /*tp_weaklist*/ + 0, /*tp_del*/ + #if PY_VERSION_HEX >= 0x02060000 + 0, /*tp_version_tag*/ + #endif +}; + +static struct PyMethodDef __pyx_methods[] = { + {__Pyx_NAMESTR("_read_into"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams__read_into, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(0)}, + {__Pyx_NAMESTR("_read_string"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams__read_string, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(0)}, + {__Pyx_NAMESTR("make_stream"), (PyCFunction)__pyx_pf_5scipy_2io_6matlab_7streams_make_stream, METH_O, __Pyx_DOCSTR(__pyx_doc_5scipy_2io_6matlab_7streams_make_stream)}, + {0, 0, 0, 0} +}; + +static void __pyx_init_filenames(void); /*proto*/ + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + PyModuleDef_HEAD_INIT, + __Pyx_NAMESTR("streams"), + 0, /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_s_1, __pyx_k_1, sizeof(__pyx_k_1), 0, 0, 1, 0}, + {&__pyx_kp_s_2, __pyx_k_2, sizeof(__pyx_k_2), 0, 0, 1, 0}, + {&__pyx_kp_s_3, __pyx_k_3, sizeof(__pyx_k_3), 0, 0, 1, 0}, + {&__pyx_kp_s_4, __pyx_k_4, sizeof(__pyx_k_4), 0, 0, 1, 0}, + {&__pyx_n_s__A, __pyx_k__A, sizeof(__pyx_k__A), 0, 0, 1, 1}, + {&__pyx_n_s__IOError, __pyx_k__IOError, sizeof(__pyx_k__IOError), 0, 0, 1, 1}, + {&__pyx_n_s____main__, __pyx_k____main__, sizeof(__pyx_k____main__), 0, 0, 1, 1}, + {&__pyx_n_s__file, __pyx_k__file, sizeof(__pyx_k__file), 0, 0, 1, 1}, + {&__pyx_n_s__fobj, __pyx_k__fobj, 
sizeof(__pyx_k__fobj), 0, 0, 1, 1}, + {&__pyx_n_s__n, __pyx_k__n, sizeof(__pyx_k__n), 0, 0, 1, 1}, + {&__pyx_n_s__offset, __pyx_k__offset, sizeof(__pyx_k__offset), 0, 0, 1, 1}, + {&__pyx_n_s__read, __pyx_k__read, sizeof(__pyx_k__read), 0, 0, 1, 1}, + {&__pyx_n_s__read_into, __pyx_k__read_into, sizeof(__pyx_k__read_into), 0, 0, 1, 1}, + {&__pyx_n_s__read_string, __pyx_k__read_string, sizeof(__pyx_k__read_string), 0, 0, 1, 1}, + {&__pyx_n_s__seek, __pyx_k__seek, sizeof(__pyx_k__seek), 0, 0, 1, 1}, + {&__pyx_n_s__st, __pyx_k__st, sizeof(__pyx_k__st), 0, 0, 1, 1}, + {&__pyx_n_s__tell, __pyx_k__tell, sizeof(__pyx_k__tell), 0, 0, 1, 1}, + {&__pyx_n_s__whence, __pyx_k__whence, sizeof(__pyx_k__whence), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_IOError = __Pyx_GetName(__pyx_b, __pyx_n_s__IOError); if (!__pyx_builtin_IOError) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 68; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initstreams(void); /*proto*/ +PyMODINIT_FUNC initstreams(void) +#else +PyMODINIT_FUNC PyInit_streams(void); /*proto*/ +PyMODINIT_FUNC PyInit_streams(void) +#endif +{ + #if CYTHON_REFNANNY + void* __pyx_refnanny = NULL; + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + __pyx_refnanny = __Pyx_RefNanny->SetupContext("PyMODINIT_FUNC PyInit_streams(void)", __LINE__, __FILE__); + #endif + __pyx_init_filenames(); + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #if PY_MAJOR_VERSION < 3 + __pyx_empty_bytes = PyString_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #else + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? */ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4(__Pyx_NAMESTR("streams"), __pyx_methods, 0, 0, PYTHON_API_VERSION); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (!__pyx_m) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + #if PY_MAJOR_VERSION < 3 + Py_INCREF(__pyx_m); + #endif + __pyx_b = PyImport_AddModule(__Pyx_NAMESTR(__Pyx_BUILTIN_MODULE_NAME)); + if (!__pyx_b) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + if (__Pyx_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + /*--- Initialize various global constants etc. 
---*/ + if (unlikely(__Pyx_InitGlobals() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_module_is_main_scipy__io__matlab__streams) { + if (__Pyx_SetAttrString(__pyx_m, "__name__", __pyx_n_s____main__) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + } + /*--- Builtin init code ---*/ + if (unlikely(__Pyx_InitCachedBuiltins() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Global init code ---*/ + /*--- Function export code ---*/ + if (__Pyx_ExportFunction("make_stream", (void (*)(void))__pyx_f_5scipy_2io_6matlab_7streams_make_stream, "struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *(PyObject *, int __pyx_skip_dispatch)") < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Type init code ---*/ + __pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream = &__pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream; + #if PY_MAJOR_VERSION >= 3 + __pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.seek = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, long, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_seek; + __pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.tell = (long (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_tell; + __pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.read_into = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, void *, size_t))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_read_into; + __pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.read_string = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, size_t, void **, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_read_string; + #else + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.seek = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_seek; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.tell = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_tell; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.read_into = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_read_into; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_GenericStream.read_string = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_13GenericStream_read_string; + #endif + if (PyType_Ready(&__pyx_type_5scipy_2io_6matlab_7streams_GenericStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 47; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetVtable(__pyx_type_5scipy_2io_6matlab_7streams_GenericStream.tp_dict, __pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 47; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetAttrString(__pyx_m, "GenericStream", (PyObject *)&__pyx_type_5scipy_2io_6matlab_7streams_GenericStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 47; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream = 
&__pyx_type_5scipy_2io_6matlab_7streams_GenericStream; + __pyx_ptype_5scipy_2io_6matlab_7streams_file = __Pyx_ImportType(__Pyx_BUILTIN_MODULE_NAME, "file", sizeof(PyFileObject), 0); if (unlikely(!__pyx_ptype_5scipy_2io_6matlab_7streams_file)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 26; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_vtabptr_5scipy_2io_6matlab_7streams_cStringStream = &__pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream; + __pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream.__pyx_base = *__pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream; + #if PY_MAJOR_VERSION >= 3 + __pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream.__pyx_base.seek = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, long, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_seek; + __pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream.__pyx_base.read_into = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, void *, size_t))__pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_read_into; + __pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream.__pyx_base.read_string = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, size_t, void **, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_read_string; + #else + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream.__pyx_base.seek = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_seek; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream.__pyx_base.read_into = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_read_into; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_cStringStream.__pyx_base.read_string = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_13cStringStream_read_string; + #endif + __pyx_type_5scipy_2io_6matlab_7streams_cStringStream.tp_base = __pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream; + if (PyType_Ready(&__pyx_type_5scipy_2io_6matlab_7streams_cStringStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetVtable(__pyx_type_5scipy_2io_6matlab_7streams_cStringStream.tp_dict, __pyx_vtabptr_5scipy_2io_6matlab_7streams_cStringStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetAttrString(__pyx_m, "cStringStream", (PyObject *)&__pyx_type_5scipy_2io_6matlab_7streams_cStringStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 87; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5scipy_2io_6matlab_7streams_cStringStream = &__pyx_type_5scipy_2io_6matlab_7streams_cStringStream; + __pyx_vtabptr_5scipy_2io_6matlab_7streams_FileStream = &__pyx_vtable_5scipy_2io_6matlab_7streams_FileStream; + __pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base = *__pyx_vtabptr_5scipy_2io_6matlab_7streams_GenericStream; + #if PY_MAJOR_VERSION >= 3 + __pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.seek = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, long, int __pyx_skip_dispatch, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_seek *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_seek; + 
__pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.tell = (long (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, int __pyx_skip_dispatch))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_tell; + __pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.read_into = (int (*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, void *, size_t))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_read_into; + __pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.read_string = (PyObject *(*)(struct __pyx_obj_5scipy_2io_6matlab_7streams_GenericStream *, size_t, void **, struct __pyx_opt_args_5scipy_2io_6matlab_7streams_13GenericStream_read_string *__pyx_optional_args))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_read_string; + #else + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.seek = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_seek; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.tell = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_tell; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.read_into = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_read_into; + *(void(**)(void))&__pyx_vtable_5scipy_2io_6matlab_7streams_FileStream.__pyx_base.read_string = (void(*)(void))__pyx_f_5scipy_2io_6matlab_7streams_10FileStream_read_string; + #endif + __pyx_type_5scipy_2io_6matlab_7streams_FileStream.tp_base = __pyx_ptype_5scipy_2io_6matlab_7streams_GenericStream; + if (PyType_Ready(&__pyx_type_5scipy_2io_6matlab_7streams_FileStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 125; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetVtable(__pyx_type_5scipy_2io_6matlab_7streams_FileStream.tp_dict, __pyx_vtabptr_5scipy_2io_6matlab_7streams_FileStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 125; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__Pyx_SetAttrString(__pyx_m, "FileStream", (PyObject *)&__pyx_type_5scipy_2io_6matlab_7streams_FileStream) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 125; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_ptype_5scipy_2io_6matlab_7streams_FileStream = &__pyx_type_5scipy_2io_6matlab_7streams_FileStream; + /*--- Type import code ---*/ + /*--- Function import code ---*/ + /*--- Execution code ---*/ + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pyx":44 + * + * # initialize cStringIO + * PycString_IMPORT # <<<<<<<<<<<<<< + * + * + */ + PycString_IMPORT; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/io/matlab/streams.pxd":1 + * # -*- python -*- or rather like # <<<<<<<<<<<<<< + * + * cdef class GenericStream: + */ + goto __pyx_L0; + __pyx_L1_error:; + if (__pyx_m) { + __Pyx_AddTraceback("init scipy.io.matlab.streams"); + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init scipy.io.matlab.streams"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +static const char *__pyx_filenames[] = { + "streams.pyx", + "pyalloc.pxd", +}; + +/* Runtime support code */ + +static void __pyx_init_filenames(void) { + __pyx_f = __pyx_filenames; +} + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword argument '%U'", func_name, kw_name); + #else + 
"%s() got multiple values for keyword argument '%s'", func_name, + PyString_AS_STRING(kw_name)); + #endif +} + +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *number, *more_or_less; + + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + number = (num_expected == 1) ? "" : "s"; + PyErr_Format(PyExc_TypeError, + #if PY_VERSION_HEX < 0x02050000 + "%s() takes %s %d positional argument%s (%d given)", + #else + "%s() takes %s %zd positional argument%s (%zd given)", + #endif + func_name, more_or_less, num_expected, number, num_found); +} + +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + } else { + #if PY_MAJOR_VERSION < 3 + if (unlikely(!PyString_CheckExact(key)) && unlikely(!PyString_Check(key))) { + #else + if (unlikely(!PyUnicode_CheckExact(key)) && unlikely(!PyUnicode_Check(key))) { + #endif + goto invalid_keyword_type; + } else { + for (name = first_kw_arg; *name; name++) { + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) break; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) break; + #endif + } + if (*name) { + values[name-argnames] = value; + } else { + /* unexpected keyword found */ + for (name=argnames; name != first_kw_arg; name++) { + if (**name == key) goto arg_passed_twice; + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) goto arg_passed_twice; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) goto arg_passed_twice; + #endif + } + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + } + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, **name); + goto bad; +invalid_keyword_type: + PyErr_Format(PyExc_TypeError, + "%s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%s() got an unexpected keyword argument '%s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +static int __Pyx_ArgTypeTest(PyObject *obj, PyTypeObject *type, int none_allowed, + const char *name, int exact) +{ + if (!type) { + PyErr_Format(PyExc_SystemError, "Missing type object"); + return 0; + } + if (none_allowed && obj == Py_None) return 1; + else if (exact) { + if (Py_TYPE(obj) == type) return 1; + } + else { + if (PyObject_TypeCheck(obj, type)) return 1; + } + PyErr_Format(PyExc_TypeError, + "Argument '%s' has incorrect type (expected %s, got %s)", + name, type->tp_name, Py_TYPE(obj)->tp_name); + return 0; +} + +static PyObject *__Pyx_GetName(PyObject *dict, 
PyObject *name) { + PyObject *result; + result = PyObject_GetAttr(dict, name); + if (!result) + PyErr_SetObject(PyExc_NameError, name); + return result; +} + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyThreadState *tstate = PyThreadState_GET(); + + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} + +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb) { + PyThreadState *tstate = PyThreadState_GET(); + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} + + +#if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + Py_XINCREF(type); + Py_XINCREF(value); + Py_XINCREF(tb); + /* First, check the traceback argument, replacing None with NULL. */ + if (tb == Py_None) { + Py_DECREF(tb); + tb = 0; + } + else if (tb != NULL && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + /* Next, replace a missing value with None */ + if (value == NULL) { + value = Py_None; + Py_INCREF(value); + } + #if PY_VERSION_HEX < 0x02050000 + if (!PyClass_Check(type)) + #else + if (!PyType_Check(type)) + #endif + { + /* Raising an instance. The value should be a dummy. */ + if (value != Py_None) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + /* Normalize to raise , */ + Py_DECREF(value); + value = type; + #if PY_VERSION_HEX < 0x02050000 + if (PyInstance_Check(type)) { + type = (PyObject*) ((PyInstanceObject*)type)->in_class; + Py_INCREF(type); + } + else { + type = 0; + PyErr_SetString(PyExc_TypeError, + "raise: exception must be an old-style class or instance"); + goto raise_error; + } + #else + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + #endif + } + + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} + +#else /* Python 3+ */ + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = type; + type = (PyObject*) Py_TYPE(value); + } else if (!PyExceptionClass_Check(type)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } + + PyErr_SetObject(type, value); + + if (tb) { + PyThreadState *tstate = PyThreadState_GET(); + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } + } + +bad: + return; +} 
+#endif + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject* x) { + const unsigned char neg_one = (unsigned char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned char" : + "value too large to convert to unsigned char"); + } + return (unsigned char)-1; + } + return (unsigned char)val; + } + return (unsigned char)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject* x) { + const unsigned short neg_one = (unsigned short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned short" : + "value too large to convert to unsigned short"); + } + return (unsigned short)-1; + } + return (unsigned short)val; + } + return (unsigned short)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject* x) { + const unsigned int neg_one = (unsigned int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned int" : + "value too large to convert to unsigned int"); + } + return (unsigned int)-1; + } + return (unsigned int)val; + } + return (unsigned int)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject* x) { + const char neg_one = (char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to char" : + "value too large to convert to char"); + } + return (char)-1; + } + return (char)val; + } + return (char)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject* x) { + const short neg_one = (short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to short" : + "value too large to convert to short"); + } + return (short)-1; + } + return (short)val; + } + return (short)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject* x) { + const int neg_one = (int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to int" : + "value too large to convert to int"); + } + return (int)-1; + } + return (int)val; + } + return (int)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject* x) { + const signed char neg_one = (signed char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed char" : + "value too large to convert to signed char"); + } + return (signed char)-1; + } + return (signed char)val; + } + return (signed char)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject* x) { + const signed short neg_one = (signed short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed short" : + "value too large to convert to signed short"); + } + return (signed short)-1; + } + return (signed short)val; + } + return (signed short)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject* x) { + const signed int neg_one = (signed int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to signed int" : + "value too large to convert to signed int"); + } + return (signed int)-1; + } + return (signed int)val; + } + return (signed int)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject* x) { + const unsigned long neg_one = (unsigned long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return (unsigned long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + unsigned long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned long)-1; + val = __Pyx_PyInt_AsUnsignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject* x) { + const unsigned PY_LONG_LONG neg_one = (unsigned PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return (unsigned PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + unsigned PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsUnsignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject* x) { + const long neg_one = (long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return (long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (long)-1; + val = __Pyx_PyInt_AsLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject* x) { + const PY_LONG_LONG neg_one = (PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to 
PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return (PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject* x) { + const signed long neg_one = (signed long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return (signed long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + signed long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed long)-1; + val = __Pyx_PyInt_AsSignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject* x) { + const signed PY_LONG_LONG neg_one = (signed PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return (signed PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + signed PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsSignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static int __Pyx_ExportFunction(const char *name, void (*f)(void), const char *sig) { + PyObject *d = 0; + PyObject *cobj = 0; + union { + void (*fp)(void); + void *p; + } tmp; + + d = PyObject_GetAttrString(__pyx_m, (char *)"__pyx_capi__"); + if (!d) { + PyErr_Clear(); + d = PyDict_New(); + if (!d) + goto bad; + Py_INCREF(d); + if (PyModule_AddObject(__pyx_m, (char *)"__pyx_capi__", d) < 0) + goto bad; + } + tmp.fp = f; +#if PY_VERSION_HEX < 0x03010000 + cobj = PyCObject_FromVoidPtrAndDesc(tmp.p, (void *)sig, 0); +#else + cobj = PyCapsule_New(tmp.p, sig, 0); +#endif + if (!cobj) + goto bad; + if (PyDict_SetItemString(d, name, cobj) < 0) + goto bad; + Py_DECREF(cobj); + Py_DECREF(d); + return 0; +bad: + Py_XDECREF(cobj); + Py_XDECREF(d); + return -1; +} + +static int __Pyx_SetVtable(PyObject *dict, void *vtable) { +#if PY_VERSION_HEX < 0x03010000 + PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); +#else + PyObject *ob = PyCapsule_New(vtable, 0, 0); +#endif 
+ if (!ob) + goto bad; + if (PyDict_SetItemString(dict, "__pyx_vtable__", ob) < 0) + goto bad; + Py_DECREF(ob); + return 0; +bad: + Py_XDECREF(ob); + return -1; +} + +#ifndef __PYX_HAVE_RT_ImportType +#define __PYX_HAVE_RT_ImportType +static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, + long size, int strict) +{ + PyObject *py_module = 0; + PyObject *result = 0; + PyObject *py_name = 0; + char warning[200]; + + py_module = __Pyx_ImportModule(module_name); + if (!py_module) + goto bad; + #if PY_MAJOR_VERSION < 3 + py_name = PyString_FromString(class_name); + #else + py_name = PyUnicode_FromString(class_name); + #endif + if (!py_name) + goto bad; + result = PyObject_GetAttr(py_module, py_name); + Py_DECREF(py_name); + py_name = 0; + Py_DECREF(py_module); + py_module = 0; + if (!result) + goto bad; + if (!PyType_Check(result)) { + PyErr_Format(PyExc_TypeError, + "%s.%s is not a type object", + module_name, class_name); + goto bad; + } + if (!strict && ((PyTypeObject *)result)->tp_basicsize > size) { + PyOS_snprintf(warning, sizeof(warning), + "%s.%s size changed, may indicate binary incompatibility", + module_name, class_name); + PyErr_WarnEx(NULL, warning, 0); + } + else if (((PyTypeObject *)result)->tp_basicsize != size) { + PyErr_Format(PyExc_ValueError, + "%s.%s has the wrong size, try recompiling", + module_name, class_name); + goto bad; + } + return (PyTypeObject *)result; +bad: + Py_XDECREF(py_module); + Py_XDECREF(result); + return 0; +} +#endif + +#ifndef __PYX_HAVE_RT_ImportModule +#define __PYX_HAVE_RT_ImportModule +static PyObject *__Pyx_ImportModule(const char *name) { + PyObject *py_name = 0; + PyObject *py_module = 0; + + #if PY_MAJOR_VERSION < 3 + py_name = PyString_FromString(name); + #else + py_name = PyUnicode_FromString(name); + #endif + if (!py_name) + goto bad; + py_module = PyImport_Import(py_name); + Py_DECREF(py_name); + return py_module; +bad: + Py_XDECREF(py_name); + return 0; +} +#endif + +#include "compile.h" +#include "frameobject.h" +#include "traceback.h" + +static void __Pyx_AddTraceback(const char *funcname) { + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + PyObject *py_globals = 0; + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(__pyx_filename); + #else + py_srcfile = PyUnicode_FromString(__pyx_filename); + #endif + if (!py_srcfile) goto bad; + if (__pyx_clineno) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_globals = PyModule_GetDict(__pyx_m); + if (!py_globals) goto bad; + py_code = PyCode_New( + 0, /*int argcount,*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*int kwonlyargcount,*/ + #endif + 0, /*int nlocals,*/ + 0, /*int stacksize,*/ + 0, /*int flags,*/ + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + __pyx_lineno, /*int firstlineno,*/ + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + if 
(!py_code) goto bad; + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + py_globals, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + py_frame->f_lineno = __pyx_lineno; + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else /* Python 3+ has unicode identifiers */ + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +/* Type Conversion Functions */ + +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + if (x == Py_True) return 1; + else if ((x == Py_False) | (x == Py_None)) return 0; + else return PyObject_IsTrue(x); +} + +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x) { + PyNumberMethods *m; + const char *name = NULL; + PyObject *res = NULL; +#if PY_VERSION_HEX < 0x03000000 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if (PyLong_Check(x)) +#endif + return Py_INCREF(x), x; + m = Py_TYPE(x)->tp_as_number; +#if PY_VERSION_HEX < 0x03000000 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } +#else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } +#endif + if (res) { +#if PY_VERSION_HEX < 0x03000000 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%s__ returned non-%s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject* x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} + +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { +#if PY_VERSION_HEX < 0x02050000 + if (ival <= LONG_MAX) + return PyInt_FromLong((long)ival); + else { + unsigned char *bytes = (unsigned char *) &ival; + int one = 1; int little = (int)*(unsigned char*)&one; + return _PyLong_FromByteArray(bytes, sizeof(size_t), little, 0); + } +#else + return PyInt_FromSize_t(ival); +#endif +} + +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject* x) { + unsigned PY_LONG_LONG val = __Pyx_PyInt_AsUnsignedLongLong(x); + if (unlikely(val == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred())) { + return (size_t)-1; + } else if (unlikely(val != (unsigned PY_LONG_LONG)(size_t)val)) { + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to size_t"); + return (size_t)-1; + } + return (size_t)val; +} + + +#endif /* Py_PYTHON_H */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/afunc.m python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/afunc.m 
--- python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/afunc.m 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/afunc.m 2010-07-26 15:48:31.000000000 +0100 @@ -2,11 +2,3 @@ % A function a = c + 1; b = d + 10; -function [a, b] = afunc(c, d) -% A function -a = c + 1; -b = d + 10; -function [a, b] = afunc(c, d) -% A function -a = c + 1; -b = d + 10; Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/data/parabola.mat and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/data/parabola.mat differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/data/single_empty_string.mat and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/data/single_empty_string.mat differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/data/some_functions.mat and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/data/some_functions.mat differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/data/sqr.mat and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/data/sqr.mat differ diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_mio5_utils.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_mio5_utils.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_mio5_utils.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_mio5_utils.py 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,160 @@ +""" Testing + +""" +import cStringIO +import StringIO + +import numpy as np + +from nose.tools import assert_true, assert_false, \ + assert_equal, assert_raises + +from numpy.testing import assert_array_equal, assert_array_almost_equal + +import scipy.io.matlab.byteordercodes as boc +import scipy.io.matlab.streams as streams +import scipy.io.matlab.miobase as miob +import scipy.io.matlab.mio5 as mio5 +import scipy.io.matlab.mio5_utils as m5u + + +def test_byteswap(): + for val in ( + 1, + 0x100, + 0x10000): + a = np.array(val, dtype=np.uint32) + b = a.byteswap() + c = m5u.byteswap_u4(a) + yield assert_equal, b.item(), c + d = m5u.byteswap_u4(c) + yield assert_equal, a.item(), d + + +def _make_tag(base_dt, val, mdtype, sde=False): + ''' Makes a simple matlab tag, full or sde ''' + base_dt = np.dtype(base_dt) + bo = boc.to_numpy_code(base_dt.byteorder) + byte_count = base_dt.itemsize + if not sde: + udt = bo + 'u4' + padding = 8 - (byte_count % 8) + all_dt = [('mdtype', udt), + ('byte_count', udt), + ('val', base_dt)] + if padding: + all_dt.append(('padding', 'u1', padding)) + else: # is sde + udt = bo + 'u2' + padding = 4-byte_count + if bo == '<': # little endian + all_dt = [('mdtype', udt), + ('byte_count', udt), + ('val', base_dt)] + else: # big endian + all_dt = [('byte_count', udt), + ('mdtype', udt), + ('val', base_dt)] + if padding: + all_dt.append(('padding', 'u1', padding)) + tag = np.zeros((1,), dtype=all_dt) + tag['mdtype'] = mdtype + tag['byte_count'] = byte_count + tag['val'] = val + return tag + + +def _write_stream(stream, *strings): + stream.truncate(0) + for s in strings: + stream.write(s) + stream.seek(0) + + +def _make_readerlike(): + class R(object): + pass + r = R() + r.byte_order = boc.native_code + r.dtypes = {} + r.class_dtypes = {} + r.codecs = {} + r.struct_as_record = True + r.uint16_codec = None + r.chars_as_strings = False + r.mat_dtype = False + r.squeeze_me = False + return r + + +def test_read_tag(): + # mainly to test errors + # make reader-like thing + str_io = 
StringIO.StringIO() + r = _make_readerlike() + r.mat_stream = str_io + c_reader = m5u.VarReader5(r) + # This works for StringIO but _not_ cStringIO + yield assert_raises, IOError, c_reader.read_tag + # bad SDE + tag = _make_tag('i4', 1, mio5.miINT32, sde=True) + tag['byte_count'] = 5 + _write_stream(str_io, tag.tostring()) + yield assert_raises, ValueError, c_reader.read_tag + + +def test_read_stream(): + tag = _make_tag('i4', 1, mio5.miINT32, sde=True) + tag_str = tag.tostring() + str_io = cStringIO.StringIO(tag_str) + st = streams.make_stream(str_io) + s = streams._read_into(st, tag.itemsize) + yield assert_equal, s, tag.tostring() + + +def test_read_numeric(): + # make reader-like thing + str_io = cStringIO.StringIO() + r = _make_readerlike() + r.mat_stream = str_io + # check simplest of tags + for base_dt, val, mdtype in ( + ('u2', 30, mio5.miUINT16), + ('i4', 1, mio5.miINT32), + ('i2', -1, mio5.miINT16)): + for byte_code in ('<', '>'): + r.byte_order = byte_code + r.dtypes = miob.convert_dtypes(mio5.mdtypes_template, byte_code) + c_reader = m5u.VarReader5(r) + yield assert_equal, c_reader.little_endian, byte_code == '<' + yield assert_equal, c_reader.is_swapped, byte_code != boc.native_code + for sde_f in (False, True): + dt = np.dtype(base_dt).newbyteorder(byte_code) + a = _make_tag(dt, val, mdtype, sde_f) + a_str = a.tostring() + _write_stream(str_io, a_str) + el = c_reader.read_numeric() + yield assert_equal, el, val + # two sequential reads + _write_stream(str_io, a_str, a_str) + el = c_reader.read_numeric() + yield assert_equal, el, val + el = c_reader.read_numeric() + yield assert_equal, el, val + + +def test_read_numeric_writeable(): + # make reader-like thing + str_io = cStringIO.StringIO() + r = _make_readerlike() + r.mat_stream = str_io + r.byte_order = '<' + r.dtypes = miob.convert_dtypes(mio5.mdtypes_template, '<') + c_reader = m5u.VarReader5(r) + dt = np.dtype('' + rdr.mat_stream.read(4) # presumably byte padding + return read_minimat_vars(rdr) + + +def test_jottings(): + # example + fname = pjoin(test_data_path, 'parabola.mat') + ws_vars = read_workspace_vars(fname) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_mio.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_mio.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_mio.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_mio.py 2010-07-26 15:48:31.000000000 +0100 @@ -4,10 +4,16 @@ Need function load / save / roundtrip tests ''' -from os.path import join, dirname +from os.path import join as pjoin, dirname from glob import glob from StringIO import StringIO from tempfile import mkdtemp +# functools is only available in Python >=2.5 +try: + from functools import partial +except ImportError: + from scipy.io.arff.myfunctools import partial + import warnings import shutil import gzip @@ -24,13 +30,25 @@ from numpy import array import scipy.sparse as SP -from scipy.io.matlab.miobase import matdims -from scipy.io.matlab.mio import loadmat, savemat, find_mat_file, \ - mat_reader_factory +import scipy.io.matlab.byteordercodes as boc +from scipy.io.matlab.miobase import matdims, MatFileReader, \ + MatWriteError +from scipy.io.matlab.mio import find_mat_file, mat_reader_factory, \ + loadmat, savemat from scipy.io.matlab.mio5 import MatlabObject, MatFile5Writer, \ - Mat5NumericWriter + MatFile5Reader, MatlabFunction + +# Use future defaults to silence unwanted test warnings +savemat_future = partial(savemat, oned_as='row') +class 
MatFile5Reader_future(MatFile5Reader): + def __init__(self, *args, **kwargs): + sar = kwargs.get('struct_as_record') + if sar is None: + kwargs['struct_as_record'] = True + super(MatFile5Reader_future, self).__init__(*args, **kwargs) + -test_data_path = join(dirname(__file__), 'data') +test_data_path = pjoin(dirname(__file__), 'data') def mlarr(*args, **kwargs): ''' Convenience function to return matlab-compatible 2D array @@ -175,7 +193,7 @@ 'expected': {'testobject': MO} }) u_str = file( - join(test_data_path, 'japanese_utf8.txt'), + pjoin(test_data_path, 'japanese_utf8.txt'), 'rb').read().decode('utf-8') case_table5.append( {'name': 'unicode', @@ -189,13 +207,8 @@ {'name': 'sparsecomplex', 'expected': {'testsparsecomplex': SP.coo_matrix(B)}, }) -# We cannot read matlab functions for the moment -case_table5.append( - {'name': 'func', - 'expected': {'testfunc': 'Read error: Cannot read matlab functions'}, - }) -case_table5_rt = case_table5[:-1] # not the function read write +case_table5_rt = case_table5[:] # Inline functions can't be concatenated in matlab, so RT only case_table5_rt.append( {'name': 'objectarray', @@ -204,7 +217,7 @@ def types_compatible(var1, var2): ''' Check if types are same or compatible - + 0d numpy scalars are compatible with bare python scalars ''' type1 = type(var1) @@ -229,7 +242,7 @@ return # Check types are as expected assert_true(types_compatible(expected, actual), \ - "Expected type %s, got %s at %s" % + "Expected type %s, got %s at %s" % (type(expected), type(actual), label)) # A field in a record array may not be an ndarray # A scalar from a record array will be type np.void @@ -278,7 +291,7 @@ # Round trip tests def _rt_check_case(name, expected, format): mat_stream = StringIO() - savemat(mat_stream, expected, format=format) + savemat_future(mat_stream, expected, format=format) mat_stream.seek(0) _load_check_case(name, [mat_stream], expected) @@ -288,7 +301,7 @@ for case in case_table4 + case_table5: name = case['name'] expected = case['expected'] - filt = join(test_data_path, 'test%s_*.mat' % name) + filt = pjoin(test_data_path, 'test%s_*.mat' % name) files = glob(filt) assert_true(len(files) > 0, "No files for test %s using filter %s" % (name, filt)) @@ -316,9 +329,9 @@ tmpdir = mkdtemp() try: - fname = join(tmpdir,name) + fname = pjoin(tmpdir,name) mat_stream = gzip.open( fname,mode='wb') - savemat(mat_stream, expected, format=format) + savemat_future(mat_stream, expected, format=format) mat_stream.close() mat_stream = gzip.open( fname,mode='rb') @@ -334,7 +347,7 @@ def test_mat73(): # Check any hdf5 files raise an error filenames = glob( - join(test_data_path, 'testhdf5*.mat')) + pjoin(test_data_path, 'testhdf5*.mat')) assert_true(len(filenames)>0) for filename in filenames: assert_raises(NotImplementedError, @@ -344,26 +357,20 @@ def test_warnings(): - fname = join(test_data_path, 'testdouble_7.1_GLNX86.mat') + fname = pjoin(test_data_path, 'testdouble_7.1_GLNX86.mat') warnings.simplefilter('error') # This should not generate a warning mres = loadmat(fname, struct_as_record=True) # This neither mres = loadmat(fname, struct_as_record=False) - # This should - yield assert_raises, FutureWarning, loadmat, fname - # This too - yield assert_raises, FutureWarning, find_mat_file, fname - # we need kwargs for this one - yield (lambda a, k: assert_raises(*a, **k), - (DeprecationWarning, loadmat, fname), - {'struct_as_record':True, 'basename':'raw'}) + # This should - because of deprecated system path search + yield assert_raises, DeprecationWarning, 
find_mat_file, fname warnings.resetwarnings() def test_regression_653(): """Regression test for #653.""" - assert_raises(TypeError, savemat, StringIO(), {'d':{1:2}}, format='5') + assert_raises(TypeError, savemat_future, StringIO(), {'d':{1:2}}, format='5') def test_structname_len(): @@ -372,17 +379,17 @@ fldname = 'a' * lim st1 = np.zeros((1,1), dtype=[(fldname, object)]) mat_stream = StringIO() - savemat(StringIO(), {'longstruct': st1}, format='5') + savemat_future(StringIO(), {'longstruct': st1}, format='5') fldname = 'a' * (lim+1) st1 = np.zeros((1,1), dtype=[(fldname, object)]) - assert_raises(ValueError, savemat, StringIO(), + assert_raises(ValueError, savemat_future, StringIO(), {'longstruct': st1}, format='5') def test_4_and_long_field_names_incompatible(): # Long field names option not supported in 4 my_struct = np.zeros((1,1),dtype=[('my_fieldname',object)]) - assert_raises(ValueError, savemat, StringIO(), + assert_raises(ValueError, savemat_future, StringIO(), {'my_struct':my_struct}, format='4', long_field_names=True) @@ -392,10 +399,10 @@ fldname = 'a' * lim st1 = np.zeros((1,1), dtype=[(fldname, object)]) mat_stream = StringIO() - savemat(StringIO(), {'longstruct': st1}, format='5',long_field_names=True) + savemat_future(StringIO(), {'longstruct': st1}, format='5',long_field_names=True) fldname = 'a' * (lim+1) st1 = np.zeros((1,1), dtype=[(fldname, object)]) - assert_raises(ValueError, savemat, StringIO(), + assert_raises(ValueError, savemat_future, StringIO(), {'longstruct': st1}, format='5',long_field_names=True) @@ -409,11 +416,11 @@ cell[0,0]=st1 cell[0,1]=st1 mat_stream = StringIO() - savemat(StringIO(), {'longstruct': cell}, format='5',long_field_names=True) + savemat_future(StringIO(), {'longstruct': cell}, format='5',long_field_names=True) # # Check to make sure it fails with long field names off # - assert_raises(ValueError, savemat, StringIO(), + assert_raises(ValueError, savemat_future, StringIO(), {'longstruct': cell}, format='5', long_field_names=False) @@ -425,17 +432,17 @@ cells[0,0]='Hello' cells[0,1]='World' mat_stream = StringIO() - savemat(StringIO(), {'x': cells}, format='5') + savemat_future(StringIO(), {'x': cells}, format='5') cells = np.ndarray((1,1),dtype=object) cells[0,0]='Hello, world' mat_stream = StringIO() - savemat(StringIO(), {'x': cells}, format='5') + savemat_future(StringIO(), {'x': cells}, format='5') def test_writer_properties(): # Tests getting, setting of properties of matrix writer - mfw = MatFile5Writer(StringIO()) + mfw = MatFile5Writer(StringIO(), oned_as='row') yield assert_equal, mfw.global_vars, [] mfw.global_vars = ['avar'] yield assert_equal, mfw.global_vars, ['avar'] @@ -450,16 +457,18 @@ def test_use_small_element(): # Test whether we're using small data element or not sio = StringIO() + wtr = MatFile5Writer(sio, oned_as='column') # First check size for no sde for name - writer = Mat5NumericWriter(sio, np.zeros(10), 'aaaaa').write() + arr = np.zeros(10) + wtr.put_variables({'aaaaa': arr}) w_sz = sio.len # Check small name results in largish difference in size sio.truncate(0) - writer = Mat5NumericWriter(sio, np.zeros(10), 'aaaa').write() + wtr.put_variables({'aaaa': arr}) yield assert_true, w_sz - sio.len > 4 # Whereas increasing name size makes less difference sio.truncate(0) - writer = Mat5NumericWriter(sio, np.zeros(10), 'aaaaaa').write() + wtr.put_variables({'aaaaaa': arr}) yield assert_true, sio.len - w_sz < 4 @@ -467,7 +476,7 @@ # Test that dict can be saved (as recarray), loaded as matstruct d = {'a':1, 'b':2} stream = 
StringIO() - savemat(stream, {'dict':d}) + savemat_future(stream, {'dict':d}) stream.seek(0) vals = loadmat(stream) @@ -476,11 +485,12 @@ # Current 5 behavior is 1D -> column vector arr = np.arange(5) stream = StringIO() + # silence warnings for tests + warnings.simplefilter('ignore') savemat(stream, {'oned':arr}, format='5') vals = loadmat(stream) yield assert_equal, vals['oned'].shape, (5,1) # Current 4 behavior is 1D -> row vector - arr = np.arange(5) stream = StringIO() savemat(stream, {'oned':arr}, format='4') vals = loadmat(stream) @@ -488,30 +498,31 @@ for format in ('4', '5'): # can be explicitly 'column' for oned_as stream = StringIO() - savemat(stream, {'oned':arr}, + savemat(stream, {'oned':arr}, format=format, oned_as='column') vals = loadmat(stream) yield assert_equal, vals['oned'].shape, (5,1) # but different from 'row' stream = StringIO() - savemat(stream, {'oned':arr}, + savemat(stream, {'oned':arr}, format=format, oned_as='row') vals = loadmat(stream) yield assert_equal, vals['oned'].shape, (1,5) - + warnings.resetwarnings() + def test_compression(): arr = np.zeros(100).reshape((5,20)) arr[2,10] = 1 stream = StringIO() - savemat(stream, {'arr':arr}) + savemat_future(stream, {'arr':arr}) raw_len = len(stream.getvalue()) vals = loadmat(stream) yield assert_array_equal, vals['arr'], arr stream = StringIO() - savemat(stream, {'arr':arr}, do_compression=True) + savemat_future(stream, {'arr':arr}, do_compression=True) compressed_len = len(stream.getvalue()) vals = loadmat(stream) yield assert_array_equal, vals['arr'], arr @@ -520,18 +531,19 @@ arr2 = arr.copy() arr2[0,0] = 1 stream = StringIO() - savemat(stream, {'arr':arr, 'arr2':arr2}, do_compression=False) + savemat_future(stream, {'arr':arr, 'arr2':arr2}, do_compression=False) vals = loadmat(stream) yield assert_array_equal, vals['arr2'], arr2 stream = StringIO() - savemat(stream, {'arr':arr, 'arr2':arr2}, do_compression=True) + savemat_future(stream, {'arr':arr, 'arr2':arr2}, do_compression=True) vals = loadmat(stream) yield assert_array_equal, vals['arr2'], arr2 - + def test_single_object(): stream = StringIO() - savemat(stream, {'A':np.array(1, dtype=object)}) + savemat_future(stream, {'A':np.array(1, dtype=object)}) + def test_skip_variable(): # Test skipping over the first of two variables in a MAT file @@ -544,7 +556,7 @@ # The problem arises when the chunk is large: this file has # a 256x256 array of random (uncompressible) doubles. 
# - filename = join(test_data_path,'test_skip_variable.mat') + filename = pjoin(test_data_path,'test_skip_variable.mat') # # Prove that it loads with loadmat # @@ -564,7 +576,7 @@ def test_empty_struct(): # ticket 885 - filename = join(test_data_path,'test_empty_struct.mat') + filename = pjoin(test_data_path,'test_empty_struct.mat') # before ticket fix, this would crash with ValueError, empty data # type d = loadmat(filename, struct_as_record=True) @@ -575,7 +587,7 @@ stream = StringIO() arr = np.array((), dtype='U') # before ticket fix, this used to give data type not understood - savemat(stream, {'arr':arr}) + savemat_future(stream, {'arr':arr}) d = loadmat(stream) a2 = d['arr'] yield assert_array_equal, a2, arr @@ -591,7 +603,7 @@ arr[1]['f1'] = 99 arr[1]['f2'] = 'not perl' stream = StringIO() - savemat(stream, {'arr': arr}) + savemat_future(stream, {'arr': arr}) d = loadmat(stream, struct_as_record=False) a20 = d['arr'][0,0] yield assert_equal, a20.f1, 0.5 @@ -606,7 +618,7 @@ a21 = d['arr'].flat[1] yield assert_equal, a21['f1'], 99 yield assert_equal, a21['f2'], 'not perl' - + def test_save_object(): class C(object): pass @@ -614,7 +626,7 @@ c.field1 = 1 c.field2 = 'a string' stream = StringIO() - savemat(stream, {'c': c}) + savemat_future(stream, {'c': c}) d = loadmat(stream, struct_as_record=False) c2 = d['c'][0,0] yield assert_equal, c2.field1, 1 @@ -623,3 +635,147 @@ c2 = d['c'][0,0] yield assert_equal, c2['field1'], 1 yield assert_equal, c2['field2'], 'a string' + + +def test_read_opts(): + # tests if read is seeing option sets, at initialization and after + # initialization + arr = np.arange(6).reshape(1,6) + stream = StringIO() + savemat_future(stream, {'a': arr}) + rdr = MatFile5Reader_future(stream) + back_dict = rdr.get_variables() + rarr = back_dict['a'] + yield assert_array_equal, rarr, arr + rdr = MatFile5Reader_future(stream, squeeze_me=True) + yield assert_array_equal, rdr.get_variables()['a'], arr.reshape((6,)) + rdr.squeeze_me = False + yield assert_array_equal, rarr, arr + rdr = MatFile5Reader_future(stream, byte_order=boc.native_code) + yield assert_array_equal, rdr.get_variables()['a'], arr + # inverted byte code leads to error on read because of swapped + # header etc + rdr = MatFile5Reader_future(stream, byte_order=boc.swapped_code) + yield assert_raises, Exception, rdr.get_variables + rdr.byte_order = boc.native_code + yield assert_array_equal, rdr.get_variables()['a'], arr + arr = np.array(['a string']) + stream.truncate(0) + savemat_future(stream, {'a': arr}) + rdr = MatFile5Reader_future(stream) + yield assert_array_equal, rdr.get_variables()['a'], arr + rdr = MatFile5Reader_future(stream, chars_as_strings=False) + carr = np.atleast_2d(np.array(list(arr.item()), dtype='U1')) + yield assert_array_equal, rdr.get_variables()['a'], carr + rdr.chars_as_strings=True + yield assert_array_equal, rdr.get_variables()['a'], arr + + +def test_empty_string(): + # make sure reading empty string does not raise error + estring_fname = pjoin(test_data_path, 'single_empty_string.mat') + rdr = MatFile5Reader_future(file(estring_fname, 'rb')) + d = rdr.get_variables() + yield assert_array_equal, d['a'], np.array([], dtype='U1') + # empty string round trip. Matlab cannot distiguish + # between a string array that is empty, and a string array + # containing a single empty string, because it stores strings as + # arrays of char. There is no way of having an array of char that + # is not empty, but contains an empty string. 
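A minimal round-trip sketch of the ambiguity described in the comment above, using the same scipy.io.matlab API these tests exercise (Python 2, StringIO streams, oned_as='row' as in savemat_future). Both inputs are expected to come back as the same empty char array, matching the assertions in the test that continues below; this is illustrative only, not part of the patch.

    import numpy as np
    from StringIO import StringIO
    from numpy.testing import assert_array_equal
    from scipy.io.matlab.mio import savemat, loadmat

    for a in (np.array(['']),              # array holding one empty string
              np.array([], dtype='U1')):   # genuinely empty string array
        stream = StringIO()
        savemat(stream, {'a': a}, oned_as='row')
        stream.seek(0)
        # MATLAB stores strings as char arrays, so both variants load back
        # as the same empty char array
        assert_array_equal(loadmat(stream)['a'], np.array([], dtype='U1'))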
+ stream = StringIO() + savemat_future(stream, {'a': np.array([''])}) + rdr = MatFile5Reader_future(stream) + d = rdr.get_variables() + yield assert_array_equal, d['a'], np.array([], dtype='U1') + stream.truncate(0) + savemat_future(stream, {'a': np.array([], dtype='U1')}) + rdr = MatFile5Reader_future(stream) + d = rdr.get_variables() + yield assert_array_equal, d['a'], np.array([], dtype='U1') + + +def test_mat4_3d(): + # test behavior when writing 3D arrays to matlab 4 files + stream = StringIO() + arr = np.arange(24).reshape((2,3,4)) + warnings.simplefilter('error') + yield (assert_raises, DeprecationWarning, savemat_future, + stream, {'a': arr}, True, '4') + warnings.resetwarnings() + # For now, we save a 3D array as 2D + warnings.simplefilter('ignore') + savemat_future(stream, {'a': arr}, format='4') + warnings.resetwarnings() + d = loadmat(stream) + yield assert_array_equal, d['a'], arr.reshape((6,4)) + + +def test_func_read(): + func_eg = pjoin(test_data_path, 'testfunc_7.4_GLNX86.mat') + rdr = MatFile5Reader_future(file(func_eg, 'rb')) + d = rdr.get_variables() + yield assert_true, isinstance(d['testfunc'], MatlabFunction) + stream = StringIO() + wtr = MatFile5Writer(stream, oned_as='row') + yield assert_raises, MatWriteError, wtr.put_variables, d + + +def test_mat_dtype(): + double_eg = pjoin(test_data_path, 'testmatrix_6.1_SOL2.mat') + rdr = MatFile5Reader_future(file(double_eg, 'rb'), mat_dtype=False) + d = rdr.get_variables() + yield assert_equal, d['testmatrix'].dtype.kind, 'u' + rdr = MatFile5Reader_future(file(double_eg, 'rb'), mat_dtype=True) + d = rdr.get_variables() + yield assert_equal, d['testmatrix'].dtype.kind, 'f' + + +def test_sparse_in_struct(): + # reproduces bug found by DC where Cython code was insisting on + # ndarray return type, but getting sparse matrix + st = {'sparsefield': SP.coo_matrix(np.eye(4))} + stream = StringIO() + savemat_future(stream, {'a':st}) + d = loadmat(stream, struct_as_record=True) + yield assert_array_equal, d['a'][0,0]['sparsefield'].todense(), np.eye(4) + + +def test_mat_struct_squeeze(): + stream = StringIO() + in_d = {'st':{'one':1, 'two':2}} + savemat_future(stream, in_d) + # no error without squeeze + out_d = loadmat(stream, struct_as_record=False) + # previous error was with squeeze, with mat_struct + out_d = loadmat(stream, + struct_as_record=False, + squeeze_me=True, + ) + + +def test_str_round(): + # from report by Angus McMorland on mailing list 3 May 2010 + stream = StringIO() + in_arr = np.array(['Hello', 'Foob']) + out_arr = np.array(['Hello', 'Foob ']) + savemat_future(stream, dict(a=in_arr)) + res = loadmat(stream) + # resulted in [u'HloolFoa', u'elWrdobr'] + yield assert_array_equal, res['a'], out_arr + stream.truncate(0) + # Make Fortran ordered version of string + in_str = in_arr.tostring(order='F') + in_from_str = np.ndarray(shape=a.shape, + dtype=in_arr.dtype, + order='F', + buffer=in_str) + savemat_future(stream, dict(a=in_from_str)) + yield assert_array_equal, res['a'], out_arr + # unicode save did lead to buffer too small error + stream.truncate(0) + in_arr_u = in_arr.astype('U') + out_arr_u = out_arr.astype('U') + savemat_future(stream, {'a': in_arr_u}) + res = loadmat(stream) + yield assert_array_equal, res['a'], out_arr_u + diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_mio_utils.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_mio_utils.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_mio_utils.py 1970-01-01 01:00:00.000000000 +0100 +++ 
python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_mio_utils.py 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,56 @@ +""" Testing + +""" + +import numpy as np + +from nose.tools import assert_true, assert_false, \ + assert_equal, assert_raises + +from numpy.testing import assert_array_equal, assert_array_almost_equal + +from scipy.io.matlab.mio_utils import cproduct, squeeze_element, \ + chars_to_strings + + +def test_cproduct(): + yield assert_equal, cproduct(()), 1 + yield assert_equal, cproduct((1,)), 1 + yield assert_equal, cproduct((1,3)), 3 + yield assert_equal, cproduct([1,3]), 3 + + +def test_squeeze_element(): + a = np.zeros((1,3)) + yield (assert_array_equal, + np.squeeze(a), + squeeze_element(a)) + # 0d output from squeeze gives scalar + sq_int = squeeze_element(np.zeros((1,1), dtype=np.float)) + yield assert_true, isinstance(sq_int, float) + # Unless it's a structured array + sq_sa = squeeze_element(np.zeros((1,1),dtype=[('f1', 'f')])) + yield assert_true, isinstance(sq_sa, np.ndarray) + + +def test_chars_strings(): + # chars as strings + strings = ['learn ', 'python', 'fast ', 'here '] + str_arr = np.array(strings, dtype='U6') # shape (4,) + chars = [list(s) for s in strings] + char_arr = np.array(chars, dtype='U1') # shape (4,6) + yield assert_array_equal, chars_to_strings(char_arr), str_arr + ca2d = char_arr.reshape((2,2,6)) + sa2d = str_arr.reshape((2,2)) + yield assert_array_equal, chars_to_strings(ca2d), sa2d + ca3d = char_arr.reshape((1,2,2,6)) + sa3d = str_arr.reshape((1,2,2)) + yield assert_array_equal, chars_to_strings(ca3d), sa3d + # Fortran ordered arrays + char_arrf = np.array(chars, dtype='U1', order='F') # shape (4,6) + yield assert_array_equal, chars_to_strings(char_arrf), str_arr + # empty array + arr = np.array([['']], dtype='U1') + out_arr = np.array([''], dtype='U1') + yield assert_array_equal, chars_to_strings(arr), out_arr + diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_streams.py python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_streams.py --- python-scipy-0.7.2+dfsg1/scipy/io/matlab/tests/test_streams.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/matlab/tests/test_streams.py 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,90 @@ +""" Testing + +""" + +import os + +import StringIO +import cStringIO +from tempfile import mkstemp + +import numpy as np + +from nose.tools import assert_true, assert_false, \ + assert_equal, assert_raises + +from numpy.testing import assert_array_equal, assert_array_almost_equal + +from scipy.io.matlab.streams import make_stream, \ + GenericStream, cStringStream, FileStream, \ + _read_into, _read_string + + +def setup(): + val = 'a\x00string' + global fs, gs, cs, fname + fd, fname = mkstemp() + fs = os.fdopen(fd, 'wb') + fs.write(val) + fs.close() + fs = open(fname) + gs = StringIO.StringIO(val) + cs = cStringIO.StringIO(val) + + +def teardown(): + global fname, fs + del fs + os.unlink(fname) + + +def test_make_stream(): + global fs, gs, cs + # test stream initialization + yield assert_true, isinstance(make_stream(gs), GenericStream) + yield assert_true, isinstance(make_stream(cs), cStringStream) + yield assert_true, isinstance(make_stream(fs), FileStream) + + +def test_tell_seek(): + global fs, gs, cs + for s in (fs, gs, cs): + st = make_stream(s) + res = st.seek(0) + yield assert_equal, res, 0 + yield assert_equal, st.tell(), 0 + res = st.seek(5) + yield assert_equal, res, 0 + yield assert_equal, st.tell(), 5 + res = st.seek(2, 1) + yield assert_equal, res, 0 + yield 
assert_equal, st.tell(), 7 + res = st.seek(-2, 2) + yield assert_equal, res, 0 + yield assert_equal, st.tell(), 6 + + +def test_read(): + global fs, gs, cs + for s in (fs, gs, cs): + st = make_stream(s) + st.seek(0) + res = st.read(-1) + yield assert_equal, res, 'a\x00string' + st.seek(0) + res = st.read(4) + yield assert_equal, res, 'a\x00st' + # read into + st.seek(0) + res = _read_into(st, 4) + yield assert_equal, res, 'a\x00st' + res = _read_into(st, 4) + yield assert_equal, res, 'ring' + yield assert_raises, IOError, _read_into, st, 2 + # read alloc + st.seek(0) + res = _read_string(st, 4) + yield assert_equal, res, 'a\x00st' + res = _read_string(st, 4) + yield assert_equal, res, 'ring' + yield assert_raises, IOError, _read_string, st, 2 diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/mmio.py python-scipy-0.8.0+dfsg1/scipy/io/mmio.py --- python-scipy-0.7.2+dfsg1/scipy/io/mmio.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/mmio.py 2010-07-26 15:48:31.000000000 +0100 @@ -18,50 +18,74 @@ #------------------------------------------------------------------------------- def mminfo(source): - """ Queries the contents of the Matrix Market file 'filename' to + """ + Queries the contents of the Matrix Market file 'filename' to extract size and storage information. - Inputs: + Parameters + ---------- + + source : file + Matrix Market filename (extension .mtx) or open file object + + Returns + ------- - source - Matrix Market filename (extension .mtx) or open file object + rows,cols : int + Number of matrix rows and columns + entries : int + Number of non-zero entries of a sparse matrix + or rows*cols for a dense matrix - Outputs: + format : {'coordinate', 'array'} + + field : {'real', 'complex', 'pattern', 'integer'} + + symm : {'general', 'symmetric', 'skew-symmetric', 'hermitian'} - rows,cols - number of matrix rows and columns - entries - number of non-zero entries of a sparse matrix - or rows*cols for a dense matrix - format - 'coordinate' | 'array' - field - 'real' | 'complex' | 'pattern' | 'integer' - symm - 'general' | 'symmetric' | 'skew-symmetric' | 'hermitian' """ return MMFile.info(source) #------------------------------------------------------------------------------- def mmread(source): - """ Reads the contents of a Matrix Market file 'filename' into a matrix. - - Inputs: + """ + Reads the contents of a Matrix Market file 'filename' into a matrix. - source - Matrix Market filename (extensions .mtx, .mtz.gz) - or open file object. + Parameters + ---------- - Outputs: + source : file + Matrix Market filename (extensions .mtx, .mtz.gz) + or open file object. + + Returns + ------- + a: + Sparse or full matrix - a - sparse or full matrix """ return MMFile().read(source) #------------------------------------------------------------------------------- def mmwrite(target, a, comment='', field=None, precision=None): - """ Writes the sparse or dense matrix A to a Matrix Market formatted file. + """ + Writes the sparse or dense matrix A to a Matrix Market formatted file. + + Parameters + ---------- + + target : file + Matrix Market filename (extension .mtx) or open file object + a : array like + Sparse or full matrix + comment : str + comments to be prepended to the Matrix Market file + + field : {'real', 'complex', 'pattern', 'integer'}, optional - Inputs: + precision : + Number of digits to display for real or complex values. 
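As a quick illustration of the mminfo/mmread/mmwrite interface documented in the docstrings above, the sketch below writes and re-reads a small sparse matrix; the file name 'eye3.mtx' is only an example, and the exact symmetry reported depends on what mmwrite detects.

    import numpy as np
    import scipy.sparse as sparse
    from scipy.io.mmio import mmwrite, mmread, mminfo

    a = sparse.coo_matrix(np.eye(3))
    # write a real-valued Matrix Market file with a comment header
    mmwrite('eye3.mtx', a, comment='3x3 identity', field='real', precision=6)
    # query size and storage information without loading the data
    rows, cols, entries, fmt, field, symm = mminfo('eye3.mtx')
    # read it back; coordinate-format files come back as a sparse matrix
    b = mmread('eye3.mtx')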
- target - Matrix Market filename (extension .mtx) or open file object - a - sparse or full matrix - comment - comments to be prepended to the Matrix Market file - field - 'real' | 'complex' | 'pattern' | 'integer' - precision - Number of digits to display for real or complex values. """ MMFile().write(target, a, comment, field, precision) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/netcdf.py python-scipy-0.8.0+dfsg1/scipy/io/netcdf.py --- python-scipy-0.7.2+dfsg1/scipy/io/netcdf.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/netcdf.py 2010-07-26 15:48:31.000000000 +0100 @@ -1,27 +1,93 @@ -"""NetCDF file reader. - -This is adapted from Roberto De Almeida's Pupynere PUre PYthon NEtcdf REader. +""" +NetCDF reader/writer module. -classes changed to underscore_separated instead of CamelCase +This module implements the Scientific.IO.NetCDF API to read and create +NetCDF files. The same API is also used in the PyNIO and pynetcdf +modules, allowing these modules to be used interchangebly when working +with NetCDF files. The major advantage of ``scipy.io.netcdf`` over other +modules is that it doesn't require the code to be linked to the NetCDF +libraries as the other modules do. + +The code is based on the `NetCDF file format specification +`_. A +NetCDF file is a self-describing binary format, with a header followed +by data. The header contains metadata describing dimensions, variables +and the position of the data in the file, so access can be done in an +efficient manner without loading unnecessary data into memory. We use +the ``mmap`` module to create Numpy arrays mapped to the data on disk, +for the same purpose. + +The structure of a NetCDF file is as follows: + + C D F + + + +Record data refers to data where the first axis can be expanded at +will. All record variables share a same dimension at the first axis, +and they are stored at the end of the file per record, ie + + A[0], B[0], ..., A[1], B[1], ..., etc, + +so that new data can be appended to the file without changing its original +structure. Non-record data are padded to a 4n bytes boundary. Record data +are also padded, unless there is exactly one record variable in the file, +in which case the padding is dropped. All data is stored in big endian +byte order. + +The Scientific.IO.NetCDF API allows attributes to be added directly to +instances of ``netcdf_file`` and ``netcdf_variable``. To differentiate +between user-set attributes and instance attributes, user-set attributes +are automatically stored in the ``_attributes`` attribute by overloading +``__setattr__``. This is the reason why the code sometimes uses +``obj.__dict__['key'] = value``, instead of simply ``obj.key = value``; +otherwise the key would be inserted into userspace attributes. + +To create a NetCDF file:: + + >>> import time + >>> f = netcdf_file('simple.nc', 'w') + >>> f.history = 'Created for a test' + >>> f.createDimension('time', 10) + >>> time = f.createVariable('time', 'i', ('time',)) + >>> time[:] = range(10) + >>> time.units = 'days since 2008-01-01' + >>> f.close() + +To read the NetCDF file we just created:: + + >>> f = netcdf_file('simple.nc', 'r') + >>> print f.history + Created for a test + >>> time = f.variables['time'] + >>> print time.units + days since 2008-01-01 + >>> print time.shape + (10,) + >>> print time[-1] + 9 + >>> f.close() TODO: - - Add write capability. + * properly implement ``_FillValue``. + * implement Jeff Whitaker's patch for masked variables. + * fix character variables. 
+ * implement PAGESIZE for Python 2.6? """ -#__author__ = "Roberto De Almeida " - - __all__ = ['netcdf_file', 'netcdf_variable'] -import struct -import mmap -from numpy import ndarray, zeros, array +from operator import mul +from mmap import mmap, ACCESS_READ + +import numpy as np +from numpy import fromstring, ndarray, dtype, empty, array, asarray +from numpy import little_endian as LITTLE_ENDIAN -ABSENT = '\x00' * 8 -ZERO = '\x00' * 4 +ABSENT = '\x00\x00\x00\x00\x00\x00\x00\x00' +ZERO = '\x00\x00\x00\x00' NC_BYTE = '\x00\x00\x00\x01' NC_CHAR = '\x00\x00\x00\x02' NC_SHORT = '\x00\x00\x00\x03' @@ -33,254 +99,582 @@ NC_ATTRIBUTE = '\x00\x00\x00\x0c' +TYPEMAP = { NC_BYTE: ('b', 1), + NC_CHAR: ('c', 1), + NC_SHORT: ('h', 2), + NC_INT: ('i', 4), + NC_FLOAT: ('f', 4), + NC_DOUBLE: ('d', 8) } + +REVERSE = { 'b': NC_BYTE, + 'c': NC_CHAR, + 'h': NC_SHORT, + 'i': NC_INT, + 'f': NC_FLOAT, + 'd': NC_DOUBLE, + + # these come from asarray(1).dtype.char and asarray('foo').dtype.char, + # used when getting the types from generic attributes. + 'l': NC_INT, + 'S': NC_CHAR } + + class netcdf_file(object): - """A NetCDF file parser.""" + """ + A ``netcdf_file`` object has two standard attributes: ``dimensions`` and + ``variables``. The values of both are dictionaries, mapping dimension + names to their associated lengths and variable names to variables, + respectively. Application programs should never modify these + dictionaries. + + All other attributes correspond to global attributes defined in the + NetCDF file. Global file attributes are created by assigning to an + attribute of the ``netcdf_file`` object. + + """ + def __init__(self, filename, mode='r', mmap=None, version=1): + ''' Initialize netcdf_file from fileobj (string or file-like) + + Parameters + ---------- + filename : string or file-like + string -> filename + mode : {'r', 'w'}, optional + read-write mode, default is 'r' + mmap : None or bool, optional + Whether to mmap `filename` when reading. Default is True + when `filename` is a file name, False when `filename` is a + file-like object + version : {1, 2}, optional + version of netcdf to read / write, where 1 means *Classic + format* and 2 means *64-bit offset format*. Default is 1. See + http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Which-Format.html#Which-Format + ''' + if hasattr(filename, 'seek'): # file-like + self.fp = filename + self.filename = 'None' + if mmap is None: + mmap = False + elif mmap and not hasattr(filename, 'fileno'): + raise ValueError('Cannot use file object for mmap') + else: # maybe it's a string + self.filename = filename + self.fp = open(self.filename, '%sb' % mode) + if mmap is None: + mmap = True + self.use_mmap = mmap + self.version_byte = version + + if not mode in 'rw': + raise ValueError("Mode must be either 'r' or 'w'.") + self.mode = mode - def __init__(self, file, mode): - mode += 'b' - self._buffer = open(file, mode) - if mode in ['rb', 'r+b']: - self._parse() - elif mode == 'ab': - raise NotImplementedError + self.dimensions = {} + self.variables = {} - def flush(self): - pass + self._dims = [] + self._recs = 0 + self._recsize = 0 - def sync(self): - pass + self._attributes = {} - def close(self): - pass + if mode == 'r': + self._read() - def create_dimension(self, name, length): - pass + def __setattr__(self, attr, value): + # Store user defined attributes in a separate dict, + # so we can save them to file later. 
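As context for the ``__setattr__`` overload that begins here (editorial note, not part of the patch): the module docstring above explains that user-set attributes are captured in ``_attributes`` so they can be written back to the file, while internal state is assigned through ``obj.__dict__``. A stripped-down, standalone illustration of that pattern, with a hypothetical class name:

    # Standalone sketch of the attribute-capture pattern used by
    # netcdf_file/netcdf_variable; the class name is hypothetical.
    class AttrCapture(object):
        def __init__(self):
            # Assign through __dict__ so _attributes itself is not captured.
            self.__dict__['_attributes'] = {}

        def __setattr__(self, attr, value):
            # Every normal assignment is recorded for later serialization...
            self._attributes[attr] = value
            # ...and still stored as an ordinary instance attribute.
            self.__dict__[attr] = value

    obj = AttrCapture()
    obj.history = 'Created for a test'   # user attribute, captured
    obj.__dict__['_recs'] = 0            # internal state, deliberately skipped
    assert obj._attributes == {'history': 'Created for a test'}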
+ try: + self._attributes[attr] = value + except AttributeError: + pass + self.__dict__[attr] = value - def create_variable(self, name, type, dimensions): - pass + def close(self): + if not self.fp.closed: + try: + self.flush() + finally: + self.fp.close() + __del__ = close + + def createDimension(self, name, length): + self.dimensions[name] = length + self._dims.append(name) + + def createVariable(self, name, type, dimensions): + shape = tuple([self.dimensions[dim] for dim in dimensions]) + shape_ = tuple([dim or 0 for dim in shape]) # replace None with 0 for numpy + + if isinstance(type, basestring): type = dtype(type) + typecode, size = type.char, type.itemsize + dtype_ = '>%s' % typecode + if size > 1: dtype_ += str(size) + + data = empty(shape_, dtype=dtype_) + self.variables[name] = netcdf_variable(data, typecode, shape, dimensions) + return self.variables[name] - def read(self, size=-1): - """Alias for reading the file buffer.""" - return self._buffer.read(size) - - def _parse(self): - """Initial parsing of the header.""" - # Check magic bytes. - assert self.read(3) == 'CDF' - - # Read version byte. - byte = self.read(1) - self.version_byte = struct.unpack('>b', byte)[0] - - # Read header info. - self._numrecs() - self._dim_array() - self._gatt_array() - self._var_array() - - def _numrecs(self): - """Read number of records.""" - self._nrecs = self._unpack_int() - - def _dim_array(self): - """Read a dict with dimensions names and sizes.""" - assert self.read(4) in [ZERO, NC_DIMENSION] - count = self._unpack_int() + def flush(self): + if hasattr(self, 'mode') and self.mode is 'w': + self._write() + sync = flush + + def _write(self): + self.fp.write('CDF') + self.fp.write(array(self.version_byte, '>b').tostring()) + + # Write headers and data. + self._write_numrecs() + self._write_dim_array() + self._write_gatt_array() + self._write_var_array() + + def _write_numrecs(self): + # Get highest record count from all record variables. + for var in self.variables.values(): + if var.isrec and len(var.data) > self._recs: + self.__dict__['_recs'] = len(var.data) + self._pack_int(self._recs) + + def _write_dim_array(self): + if self.dimensions: + self.fp.write(NC_DIMENSION) + self._pack_int(len(self.dimensions)) + for name in self._dims: + self._pack_string(name) + length = self.dimensions[name] + self._pack_int(length or 0) # replace None with 0 for record dimension + else: + self.fp.write(ABSENT) - self.dimensions = {} - self._dims = [] - for dim in range(count): - name = self._read_string() - length = self._unpack_int() - if length == 0: length = None # record dimension - self.dimensions[name] = length - self._dims.append(name) # preserve dim order + def _write_gatt_array(self): + self._write_att_array(self._attributes) - def _gatt_array(self): - """Read global attributes.""" - self.attributes = self._att_array() - - # Update __dict__ for compatibility with S.IO.N - self.__dict__.update(self.attributes) - - def _att_array(self): - """Read a dict with attributes.""" - assert self.read(4) in [ZERO, NC_ATTRIBUTE] - count = self._unpack_int() + def _write_att_array(self, attributes): + if attributes: + self.fp.write(NC_ATTRIBUTE) + self._pack_int(len(attributes)) + for name, values in attributes.items(): + self._pack_string(name) + self._write_values(values) + else: + self.fp.write(ABSENT) - # Read attributes. 
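A side note on the new ``createVariable`` above (illustrative only, not part of the patch): because the format stores all data big endian, the dtype string is built from the requested typecode with a '>' prefix plus an item-size suffix. A minimal sketch of that construction, with hypothetical variable names:

    # Sketch of the big-endian dtype construction used by createVariable;
    # the variable names here are hypothetical.
    from numpy import dtype, empty

    requested = dtype('int32')          # what the caller asked for
    typecode, size = requested.char, requested.itemsize
    dtype_ = '>%s' % typecode           # force big-endian byte order
    if size > 1:
        dtype_ += str(size)             # e.g. '>i4'

    data = empty((3, 4), dtype=dtype_)
    assert data.dtype.str == '>i4'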
- attributes = {} - for attribute in range(count): - name = self._read_string() - nc_type = self._unpack_int() - n = self._unpack_int() + def _write_var_array(self): + if self.variables: + self.fp.write(NC_VARIABLE) + self._pack_int(len(self.variables)) + + # Sort variables non-recs first, then recs. We use a DSU + # since some people use pupynere with Python 2.3.x. + deco = [ (v._shape and not v.isrec, k) for (k, v) in self.variables.items() ] + deco.sort() + variables = [ k for (unused, k) in deco ][::-1] + + # Set the metadata for all variables. + for name in variables: + self._write_var_metadata(name) + # Now that we have the metadata, we know the vsize of + # each record variable, so we can calculate recsize. + self.__dict__['_recsize'] = sum([ + var._vsize for var in self.variables.values() + if var.isrec]) + # Set the data for all variables. + for name in variables: + self._write_var_data(name) + else: + self.fp.write(ABSENT) - # Read value for attributes. - attributes[name] = self._read_values(n, nc_type) + def _write_var_metadata(self, name): + var = self.variables[name] - return attributes + self._pack_string(name) + self._pack_int(len(var.dimensions)) + for dimname in var.dimensions: + dimid = self._dims.index(dimname) + self._pack_int(dimid) + + self._write_att_array(var._attributes) + + nc_type = REVERSE[var.typecode()] + self.fp.write(nc_type) + + if not var.isrec: + vsize = var.data.size * var.data.itemsize + vsize += -vsize % 4 + else: # record variable + try: + vsize = var.data[0].size * var.data.itemsize + except IndexError: + vsize = 0 + rec_vars = len([var for var in self.variables.values() + if var.isrec]) + if rec_vars > 1: + vsize += -vsize % 4 + self.variables[name].__dict__['_vsize'] = vsize + self._pack_int(vsize) + + # Pack a bogus begin, and set the real value later. + self.variables[name].__dict__['_begin'] = self.fp.tell() + self._pack_begin(0) + + def _write_var_data(self, name): + var = self.variables[name] + + # Set begin in file header. + the_beguine = self.fp.tell() + self.fp.seek(var._begin) + self._pack_begin(the_beguine) + self.fp.seek(the_beguine) + + # Write data. + if not var.isrec: + self.fp.write(var.data.tostring()) + count = var.data.size * var.data.itemsize + self.fp.write('0' * (var._vsize - count)) + else: # record variable + # Handle rec vars with shape[0] < nrecs. + if self._recs > len(var.data): + shape = (self._recs,) + var.data.shape[1:] + var.data.resize(shape) + + pos0 = pos = self.fp.tell() + for rec in var.data: + # Apparently scalars cannot be converted to big endian. If we + # try to convert a ``=i4`` scalar to, say, '>i4' the dtype + # will remain as ``=i4``. 
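One detail of the metadata writer above is worth spelling out (editorial note, not part of the patch): non-record variables are padded to a four-byte boundary, and the patch computes the pad with ``vsize += -vsize % 4``. A quick self-contained check of that arithmetic:

    # The -n % 4 idiom yields the pad needed to reach the next multiple of
    # four, and 0 when the size is already aligned.
    for vsize in (0, 1, 5, 8, 13):
        padded = vsize + (-vsize % 4)
        assert padded % 4 == 0 and 0 <= padded - vsize < 4
        print vsize, '->', padded        # e.g. 5 -> 8, 8 -> 8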
+ if not rec.shape and (rec.dtype.byteorder == '<' or + (rec.dtype.byteorder == '=' and LITTLE_ENDIAN)): + rec = rec.byteswap() + self.fp.write(rec.tostring()) + # Padding + count = rec.size * rec.itemsize + self.fp.write('0' * (var._vsize - count)) + pos += self._recsize + self.fp.seek(pos) + self.fp.seek(pos0 + var._vsize) + + def _write_values(self, values): + if hasattr(values, 'dtype'): + nc_type = REVERSE[values.dtype.char] + else: + types = [ + (int, NC_INT), + (long, NC_INT), + (float, NC_FLOAT), + (basestring, NC_CHAR), + ] + try: + sample = values[0] + except TypeError: + sample = values + for class_, nc_type in types: + if isinstance(sample, class_): break + + typecode, size = TYPEMAP[nc_type] + if typecode is 'c': + dtype_ = '>c' + else: + dtype_ = '>%s' % typecode + if size > 1: dtype_ += str(size) - def _var_array(self): - """Read all variables.""" - assert self.read(4) in [ZERO, NC_VARIABLE] + values = asarray(values, dtype=dtype_) - # Read size of each record, in bytes. - self._read_recsize() + self.fp.write(nc_type) - # Read variables. - self.variables = {} + if values.dtype.char == 'S': + nelems = values.itemsize + else: + nelems = values.size + self._pack_int(nelems) + + if not values.shape and (values.dtype.byteorder == '<' or + (values.dtype.byteorder == '=' and LITTLE_ENDIAN)): + values = values.byteswap() + self.fp.write(values.tostring()) + count = values.size * values.itemsize + self.fp.write('0' * (-count % 4)) # pad + + def _read(self): + # Check magic bytes and version + magic = self.fp.read(3) + if not magic == 'CDF': + raise TypeError("Error: %s is not a valid NetCDF 3 file" % + self.filename) + self.__dict__['version_byte'] = fromstring(self.fp.read(1), '>b')[0] + + # Read file headers and set data. + self._read_numrecs() + self._read_dim_array() + self._read_gatt_array() + self._read_var_array() + + def _read_numrecs(self): + self.__dict__['_recs'] = self._unpack_int() + + def _read_dim_array(self): + header = self.fp.read(4) + assert header in [ZERO, NC_DIMENSION] count = self._unpack_int() - for variable in range(count): - name = self._read_string() - self.variables[name] = self._read_var() - - def _read_recsize(self): - """Read all variables and compute record bytes.""" - pos = self._buffer.tell() - recsize = 0 + for dim in range(count): + name = self._unpack_string() + length = self._unpack_int() or None # None for record dimension + self.dimensions[name] = length + self._dims.append(name) # preserve order + + def _read_gatt_array(self): + for k, v in self._read_att_array().items(): + self.__setattr__(k, v) + + def _read_att_array(self): + header = self.fp.read(4) + assert header in [ZERO, NC_ATTRIBUTE] count = self._unpack_int() - for variable in range(count): - name = self._read_string() - n = self._unpack_int() - isrec = False - for i in range(n): - dimid = self._unpack_int() - name = self._dims[dimid] - dim = self.dimensions[name] - if dim is None and i == 0: - isrec = True - attributes = self._att_array() - nc_type = self._unpack_int() - vsize = self._unpack_int() - begin = [self._unpack_int, self._unpack_int64][self.version_byte-1]() - if isrec: recsize += vsize + attributes = {} + for attr in range(count): + name = self._unpack_string() + attributes[name] = self._read_values() + return attributes - self._recsize = recsize - self._buffer.seek(pos) + def _read_var_array(self): + header = self.fp.read(4) + assert header in [ZERO, NC_VARIABLE] + + begin = 0 + dtypes = {'names': [], 'formats': []} + rec_vars = [] + count = self._unpack_int() + for var 
in range(count): + (name, dimensions, shape, attributes, + typecode, size, dtype_, begin_, vsize) = self._read_var() + # http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html + # Note that vsize is the product of the dimension lengths + # (omitting the record dimension) and the number of bytes + # per value (determined from the type), increased to the + # next multiple of 4, for each variable. If a record + # variable, this is the amount of space per record. The + # netCDF "record size" is calculated as the sum of the + # vsize's of all the record variables. + # + # The vsize field is actually redundant, because its value + # may be computed from other information in the header. The + # 32-bit vsize field is not large enough to contain the size + # of variables that require more than 2^32 - 4 bytes, so + # 2^32 - 1 is used in the vsize field for such variables. + if shape and shape[0] is None: # record variable + rec_vars.append(name) + # The netCDF "record size" is calculated as the sum of + # the vsize's of all the record variables. + self.__dict__['_recsize'] += vsize + if begin == 0: begin = begin_ + dtypes['names'].append(name) + dtypes['formats'].append(str(shape[1:]) + dtype_) + + # Handle padding with a virtual variable. + if typecode in 'bch': + actual_size = reduce(mul, (1,) + shape[1:]) * size + padding = -actual_size % 4 + if padding: + dtypes['names'].append('_padding_%d' % var) + dtypes['formats'].append('(%d,)>b' % padding) + + # Data will be set later. + data = None + else: # not a record variable + # Calculate size to avoid problems with vsize (above) + a_size = reduce(mul, shape, 1) * size + if self.use_mmap: + mm = mmap(self.fp.fileno(), begin_+a_size, access=ACCESS_READ) + data = ndarray.__new__(ndarray, shape, dtype=dtype_, + buffer=mm, offset=begin_, order=0) + else: + pos = self.fp.tell() + self.fp.seek(begin_) + data = fromstring(self.fp.read(a_size), dtype=dtype_) + data.shape = shape + self.fp.seek(pos) + + # Add variable. + self.variables[name] = netcdf_variable( + data, typecode, shape, dimensions, attributes) + + if rec_vars: + # Remove padding when only one record variable. + if len(rec_vars) == 1: + dtypes['names'] = dtypes['names'][:1] + dtypes['formats'] = dtypes['formats'][:1] + + # Build rec array. + if self.use_mmap: + mm = mmap(self.fp.fileno(), begin+self._recs*self._recsize, access=ACCESS_READ) + rec_array = ndarray.__new__(ndarray, (self._recs,), dtype=dtypes, + buffer=mm, offset=begin, order=0) + else: + pos = self.fp.tell() + self.fp.seek(begin) + rec_array = fromstring(self.fp.read(self._recs*self._recsize), dtype=dtypes) + rec_array.shape = (self._recs,) + self.fp.seek(pos) + + for var in rec_vars: + self.variables[var].__dict__['data'] = rec_array[var] def _read_var(self): + name = self._unpack_string() dimensions = [] shape = [] - n = self._unpack_int() - isrec = False - for i in range(n): + dims = self._unpack_int() + + for i in range(dims): dimid = self._unpack_int() - name = self._dims[dimid] - dimensions.append(name) - dim = self.dimensions[name] - if dim is None and i == 0: - dim = self._nrecs - isrec = True + dimname = self._dims[dimid] + dimensions.append(dimname) + dim = self.dimensions[dimname] shape.append(dim) dimensions = tuple(dimensions) shape = tuple(shape) - attributes = self._att_array() - nc_type = self._unpack_int() + attributes = self._read_att_array() + nc_type = self.fp.read(4) vsize = self._unpack_int() - - # Read offset. 
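To make the mmap-backed read path above concrete (an illustrative sketch, not part of the patch; the file name and layout below are made up): non-record data are exposed as a numpy array built directly over a read-only memory map at the variable's byte offset, so nothing is copied until the values are actually used.

    # Sketch in the spirit of _read_var_array: read big-endian int32 values
    # at a known offset through mmap.  'toy.bin' and its layout are made up.
    import numpy as np
    from mmap import mmap, ACCESS_READ

    # Build a tiny file: a 4-byte header followed by three big-endian int32s.
    payload = np.array([1, 2, 3], dtype='>i4')
    f = open('toy.bin', 'wb')
    f.write('HDR\x00' + payload.tostring())
    f.close()

    fp = open('toy.bin', 'rb')
    begin, a_size = 4, payload.size * payload.itemsize
    mm = mmap(fp.fileno(), begin + a_size, access=ACCESS_READ)
    data = np.ndarray.__new__(np.ndarray, (3,), dtype='>i4',
                              buffer=mm, offset=begin, order='C')
    assert (data == [1, 2, 3]).all()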
begin = [self._unpack_int, self._unpack_int64][self.version_byte-1]() - return netcdf_variable(self._buffer.fileno(), nc_type, vsize, begin, shape, dimensions, attributes, isrec, self._recsize) - - def _read_values(self, n, nc_type): - bytes = [1, 1, 2, 4, 4, 8] - typecodes = ['b', 'c', 'h', 'i', 'f', 'd'] - - count = n * bytes[nc_type-1] - values = self.read(count) - padding = self.read((4 - (count % 4)) % 4) - - typecode = typecodes[nc_type-1] - if nc_type != 2: # not char - values = struct.unpack('>%s' % (typecode * n), values) - values = array(values, dtype=typecode) + typecode, size = TYPEMAP[nc_type] + if typecode is 'c': + dtype_ = '>c' else: - # Remove EOL terminator. - if values.endswith('\x00'): values = values[:-1] + dtype_ = '>%s' % typecode + if size > 1: dtype_ += str(size) + + return name, dimensions, shape, attributes, typecode, size, dtype_, begin, vsize + + def _read_values(self): + nc_type = self.fp.read(4) + n = self._unpack_int() + + typecode, size = TYPEMAP[nc_type] + count = n*size + values = self.fp.read(count) + self.fp.read(-count % 4) # read padding + + if typecode is not 'c': + values = fromstring(values, dtype='>%s%d' % (typecode, size)) + if values.shape == (1,): values = values[0] + else: + values = values.rstrip('\x00') return values + def _pack_begin(self, begin): + if self.version_byte == 1: + self._pack_int(begin) + elif self.version_byte == 2: + self._pack_int64(begin) + + def _pack_int(self, value): + self.fp.write(array(value, '>i').tostring()) + _pack_int32 = _pack_int + def _unpack_int(self): - return struct.unpack('>i', self.read(4))[0] + return fromstring(self.fp.read(4), '>i')[0] _unpack_int32 = _unpack_int + def _pack_int64(self, value): + self.fp.write(array(value, '>q').tostring()) + def _unpack_int64(self): - return struct.unpack('>q', self.read(8))[0] + return fromstring(self.fp.read(8), '>q')[0] - def _read_string(self): - count = struct.unpack('>i', self.read(4))[0] - s = self.read(count) - # Remove EOL terminator. - if s.endswith('\x00'): s = s[:-1] - padding = self.read((4 - (count % 4)) % 4) - return s + def _pack_string(self, s): + count = len(s) + self._pack_int(count) + self.fp.write(s) + self.fp.write('0' * (-count % 4)) # pad - def close(self): - self._buffer.close() + def _unpack_string(self): + count = self._unpack_int() + s = self.fp.read(count).rstrip('\x00') + self.fp.read(-count % 4) # read padding + return s class netcdf_variable(object): - def __init__(self, fileno, nc_type, vsize, begin, shape, dimensions, attributes, isrec=False, recsize=0): - self._nc_type = nc_type - self._vsize = vsize - self._begin = begin - self.shape = shape + """ + ``netcdf_variable`` objects are constructed by calling the method + ``createVariable`` on the netcdf_file object. + + ``netcdf_variable`` objects behave much like array objects defined in + Numpy, except that their data resides in a file. Data is read by + indexing and written by assigning to an indexed subset; the entire + array can be accessed by the index ``[:]`` or using the methods + ``getValue`` and ``assignValue``. ``netcdf_variable`` objects also + have attribute ``shape`` with the same meaning as for arrays, but + the shape cannot be modified. There is another read-only attribute + ``dimensions``, whose value is the tuple of dimension names. + + All other attributes correspond to variable attributes defined in + the NetCDF file. Variable attributes are created by assigning to an + attribute of the ``netcdf_variable`` object. 
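For readers following the header helpers above, ``_pack_int``/``_unpack_int`` and ``_pack_string``/``_unpack_string`` (illustrative only, not part of the patch): header integers are 4-byte big-endian values and strings carry a length prefix plus filler out to a 4-byte boundary, so a pack/unpack pair round-trips exactly. A small self-contained check, using StringIO as a stand-in for ``self.fp``:

    # Round trip of the 4-byte big-endian integer encoding; StringIO stands
    # in for self.fp and the string 'time' is an arbitrary example.
    from StringIO import StringIO
    from numpy import array, fromstring

    fp = StringIO()
    fp.write(array(1234567, '>i').tostring())            # _pack_int
    fp.seek(0)
    assert fromstring(fp.read(4), '>i')[0] == 1234567    # _unpack_int

    # String packing: length prefix, the bytes, then filler to a multiple
    # of four (the patch writes literal '0' characters as filler).
    name = 'time'
    packed = array(len(name), '>i').tostring() + name + '0' * (-len(name) % 4)
    assert len(packed) % 4 == 0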
+ + """ + def __init__(self, data, typecode, shape, dimensions, attributes=None): + self.data = data + self._typecode = typecode + self._shape = shape self.dimensions = dimensions - self.attributes = attributes # for ``dap.plugins.netcdf`` - self.__dict__.update(attributes) - self._is_record = isrec - - # Number of bytes and type. - self._bytes = [1, 1, 2, 4, 4, 8][self._nc_type-1] - type_ = ['i', 'S', 'i', 'i', 'f', 'f'][self._nc_type-1] - dtype = '>%s%d' % (type_, self._bytes) - bytes = self._begin + self._vsize - - if isrec: - # Record variables are not stored contiguosly on disk, so we - # need to create a separate array for each record. - # - # TEO: This will copy data from the newly-created array - # into the __array_data__ region, thus removing any benefit of using - # a memory-mapped file. You might as well just read the data - # in directly. - self.__array_data__ = zeros(shape, dtype) - bytes += (shape[0] - 1) * recsize - for n in range(shape[0]): - offset = self._begin + (n * recsize) - mm = mmap.mmap(fileno, bytes, access=mmap.ACCESS_READ) - self.__array_data__[n] = ndarray.__new__(ndarray, shape[1:], dtype=dtype, buffer=mm, offset=offset, order=0) - else: - # Create buffer and data. - mm = mmap.mmap(fileno, bytes, access=mmap.ACCESS_READ) - self.__array_data__ = ndarray.__new__(ndarray, shape, dtype=dtype, buffer=mm, offset=self._begin, order=0) - - # N-D array interface - self.__array_interface__ = {'shape' : shape, - 'typestr': dtype, - 'data' : self.__array_data__, - 'version': 3, - } - def __getitem__(self, index): - return self.__array_data__.__getitem__(index) + self._attributes = attributes or {} + for k, v in self._attributes.items(): + self.__dict__[k] = v + + def __setattr__(self, attr, value): + # Store user defined attributes in a separate dict, + # so we can save them to file later. + try: + self._attributes[attr] = value + except AttributeError: + pass + self.__dict__[attr] = value + + def isrec(self): + return self.data.shape and not self._shape[0] + isrec = property(isrec) + + def shape(self): + return self.data.shape + shape = property(shape) def getValue(self): - """For scalars.""" - return self.__array_data__.item() + return self.data.item() def assignValue(self, value): - """For scalars.""" - self.__array_data__.itemset(value) + self.data.itemset(value) def typecode(self): - return ['b', 'c', 'h', 'i', 'f', 'd'][self._nc_type-1] + return self._typecode + + def __getitem__(self, index): + return self.data[index] + + def __setitem__(self, index, data): + # Expand data for record vars? 
+ if self.isrec: + if isinstance(index, tuple): + rec_index = index[0] + else: + rec_index = index + if isinstance(rec_index, slice): + recs = (rec_index.start or 0) + len(data) + else: + recs = rec_index + 1 + if recs > len(self.data): + shape = (recs,) + self._shape[1:] + self.data.resize(shape) + self.data[index] = data -def _test(): - import doctest - doctest.testmod() +NetCDFFile = netcdf_file +NetCDFVariable = netcdf_variable diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/npfile.py python-scipy-0.8.0+dfsg1/scipy/io/npfile.py --- python-scipy-0.7.2+dfsg1/scipy/io/npfile.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/npfile.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,232 +0,0 @@ -# Authors: Matthew Brett, Travis Oliphant - -""" -Class for reading and writing numpy arrays from / to binary files -""" - -import sys - -import numpy as np - -__all__ = ['sys_endian_code', 'npfile'] - -sys_endian_code = (sys.byteorder == 'little') and '<' or '>' - -class npfile(object): - ''' Class for reading and writing numpy arrays to/from files - - Inputs: - file_name -- The complete path name to the file to open - or an open file-like object - permission -- Open the file with given permissions: ('r', 'w', 'a') - for reading, writing, or appending. This is the same - as the mode argument in the builtin open command. - format -- The byte-ordering of the file: - (['native', 'n'], ['ieee-le', 'l'], ['ieee-be', 'B']) for - native, little-endian, or big-endian respectively. - - Attributes: - endian -- default endian code for reading / writing - order -- default order for reading writing ('C' or 'F') - file -- file object containing read / written data - - Methods: - seek, tell, close -- as for file objects - rewind -- set read position to beginning of file - read_raw -- read string data from file (read method of file) - write_raw -- write string data to file (write method of file) - read_array -- read numpy array from binary file data - write_array -- write numpy array contents to binary file - - Example use: - >>> from StringIO import StringIO - >>> import numpy as np - >>> from scipy.io import npfile - >>> arr = np.arange(10).reshape(5,2) - >>> # Make file-like object (could also be file name) - >>> my_file = StringIO() - >>> npf = npfile(my_file) - >>> npf.write_array(arr) - >>> npf.rewind() - >>> npf.read_array((5,2), arr.dtype) - >>> npf.close() - >>> # Or read write in Fortran order, Big endian - >>> # and read back in C, system endian - >>> my_file = StringIO() - >>> npf = npfile(my_file, order='F', endian='>') - >>> npf.write_array(arr) - >>> npf.rewind() - >>> npf.read_array((5,2), arr.dtype) - ''' - - def __init__(self, file_name, - permission='rb', - endian = 'dtype', - order = 'C'): - if 'b' not in permission: permission += 'b' - if isinstance(file_name, basestring): - self.file = file(file_name, permission) - else: - try: - closed = file_name.closed - except AttributeError: - raise TypeError, 'Need filename or file object as input' - if closed: - raise TypeError, 'File object should be open' - self.file = file_name - self.endian = endian - self.order = order - - def get_endian(self): - return self._endian - def set_endian(self, endian_code): - self._endian = self.parse_endian(endian_code) - endian = property(get_endian, set_endian, None, 'get/set endian code') - - def parse_endian(self, endian_code): - ''' Returns valid endian code from wider input options''' - if endian_code in ['native', 'n', 'N','default', '=']: - return sys_endian_code - elif endian_code in 
['swapped', 's', 'S']: - return sys_endian_code == '<' and '>' or '<' - elif endian_code in ['ieee-le','l','L','little-endian', - 'little','le','<']: - return '<' - elif endian_code in ['ieee-be','B','b','big-endian', - 'big','be', '>']: - return '>' - elif endian_code == 'dtype': - return 'dtype' - else: - raise ValueError, "Unrecognized endian code: " + endian_code - return - - def __del__(self): - try: - self.file.close() - except: - pass - - def close(self): - self.file.close() - - def seek(self, *args): - self.file.seek(*args) - - def tell(self): - return self.file.tell() - - def rewind(self,howmany=None): - """Rewind a file to its beginning or by a specified amount. - """ - if howmany is None: - self.seek(0) - else: - self.seek(-howmany,1) - - def read_raw(self, size=-1): - """Read raw bytes from file as string.""" - return self.file.read(size) - - def write_raw(self, str): - """Write string to file as raw bytes.""" - return self.file.write(str) - - def remaining_bytes(self): - cur_pos = self.tell() - self.seek(0, 2) - end_pos = self.tell() - self.seek(cur_pos) - return end_pos - cur_pos - - def _endian_order(self, endian, order): - ''' Housekeeping function to return endian, order from input args ''' - if endian is None: - endian = self.endian - else: - endian = self.parse_endian(endian) - if order is None: - order = self.order - return endian, order - - def _endian_from_dtype(self, dt): - dt_endian = dt.byteorder - if dt_endian == '=': - dt_endian = sys_endian_code - return dt_endian - - def write_array(self, data, endian=None, order=None): - ''' Write to open file object the flattened numpy array data - - Inputs - data - numpy array or object convertable to array - endian - endianness of written data - (can be None, 'dtype', '<', '>') - (if None, get from self.endian) - order - order of array to write (C, F) - (if None from self.order) - ''' - endian, order = self._endian_order(endian, order) - data = np.asarray(data) - dt_endian = self._endian_from_dtype(data.dtype) - if not endian == 'dtype': - if dt_endian != endian: - data = data.byteswap() - self.file.write(data.tostring(order=order)) - - def read_array(self, dt, shape=-1, endian=None, order=None): - '''Read data from file and return it in a numpy array. 
- - Inputs - ------ - dt - dtype of array to be read - shape - shape of output array, or number of elements - (-1 as number of elements or element in shape - means unknown dimension as in reshape; size - of array calculated from remaining bytes in file) - endian - endianness of data in file - (can be None, 'dtype', '<', '>') - (if None, get from self.endian) - order - order of array in file (C, F) - (if None get from self.order) - - Outputs - arr - array from file with given dtype (dt) - ''' - endian, order = self._endian_order(endian, order) - dt = np.dtype(dt) - try: - shape = list(shape) - except TypeError: - shape = [shape] - minus_ones = shape.count(-1) - if minus_ones == 0: - pass - elif minus_ones == 1: - known_dimensions_size = -np.product(shape,axis=0) * dt.itemsize - unknown_dimension_size, illegal = divmod(self.remaining_bytes(), - known_dimensions_size) - if illegal: - raise ValueError("unknown dimension doesn't match filesize") - shape[shape.index(-1)] = unknown_dimension_size - else: - raise ValueError( - "illegal -1 count; can only specify one unknown dimension") - sz = dt.itemsize * np.product(shape) - dt_endian = self._endian_from_dtype(dt) - buf = self.file.read(sz) - arr = np.ndarray(shape=shape, - dtype=dt, - buffer=buf, - order=order) - if (not endian == 'dtype') and (dt_endian != endian): - return arr.byteswap() - return arr.copy() - -npfile = np.deprecate_with_doc(""" -You can achieve the same effect as using npfile, using ndarray.tofile -and numpy.fromfile. - -Even better you can use memory-mapped arrays and data-types to map out a -file format for direct manipulation in NumPy. -""")(npfile) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/numpyiomodule.c python-scipy-0.8.0+dfsg1/scipy/io/numpyiomodule.c --- python-scipy-0.7.2+dfsg1/scipy/io/numpyiomodule.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/numpyiomodule.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,1011 +0,0 @@ -/* numpyio.c -- Version 0.9.9 - * - * Author: Travis E. Oliphant - * Date : March 1999 - * - * This file is a module for python that defines basically two functions for - * reading from and writing to a binary file. It also has some functions - * for byteswapping data and packing and unpacking bits. - * - * The data goes into a NumPy array object (multiarray) - * - * It is basically an implemetation of read and write with the data - * going directly into a NumPy array - * - * Permission is granted to use this program however you see fit, but I give - * no guarantees as to its usefulness or reliability. You assume full - * responsibility for using this program. - * - * Thanks to Michael A. Miller - * whose TableIO packages helped me learn how - * to write an extension package. I've adapted his Makefile as well. 
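The deprecation notice above points users of the removed ``npfile`` class at ``ndarray.tofile`` and ``numpy.fromfile``. A minimal round trip along those lines (illustrative only, not part of the patch; the file name is hypothetical):

    # npfile-style round trip with plain numpy: the explicit '>i4' dtype
    # plays the role of npfile's endian argument.  'roundtrip.dat' is a
    # hypothetical file name.
    import numpy as np

    arr = np.arange(10).reshape(5, 2)
    arr.astype('>i4').tofile('roundtrip.dat')                        # write
    back = np.fromfile('roundtrip.dat', dtype='>i4').reshape(5, 2)   # read
    assert (back == arr).all()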
- */ - -#include "Python.h" /* Python header files */ -#include "numpy/arrayobject.h" -/* #include */ -#include - -void rbo(char *, int, int); -void packbits(char *, int, char *, int, int); -void unpackbits(char *, int, char *, int, int, int); -int is_little_endian(void); - -static PyObject *ErrorObject; /* locally-raised exception */ - -#define PYERR(message) do {PyErr_SetString(PyExc_ValueError, message); goto fail;} while(0) -#define DATA(arr) ((arr)->data) -#define DIMS(arr) ((arr)->dimensions) -#define STRIDES(arr) ((arr)->strides) -#define ELSIZE(arr) ((arr)->descr->elsize) -#define OBJECTTYPE(arr) ((arr)->descr->type_num) -#define BASEOBJ(arr) ((PyArrayObject *)((arr)->base)) -#define RANK(arr) ((arr)->nd) -#define ISCONTIGUOUS(m) ((m)->flags & NPY_CONTIGUOUS) - -#define PYSETERROR(message) \ -{ PyErr_SetString(ErrorObject, message); goto fail; } - -#define INCREMENT(ret_ind, nd, max_ind) \ -{ \ - int k; \ - k = (nd) - 1; \ - if (++(ret_ind)[k] >= (max_ind)[k]) { \ - while (k >= 0 && ((ret_ind)[k] >= (max_ind)[k]-1)) \ - (ret_ind)[k--] = 0; \ - if (k >= 0) (ret_ind)[k]++; \ - else (ret_ind)[0] = (max_ind)[0]; \ - } \ -} - -#define CALCINDEX(indx, nd_index, strides, ndim) \ -{ \ - int i; \ - \ - indx = 0; \ - for (i=0; i < (ndim); i++) \ - indx += nd_index[i]*strides[i]; \ -} - -static PyObject * - numpyio_fromfile(PyObject *self, PyObject *args) /* args: number of bytes and type */ -{ - PyObject *file; - PyArrayObject *arr=NULL; - PyArray_Descr *indescr=NULL; - void *ibuff=NULL; - int myelsize; - int ibuff_cleared = 1; - long n,nread; - char read_type; - FILE *fp; - char dobyteswap = 0; - int swap_factor; - char out_type = 124; /* set to unused value */ - PyArray_VectorUnaryFunc *castfunc; - - if (!PyArg_ParseTuple( args, "Olc|cb" , &file, &n, &read_type, &out_type, &dobyteswap )) - return NULL; - - if (out_type == 124) - out_type = read_type; - - fp = PyFile_AsFile(file); - - if (fp == NULL) { - PYSETERROR("First argument must be an open file"); - } - - if (n <= 0) { - PYSETERROR("Second argument (number of bytes to read) must be positive."); - } - /* Make a 1-D NumPy array of type read_type with n elements */ - - if ((arr = (PyArrayObject *)PyArray_SimpleNew(1,(npy_intp*)&n,out_type)) == NULL) - return NULL; - - if (arr->descr->elsize == 0) { - PYSETERROR("Does not support variable types."); - } - - /* Read the data into the array from the file */ - if (out_type == read_type) { - ibuff = arr -> data; - myelsize = arr -> descr -> elsize; - } - else { /* Alocate a storage buffer for data read in */ - indescr = PyArray_DescrFromType((int ) read_type); - if (indescr == NULL) goto fail; - if (indescr->elsize == 0) { - PYSETERROR("Does not support variable types."); - } - if (PyTypeNum_ISEXTENDED(indescr->type_num)) { - PyErr_SetString(PyExc_ValueError, - "Does not support extended types."); - goto fail; - } - myelsize = indescr -> elsize; - ibuff = malloc(myelsize*n); - castfunc = indescr->f->cast[arr->descr->type_num]; - Py_DECREF(indescr); - indescr=NULL; - if (ibuff == NULL) - PYSETERROR("Could not allocate memory for type casting") - ibuff_cleared = 0; - } - - nread = fread(ibuff,myelsize,n,fp); - if (ferror(fp)) { - clearerr(fp); - PYSETERROR("There was an error reading from the file"); - } - - /* Check to see correct number of bytes were read. If not, then - resize the array to the number of bytes actually read in. 
- */ - - if (nread < n) { - fprintf(stderr,"Warning: %ld bytes requested, %ld bytes read.\n", n, nread); - arr->dimensions[0] = nread; - arr->data = realloc(arr->data,arr->descr->elsize*nread); - } - - if (dobyteswap) { - swap_factor = ((read_type=='F' || read_type=='D') ? 2 : 1); - rbo(ibuff,myelsize/swap_factor,nread*swap_factor); - } - - if (out_type != read_type) { /* We need to type_cast it */ - castfunc(ibuff, arr->data, nread, NULL, NULL); - free(ibuff); - ibuff_cleared = 1; - } - - return PyArray_Return(arr); - - fail: - Py_XDECREF(indescr); - if (!ibuff_cleared) free(ibuff); - Py_XDECREF(arr); - return NULL; - -} - -static int write_buffered_output(FILE *fp, PyArrayObject *arr, PyArray_Descr* outdescr, char *buffer, int buffer_size, int bswap) { - - /* INITIALIZE N-D index */ - - /* Loop over the N-D index filling the buffer with the data in arr - (indexed correctly using strides) - Each time dimension subdim is about to roll - write the buffer to disk and fill it again. */ - - char *buff_ptr, *output_ptr; - int nwrite, *nd_index, indx; - int buffer_size_bytes, elsize; - - buff_ptr = buffer; - nd_index = (int *)calloc(arr->nd,sizeof(int)); - if (NULL == nd_index) { - PyErr_SetString(ErrorObject,"Could not allocate memory for index array."); - return -1; - } - buffer_size_bytes = buffer_size * arr->descr->elsize; - while(nd_index[0] != arr->dimensions[0]) { - CALCINDEX(indx,nd_index,arr->strides,arr->nd); - memcpy(buff_ptr, arr->data+indx, arr->descr->elsize); - buff_ptr += arr->descr->elsize; - INCREMENT(nd_index,arr->nd,arr->dimensions); - if ((buff_ptr - buffer) >= buffer_size_bytes) { - buff_ptr = buffer; - - if (outdescr->type != arr->descr->type) { /* Cast to new type before writing */ - output_ptr = buffer + buffer_size_bytes; - (arr->descr->f->cast[outdescr->type_num])(buffer,output_ptr,buffer_size,NULL, NULL); - elsize = outdescr->elsize; - } - else { - output_ptr = buffer; - elsize = arr->descr->elsize; - } - if (bswap) { - rbo((char *)output_ptr, elsize, buffer_size); - } - - nwrite = fwrite(output_ptr, elsize, buffer_size, fp); - - if (ferror(fp)) { - clearerr(fp); - PyErr_SetString(ErrorObject,"There was an error writing to the file"); - return -1; - } - if (nwrite < buffer_size) { - fprintf(stderr,"Warning: %d of %d specified bytes written.\n",nwrite, buffer_size); - } - } - - } - return 0; -} - -static PyObject * - numpyio_tofile(PyObject *self, PyObject *args) /* args: number of bytes and type */ -{ - PyObject *file; - PyArrayObject *arr = NULL; - PyObject *obj; - PyArray_Descr *outdescr=NULL; - void *obuff = NULL; - long n, k, nwrite, maxN, elsize_bytes; - int myelsize, buffer_size; - FILE *fp; - char *buffer = NULL; - char dobyteswap = 0; - int swap_factor = 1; - char ownalloc = 0; - char write_type = 124; - - if (!PyArg_ParseTuple( args, "OlO|cb" , &file, &n, &obj, &write_type, &dobyteswap)) - return NULL; - - fp = PyFile_AsFile(file); - - if (fp == NULL) { - PYSETERROR("First argument must be an open file"); - } - - if (!PyArray_Check(obj)) { - PYSETERROR("Third argument must be a NumPy array."); - } - - if (PyArray_ISEXTENDED(obj)) { - PYSETERROR("Does not support extended types."); - } - - maxN = PyArray_SIZE((PyArrayObject *)obj); - if (n > maxN) - PYSETERROR("The NumPy array does not have that many elements."); - - if (((PyArrayObject *)obj)->descr->type_num == PyArray_OBJECT) - PYSETERROR("Cannot write an object array."); - - if (!PyArray_ISCONTIGUOUS((PyArrayObject *)obj)) { - arr = (PyArrayObject *)PyArray_CopyFromObject(obj,((PyArrayObject *)obj) -> 
descr -> type_num, 0, 0); - if (NULL == arr) { /* Memory allocation failed - Write out buffered data using strides info */ - arr = (PyArrayObject *)obj; - Py_INCREF(arr); - if (write_type == 124) - write_type = arr -> descr -> type; - - if (write_type != arr -> descr -> type) { - outdescr = PyArray_DescrFromType((int) write_type); - if (outdescr == NULL) goto fail; - elsize_bytes = (outdescr->elsize + arr->descr->elsize); /* allocate space for buffer and casted buffer */ - } - else { - outdescr = arr->descr; - Py_INCREF(outdescr); - elsize_bytes = (arr->descr->elsize); - } - k = 0; - do { - k++; - buffer_size = PyArray_MultiplyList(arr->dimensions + k, arr->nd - k); - buffer = (char *)malloc(elsize_bytes*buffer_size); - } - while ((NULL == buffer) && (k < arr->nd - 1)); - - if (NULL == buffer) /* Still NULL no size was small enough */ - PYSETERROR("Could not allocate memory for any attempted output buffer size."); - - /* Write a buffered output */ - - if (write_buffered_output(fp, (PyArrayObject *)obj, outdescr, buffer, buffer_size, dobyteswap) < 0) { - free(buffer); - goto fail; - } - free(buffer); - Py_DECREF(outdescr); - Py_DECREF(arr); - Py_INCREF(Py_None); - return Py_None; - } - } - else { - arr = (PyArrayObject *)obj; - Py_INCREF(arr); - } - - /* Write the array to file (low-level data transfer) */ - if (n > 0) { - - if (write_type == 124) /* Wasn't specified: use input type */ - write_type = arr -> descr -> type; - - if (write_type == arr -> descr -> type) { /* point output buffer to data */ - obuff = arr -> data; - myelsize = arr -> descr -> elsize; - } - else { - if ((outdescr = PyArray_DescrFromType((int ) write_type)) == NULL) - goto fail; - myelsize = outdescr -> elsize; - obuff = malloc(n*myelsize); - if (obuff == NULL) - PYSETERROR("Could not allocate memory for type-casting"); - ownalloc = 1; - (arr->descr->f->cast[(int)outdescr->type_num])(arr->data,obuff,n,NULL,NULL); - Py_DECREF(outdescr); - outdescr=NULL; - } - /* Write the data from the array to the file */ - if (dobyteswap) { - swap_factor = ((write_type=='F' || write_type=='D') ? 
2 : 1); - rbo((char *)obuff,myelsize/swap_factor,n*swap_factor); - } - - nwrite = fwrite(obuff,myelsize,n,fp); - - if (dobyteswap) { /* Swap data in memory back if allocated obuff */ - if (write_type == arr -> descr -> type) /* otherwise we changed obuff only */ - rbo(arr->data,arr->descr->elsize/swap_factor,PyArray_SIZE(arr)*swap_factor); - } - - if (ferror(fp)) { - clearerr(fp); - PYSETERROR("There was an error writing to the file"); - } - if (nwrite < n) { - fprintf(stderr,"Warning: %ld of %ld specified bytes written.\n",nwrite,n); - } - } - - if (ownalloc == 1) { - free(obuff); - } - - Py_DECREF(arr); - Py_INCREF(Py_None); - return Py_None; - - fail: - Py_XDECREF(outdescr); - if (ownalloc == 1) free(obuff); - Py_XDECREF(arr); - return NULL; - -} - -static PyObject * - numpyio_byteswap(PyObject *self, PyObject *args) /* args: number of bytes and type */ -{ - PyArrayObject *arr = NULL; - PyObject *obj; - int type; - - if (!PyArg_ParseTuple( args, "O" , &obj)) - return NULL; - - type = PyArray_ObjectType(obj,0); - if ((arr = (PyArrayObject *)PyArray_ContiguousFromObject(obj,type,0,0)) == NULL) - return NULL; - - rbo(arr->data,arr->descr->elsize,PyArray_SIZE(arr)); - - return PyArray_Return(arr); -} - -static PyObject * - numpyio_pack(PyObject *self, PyObject *args) /* args: in */ -{ - PyArrayObject *arr = NULL, *out = NULL; - PyObject *obj; - int els_per_slice; - int out_size; - int type; - - if (!PyArg_ParseTuple( args, "O" , &obj)) - return NULL; - - type = PyArray_ObjectType(obj,0); - if ((arr = (PyArrayObject *)PyArray_ContiguousFromObject(obj,type,0,0)) == NULL) - return NULL; - - if (arr->descr->type_num > PyArray_LONG) - PYSETERROR("Expecting an input array of integer type (no floats)."); - - /* Get size information from input array and make a 1-D output array of bytes */ - - els_per_slice = arr->dimensions[arr->nd - 1]; - if (arr->nd > 1) - els_per_slice = els_per_slice * arr->dimensions[arr->nd - 2]; - - out_size = (PyArray_SIZE(arr)/els_per_slice)*ceil ( (float) els_per_slice / 8); - - if ((out = (PyArrayObject *)PyArray_SimpleNew(1,&out_size,PyArray_UBYTE))==NULL) { - goto fail; - } - - packbits(arr->data,arr->descr->elsize,out->data,PyArray_SIZE(arr),els_per_slice); - - Py_DECREF(arr); - return PyArray_Return(out); - - fail: - Py_XDECREF(arr); - return NULL; - -} - -static PyObject * - numpyio_unpack(PyObject *self, PyObject *args) /* args: in, out_type */ -{ - PyArrayObject *arr = NULL, *out=NULL; - PyObject *obj; - int els_per_slice, arrsize; - int out_size, type; - char out_type = 'b'; - - if (!PyArg_ParseTuple( args, "Oi|c" , &obj, &els_per_slice, &out_type)) - return NULL; - - if (els_per_slice < 1) - PYSETERROR("Second argument is elements_per_slice and it must be >= 1."); - - type = PyArray_ObjectType(obj,0); - if ((arr = (PyArrayObject *)PyArray_ContiguousFromObject(obj,type,0,0)) == NULL) - return NULL; - - arrsize = PyArray_SIZE(arr); - - if ((arrsize % (int) (ceil( (float) els_per_slice / 8))) != 0) - PYSETERROR("That cannot be the number of elements per slice for this array size."); - - if (arr->descr->type_num > PyArray_LONG) - PYSETERROR("Can only unpack arrays that are of integer type."); - - /* Make an 1-D output array of type out_type */ - - out_size = els_per_slice * arrsize / ceil( (float) els_per_slice / 8); - - if ((out = (PyArrayObject *)PyArray_SimpleNew(1,&out_size,out_type))==NULL) - goto fail; - - if (out->descr->type_num > PyArray_LONG) { - PYSETERROR("Can only unpack bits into integer type."); - } - - 
unpackbits(arr->data,arr->descr->elsize,out->data,out->descr->elsize,out_size,els_per_slice); - - Py_DECREF(arr); - return PyArray_Return(out); - - fail: - Py_XDECREF(out); - Py_XDECREF(arr); - return NULL; -} - - -static char fread_doc[] = -"g = numpyio.fread( fid, Num, read_type { mem_type, byteswap})\n\n" -" fid = open file pointer object (i.e. from fid = open('filename') )\n" -" Num = number of elements to read of type read_type\n" -" read_type = a character in 'cb1silfdFD' (PyArray types)\n" -" describing how to interpret bytes on disk.\nOPTIONAL\n" -" mem_type = a character (PyArray type) describing what kind of\n" -" PyArray to return in g. Default = read_type\n" -" byteswap = 0 for no byteswapping or a 1 to byteswap (to handle\n" -" different endianness). Default = 0."; - -static char fwrite_doc[] = -"numpyio.fwrite( fid, Num, myarray { write_type, byteswap} )\n\n" -" fid = open file stream\n" -" Num = number of elements to write\n" -" myarray = NumPy array holding the data to write (will be\n" -" written as if ravel(myarray) was passed)\nOPTIONAL\n" -" write_type = character ('cb1silfdFD') describing how to write the\n" -" data (what datatype to use) Default = type of\n" -" myarray.\n" -" byteswap = 0 or 1 to determine if byteswapping occurs on write.\n" -" Default = 0."; - -static char bswap_doc[] = -" out = numpyio.bswap(myarray)\n\n" -" myarray = an array whose elements you want to byteswap.\n" -" out = a reference to byteswapped myarray.\n\n" -" This does an inplace byte-swap so that myarray is changed in\n" -" memory."; - -static char packbits_doc[] = -"out = numpyio.packbits(myarray)\n\n" -" myarray = an array whose (assumed binary) elements you want to\n" -" pack into bits (must be of integer type, 'cb1sl')\n\n" -" This routine packs the elements of a binary-valued dataset into a\n" -" 1-D NumPy array of type PyArray_UBYTE ('b') whose bits correspond to\n" -" the logical (0 or nonzero) value of the input elements. \n\n" -" If myarray has more dimensions than 2 it packs each slice (rows*columns)\n" -" separately. The number of elements per slice (rows*columns) is\n" -" important to know to be able to unpack the data later.\n\n" -" Example:\n" -" >>> a = array([[[1,0,1],\n" -" ... [0,1,0]],\n" -" ... [[1,1,0],\n" -" ... [0,0,1]]])\n" -" >>> b = numpyio.packbits(a)\n" -" >>> b\n" -" array([168, 196], 'b')\n\n" -" Note that 168 = 128 + 32 + 8\n" -" 196 = 128 + 64 + 4"; - -static char unpackbits_doc[] = -"out = numpyio.unpackbits(myarray, elements_per_slice {, out_type} )\n\n" -" myarray = Array of integer type ('cb1sl') whose least\n" -" significant byte is a bit-field for the\n" -" resulting output array.\n\n" -" elements_per_slice = Necessary for interpretation of myarray.\n" -" This is how many elements in the\n " -" rows*columns of original packed structure.\n\nOPTIONAL\n" -" out_type = The type of output array to populate with 1's\n" -" and 0's. 
Must be an integer type.\n\n\nThe output array\n" -" will be a 1-D array of 1's and zero's"; - - -#define BUFSIZE 256 -/* Convert a Python string object to a complex number */ -static int convert_from_object(PyObject *obj, Py_complex *cnum) -{ - PyObject *res=NULL, *elobj=NULL; - PyObject *newstr=NULL, *finalobj=NULL, *valobj=NULL; - char strbuffer[2*BUFSIZE]; - char *xptr, *elptr; - char *newstrbuff, thischar; - char buffer[BUFSIZE]; - char validnum[] = "0123456789.eE+-"; - int validlen = 15; - int inegflag = 1; - int rnegflag = 1; - int n, k, m, i, elN, size, state, count; - double val; - - if (!PyString_Check(obj)) return -1; - - /* strip string */ - newstr = PyObject_CallMethod(obj, "strip", NULL); - if (newstr == NULL) goto fail; - - /* Replace any 'e+' or 'e-' */ - size = PyString_GET_SIZE(newstr); - newstrbuff = PyString_AsString(newstr); - if (newstrbuff == NULL) goto fail; - if (size > 2*BUFSIZE) PYERR("String too large."); - - state = 0; - count = 0; - for (k=0; k BUFSIZE)) - PYSETERROR("String too large."); - - /* Replace back the + and - and strip away invalid characters */ - elptr = PyString_AsString(elobj); - m = 0; - for (n=0; n < elN; n++) { - thischar = elptr[n]; - if (thischar == '\254') - buffer[m++] = '+'; - else if (thischar == '\253') - buffer[m++] = '-'; - else { - for (i=0; i< validlen; i++) { - if (thischar == validnum[i]) break; - } - if (i < validlen) buffer[m++] = thischar; - } - } - finalobj = PyString_FromStringAndSize(buffer, m); - if (finalobj == NULL) goto fail; - valobj = PyFloat_FromString(finalobj, NULL); /* Try to make a float */ - if (valobj == NULL) goto fail; - val = PyFloat_AsDouble(valobj); - if (PyErr_Occurred()) goto fail; - Py_DECREF(finalobj); - Py_DECREF(valobj); - Py_DECREF(elobj); - if (k==0) { - cnum->real = val*rnegflag; - } - else { - cnum->imag = val*inegflag; - } - - } - Py_DECREF(newstr); - Py_DECREF(res); - return 0; - - fail: - Py_XDECREF(res); - Py_XDECREF(elobj); - Py_XDECREF(newstr); - Py_XDECREF(finalobj); - Py_XDECREF(valobj); - return -1; -} - - - -static int PyTypeFromChar(char ctype) -{ - switch(ctype) { - case 'c': return PyArray_CHAR; - case 'b': return PyArray_UBYTE; - case '1': return PyArray_BYTE; - case 's': return PyArray_SHORT; - case 'i': return PyArray_INT; -#ifdef PyArray_UNSIGNED_TYPES - case 'u': return PyArray_UINT; - case 'w': return PyArray_USHORT; -#endif - case 'l': return PyArray_LONG; - case 'f': return PyArray_FLOAT; - case 'd': return PyArray_DOUBLE; - case 'F': return PyArray_CFLOAT; - case 'D': return PyArray_CDOUBLE; - case 'O': return PyArray_OBJECT; - } - return PyArray_NOTYPE; -} - - -static PyObject * - numpyio_convert_objects(PyObject *self, PyObject *args) -{ - PyObject *obj = NULL, *missing_val = NULL; - PyArrayObject *arr = NULL, *out=NULL; - PyArrayObject *missing_arr = NULL; - PyArray_Descr *descr; - PyObject *builtins, *dict; - char out_type; - int int_type, i, err; - char *outptr; - PyObject **arrptr; - PyObject *numobj=NULL; - PyObject *comp_obj; - Py_complex numc; - PyArray_VectorUnaryFunc *funcptr; - - if (!PyArg_ParseTuple( args, "Oc|O" , &obj, &out_type, &missing_val)) - return NULL; - - if (missing_val == NULL) { - missing_val = PyInt_FromLong(0); - } - else { - Py_INCREF(missing_val); /* Increment missing_val for later DECREF */ - } - - int_type = PyTypeFromChar(out_type); - if ((int_type == PyArray_NOTYPE) || (int_type == PyArray_OBJECT) || \ - PyTypeNum_ISEXTENDED(int_type)) - PYERR("Invalid output type."); - - missing_arr = (PyArrayObject 
*)PyArray_ContiguousFromObject(missing_val, - int_type, 0, 0); - Py_DECREF(missing_val); - missing_val = NULL; /* So later later failures don't decrement it */ - - if ((missing_arr == NULL)) goto fail; - if ((RANK(missing_arr) > 0)) PYERR("Missing value must be as scalar"); - - arr = (PyArrayObject *)PyArray_ContiguousFromObject(obj, PyArray_OBJECT, - 0, 0); - if (arr == NULL) goto fail; - - out = (PyArrayObject *)PyArray_SimpleNew(RANK(arr), DIMS(arr), int_type); - if (out == NULL) goto fail; - - /* Get the builtin_functions from the builtin module */ - builtins = PyImport_AddModule("__builtin__"); - if (builtins == NULL) goto fail; - - dict = PyModule_GetDict(builtins); - comp_obj = PyDict_GetItemString(dict, "complex"); - if (comp_obj == NULL) goto fail; - - /* get_complex = PyDict_GetItemString(dict, "complex"); - get_float = PyDict_GetItemString(dict, "float"); - get_int = PyDict_GetItemString(dict, "int"); - if ((get_complex == NULL) || (get_float == NULL) || (get_int == NULL) ) goto fail; - */ - /* - get_complex_self = PyCFunction_GetSelf(PyDict_GetItemString(dict, "complex")); - get_float_self = PyCFunction_GetSelf(PyDict_GetItemString(dict, "float")); - get_int_self = PyCFunction_GetSelf(PyDict_GetItemString(dict, "int")); - */ - - /* Loop through arr and convert each element and place in out */ - i = PyArray_Size((PyObject *)arr); - arrptr = ((PyObject **)DATA(arr)) - 1; - outptr = (DATA(out)) - ELSIZE(out); - - descr = PyArray_DescrFromType(PyArray_CDOUBLE); - funcptr = descr->f->cast[int_type]; - Py_DECREF(descr); - - while (i--) { - outptr += ELSIZE(out); - arrptr += 1; - numc.real = 0; - numc.imag = 0; - numobj = PyObject_CallFunction(comp_obj, "O", *arrptr); - if (numobj != NULL) { - numc = PyComplex_AsCComplex(numobj); - Py_DECREF(numobj); - } - if (PyErr_Occurred()) { /* Use our own homegrown converter... */ - PyErr_Clear(); - err = convert_from_object(*arrptr, &numc); - if (PyErr_Occurred()) PyErr_Clear(); - if (err < 0) { /* Nothing works fill with missing value... 
*/ - memcpy(outptr, DATA(missing_arr), ELSIZE(out)); - } - } - /* Place numc into the array */ - funcptr((void *)&(numc.real), (void *)outptr, 1, NULL, NULL); - } - - Py_DECREF(missing_arr); - Py_DECREF(arr); - return PyArray_Return(out); - - fail: - Py_XDECREF(out); - Py_XDECREF(arr); - Py_XDECREF(missing_arr); - Py_XDECREF(missing_val); - return NULL; -} - - -static char convert_objects_doc[] = -"convert_objectarray(myarray, arraytype{, missing_value} ) -> out \n\n" -" myarray = Sequence of strings.\n" -" arraytype = Type of output array.\n" -" missing_value = Value to insert when conversion fails."; - -/* *************************************************************************** */ -/* Method registration table: name-string -> function-pointer */ - -static struct PyMethodDef numpyio_methods[] = { - {"fread", numpyio_fromfile, 1, fread_doc}, - {"fwrite", numpyio_tofile, 1, fwrite_doc}, - {"bswap", numpyio_byteswap, 1, bswap_doc}, - {"packbits", numpyio_pack, 1, packbits_doc}, - {"unpackbits", numpyio_unpack, 1, unpackbits_doc}, - {"convert_objectarray", numpyio_convert_objects, 1, convert_objects_doc}, - {NULL, NULL} -}; - -PyMODINIT_FUNC initnumpyio(void) -{ - PyObject *m, *d; - - import_array(); /* allows multiarray to be a shared library (I think) */ - /* Should be defined in arrayobject.h */ - - /* create the module and add the functions */ - m = Py_InitModule("numpyio", numpyio_methods); /* registration hook */ - - /* add symbolic constants to the module */ - d = PyModule_GetDict(m); - ErrorObject = Py_BuildValue("s", "numpyio.error"); /* export exception */ - PyDict_SetItemString(d, "error", ErrorObject); /* add more if need */ - -} - -/**********************************************************/ -/* */ -/* SYNOPSIS: rbo(data, bpe, nel) ; */ -/* where: */ -/* nel..... number of array elements */ -/* data.... pointer to the first byte in the */ -/* array */ -/* bpe..... bytes per array element */ -/* */ -/* PURPOSE: convert data from little to big endian (and */ -/* visa-versa) */ -/* */ -/**********************************************************/ - -void rbo(char * data, int bpe, int nel) -{ - int nswaps, i,j; /* number of swaps to make per element */ - char tmp; /* temporary storage for swapping */ - long int p1, p2; /* indexes for elements to be swapped */ - - nswaps = bpe / 2; /* divide element size by two */ - if (nswaps == 0) return; /* return if it is a byte array */ - - p1 = 0; - for ( i=0; i 0); /* add to this bit if the input value is non-zero */ - } - if (index == out_bytes - 1) build <<= (8-remain); - /* printf("Here: %d %d %d %d\n",build,slice,index,maxi); - */ - *(outptr++) = build; - } - } - return; -} - - -void unpackbits( - char In[], - int in_element_size, - char Out[], - int element_size, - int total_elements, - int els_per_slice - ) -{ - unsigned char mask; - int i,index,slice,slices,out_bytes; - int maxi, remain; - char *outptr,*inptr; - - outptr = Out; - inptr = In; - if (is_little_endian()) { - fprintf(stderr,"This is a little-endian machine.\n"); - } - else { - fprintf(stderr,"This is a big-endian machine.\n"); - outptr += (element_size - 1); - inptr += (in_element_size - 1); - } - slices = total_elements / els_per_slice; - out_bytes = ceil( (float) els_per_slice / 8); - remain = els_per_slice % 8; - if (remain == 0) remain = 8; - /* printf("Start: %d %d %d %d %d\n",inM,MN,slices,out_bytes,remain); - */ - for (slice = 0; slice < slices; slice++) { - for (index = 0; index < out_bytes; index++) { - maxi = (index != out_bytes - 1 ? 
8 : remain); - mask = 128; - for (i = 0; i < maxi ; i++) { - *outptr = ((mask & (unsigned char)(*inptr)) > 0); - outptr += element_size; - mask >>= 1; - } - /* printf("Here: %d %d %d %d\n",build,slice,index,maxi); - */ - inptr += in_element_size; - } - } - return; -} - -int is_little_endian() -{ /* high low */ - short testnum = 1; /* If little endian it will be 0x00 0x01 */ - /* If big endian it will be 0x01 0x00 */ - void *testptr; - char *myptr; - - testptr = (void *)(&testnum); /* Assumes address gives low-byte in memory */ - myptr = (char*)testptr; - - return (*(myptr) == 1); - -} - - diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/pickler.py python-scipy-0.8.0+dfsg1/scipy/io/pickler.py --- python-scipy-0.7.2+dfsg1/scipy/io/pickler.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/pickler.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,38 +0,0 @@ -import cPickle - -from numpy import deprecate_with_doc - -@deprecate_with_doc(""" -Just use cPickle.dump directly or numpy.savez -""") -def objsave(file, allglobals, *args): - """Pickle the part of a dictionary containing the argument list - into file string. - - Syntax: objsave(file, globals(), obj1, obj2, ... ) - """ - fid = open(file,'w') - savedict = {} - for key in allglobals.keys(): - inarglist = 0 - for obj in args: - if allglobals[key] is obj: - inarglist = 1 - break - if inarglist: - savedict[key] = obj - cPickle.dump(savedict,fid,1) - fid.close() - -@deprecate_with_doc(""" -Just use cPickle.load or numpy.load. -""") -def objload(file, allglobals): - """Load a previously pickled dictionary and insert into given dictionary. - - Syntax: objload(file, globals()) - """ - fid = open(file,'r') - savedict = cPickle.load(fid) - allglobals.update(savedict) - fid.close() diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/recaster.py python-scipy-0.8.0+dfsg1/scipy/io/recaster.py --- python-scipy-0.7.2+dfsg1/scipy/io/recaster.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/recaster.py 2010-07-26 15:48:31.000000000 +0100 @@ -5,9 +5,20 @@ """ from numpy import * +from numpy.lib.utils import deprecate +# deprecated in 0.8, will be removed in 0.9. +@deprecate def sctype_attributes(): - ''' Return dictionary describing numpy scalar types ''' + """Return dictionary describing numpy scalar types + + .. deprecated:: sctype_attributes is deprecated in scipy 0.8 and + will be removed in scipy 0.9. + """ + return _sctype_attributes() + + +def _sctype_attributes(): d_dict = {} for sc_type in ('complex','float'): t_list = sctypes[sc_type] @@ -46,9 +57,13 @@ class RecastError(ValueError): pass +# deprecated in 0.8, will be removed in 0.9. class Recaster(object): ''' Class to recast arrays to one of acceptable scalar types + .. deprecated:: Recaster is deprecated in scipy 0.8 and will be + removed in scipy 0.9. + Initialization specifies acceptable types (ATs) Implements recast method - returns array that may be of different @@ -58,7 +73,7 @@ specified in options at object creation. 
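The deprecation wrappers in the removed ``pickler.py`` above point at ``cPickle`` and ``numpy.savez``, and numpy itself ships ``packbits``/``unpackbits`` covering the removed numpyio helpers. A brief illustration of those stand-ins (not part of the patch; the file name is hypothetical):

    # Modern stand-ins for the removed helpers; 'state.npz' is hypothetical.
    import numpy as np

    # numpyio.packbits / unpackbits  ->  numpy.packbits / numpy.unpackbits
    bits = np.array([[1, 0, 1], [0, 1, 0]], dtype=np.uint8)
    packed = np.packbits(bits)            # flattened and bit-packed: [168]
    assert (np.unpackbits(packed)[:6] == bits.ravel()).all()

    # pickler.objsave / objload  ->  numpy.savez / numpy.load
    np.savez('state.npz', a=np.arange(3), b=np.eye(2))
    state = np.load('state.npz')
    assert (state['a'] == np.arange(3)).all()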
''' - _sctype_attributes = sctype_attributes() + _sctype_attributes = _sctype_attributes() _k = 2**10 _option_defaults = { 'only_if_none': { @@ -107,6 +122,7 @@ } } + @deprecate def __init__(self, sctype_list=None, sctype_tols=None, recast_options='only_if_none'): diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/SConscript python-scipy-0.8.0+dfsg1/scipy/io/SConscript --- python-scipy-0.7.2+dfsg1/scipy/io/SConscript 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/SConscript 2010-07-26 15:48:31.000000000 +0100 @@ -3,5 +3,3 @@ from numscons import GetNumpyEnvironment env = GetNumpyEnvironment(ARGUMENTS) - -env.NumpyPythonExtension('numpyio', source = 'numpyiomodule.c') diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/setup.py python-scipy-0.8.0+dfsg1/scipy/io/setup.py --- python-scipy-0.7.2+dfsg1/scipy/io/setup.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/setup.py 2010-07-26 15:48:31.000000000 +0100 @@ -4,11 +4,7 @@ from numpy.distutils.misc_util import Configuration config = Configuration('io', parent_package, top_path) - config.add_extension('numpyio', - sources = ['numpyiomodule.c']) - config.add_data_dir('tests') - config.add_data_dir('examples') config.add_data_dir('docs') config.add_subpackage('matlab') config.add_subpackage('arff') diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/setupscons.py python-scipy-0.8.0+dfsg1/scipy/io/setupscons.py --- python-scipy-0.7.2+dfsg1/scipy/io/setupscons.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/setupscons.py 2010-07-26 15:48:31.000000000 +0100 @@ -2,12 +2,12 @@ def configuration(parent_package='',top_path=None): from numpy.distutils.misc_util import Configuration - config = Configuration('io', parent_package, top_path) + config = Configuration('io', parent_package, top_path, + setup_name = 'setupscons.py') config.add_sconscript('SConstruct') config.add_data_dir('tests') - config.add_data_dir('examples') config.add_data_dir('docs') config.add_subpackage('matlab') config.add_subpackage('arff') Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/io/tests/data/example_1.nc and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/io/tests/data/example_1.nc differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/io/tests/data/test-44100-le-1ch-4bytes.wav and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/io/tests/data/test-44100-le-1ch-4bytes.wav differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/io/tests/data/test-8000-le-2ch-1byteu.wav and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/io/tests/data/test-8000-le-2ch-1byteu.wav differ diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/tests/test_array_import.py python-scipy-0.8.0+dfsg1/scipy/io/tests/test_array_import.py --- python-scipy-0.7.2+dfsg1/scipy/io/tests/test_array_import.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/tests/test_array_import.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,68 +0,0 @@ -#!/usr/bin/env python - -# This python script tests the numpyio module. -# also check out numpyio.fread.__doc__ and other method docstrings. 
- -import os -from numpy.testing import * -import scipy.io as io -from scipy.io import numpyio -from scipy.io import array_import - -import numpy.oldnumeric as N -import tempfile - -class TestNumpyio(TestCase): - def test_basic(self): - # Generate some data - a = 255*rand(20) - # Open a file - fname = tempfile.mktemp('.dat') - fid = open(fname,"wb") - # Write the data as shorts - numpyio.fwrite(fid,20,a,N.Int16) - fid.close() - # Reopen the file and read in data - fid = open(fname,"rb") - if verbose >= 3: - print "\nDon't worry about a warning regarding the number of bytes read." - b = numpyio.fread(fid,1000000,N.Int16,N.Int) - fid.close() - assert(N.product(a.astype(N.Int16) == b,axis=0)) - os.remove(fname) - -class TestReadArray(TestCase): - def test_complex(self): - a = rand(13,4) + 1j*rand(13,4) - fname = tempfile.mktemp('.dat') - io.write_array(fname,a) - b = io.read_array(fname,atype=N.Complex) - assert_array_almost_equal(a,b,decimal=4) - os.remove(fname) - - def test_float(self): - a = rand(3,4)*30 - fname = tempfile.mktemp('.dat') - io.write_array(fname,a) - b = io.read_array(fname) - assert_array_almost_equal(a,b,decimal=4) - os.remove(fname) - - def test_integer(self): - from scipy import stats - a = stats.randint.rvs(1,20,size=(3,4)) - fname = tempfile.mktemp('.dat') - io.write_array(fname,a) - b = io.read_array(fname,atype=a.dtype.char) - assert_array_equal(a,b) - os.remove(fname) - -class TestRegression(TestCase): - def test_get_open_file_works_with_filelike_objects(self): - f = tempfile.TemporaryFile() - f2 = array_import.get_open_file(f) - assert f2 is f - f.close() - -if __name__ == "__main__": - run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/tests/test_mmio.py python-scipy-0.8.0+dfsg1/scipy/io/tests/test_mmio.py --- python-scipy-0.7.2+dfsg1/scipy/io/tests/test_mmio.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/tests/test_mmio.py 2010-07-26 15:48:31.000000000 +0100 @@ -243,7 +243,7 @@ b = mmread(fn).todense() assert_array_almost_equal(a,b) - def test_read_symmetric(self): + def test_read_symmetric_pattern(self): """read a symmetric pattern matrix""" fn = mktemp() f = open(fn,'w') diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/tests/test_netcdf.py python-scipy-0.8.0+dfsg1/scipy/io/tests/test_netcdf.py --- python-scipy-0.7.2+dfsg1/scipy/io/tests/test_netcdf.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/tests/test_netcdf.py 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,121 @@ +''' Tests for netcdf ''' + +import os +from os.path import join as pjoin, dirname +import shutil +import tempfile +import time +from StringIO import StringIO +from glob import glob + +import numpy as np + +from scipy.io.netcdf import netcdf_file + +from nose.tools import assert_true, assert_false, assert_equal, assert_raises + +TEST_DATA_PATH = pjoin(dirname(__file__), 'data') + +N_EG_ELS = 11 # number of elements for example variable +VARTYPE_EG = 'b' # var type for example variable + + +def make_simple(*args, **kwargs): + f = netcdf_file(*args, **kwargs) + f.history = 'Created for a test' + f.createDimension('time', N_EG_ELS) + time = f.createVariable('time', VARTYPE_EG, ('time',)) + time[:] = np.arange(N_EG_ELS) + time.units = 'days since 2008-01-01' + f.flush() + return f + + +def gen_for_simple(ncfileobj): + ''' Generator for example fileobj tests ''' + yield assert_equal, ncfileobj.history, 'Created for a test' + time = ncfileobj.variables['time'] + yield assert_equal, str(time.units), 'days since 2008-01-01' + yield 
assert_equal, time.shape, (N_EG_ELS,) + yield assert_equal, time[-1], N_EG_ELS-1 + + +def test_read_write_files(): + # test round trip for example file + cwd = os.getcwd() + try: + tmpdir = tempfile.mkdtemp() + os.chdir(tmpdir) + f = make_simple('simple.nc', 'w') + f.close() + # To read the NetCDF file we just created:: + f = netcdf_file('simple.nc') + # Using mmap is the default + yield assert_true, f.use_mmap + for testargs in gen_for_simple(f): + yield testargs + f.close() + # Now without mmap + f = netcdf_file('simple.nc', mmap=False) + # Using mmap is the default + yield assert_false, f.use_mmap + for testargs in gen_for_simple(f): + yield testargs + f.close() + # To read the NetCDF file we just created, as file object, no + # mmap. When n * n_bytes(var_type) is not divisible by 4, this + # raised an error in pupynere 1.0.12 and scipy rev 5893, because + # calculated vsize was rounding up in units of 4 - see + # http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html + fobj = open('simple.nc', 'r') + f = netcdf_file(fobj) + # by default, don't use mmap for file-like + yield assert_false, f.use_mmap + for testargs in gen_for_simple(f): + yield testargs + f.close() + except: + os.chdir(cwd) + shutil.rmtree(tmpdir) + raise + os.chdir(cwd) + shutil.rmtree(tmpdir) + + +def test_read_write_sio(): + eg_sio1 = StringIO() + f1 = make_simple(eg_sio1, 'w') + str_val = eg_sio1.getvalue() + f1.close() + eg_sio2 = StringIO(str_val) + f2 = netcdf_file(eg_sio2) + for testargs in gen_for_simple(f2): + yield testargs + f2.close() + # Test that error is raised if attempting mmap for sio + eg_sio3 = StringIO(str_val) + yield assert_raises, ValueError, netcdf_file, eg_sio3, 'r', True + # Test 64-bit offset write / read + eg_sio_64 = StringIO() + f_64 = make_simple(eg_sio_64, 'w', version=2) + str_val = eg_sio_64.getvalue() + f_64.close() + eg_sio_64 = StringIO(str_val) + f_64 = netcdf_file(eg_sio_64) + for testargs in gen_for_simple(f_64): + yield testargs + yield assert_equal, f_64.version_byte, 2 + # also when version 2 explicitly specified + eg_sio_64 = StringIO(str_val) + f_64 = netcdf_file(eg_sio_64, version=2) + for testargs in gen_for_simple(f_64): + yield testargs + yield assert_equal, f_64.version_byte, 2 + + +def test_read_example_data(): + # read any example data files + for fname in glob(pjoin(TEST_DATA_PATH, '*.nc')): + f = netcdf_file(fname, 'r') + f = netcdf_file(fname, 'r', mmap=False) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/tests/test_npfile.py python-scipy-0.8.0+dfsg1/scipy/io/tests/test_npfile.py --- python-scipy-0.7.2+dfsg1/scipy/io/tests/test_npfile.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/tests/test_npfile.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,105 +0,0 @@ -import os -from StringIO import StringIO -from tempfile import mkstemp -from numpy.testing import * -import numpy as np - -from scipy.io.npfile import npfile, sys_endian_code - -class TestNpFile(TestCase): - - def test_init(self): - fd, fname = mkstemp() - os.close(fd) - npf = npfile(fname) - arr = np.reshape(np.arange(10), (5,2)) - self.assertRaises(IOError, npf.write_array, arr) - npf.close() - npf = npfile(fname, 'w') - npf.write_array(arr) - npf.rewind() - self.assertRaises(IOError, npf.read_array, - arr.dtype, arr.shape) - npf.close() - os.remove(fname) - - npf = npfile(StringIO(), endian='>', order='F') - assert npf.endian == '>', 'Endian not set correctly' - assert npf.order == 'F', 'Order not set correctly' - npf.endian = '<' - assert npf.endian == '<', 'Endian not set 
correctly' - - def test_parse_endian(self): - npf = npfile(StringIO()) - swapped_code = sys_endian_code == '<' and '>' or '<' - assert npf.parse_endian('native') == sys_endian_code - assert npf.parse_endian('swapped') == swapped_code - assert npf.parse_endian('l') == '<' - assert npf.parse_endian('B') == '>' - assert npf.parse_endian('dtype') == 'dtype' - self.assertRaises(ValueError, npf.parse_endian, 'nonsense') - - def test_read_write_raw(self): - npf = npfile(StringIO()) - str = 'test me with this string' - npf.write_raw(str) - npf.rewind() - assert str == npf.read_raw(len(str)) - - def test_remaining_bytes(self): - npf = npfile(StringIO()) - assert npf.remaining_bytes() == 0 - npf.write_raw('+' * 10) - assert npf.remaining_bytes() == 0 - npf.rewind() - assert npf.remaining_bytes() == 10 - npf.seek(5) - assert npf.remaining_bytes() == 5 - - def test_read_write_array(self): - npf = npfile(StringIO()) - arr = np.reshape(np.arange(10), (5,2)) - # Arr as read in fortran order - f_arr = arr.reshape((2,5)).T - # Arr written in fortran order read in C order - cf_arr = arr.T.reshape((5,2)) - # Byteswapped array - bo = arr.dtype.byteorder - swapped_code = sys_endian_code == '<' and '>' or '<' - if bo in ['=', sys_endian_code]: - nbo = swapped_code - else: - nbo = sys_endian_code - bs_arr = arr.newbyteorder(nbo) - adt = arr.dtype - shp = arr.shape - npf.write_array(arr) - npf.rewind() - assert_array_equal(npf.read_array(adt), arr.flatten()) - npf.rewind() - assert_array_equal(npf.read_array(adt, shp), arr) - npf.rewind() - assert_array_equal(npf.read_array(adt, shp, endian=swapped_code), - bs_arr) - npf.rewind() - assert_array_equal(npf.read_array(adt, shp, order='F'), - f_arr) - npf.rewind() - npf.write_array(arr, order='F') - npf.rewind() - assert_array_equal(npf.read_array(adt), arr.flatten('F')) - npf.rewind() - assert_array_equal(npf.read_array(adt, shp), - cf_arr) - - npf = npfile(StringIO(), endian='swapped', order='F') - npf.write_array(arr) - npf.rewind() - assert_array_equal(npf.read_array(adt, shp), arr) - npf.rewind() - assert_array_equal(npf.read_array(adt, shp, endian='dtype'), bs_arr) - npf.rewind() - assert_array_equal(npf.read_array(adt, shp, order='C'), cf_arr) - -if __name__ == "__main__": - run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/tests/test_recaster.py python-scipy-0.8.0+dfsg1/scipy/io/tests/test_recaster.py --- python-scipy-0.7.2+dfsg1/scipy/io/tests/test_recaster.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/io/tests/test_recaster.py 2010-07-26 15:48:31.000000000 +0100 @@ -1,3 +1,5 @@ +import warnings + import numpy as np from numpy.testing import * @@ -167,5 +169,7 @@ assert dtt is outp, \ 'Expected %s from %s, got %s' % (outp, inp, dtt) +warnings.simplefilter('ignore', category=DeprecationWarning) + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/tests/test_wavfile.py python-scipy-0.8.0+dfsg1/scipy/io/tests/test_wavfile.py --- python-scipy-0.7.2+dfsg1/scipy/io/tests/test_wavfile.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/tests/test_wavfile.py 2010-07-26 15:48:31.000000000 +0100 @@ -0,0 +1,64 @@ +import os +import tempfile +import warnings + +import numpy as np +from numpy.testing import * +from scipy.io import wavfile + + +def datafile(fn): + return os.path.join(os.path.dirname(__file__), 'data', fn) + +def test_read_1(): + rate, data = wavfile.read(datafile('test-44100-le-1ch-4bytes.wav')) + assert rate == 44100 + assert np.issubdtype(data.dtype, 
np.int32) + assert data.shape == (4410,) + +def test_read_2(): + rate, data = wavfile.read(datafile('test-8000-le-2ch-1byteu.wav')) + assert rate == 8000 + assert np.issubdtype(data.dtype, np.uint8) + assert data.shape == (800, 2) + +def test_read_fail(): + assert_raises(ValueError, wavfile.read, datafile('example_1.nc')) + +def _check_roundtrip(rate, dtype, channels): + fd, tmpfile = tempfile.mkstemp(suffix='.wav') + try: + os.close(fd) + + data = np.random.rand(100, channels) + if channels == 1: + data = data[:,0] + data = (data*128).astype(dtype) + + wavfile.write(tmpfile, rate, data) + rate2, data2 = wavfile.read(tmpfile) + + assert rate == rate2 + assert data2.dtype.byteorder in ('<', '=', '|'), data2.dtype + assert_array_equal(data, data2) + finally: + os.unlink(tmpfile) + +def test_write_roundtrip(): + for signed in ('i', 'u'): + for size in (1, 2, 4, 8): + if size == 1 and signed == 'i': + # signed 8-bit integer PCM is not allowed + continue + for endianness in ('>', '<'): + if size == 1 and endianness == '<': + continue + for rate in (8000, 32000): + for channels in (1, 2, 5): + dt = np.dtype('%s%s%d' % (endianness, signed, size)) + yield _check_roundtrip, rate, dt, channels + + +# Filter test noise in 0.8.x branch. Format of data file does not seem to be +# recognized. +warnings.filterwarnings("ignore", category=wavfile.WavFileWarning) diff -Nru python-scipy-0.7.2+dfsg1/scipy/io/wavfile.py python-scipy-0.8.0+dfsg1/scipy/io/wavfile.py --- python-scipy-0.7.2+dfsg1/scipy/io/wavfile.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/io/wavfile.py 2010-07-26 15:48:31.000000000 +0100 @@ -1,13 +1,33 @@ +""" +Module to read / write wav files using numpy arrays + +Functions +--------- +read: Return the sample rate (in samples/sec) and data from a WAV file. + +write: Write a numpy array as a WAV file. + +""" import numpy import struct +import warnings + +class WavFileWarning(UserWarning): + pass + +_big_endian = False # assumes file pointer is immediately # after the 'fmt ' id def _read_fmt_chunk(fid): - res = struct.unpack('ihHIIHH',fid.read(20)) + if _big_endian: + fmt = '>' + else: + fmt = '<' + res = struct.unpack(fmt+'ihHIIHH',fid.read(20)) size, comp, noc, rate, sbytes, ba, bits = res if (comp != 1 or size > 16): - print "Warning: unfamiliar format bytes..." + warnings.warn("Unfamiliar format bytes", WavFileWarning) if (size>16): fid.read(size-16) return size, comp, noc, rate, sbytes, ba, bits @@ -15,35 +35,71 @@ # assumes file pointer is immediately # after the 'data' id def _read_data_chunk(fid, noc, bits): - size = struct.unpack('i',fid.read(4))[0] + if _big_endian: + fmt = '>i' + else: + fmt = ' 1: data = data.reshape(-1,noc) else: bytes = bits//8 - dtype = 'i%d' % bytes + if _big_endian: + dtype = '>i%d' % bytes + else: + dtype = ' 1: data = data.reshape(-1,noc) return data def _read_riff_chunk(fid): + global _big_endian str1 = fid.read(4) - fsize = struct.unpack('I', fid.read(4))[0] + 8 + if str1 == 'RIFX': + _big_endian = True + elif str1 != 'RIFF': + raise ValueError("Not a WAV file.") + if _big_endian: + fmt = '>I' + else: + fmt = '' or (data.dtype.byteorder == '=' and sys.byteorder == 'big'): + data = data.byteswap() data.tofile(fid) # Determine file size and place it in correct # position at start of the file. 
size = fid.tell() fid.seek(4) - fid.write(struct.pack('i', size-8)) + fid.write(struct.pack(')' -Run tests if blas is not installed: - python tests/test_blas.py [] + python -c 'import scipy;scipy.lib.blas.test()' """ import math @@ -199,27 +197,5 @@ gemm, = get_blas_funcs(('gemm',),(a,b)) assert_array_almost_equal(gemm(1,a,b),[[3]],15) - def test_fblas(self): - if hasattr(fblas,'empty_module'): - print """ -**************************************************************** -WARNING: fblas module is empty. ------------ -See scipy/INSTALL.txt for troubleshooting. -**************************************************************** -""" - def test_cblas(self): - if hasattr(cblas,'empty_module'): - print """ -**************************************************************** -WARNING: cblas module is empty ------------ -See scipy/INSTALL.txt for troubleshooting. -Notes: -* If atlas library is not found by numpy/distutils/system_info.py, - then scipy uses fblas instead of cblas. -**************************************************************** -""" - if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/lib/lapack/atlas_version.c python-scipy-0.8.0+dfsg1/scipy/lib/lapack/atlas_version.c --- python-scipy-0.7.2+dfsg1/scipy/lib/lapack/atlas_version.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/lib/lapack/atlas_version.c 2010-07-26 15:48:31.000000000 +0100 @@ -1,24 +1,33 @@ -#ifdef __CPLUSPLUS__ -extern "C" { -#endif #include "Python.h" -static PyMethodDef module_methods[] = { {NULL,NULL} }; -PyMODINIT_FUNC initatlas_version(void) { - PyObject *m = NULL; + +static PyObject* version(PyObject* self, PyObject* dummy) +{ #if defined(NO_ATLAS_INFO) - printf("NO ATLAS INFO AVAILABLE\n"); + printf("NO ATLAS INFO AVAILABLE\n"); #else - void ATL_buildinfo(void); - ATL_buildinfo(); + void ATL_buildinfo(void); + ATL_buildinfo(); #endif - m = Py_InitModule("atlas_version", module_methods); + + Py_INCREF(Py_None); + return Py_None; +} + +static char version_doc[] = "Print the build info from atlas."; + +static PyMethodDef module_methods[] = { + {"version", version, METH_VARARGS, version_doc}, + {NULL, NULL, 0, NULL} +}; + +PyMODINIT_FUNC initatlas_version(void) +{ + PyObject *m = NULL; + m = Py_InitModule("atlas_version", module_methods); #if defined(ATLAS_INFO) - { - PyObject *d = PyModule_GetDict(m); - PyDict_SetItemString(d,"ATLAS_VERSION",PyString_FromString(ATLAS_INFO)); - } + { + PyObject *d = PyModule_GetDict(m); + PyDict_SetItemString(d,"ATLAS_VERSION",PyString_FromString(ATLAS_INFO)); + } #endif } -#ifdef __CPLUSCPLUS__ -} -#endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/lib/lapack/SConscript python-scipy-0.8.0+dfsg1/scipy/lib/lapack/SConscript --- python-scipy-0.7.2+dfsg1/scipy/lib/lapack/SConscript 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/lib/lapack/SConscript 2010-07-26 15:48:31.000000000 +0100 @@ -53,7 +53,6 @@ #========== # Build #========== -env.AppendUnique(CPPPATH=[env['F2PYINCLUDEDIR']]) env.AppendUnique(F2PYOPTIONS = '--quiet') env['BUILDERS']['GenerateFakePyf'] = Builder(action = do_generate_fake_interface, diff -Nru python-scipy-0.7.2+dfsg1/scipy/lib/lapack/setup.py python-scipy-0.8.0+dfsg1/scipy/lib/lapack/setup.py --- python-scipy-0.7.2+dfsg1/scipy/lib/lapack/setup.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/lib/lapack/setup.py 2010-07-26 15:48:31.000000000 +0100 @@ -34,7 +34,7 @@ atlas_version = ([v[3:-3] for k,v in lapack_opt.get('define_macros',[]) \ if 
k=='ATLAS_INFO']+[None])[0] if atlas_version: - print 'ATLAS version',atlas_version + print ('ATLAS version: %s' % atlas_version) target_dir = '' skip_names = {'clapack':[],'flapack':[]} Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/lib/lapack/setup.pyc and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/lib/lapack/setup.pyc differ diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/atlas_version.c python-scipy-0.8.0+dfsg1/scipy/linalg/atlas_version.c --- python-scipy-0.7.2+dfsg1/scipy/linalg/atlas_version.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/atlas_version.c 2010-07-26 15:48:31.000000000 +0100 @@ -1,24 +1,33 @@ -#ifdef __CPLUSPLUS__ -extern "C" { -#endif #include "Python.h" -static PyMethodDef module_methods[] = { {NULL,NULL} }; -PyMODINIT_FUNC initatlas_version(void) { - PyObject *m = NULL; + +static PyObject* version(PyObject* self, PyObject* dummy) +{ #if defined(NO_ATLAS_INFO) - printf("NO ATLAS INFO AVAILABLE\n"); + printf("NO ATLAS INFO AVAILABLE\n"); #else - void ATL_buildinfo(void); - ATL_buildinfo(); + void ATL_buildinfo(void); + ATL_buildinfo(); #endif - m = Py_InitModule("atlas_version", module_methods); + + Py_INCREF(Py_None); + return Py_None; +} + +static char version_doc[] = "Print the build info from atlas."; + +static PyMethodDef module_methods[] = { + {"version", version, METH_VARARGS, version_doc}, + {NULL, NULL, 0, NULL} +}; + +PyMODINIT_FUNC initatlas_version(void) +{ + PyObject *m = NULL; + m = Py_InitModule("atlas_version", module_methods); #if defined(ATLAS_INFO) - { - PyObject *d = PyModule_GetDict(m); - PyDict_SetItemString(d,"ATLAS_VERSION",PyString_FromString(ATLAS_INFO)); - } + { + PyObject *d = PyModule_GetDict(m); + PyDict_SetItemString(d,"ATLAS_VERSION",PyString_FromString(ATLAS_INFO)); + } #endif } -#ifdef __CPLUSCPLUS__ -} -#endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/basic.py python-scipy-0.8.0+dfsg1/scipy/linalg/basic.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/basic.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/basic.py 2010-07-26 15:48:31.000000000 +0100 @@ -1,104 +1,27 @@ -## Automatically adapted for scipy Oct 18, 2005 by - -## Automatically adapted for scipy Oct 18, 2005 by - # # Author: Pearu Peterson, March 2002 # # w/ additions by Travis Oliphant, March 2002 -__all__ = ['solve','inv','det','lstsq','norm','pinv','pinv2', - 'tri','tril','triu','toeplitz','hankel','lu_solve', - 'cho_solve','solve_banded','LinAlgError','kron', - 'all_mat', 'cholesky_banded', 'solveh_banded'] +__all__ = ['solve', 'solveh_banded', 'solve_banded', + 'inv', 'det', 'lstsq', 'pinv', 'pinv2'] + +from warnings import warn + +from numpy import asarray, zeros, sum, conjugate, dot, transpose, \ + asarray_chkfinite, single +import numpy -#from blas import get_blas_funcs from flinalg import get_flinalg_funcs from lapack import get_lapack_funcs -from numpy import asarray,zeros,sum,newaxis,greater_equal,subtract,arange,\ - conjugate,ravel,r_,mgrid,take,ones,dot,transpose,sqrt,add,real -import numpy -from numpy import asarray_chkfinite, outer, concatenate, reshape, single -from numpy import matrix as Matrix -from numpy.linalg import LinAlgError +from misc import LinAlgError from scipy.linalg import calc_lwork +import decomp_svd -def lu_solve((lu, piv), b, trans=0, overwrite_b=0): - """Solve an equation system, a x = b, given the LU factorization of a - - Parameters - ---------- - (lu, piv) - Factorization of the coefficient matrix a, as given by lu_factor - b : array - Right-hand side - 
trans : {0, 1, 2} - Type of system to solve: - - ===== ========= - trans system - ===== ========= - 0 a x = b - 1 a^T x = b - 2 a^H x = b - ===== ========= - - Returns - ------- - x : array - Solution to the system - - See also - -------- - lu_factor : LU factorize a matrix - - """ - b1 = asarray_chkfinite(b) - overwrite_b = overwrite_b or (b1 is not b and not hasattr(b,'__array__')) - if lu.shape[0] != b1.shape[0]: - raise ValueError, "incompatible dimensions." - getrs, = get_lapack_funcs(('getrs',),(lu,b1)) - x,info = getrs(lu,piv,b1,trans=trans,overwrite_b=overwrite_b) - if info==0: - return x - raise ValueError,\ - 'illegal value in %-th argument of internal gesv|posv'%(-info) - -def cho_solve((c, lower), b, overwrite_b=0): - """Solve an equation system, a x = b, given the Cholesky factorization of a - - Parameters - ---------- - (c, lower) - Cholesky factorization of a, as given by cho_factor - b : array - Right-hand side - - Returns - ------- - x : array - The solution to the system a x = b - - See also - -------- - cho_factor : Cholesky factorization of a matrix - - """ - b1 = asarray_chkfinite(b) - overwrite_b = overwrite_b or (b1 is not b and not hasattr(b,'__array__')) - if c.shape[0] != b1.shape[0]: - raise ValueError, "incompatible dimensions." - potrs, = get_lapack_funcs(('potrs',),(c,b1)) - x,info = potrs(c,b1,lower=lower,overwrite_b=overwrite_b) - if info==0: - return x - raise ValueError,\ - 'illegal value in %-th argument of internal gesv|posv'%(-info) - # Linear equations -def solve(a, b, sym_pos=0, lower=0, overwrite_a=0, overwrite_b=0, - debug = 0): +def solve(a, b, sym_pos=False, lower=False, overwrite_a=False, overwrite_b=False, + debug=False): """Solve the equation a x = b for x Parameters @@ -134,26 +57,24 @@ print 'solve:overwrite_a=',overwrite_a print 'solve:overwrite_b=',overwrite_b if sym_pos: - posv, = get_lapack_funcs(('posv',),(a1,b1)) - c,x,info = posv(a1,b1, - lower = lower, + posv, = get_lapack_funcs(('posv',), (a1,b1)) + c, x, info = posv(a1, b1, lower=lower, overwrite_a=overwrite_a, overwrite_b=overwrite_b) else: - gesv, = get_lapack_funcs(('gesv',),(a1,b1)) - lu,piv,x,info = gesv(a1,b1, - overwrite_a=overwrite_a, - overwrite_b=overwrite_b) + gesv, = get_lapack_funcs(('gesv',), (a1,b1)) + lu, piv, x, info = gesv(a1, b1, overwrite_a=overwrite_a, + overwrite_b=overwrite_b) - if info==0: + if info == 0: return x - if info>0: - raise LinAlgError, "singular matrix" - raise ValueError,\ - 'illegal value in %-th argument of internal gesv|posv'%(-info) + if info > 0: + raise LinAlgError("singular matrix") + raise ValueError('illegal value in %d-th argument of internal gesv|posv' + % -info) -def solve_banded((l,u), ab, b, overwrite_ab=0, overwrite_b=0, - debug = 0): +def solve_banded((l, u), ab, b, overwrite_ab=False, overwrite_b=False, + debug=False): """Solve the equation a x = b for x, assuming a is banded matrix. The matrix a is stored in ab using the matrix diagonal orded form:: @@ -186,24 +107,29 @@ The solution to the system a x = b """ - a1, b1 = map(asarray_chkfinite,(ab,b)) + a1, b1 = map(asarray_chkfinite, (ab, b)) + + # Validate shapes. 
+ if a1.shape[-1] != b1.shape[0]: + raise ValueError("shapes of ab and b are not compatible.") + if l + u + 1 != a1.shape[0]: + raise ValueError("invalid values for the number of lower and upper diagonals:" + " l+u+1 (%d) does not equal ab.shape[0] (%d)" % (l+u+1, ab.shape[0])) + overwrite_b = overwrite_b or (b1 is not b and not hasattr(b,'__array__')) - gbsv, = get_lapack_funcs(('gbsv',),(a1,b1)) - a2 = zeros((2*l+u+1,a1.shape[1]), dtype=gbsv.dtype) + gbsv, = get_lapack_funcs(('gbsv',), (a1, b1)) + a2 = zeros((2*l+u+1, a1.shape[1]), dtype=gbsv.dtype) a2[l:,:] = a1 - lu,piv,x,info = gbsv(l,u,a2,b1, - overwrite_ab=1, - overwrite_b=overwrite_b) - if info==0: + lu, piv, x, info = gbsv(l, u, a2, b1, overwrite_ab=True, + overwrite_b=overwrite_b) + if info == 0: return x - if info>0: - raise LinAlgError, "singular matrix" - raise ValueError,\ - 'illegal value in %-th argument of internal gbsv'%(-info) + if info > 0: + raise LinAlgError("singular matrix") + raise ValueError('illegal value in %d-th argument of internal gbsv' % -info) -def solveh_banded(ab, b, overwrite_ab=0, overwrite_b=0, - lower=0): +def solveh_banded(ab, b, overwrite_ab=False, overwrite_b=False, lower=False): """Solve equation a x = b. a is Hermitian positive-definite banded matrix. The matrix a is stored in ab either in lower diagonal or upper @@ -228,7 +154,7 @@ Parameters ---------- - ab : array, shape (M, u + 1) + ab : array, shape (u + 1, M) Banded matrix b : array, shape (M,) or (M, K) Right-hand side @@ -241,79 +167,39 @@ Returns ------- - c : array, shape (M, u+1) + c : array, shape (u+1, M) Cholesky factorization of a, in the same banded format as ab x : array, shape (M,) or (M, K) The solution to the system a x = b + + Notes + ----- + The inclusion of `c` in the return value is deprecated. In SciPy + version 0.9, the return value will be the solution `x` only. """ - ab, b = map(asarray_chkfinite,(ab,b)) + warn("In SciPy 0.9, the return value of solveh_banded will be " + "the solution x only.", DeprecationWarning) - pbsv, = get_lapack_funcs(('pbsv',),(ab,b)) - c,x,info = pbsv(ab,b, - lower=lower, - overwrite_ab=overwrite_ab, - overwrite_b=overwrite_b) - if info==0: - return c, x - if info>0: - raise LinAlgError, "%d-th leading minor not positive definite" % info - raise ValueError,\ - 'illegal value in %d-th argument of internal pbsv'%(-info) + ab, b = map(asarray_chkfinite, (ab, b)) -def cholesky_banded(ab, overwrite_ab=0, lower=0): - """Cholesky decompose a banded Hermitian positive-definite matrix - - The matrix a is stored in ab either in lower diagonal or upper - diagonal ordered form: - - ab[u + i - j, j] == a[i,j] (if upper form; i <= j) - ab[ i - j, j] == a[i,j] (if lower form; i >= j) - - Example of ab (shape of a is (6,6), u=2):: - - upper form: - * * a02 a13 a24 a35 - * a01 a12 a23 a34 a45 - a00 a11 a22 a33 a44 a55 - - lower form: - a00 a11 a22 a33 a44 a55 - a10 a21 a32 a43 a54 * - a20 a31 a42 a53 * * - - Parameters - ---------- - ab : array, shape (M, u + 1) - Banded matrix - overwrite_ab : boolean - Discard data in ab (may enhance performance) - lower : boolean - Is the matrix in the lower form. 
(Default is upper form) - - Returns - ------- - c : array, shape (M, u+1) - Cholesky factorization of a, in the same banded format as ab - - """ - ab = asarray_chkfinite(ab) - - pbtrf, = get_lapack_funcs(('pbtrf',),(ab,)) - c,info = pbtrf(ab, - lower=lower, - overwrite_ab=overwrite_ab) - - if info==0: - return c - if info>0: - raise LinAlgError, "%d-th leading minor not positive definite" % info - raise ValueError,\ - 'illegal value in %d-th argument of internal pbtrf'%(-info) + # Validate shapes. + if ab.shape[-1] != b.shape[0]: + raise ValueError("shapes of ab and b are not compatible.") + + pbsv, = get_lapack_funcs(('pbsv',), (ab, b)) + c, x, info = pbsv(ab, b, lower=lower, overwrite_ab=overwrite_ab, + overwrite_b=overwrite_b) + if info > 0: + raise LinAlgError("%d-th leading minor not positive definite" % info) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal pbsv' + % -info) + return c, x # matrix inversion -def inv(a, overwrite_a=0): +def inv(a, overwrite_a=False): """Compute the inverse of a matrix. Parameters @@ -341,7 +227,7 @@ """ a1 = asarray_chkfinite(a) if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]: - raise ValueError, 'expected square matrix' + raise ValueError('expected square matrix') overwrite_a = overwrite_a or (a1 is not a and not hasattr(a,'__array__')) #XXX: I found no advantage or disadvantage of using finv. ## finv, = get_flinalg_funcs(('inv',),(a1,)) @@ -351,22 +237,22 @@ ## return a_inv ## if info>0: raise LinAlgError, "singular matrix" ## if info<0: raise ValueError,\ -## 'illegal value in %-th argument of internal inv.getrf|getri'%(-info) - getrf,getri = get_lapack_funcs(('getrf','getri'),(a1,)) +## 'illegal value in %d-th argument of internal inv.getrf|getri'%(-info) + getrf, getri = get_lapack_funcs(('getrf','getri'), (a1,)) #XXX: C ATLAS versions of getrf/i have rowmajor=1, this could be # exploited for further optimization. But it will be probably # a mess. So, a good testing site is required before trying # to do that. - if getrf.module_name[:7]=='clapack'!=getri.module_name[:7]: + if getrf.module_name[:7] == 'clapack' != getri.module_name[:7]: # ATLAS 3.2.1 has getrf but not getri. - lu,piv,info = getrf(transpose(a1), - rowmajor=0,overwrite_a=overwrite_a) + lu, piv, info = getrf(transpose(a1), rowmajor=0, + overwrite_a=overwrite_a) lu = transpose(lu) else: - lu,piv,info = getrf(a1,overwrite_a=overwrite_a) - if info==0: + lu, piv, info = getrf(a1, overwrite_a=overwrite_a) + if info == 0: if getri.module_name[:7] == 'flapack': - lwork = calc_lwork.getri(getri.prefix,a1.shape[0]) + lwork = calc_lwork.getri(getri.prefix, a1.shape[0]) lwork = lwork[1] # XXX: the following line fixes curious SEGFAULT when # benchmarking 500x500 matrix inverse. This seems to @@ -374,94 +260,21 @@ # minimal (when using lwork[0] instead of lwork[1]) then # all tests pass. Further investigation is required if # more such SEGFAULTs occur. 
- lwork = int(1.01*lwork) - inv_a,info = getri(lu,piv, - lwork=lwork,overwrite_lu=1) + lwork = int(1.01 * lwork) + inv_a, info = getri(lu, piv, lwork=lwork, overwrite_lu=1) else: # clapack - inv_a,info = getri(lu,piv,overwrite_lu=1) - if info>0: raise LinAlgError, "singular matrix" - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal getrf|getri'%(-info) + inv_a, info = getri(lu, piv, overwrite_lu=1) + if info > 0: + raise LinAlgError("singular matrix") + if info < 0: + raise ValueError('illegal value in %d-th argument of internal ' + 'getrf|getri' % -info) return inv_a -## matrix and Vector norm -import decomp -def norm(x, ord=None): - """Matrix or vector norm. - - Parameters - ---------- - x : array, shape (M,) or (M, N) - ord : number, or {None, 1, -1, 2, -2, inf, -inf, 'fro'} - Order of the norm: - - ===== ============================ ========================== - ord norm for matrices norm for vectors - ===== ============================ ========================== - None Frobenius norm 2-norm - 'fro' Frobenius norm -- - inf max(sum(abs(x), axis=1)) max(abs(x)) - -inf min(sum(abs(x), axis=1)) min(abs(x)) - 1 max(sum(abs(x), axis=0)) as below - -1 min(sum(abs(x), axis=0)) as below - 2 2-norm (largest sing. value) as below - -2 smallest singular value as below - other -- sum(abs(x)**ord)**(1./ord) - ===== ============================ ========================== - - Returns - ------- - n : float - Norm of the matrix or vector - - Notes - ----- - For values ord < 0, the result is, strictly speaking, not a - mathematical 'norm', but it may still be useful for numerical - purposes. - - """ - x = asarray_chkfinite(x) - if ord is None: # check the default case first and handle it immediately - return sqrt(add.reduce(real((conjugate(x)*x).ravel()))) - - nd = len(x.shape) - Inf = numpy.Inf - if nd == 1: - if ord == Inf: - return numpy.amax(abs(x)) - elif ord == -Inf: - return numpy.amin(abs(x)) - elif ord == 1: - return numpy.sum(abs(x),axis=0) # special case for speedup - elif ord == 2: - return sqrt(numpy.sum(real((conjugate(x)*x)),axis=0)) # special case for speedup - else: - return numpy.sum(abs(x)**ord,axis=0)**(1.0/ord) - elif nd == 2: - if ord == 2: - return numpy.amax(decomp.svd(x,compute_uv=0)) - elif ord == -2: - return numpy.amin(decomp.svd(x,compute_uv=0)) - elif ord == 1: - return numpy.amax(numpy.sum(abs(x),axis=0)) - elif ord == Inf: - return numpy.amax(numpy.sum(abs(x),axis=1)) - elif ord == -1: - return numpy.amin(numpy.sum(abs(x),axis=0)) - elif ord == -Inf: - return numpy.amin(numpy.sum(abs(x),axis=1)) - elif ord in ['fro','f']: - return sqrt(add.reduce(real((conjugate(x)*x).ravel()))) - else: - raise ValueError, "Invalid norm order for matrices." - else: - raise ValueError, "Improper number of dimensions to norm." 
- ### Determinant -def det(a, overwrite_a=0): +def det(a, overwrite_a=False): """Compute the determinant of a matrix Parameters @@ -479,85 +292,104 @@ """ a1 = asarray_chkfinite(a) if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]: - raise ValueError, 'expected square matrix' + raise ValueError('expected square matrix') overwrite_a = overwrite_a or (a1 is not a and not hasattr(a,'__array__')) - fdet, = get_flinalg_funcs(('det',),(a1,)) - a_det,info = fdet(a1,overwrite_a=overwrite_a) - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal det.getrf'%(-info) + fdet, = get_flinalg_funcs(('det',), (a1,)) + a_det, info = fdet(a1, overwrite_a=overwrite_a) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal ' + 'det.getrf' % -info) return a_det ### Linear Least Squares -def lstsq(a, b, cond=None, overwrite_a=0, overwrite_b=0): - """Compute least-squares solution to equation :m:`a x = b` +def lstsq(a, b, cond=None, overwrite_a=False, overwrite_b=False): + """ + Compute least-squares solution to equation Ax = b. - Compute a vector x such that the 2-norm :m:`|b - a x|` is minimised. + Compute a vector x such that the 2-norm ``|b - A x|`` is minimized. Parameters ---------- a : array, shape (M, N) + Left hand side matrix (2-D array). b : array, shape (M,) or (M, K) - cond : float + Right hand side matrix or vector (1-D or 2-D array). + cond : float, optional Cutoff for 'small' singular values; used to determine effective - rank of a. Singular values smaller than rcond*largest_singular_value - are considered zero. - overwrite_a : boolean - Discard data in a (may enhance performance) - overwrite_b : boolean - Discard data in b (may enhance performance) + rank of a. Singular values smaller than + ``rcond * largest_singular_value`` are considered zero. + overwrite_a : bool, optional + Discard data in `a` (may enhance performance). Default is False. + overwrite_b : bool, optional + Discard data in `b` (may enhance performance). Default is False. Returns ------- x : array, shape (N,) or (N, K) depending on shape of b - Least-squares solution - residues : array, shape () or (1,) or (K,) - Sums of residues, squared 2-norm for each column in :m:`b - a x` + Least-squares solution. + residues : ndarray, shape () or (1,) or (K,) + Sums of residues, squared 2-norm for each column in ``b - a x``. If rank of matrix a is < N or > M this is an empty array. - If b was 1-d, this is an (1,) shape array, otherwise the shape is (K,) - rank : integer - Effective rank of matrix a + If b was 1-D, this is an (1,) shape array, otherwise the shape is (K,). + rank : int + Effective rank of matrix `a`. s : array, shape (min(M,N),) - Singular values of a. The condition number of a is abs(s[0]/s[-1]). + Singular values of `a`. The condition number of a is + ``abs(s[0]/s[-1])``. + + Raises + ------ + LinAlgError : + If computation does not converge. 
- Raises LinAlgError if computation does not converge + + See Also + -------- + optimize.nnls : linear least squares with non-negativity constraint """ - a1, b1 = map(asarray_chkfinite,(a,b)) + a1, b1 = map(asarray_chkfinite, (a, b)) if len(a1.shape) != 2: raise ValueError, 'expected matrix' - m,n = a1.shape - if len(b1.shape)==2: nrhs = b1.shape[1] - else: nrhs = 1 + m, n = a1.shape + if len(b1.shape) == 2: + nrhs = b1.shape[1] + else: + nrhs = 1 if m != b1.shape[0]: - raise ValueError, 'incompatible dimensions' - gelss, = get_lapack_funcs(('gelss',),(a1,b1)) - if n>m: + raise ValueError('incompatible dimensions') + gelss, = get_lapack_funcs(('gelss',), (a1, b1)) + if n > m: # need to extend b matrix as it will be filled with # a larger solution matrix - b2 = zeros((n,nrhs), dtype=gelss.dtype) - if len(b1.shape)==2: b2[:m,:] = b1 - else: b2[:m,0] = b1 + b2 = zeros((n, nrhs), dtype=gelss.dtype) + if len(b1.shape) == 2: + b2[:m,:] = b1 + else: + b2[:m,0] = b1 b1 = b2 overwrite_a = overwrite_a or (a1 is not a and not hasattr(a,'__array__')) overwrite_b = overwrite_b or (b1 is not b and not hasattr(b,'__array__')) if gelss.module_name[:7] == 'flapack': - lwork = calc_lwork.gelss(gelss.prefix,m,n,nrhs)[1] - v,x,s,rank,info = gelss(a1,b1,cond = cond, - lwork = lwork, - overwrite_a = overwrite_a, - overwrite_b = overwrite_b) + lwork = calc_lwork.gelss(gelss.prefix, m, n, nrhs)[1] + v, x, s, rank, info = gelss(a1, b1, cond=cond, lwork=lwork, + overwrite_a=overwrite_a, + overwrite_b=overwrite_b) else: - raise NotImplementedError,'calling gelss from %s' % (gelss.module_name) - if info>0: raise LinAlgError, "SVD did not converge in Linear Least Squares" - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal gelss'%(-info) + raise NotImplementedError('calling gelss from %s' % gelss.module_name) + if info > 0: + raise LinAlgError("SVD did not converge in Linear Least Squares") + if info < 0: + raise ValueError('illegal value in %d-th argument of internal gelss' + % -info) resids = asarray([], dtype=x.dtype) - if n cutoff: psigma[i,i] = 1.0/conjugate(s[i]) #XXX: use lapack/blas routines for dot return transpose(conjugate(dot(dot(u,psigma),vh))) - -#----------------------------------------------------------------------------- -# matrix construction functions -#----------------------------------------------------------------------------- - -def tri(N, M=None, k=0, dtype=None): - """Construct (N, M) matrix filled with ones at and below the k-th diagonal. - - The matrix has A[i,j] == 1 for i <= j + k - - Parameters - ---------- - N : integer - M : integer - Size of the matrix. If M is None, M == N is assumed. - k : integer - Number of subdiagonal below which matrix is filled with ones. - k == 0 is the main diagonal, k < 0 subdiagonal and k > 0 superdiagonal. - dtype : dtype - Data type of the matrix. - - Returns - ------- - A : array, shape (N, M) - - Examples - -------- - >>> from scipy.linalg import tri - >>> tri(3, 5, 2, dtype=int) - array([[1, 1, 1, 0, 0], - [1, 1, 1, 1, 0], - [1, 1, 1, 1, 1]]) - >>> tri(3, 5, -1, dtype=int) - array([[0, 0, 0, 0, 0], - [1, 0, 0, 0, 0], - [1, 1, 0, 0, 0]]) - - """ - if M is None: M = N - if type(M) == type('d'): - #pearu: any objections to remove this feature? 
- # As tri(N,'d') is equivalent to tri(N,dtype='d') - dtype = M - M = N - m = greater_equal(subtract.outer(arange(N), arange(M)),-k) - if dtype is None: - return m - else: - return m.astype(dtype) - -def tril(m, k=0): - """Construct a copy of a matrix with elements above the k-th diagonal zeroed. - - Parameters - ---------- - m : array - Matrix whose elements to return - k : integer - Diagonal above which to zero elements. - k == 0 is the main diagonal, k < 0 subdiagonal and k > 0 superdiagonal. - - Returns - ------- - A : array, shape m.shape, dtype m.dtype - - Examples - -------- - >>> from scipy.linalg import tril - >>> tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) - array([[ 0, 0, 0], - [ 4, 0, 0], - [ 7, 8, 0], - [10, 11, 12]]) - - """ - svsp = getattr(m,'spacesaver',lambda:0)() - m = asarray(m) - out = tri(m.shape[0], m.shape[1], k=k, dtype=m.dtype.char)*m - pass ## pass ## out.savespace(svsp) - return out - -def triu(m, k=0): - """Construct a copy of a matrix with elements below the k-th diagonal zeroed. - - Parameters - ---------- - m : array - Matrix whose elements to return - k : integer - Diagonal below which to zero elements. - k == 0 is the main diagonal, k < 0 subdiagonal and k > 0 superdiagonal. - - Returns - ------- - A : array, shape m.shape, dtype m.dtype - - Examples - -------- - >>> from scipy.linalg import tril - >>> triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) - array([[ 1, 2, 3], - [ 4, 5, 6], - [ 0, 8, 9], - [ 0, 0, 12]]) - - """ - svsp = getattr(m,'spacesaver',lambda:0)() - m = asarray(m) - out = (1-tri(m.shape[0], m.shape[1], k-1, m.dtype.char))*m - pass ## pass ## out.savespace(svsp) - return out - -def toeplitz(c,r=None): - """Construct a Toeplitz matrix. - - The Toepliz matrix has constant diagonals, c as its first column, - and r as its first row (if not given, r == c is assumed). - - Parameters - ---------- - c : array - First column of the matrix - r : array - First row of the matrix. If None, r == c is assumed. - - Returns - ------- - A : array, shape (len(c), len(r)) - Constructed Toeplitz matrix. - dtype is the same as (c[0] + r[0]).dtype - - Examples - -------- - >>> from scipy.linalg import toeplitz - >>> toeplitz([1,2,3], [1,4,5,6]) - array([[1, 4, 5, 6], - [2, 1, 4, 5], - [3, 2, 1, 4]]) - - See also - -------- - hankel : Hankel matrix - - """ - isscalar = numpy.isscalar - if isscalar(c) or isscalar(r): - return c - if r is None: - r = c - r[0] = conjugate(r[0]) - c = conjugate(c) - r,c = map(asarray_chkfinite,(r,c)) - r,c = map(ravel,(r,c)) - rN,cN = map(len,(r,c)) - if r[0] != c[0]: - print "Warning: column and row values don't agree; column value used." - vals = r_[r[rN-1:0:-1], c] - cols = mgrid[0:cN] - rows = mgrid[rN:0:-1] - indx = cols[:,newaxis]*ones((1,rN),dtype=int) + \ - rows[newaxis,:]*ones((cN,1),dtype=int) - 1 - return take(vals, indx, 0) - - -def hankel(c,r=None): - """Construct a Hankel matrix. - - The Hankel matrix has constant anti-diagonals, c as its first column, - and r as its last row (if not given, r == 0 os assumed). - - Parameters - ---------- - c : array - First column of the matrix - r : array - Last row of the matrix. If None, r == 0 is assumed. - - Returns - ------- - A : array, shape (len(c), len(r)) - Constructed Hankel matrix. 
- dtype is the same as (c[0] + r[0]).dtype - - Examples - -------- - >>> from scipy.linalg import hankel - >>> hankel([1,2,3,4], [4,7,7,8,9]) - array([[1, 2, 3, 4, 7], - [2, 3, 4, 7, 7], - [3, 4, 7, 7, 8], - [4, 7, 7, 8, 9]]) - - See also - -------- - toeplitz : Toeplitz matrix - - """ - isscalar = numpy.isscalar - if isscalar(c) or isscalar(r): - return c - if r is None: - r = zeros(len(c)) - elif r[0] != c[-1]: - print "Warning: column and row values don't agree; column value used." - r,c = map(asarray_chkfinite,(r,c)) - r,c = map(ravel,(r,c)) - rN,cN = map(len,(r,c)) - vals = r_[c, r[1:rN]] - cols = mgrid[1:cN+1] - rows = mgrid[0:rN] - indx = cols[:,newaxis]*ones((1,rN),dtype=int) + \ - rows[newaxis,:]*ones((cN,1),dtype=int) - 1 - return take(vals, indx, 0) - -def all_mat(*args): - return map(Matrix,args) - -def kron(a,b): - """Kronecker product of a and b. - - The result is the block matrix:: - - a[0,0]*b a[0,1]*b ... a[0,-1]*b - a[1,0]*b a[1,1]*b ... a[1,-1]*b - ... - a[-1,0]*b a[-1,1]*b ... a[-1,-1]*b - - Parameters - ---------- - a : array, shape (M, N) - b : array, shape (P, Q) - - Returns - ------- - A : array, shape (M*P, N*Q) - Kronecker product of a and b - - Examples - -------- - >>> from scipy import kron, array - >>> kron(array([[1,2],[3,4]]), array([[1,1,1]])) - array([[1, 1, 1, 2, 2, 2], - [3, 3, 3, 4, 4, 4]]) - - """ - if not a.flags['CONTIGUOUS']: - a = reshape(a, a.shape) - if not b.flags['CONTIGUOUS']: - b = reshape(b, b.shape) - o = outer(a,b) - o=o.reshape(a.shape + b.shape) - return concatenate(concatenate(o, axis=1), axis=1) diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/blas.py python-scipy-0.8.0+dfsg1/scipy/linalg/blas.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/blas.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/blas.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 18, 2005 by - # # Author: Pearu Peterson, March 2002 # diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_cholesky.py python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_cholesky.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_cholesky.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_cholesky.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,243 @@ +"""Cholesky decomposition functions.""" + +from numpy import asarray_chkfinite + +# Local imports +from misc import LinAlgError, _datanotshared +from lapack import get_lapack_funcs + +__all__ = ['cholesky', 'cho_factor', 'cho_solve', 'cholesky_banded', + 'cho_solve_banded'] + + +def _cholesky(a, lower=False, overwrite_a=False, clean=True): + """Common code for cholesky() and cho_factor().""" + + a1 = asarray_chkfinite(a) + if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]: + raise ValueError('expected square matrix') + + overwrite_a = overwrite_a or _datanotshared(a1, a) + potrf, = get_lapack_funcs(('potrf',), (a1,)) + c, info = potrf(a1, lower=lower, overwrite_a=overwrite_a, clean=clean) + if info > 0: + raise LinAlgError("%d-th leading minor not positive definite" % info) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal potrf' + % -info) + return c, lower + +def cholesky(a, lower=False, overwrite_a=False): + """Compute the Cholesky decomposition of a matrix. + + Returns the Cholesky decomposition, :lm:`A = L L^*` or :lm:`A = U^* U` + of a Hermitian positive-definite matrix :lm:`A`. 
+ + Parameters + ---------- + a : array, shape (M, M) + Matrix to be decomposed + lower : boolean + Whether to compute the upper or lower triangular Cholesky factorization + (Default: upper-triangular) + overwrite_a : boolean + Whether to overwrite data in a (may improve performance) + + Returns + ------- + c : array, shape (M, M) + Upper- or lower-triangular Cholesky factor of A + + Raises LinAlgError if decomposition fails + + Examples + -------- + >>> from scipy import array, linalg, dot + >>> a = array([[1,-2j],[2j,5]]) + >>> L = linalg.cholesky(a, lower=True) + >>> L + array([[ 1.+0.j, 0.+0.j], + [ 0.+2.j, 1.+0.j]]) + >>> dot(L, L.T.conj()) + array([[ 1.+0.j, 0.-2.j], + [ 0.+2.j, 5.+0.j]]) + + """ + c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True) + return c + + +def cho_factor(a, lower=False, overwrite_a=False): + """Compute the Cholesky decomposition of a matrix, to use in cho_solve + + Returns a matrix containing the Cholesky decomposition, + ``A = L L*`` or ``A = U* U`` of a Hermitian positive-definite matrix `a`. + The return value can be directly used as the first parameter to cho_solve. + + .. warning:: + The returned matrix also contains random data in the entries not + used by the Cholesky decomposition. If you need to zero these + entries, use the function `cholesky` instead. + + Parameters + ---------- + a : array, shape (M, M) + Matrix to be decomposed + lower : boolean + Whether to compute the upper or lower triangular Cholesky factorization + (Default: upper-triangular) + overwrite_a : boolean + Whether to overwrite data in a (may improve performance) + + Returns + ------- + c : array, shape (M, M) + Matrix whose upper or lower triangle contains the Cholesky factor + of `a`. Other parts of the matrix contain random data. + lower : boolean + Flag indicating whether the factor is in the lower or upper triangle + + Raises + ------ + LinAlgError + Raised if decomposition fails. + + See also + -------- + cho_solve : Solve a linear set equations using the Cholesky factorization + of a matrix. + + """ + c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=False) + return c, lower + + +def cho_solve((c, lower), b, overwrite_b=False): + """Solve the linear equations A x = b, given the Cholesky factorization of A. 
+ + Parameters + ---------- + (c, lower) : tuple, (array, bool) + Cholesky factorization of a, as given by cho_factor + b : array + Right-hand side + + Returns + ------- + x : array + The solution to the system A x = b + + See also + -------- + cho_factor : Cholesky factorization of a matrix + + """ + + b1 = asarray_chkfinite(b) + c = asarray_chkfinite(c) + if c.ndim != 2 or c.shape[0] != c.shape[1]: + raise ValueError("The factored matrix c is not square.") + if c.shape[1] != b1.shape[0]: + raise ValueError("incompatible dimensions.") + + overwrite_b = overwrite_b or (b1 is not b and not hasattr(b,'__array__')) + + potrs, = get_lapack_funcs(('potrs',), (c, b1)) + x, info = potrs(c, b1, lower=lower, overwrite_b=overwrite_b) + if info != 0: + raise ValueError('illegal value in %d-th argument of internal potrs' + % -info) + return x + +def cholesky_banded(ab, overwrite_ab=False, lower=False): + """Cholesky decompose a banded Hermitian positive-definite matrix + + The matrix a is stored in ab either in lower diagonal or upper + diagonal ordered form: + + ab[u + i - j, j] == a[i,j] (if upper form; i <= j) + ab[ i - j, j] == a[i,j] (if lower form; i >= j) + + Example of ab (shape of a is (6,6), u=2):: + + upper form: + * * a02 a13 a24 a35 + * a01 a12 a23 a34 a45 + a00 a11 a22 a33 a44 a55 + + lower form: + a00 a11 a22 a33 a44 a55 + a10 a21 a32 a43 a54 * + a20 a31 a42 a53 * * + + Parameters + ---------- + ab : array, shape (u + 1, M) + Banded matrix + overwrite_ab : boolean + Discard data in ab (may enhance performance) + lower : boolean + Is the matrix in the lower form. (Default is upper form) + + Returns + ------- + c : array, shape (u+1, M) + Cholesky factorization of a, in the same banded format as ab + + """ + ab = asarray_chkfinite(ab) + + pbtrf, = get_lapack_funcs(('pbtrf',), (ab,)) + c, info = pbtrf(ab, lower=lower, overwrite_ab=overwrite_ab) + if info > 0: + raise LinAlgError("%d-th leading minor not positive definite" % info) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal pbtrf' + % -info) + return c + + +def cho_solve_banded((cb, lower), b, overwrite_b=False): + """Solve the linear equations A x = b, given the Cholesky factorization of A. + + Parameters + ---------- + (cb, lower) : tuple, (array, bool) + `cb` is the Cholesky factorization of A, as given by cholesky_banded. + `lower` must be the same value that was given to cholesky_banded. + b : array + Right-hand side + overwrite_b : bool + If True, the function will overwrite the values in `b`. + + Returns + ------- + x : array + The solution to the system A x = b + + See also + -------- + cholesky_banded : Cholesky factorization of a banded matrix + + Notes + ----- + + .. versionadded:: 0.8.0 + + """ + + cb = asarray_chkfinite(cb) + b = asarray_chkfinite(b) + + # Validate shapes. 
+ if cb.shape[-1] != b.shape[0]: + raise ValueError("shapes of cb and b are not compatible.") + + pbtrs, = get_lapack_funcs(('pbtrs',), (cb, b)) + x, info = pbtrs(cb, b, lower=lower, overwrite_b=overwrite_b) + if info > 0: + raise LinAlgError("%d-th leading minor not positive definite" % info) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal pbtrs' + % -info) + return x diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_lu.py python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_lu.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_lu.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_lu.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,159 @@ +"""LU decomposition functions.""" + +from warnings import warn + +from numpy import asarray, asarray_chkfinite + +# Local imports +from misc import _datanotshared +from lapack import get_lapack_funcs +from flinalg import get_flinalg_funcs + + +def lu_factor(a, overwrite_a=False): + """Compute pivoted LU decomposition of a matrix. + + The decomposition is:: + + A = P L U + + where P is a permutation matrix, L lower triangular with unit + diagonal elements, and U upper triangular. + + Parameters + ---------- + a : array, shape (M, M) + Matrix to decompose + overwrite_a : boolean + Whether to overwrite data in A (may increase performance) + + Returns + ------- + lu : array, shape (N, N) + Matrix containing U in its upper triangle, and L in its lower triangle. + The unit diagonal elements of L are not stored. + piv : array, shape (N,) + Pivot indices representing the permutation matrix P: + row i of matrix was interchanged with row piv[i]. + + See also + -------- + lu_solve : solve an equation system using the LU factorization of a matrix + + Notes + ----- + This is a wrapper to the *GETRF routines from LAPACK. + + """ + a1 = asarray(a) + if len(a1.shape) != 2 or (a1.shape[0] != a1.shape[1]): + raise ValueError, 'expected square matrix' + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + getrf, = get_lapack_funcs(('getrf',), (a1,)) + lu, piv, info = getrf(a, overwrite_a=overwrite_a) + if info < 0: + raise ValueError('illegal value in %d-th argument of ' + 'internal getrf (lu_factor)' % -info) + if info > 0: + warn("Diagonal number %d is exactly zero. Singular matrix." % info, + RuntimeWarning) + return lu, piv + + +def lu_solve((lu, piv), b, trans=0, overwrite_b=False): + """Solve an equation system, a x = b, given the LU factorization of a + + Parameters + ---------- + (lu, piv) + Factorization of the coefficient matrix a, as given by lu_factor + b : array + Right-hand side + trans : {0, 1, 2} + Type of system to solve: + + ===== ========= + trans system + ===== ========= + 0 a x = b + 1 a^T x = b + 2 a^H x = b + ===== ========= + + Returns + ------- + x : array + Solution to the system + + See also + -------- + lu_factor : LU factorize a matrix + + """ + b1 = asarray_chkfinite(b) + overwrite_b = overwrite_b or (b1 is not b and not hasattr(b, '__array__')) + if lu.shape[0] != b1.shape[0]: + raise ValueError("incompatible dimensions.") + + getrs, = get_lapack_funcs(('getrs',), (lu, b1)) + x,info = getrs(lu, piv, b1, trans=trans, overwrite_b=overwrite_b) + if info == 0: + return x + raise ValueError('illegal value in %d-th argument of internal gesv|posv' + % -info) + + +def lu(a, permute_l=False, overwrite_a=False): + """Compute pivoted LU decompostion of a matrix. 
+ + The decomposition is:: + + A = P L U + + where P is a permutation matrix, L lower triangular with unit + diagonal elements, and U upper triangular. + + Parameters + ---------- + a : array, shape (M, N) + Array to decompose + permute_l : boolean + Perform the multiplication P*L (Default: do not permute) + overwrite_a : boolean + Whether to overwrite data in a (may improve performance) + + Returns + ------- + (If permute_l == False) + p : array, shape (M, M) + Permutation matrix + l : array, shape (M, K) + Lower triangular or trapezoidal matrix with unit diagonal. + K = min(M, N) + u : array, shape (K, N) + Upper triangular or trapezoidal matrix + + (If permute_l == True) + pl : array, shape (M, K) + Permuted L matrix. + K = min(M, N) + u : array, shape (K, N) + Upper triangular or trapezoidal matrix + + Notes + ----- + This is a LU factorization routine written for Scipy. + + """ + a1 = asarray_chkfinite(a) + if len(a1.shape) != 2: + raise ValueError('expected matrix') + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + flu, = get_flinalg_funcs(('lu',), (a1,)) + p, l, u, info = flu(a1, permute_l=permute_l, overwrite_a=overwrite_a) + if info < 0: + raise ValueError('illegal value in %d-th argument of ' + 'internal lu.getrf' % -info) + if permute_l: + return l, u + return p, l, u diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/decomp.py python-scipy-0.8.0+dfsg1/scipy/linalg/decomp.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/decomp.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/decomp.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 18, 2005 by - # # Author: Pearu Peterson, March 2002 # @@ -9,81 +7,70 @@ # additions by Bart Vandereycken, June 2006 # additions by Andrew D Straw, May 2007 # additions by Tiziano Zito, November 2008 - +# +# April 2010: Functions for LU, QR, SVD, Schur and Cholesky decompositions were +# moved to their own files. Still in this file are functions for eigenstuff +# and for the Hessenberg form. 
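Before the eigenvalue routines that remain in this file, a minimal sketch of the lu_factor/lu_solve pair now living in decomp_lu.py above; the 2x2 system below is illustrative only:

    >>> import numpy as np
    >>> from scipy.linalg import lu_factor, lu_solve
    >>> A = np.array([[2.0, 1.0],
    ...               [1.0, 3.0]])
    >>> lu_piv = lu_factor(A)                 # (lu, piv), as returned by the *GETRF wrapper
    >>> x = lu_solve(lu_piv, np.array([3.0, 5.0]))
    >>> np.allclose(A.dot(x), [3.0, 5.0])
    True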
+ __all__ = ['eig','eigh','eig_banded','eigvals','eigvalsh', 'eigvals_banded', - 'lu','svd','svdvals','diagsvd','cholesky','qr','qr_old','rq', - 'schur','rsf2csf','lu_factor','cho_factor','cho_solve','orth', 'hessenberg'] -from basic import LinAlgError -import basic - -from warnings import warn -from lapack import get_lapack_funcs, find_best_lapack_type -from blas import get_blas_funcs -from flinalg import get_flinalg_funcs -from scipy.linalg import calc_lwork import numpy from numpy import array, asarray_chkfinite, asarray, diag, zeros, ones, \ - single, isfinite, inexact, complexfloating, nonzero, iscomplexobj + isfinite, inexact, nonzero, iscomplexobj, cast + +# Local imports +from scipy.linalg import calc_lwork +from misc import LinAlgError, _datanotshared +from lapack import get_lapack_funcs +from blas import get_blas_funcs -cast = numpy.cast -r_ = numpy.r_ _I = cast['F'](1j) -def _make_complex_eigvecs(w,vin,cmplx_tcode): - v = numpy.array(vin,dtype=cmplx_tcode) + +def _make_complex_eigvecs(w, vin, cmplx_tcode): + v = numpy.array(vin, dtype=cmplx_tcode) #ind = numpy.flatnonzero(numpy.not_equal(w.imag,0.0)) - ind = numpy.flatnonzero(numpy.logical_and(numpy.not_equal(w.imag,0.0), + ind = numpy.flatnonzero(numpy.logical_and(numpy.not_equal(w.imag, 0.0), numpy.isfinite(w))) - vnew = numpy.zeros((v.shape[0],len(ind)>>1),cmplx_tcode) - vnew.real = numpy.take(vin,ind[::2],1) - vnew.imag = numpy.take(vin,ind[1::2],1) + vnew = numpy.zeros((v.shape[0], len(ind)>>1), cmplx_tcode) + vnew.real = numpy.take(vin, ind[::2],1) + vnew.imag = numpy.take(vin, ind[1::2],1) count = 0 conj = numpy.conjugate for i in range(len(ind)/2): - v[:,ind[2*i]] = vnew[:,count] - v[:,ind[2*i+1]] = conj(vnew[:,count]) + v[:, ind[2*i]] = vnew[:, count] + v[:, ind[2*i+1]] = conj(vnew[:, count]) count += 1 return v - - -def _datanotshared(a1,a): - if a1 is a: - return False - else: - #try comparing data pointers - try: - return a1.__array_interface__['data'][0] != a.__array_interface__['data'][0] - except: - return True - - -def _geneig(a1,b,left,right,overwrite_a,overwrite_b): +def _geneig(a1, b, left, right, overwrite_a, overwrite_b): b1 = asarray(b) - overwrite_b = overwrite_b or _datanotshared(b1,b) + overwrite_b = overwrite_b or _datanotshared(b1, b) if len(b1.shape) != 2 or b1.shape[0] != b1.shape[1]: - raise ValueError, 'expected square matrix' - ggev, = get_lapack_funcs(('ggev',),(a1,b1)) - cvl,cvr = left,right + raise ValueError('expected square matrix') + ggev, = get_lapack_funcs(('ggev',), (a1, b1)) + cvl, cvr = left, right if ggev.module_name[:7] == 'clapack': - raise NotImplementedError,'calling ggev from %s' % (ggev.module_name) - res = ggev(a1,b1,lwork=-1) + raise NotImplementedError('calling ggev from %s' % ggev.module_name) + res = ggev(a1, b1, lwork=-1) lwork = res[-2][0] if ggev.prefix in 'cz': - alpha,beta,vl,vr,work,info = ggev(a1,b1,cvl,cvr,lwork, - overwrite_a,overwrite_b) + alpha, beta, vl, vr, work, info = ggev(a1, b1, cvl, cvr, lwork, + overwrite_a, overwrite_b) w = alpha / beta else: - alphar,alphai,beta,vl,vr,work,info = ggev(a1,b1,cvl,cvr,lwork, - overwrite_a,overwrite_b) - w = (alphar+_I*alphai)/beta - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal ggev'%(-info) - if info>0: raise LinAlgError,"generalized eig algorithm did not converge" + alphar, alphai, beta, vl, vr, work, info = ggev(a1, b1, cvl, cvr, lwork, + overwrite_a,overwrite_b) + w = (alphar + _I * alphai) / beta + if info < 0: + raise ValueError('illegal value in %d-th argument of internal ggev' + % 
-info) + if info > 0: + raise LinAlgError("generalized eig algorithm did not converge (info=%d)" + % info) - only_real = numpy.logical_and.reduce(numpy.equal(w.imag,0.0)) + only_real = numpy.logical_and.reduce(numpy.equal(w.imag, 0.0)) if not (ggev.prefix in 'cz' or only_real): t = w.dtype.char if left: @@ -98,7 +85,7 @@ return w, vl return w, vr -def eig(a,b=None, left=False, right=True, overwrite_a=False, overwrite_b=False): +def eig(a, b=None, left=False, right=True, overwrite_a=False, overwrite_b=False): """Solve an ordinary or generalized eigenvalue problem of a square matrix. Find eigenvalues w and right or left eigenvectors of a general matrix:: @@ -150,46 +137,51 @@ """ a1 = asarray_chkfinite(a) if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]: - raise ValueError, 'expected square matrix' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) + raise ValueError('expected square matrix') + overwrite_a = overwrite_a or (_datanotshared(a1, a)) if b is not None: b = asarray_chkfinite(b) - return _geneig(a1,b,left,right,overwrite_a,overwrite_b) - geev, = get_lapack_funcs(('geev',),(a1,)) - compute_vl,compute_vr=left,right + if b.shape != a1.shape: + raise ValueError('a and b must have the same shape') + return _geneig(a1, b, left, right, overwrite_a, overwrite_b) + geev, = get_lapack_funcs(('geev',), (a1,)) + compute_vl, compute_vr = left, right if geev.module_name[:7] == 'flapack': - lwork = calc_lwork.geev(geev.prefix,a1.shape[0], - compute_vl,compute_vr)[1] + lwork = calc_lwork.geev(geev.prefix, a1.shape[0], + compute_vl, compute_vr)[1] if geev.prefix in 'cz': - w,vl,vr,info = geev(a1,lwork = lwork, - compute_vl=compute_vl, - compute_vr=compute_vr, - overwrite_a=overwrite_a) - else: - wr,wi,vl,vr,info = geev(a1,lwork = lwork, - compute_vl=compute_vl, - compute_vr=compute_vr, - overwrite_a=overwrite_a) + w, vl, vr, info = geev(a1, lwork=lwork, + compute_vl=compute_vl, + compute_vr=compute_vr, + overwrite_a=overwrite_a) + else: + wr, wi, vl, vr, info = geev(a1, lwork=lwork, + compute_vl=compute_vl, + compute_vr=compute_vr, + overwrite_a=overwrite_a) t = {'f':'F','d':'D'}[wr.dtype.char] - w = wr+_I*wi + w = wr + _I * wi else: # 'clapack' if geev.prefix in 'cz': - w,vl,vr,info = geev(a1, - compute_vl=compute_vl, - compute_vr=compute_vr, - overwrite_a=overwrite_a) - else: - wr,wi,vl,vr,info = geev(a1, + w, vl, vr, info = geev(a1, compute_vl=compute_vl, compute_vr=compute_vr, overwrite_a=overwrite_a) + else: + wr, wi, vl, vr, info = geev(a1, + compute_vl=compute_vl, + compute_vr=compute_vr, + overwrite_a=overwrite_a) t = {'f':'F','d':'D'}[wr.dtype.char] - w = wr+_I*wi - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal geev'%(-info) - if info>0: raise LinAlgError,"eig algorithm did not converge" + w = wr + _I * wi + if info < 0: + raise ValueError('illegal value in %d-th argument of internal geev' + % -info) + if info > 0: + raise LinAlgError("eig algorithm did not converge (only eigenvalues " + "with order >= %d have converged)" % info) - only_real = numpy.logical_and.reduce(numpy.equal(w.imag,0.0)) + only_real = numpy.logical_and.reduce(numpy.equal(w.imag, 0.0)) if not (geev.prefix in 'cz' or only_real): t = w.dtype.char if left: @@ -275,17 +267,17 @@ """ a1 = asarray_chkfinite(a) if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]: - raise ValueError, 'expected square matrix' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) + raise ValueError('expected square matrix') + overwrite_a = overwrite_a or (_datanotshared(a1, a)) if iscomplexobj(a1): cplx = True 
else: cplx = False if b is not None: b1 = asarray_chkfinite(b) - overwrite_b = overwrite_b or _datanotshared(b1,b) + overwrite_b = overwrite_b or _datanotshared(b1, b) if len(b1.shape) != 2 or b1.shape[0] != b1.shape[1]: - raise ValueError, 'expected square matrix' + raise ValueError('expected square matrix') if b1.shape != a1.shape: raise ValueError("wrong b dimensions %s, should " @@ -393,9 +385,9 @@ " and no eigenvalues or eigenvectors were" " computed." % (info-b1.shape[0])) -def eig_banded(a_band, lower=0, eigvals_only=0, overwrite_a_band=0, +def eig_banded(a_band, lower=False, eigvals_only=False, overwrite_a_band=False, select='a', select_range=None, max_ev = 0): - """Solve real symmetric or complex hermetian band matrix eigenvalue problem. + """Solve real symmetric or complex hermitian band matrix eigenvalue problem. Find eigenvalues w and optionally right eigenvectors v of a:: @@ -466,34 +458,34 @@ """ if eigvals_only or overwrite_a_band: a1 = asarray_chkfinite(a_band) - overwrite_a_band = overwrite_a_band or (_datanotshared(a1,a_band)) + overwrite_a_band = overwrite_a_band or (_datanotshared(a1, a_band)) else: a1 = array(a_band) if issubclass(a1.dtype.type, inexact) and not isfinite(a1).all(): - raise ValueError, "array must not contain infs or NaNs" + raise ValueError("array must not contain infs or NaNs") overwrite_a_band = 1 if len(a1.shape) != 2: - raise ValueError, 'expected two-dimensional array' + raise ValueError('expected two-dimensional array') if select.lower() not in [0, 1, 2, 'a', 'v', 'i', 'all', 'value', 'index']: - raise ValueError, 'invalid argument for select' + raise ValueError('invalid argument for select') if select.lower() in [0, 'a', 'all']: if a1.dtype.char in 'GFD': - bevd, = get_lapack_funcs(('hbevd',),(a1,)) + bevd, = get_lapack_funcs(('hbevd',), (a1,)) # FIXME: implement this somewhen, for now go with builtin values # FIXME: calc optimal lwork by calling ?hbevd(lwork=-1) # or by using calc_lwork.f ??? # lwork = calc_lwork.hbevd(bevd.prefix, a1.shape[0], lower) internal_name = 'hbevd' else: # a1.dtype.char in 'fd': - bevd, = get_lapack_funcs(('sbevd',),(a1,)) + bevd, = get_lapack_funcs(('sbevd',), (a1,)) # FIXME: implement this somewhen, for now go with builtin values # see above # lwork = calc_lwork.sbevd(bevd.prefix, a1.shape[0], lower) internal_name = 'sbevd' - w,v,info = bevd(a1, compute_v = not eigvals_only, - lower = lower, - overwrite_ab = overwrite_a_band) + w,v,info = bevd(a1, compute_v=not eigvals_only, + lower=lower, + overwrite_ab=overwrite_a_band) if select.lower() in [1, 2, 'i', 'v', 'index', 'value']: # calculate certain range only if select.lower() in [2, 'i', 'index']: @@ -511,37 +503,39 @@ max_ev = 1 # calculate optimal abstol for dsbevx (see manpage) if a1.dtype.char in 'fF': # single precision - lamch, = get_lapack_funcs(('lamch',),(array(0, dtype='f'),)) + lamch, = get_lapack_funcs(('lamch',), (array(0, dtype='f'),)) else: - lamch, = get_lapack_funcs(('lamch',),(array(0, dtype='d'),)) + lamch, = get_lapack_funcs(('lamch',), (array(0, dtype='d'),)) abstol = 2 * lamch('s') if a1.dtype.char in 'GFD': - bevx, = get_lapack_funcs(('hbevx',),(a1,)) + bevx, = get_lapack_funcs(('hbevx',), (a1,)) internal_name = 'hbevx' else: # a1.dtype.char in 'gfd' - bevx, = get_lapack_funcs(('sbevx',),(a1,)) + bevx, = get_lapack_funcs(('sbevx',), (a1,)) internal_name = 'sbevx' # il+1, iu+1: translate python indexing (0 ... N-1) into Fortran # indexing (1 ... 
N) w, v, m, ifail, info = bevx(a1, vl, vu, il+1, iu+1, - compute_v = not eigvals_only, - mmax = max_ev, - range = select, lower = lower, - overwrite_ab = overwrite_a_band, + compute_v=not eigvals_only, + mmax=max_ev, + range=select, lower=lower, + overwrite_ab=overwrite_a_band, abstol=abstol) # crop off w and v w = w[:m] if not eigvals_only: v = v[:, :m] - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal %s'%(-info, internal_name) - if info>0: raise LinAlgError,"eig algorithm did not converge" + if info < 0: + raise ValueError('illegal value in %d-th argument of internal %s' + % (-info, internal_name)) + if info > 0: + raise LinAlgError("eig algorithm did not converge") if eigvals_only: return w return w, v -def eigvals(a,b=None,overwrite_a=0): +def eigvals(a, b=None, overwrite_a=False): """Compute eigenvalues from an ordinary or generalized eigenvalue problem. Find eigenvalues of a general matrix:: @@ -574,7 +568,7 @@ eigh : eigenvalues and eigenvectors of symmetric/Hermitean arrays. """ - return eig(a,b=b,left=0,right=0,overwrite_a=overwrite_a) + return eig(a, b=b, left=0, right=0, overwrite_a=overwrite_a) def eigvalsh(a, b=None, lower=True, overwrite_a=False, overwrite_b=False, turbo=True, eigvals=None, type=1): @@ -638,7 +632,7 @@ overwrite_a=overwrite_a, overwrite_b=overwrite_b, turbo=turbo, eigvals=eigvals, type=type) -def eigvals_banded(a_band,lower=0,overwrite_a_band=0, +def eigvals_banded(a_band, lower=False, overwrite_a_band=False, select='a', select_range=None): """Solve real symmetric or complex hermitian band matrix eigenvalue problem. @@ -704,833 +698,14 @@ eig : eigenvalues and right eigenvectors for non-symmetric arrays """ - return eig_banded(a_band,lower=lower,eigvals_only=1, + return eig_banded(a_band, lower=lower, eigvals_only=1, overwrite_a_band=overwrite_a_band, select=select, select_range=select_range) -def lu_factor(a, overwrite_a=0): - """Compute pivoted LU decomposition of a matrix. - - The decomposition is:: - - A = P L U - - where P is a permutation matrix, L lower triangular with unit - diagonal elements, and U upper triangular. - - Parameters - ---------- - a : array, shape (M, M) - Matrix to decompose - overwrite_a : boolean - Whether to overwrite data in A (may increase performance) - - Returns - ------- - lu : array, shape (N, N) - Matrix containing U in its upper triangle, and L in its lower triangle. - The unit diagonal elements of L are not stored. - piv : array, shape (N,) - Pivot indices representing the permutation matrix P: - row i of matrix was interchanged with row piv[i]. - - See also - -------- - lu_solve : solve an equation system using the LU factorization of a matrix - - Notes - ----- - This is a wrapper to the *GETRF routines from LAPACK. - - """ - a1 = asarray(a) - if len(a1.shape) != 2 or (a1.shape[0] != a1.shape[1]): - raise ValueError, 'expected square matrix' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - getrf, = get_lapack_funcs(('getrf',),(a1,)) - lu, piv, info = getrf(a,overwrite_a=overwrite_a) - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal getrf (lu_factor)'%(-info) - if info>0: warn("Diagonal number %d is exactly zero. Singular matrix." 
% info, - RuntimeWarning) - return lu, piv - -def lu_solve(a_lu_pivots,b): - """Solve an equation system, a x = b, given the LU factorization of a - - Parameters - ---------- - (lu, piv) - Factorization of the coefficient matrix a, as given by lu_factor - b : array - Right-hand side - - Returns - ------- - x : array - Solution to the system - - See also - -------- - lu_factor : LU factorize a matrix - - """ - a_lu, pivots = a_lu_pivots - a_lu = asarray_chkfinite(a_lu) - pivots = asarray_chkfinite(pivots) - b = asarray_chkfinite(b) - _assert_squareness(a_lu) - - getrs, = get_lapack_funcs(('getrs',),(a_lu,)) - b, info = getrs(a_lu,pivots,b) - if info < 0: - msg = "Argument %d to lapack's ?getrs() has an illegal value." % info - raise TypeError, msg - if info > 0: - msg = "Unknown error occured int ?getrs(): error code = %d" % info - raise TypeError, msg - return b - - -def lu(a,permute_l=0,overwrite_a=0): - """Compute pivoted LU decompostion of a matrix. - - The decomposition is:: - - A = P L U - - where P is a permutation matrix, L lower triangular with unit - diagonal elements, and U upper triangular. - - Parameters - ---------- - a : array, shape (M, N) - Array to decompose - permute_l : boolean - Perform the multiplication P*L (Default: do not permute) - overwrite_a : boolean - Whether to overwrite data in a (may improve performance) - - Returns - ------- - (If permute_l == False) - p : array, shape (M, M) - Permutation matrix - l : array, shape (M, K) - Lower triangular or trapezoidal matrix with unit diagonal. - K = min(M, N) - u : array, shape (K, N) - Upper triangular or trapezoidal matrix - - (If permute_l == True) - pl : array, shape (M, K) - Permuted L matrix. - K = min(M, N) - u : array, shape (K, N) - Upper triangular or trapezoidal matrix - - Notes - ----- - This is a LU factorization routine written for Scipy. - - """ - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2: - raise ValueError, 'expected matrix' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - flu, = get_flinalg_funcs(('lu',),(a1,)) - p,l,u,info = flu(a1,permute_l=permute_l,overwrite_a = overwrite_a) - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal lu.getrf'%(-info) - if permute_l: - return l,u - return p,l,u - -def svd(a,full_matrices=1,compute_uv=1,overwrite_a=0): - """Singular Value Decomposition. - - Factorizes the matrix a into two unitary matrices U and Vh and - an 1d-array s of singular values (real, non-negative) such that - a == U S Vh if S is an suitably shaped matrix of zeros whose - main diagonal is s. - - Parameters - ---------- - a : array, shape (M, N) - Matrix to decompose - full_matrices : boolean - If true, U, Vh are shaped (M,M), (N,N) - If false, the shapes are (M,K), (K,N) where K = min(M,N) - compute_uv : boolean - Whether to compute also U, Vh in addition to s (Default: true) - overwrite_a : boolean - Whether data in a is overwritten (may improve performance) - - Returns - ------- - U: array, shape (M,M) or (M,K) depending on full_matrices - s: array, shape (K,) - The singular values, sorted so that s[i] >= s[i+1]. K = min(M, N) - Vh: array, shape (N,N) or (K,N) depending on full_matrices - - For compute_uv = False, only s is returned. 
- - Raises LinAlgError if SVD computation does not converge - - Examples - -------- - >>> from scipy import random, linalg, allclose, dot - >>> a = random.randn(9, 6) + 1j*random.randn(9, 6) - >>> U, s, Vh = linalg.svd(a) - >>> U.shape, Vh.shape, s.shape - ((9, 9), (6, 6), (6,)) - - >>> U, s, Vh = linalg.svd(a, full_matrices=False) - >>> U.shape, Vh.shape, s.shape - ((9, 6), (6, 6), (6,)) - >>> S = linalg.diagsvd(s, 6, 6) - >>> allclose(a, dot(U, dot(S, Vh))) - True - - >>> s2 = linalg.svd(a, compute_uv=False) - >>> allclose(s, s2) - True - - See also - -------- - svdvals : return singular values of a matrix - diagsvd : return the Sigma matrix, given the vector s - - """ - # A hack until full_matrices == 0 support is fixed here. - if full_matrices == 0: - import numpy.linalg - return numpy.linalg.svd(a, full_matrices=0, compute_uv=compute_uv) - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2: - raise ValueError, 'expected matrix' - m,n = a1.shape - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - gesdd, = get_lapack_funcs(('gesdd',),(a1,)) - if gesdd.module_name[:7] == 'flapack': - lwork = calc_lwork.gesdd(gesdd.prefix,m,n,compute_uv)[1] - u,s,v,info = gesdd(a1,compute_uv = compute_uv, lwork = lwork, - overwrite_a = overwrite_a) - else: # 'clapack' - raise NotImplementedError,'calling gesdd from %s' % (gesdd.module_name) - if info>0: raise LinAlgError, "SVD did not converge" - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal gesdd'%(-info) - if compute_uv: - return u,s,v - else: - return s - -def svdvals(a,overwrite_a=0): - """Compute singular values of a matrix. - - Parameters - ---------- - a : array, shape (M, N) - Matrix to decompose - overwrite_a : boolean - Whether data in a is overwritten (may improve performance) - - Returns - ------- - s: array, shape (K,) - The singular values, sorted so that s[i] >= s[i+1]. K = min(M, N) - - Raises LinAlgError if SVD computation does not converge - - See also - -------- - svd : return the full singular value decomposition of a matrix - diagsvd : return the Sigma matrix, given the vector s - - """ - return svd(a,compute_uv=0,overwrite_a=overwrite_a) - -def diagsvd(s,M,N): - """Construct the sigma matrix in SVD from singular values and size M,N. - - Parameters - ---------- - s : array, shape (M,) or (N,) - Singular values - M : integer - N : integer - Size of the matrix whose singular values are s - - Returns - ------- - S : array, shape (M, N) - The S-matrix in the singular value decomposition - - """ - part = diag(s) - typ = part.dtype.char - MorN = len(s) - if MorN == M: - return r_['-1',part,zeros((M,N-M),typ)] - elif MorN == N: - return r_[part,zeros((M-N,N),typ)] - else: - raise ValueError, "Length of s must be M or N." - -def cholesky(a,lower=0,overwrite_a=0): - """Compute the Cholesky decomposition of a matrix. - - Returns the Cholesky decomposition, :lm:`A = L L^*` or :lm:`A = U^* U` - of a Hermitian positive-definite matrix :lm:`A`. 
- - Parameters - ---------- - a : array, shape (M, M) - Matrix to be decomposed - lower : boolean - Whether to compute the upper or lower triangular Cholesky factorization - (Default: upper-triangular) - overwrite_a : boolean - Whether to overwrite data in a (may improve performance) - - Returns - ------- - B : array, shape (M, M) - Upper- or lower-triangular Cholesky factor of A - - Raises LinAlgError if decomposition fails - - Examples - -------- - >>> from scipy import array, linalg, dot - >>> a = array([[1,-2j],[2j,5]]) - >>> L = linalg.cholesky(a, lower=True) - >>> L - array([[ 1.+0.j, 0.+0.j], - [ 0.+2.j, 1.+0.j]]) - >>> dot(L, L.T.conj()) - array([[ 1.+0.j, 0.-2.j], - [ 0.+2.j, 5.+0.j]]) - - """ - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]: - raise ValueError, 'expected square matrix' - overwrite_a = overwrite_a or _datanotshared(a1,a) - potrf, = get_lapack_funcs(('potrf',),(a1,)) - c,info = potrf(a1,lower=lower,overwrite_a=overwrite_a,clean=1) - if info>0: raise LinAlgError, "matrix not positive definite" - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal potrf'%(-info) - return c - -def cho_factor(a, lower=0, overwrite_a=0): - """Compute the Cholesky decomposition of a matrix, to use in cho_solve - - Returns a matrix containing the Cholesky decomposition, - ``A = L L*`` or ``A = U* U`` of a Hermitian positive-definite matrix `a`. - The return value can be directly used as the first parameter to cho_solve. - - .. warning:: - The returned matrix also contains random data in the entries not - used by the Cholesky decomposition. If you need to zero these - entries, use the function `cholesky` instead. - - Parameters - ---------- - a : array, shape (M, M) - Matrix to be decomposed - lower : boolean - Whether to compute the upper or lower triangular Cholesky factorization - (Default: upper-triangular) - overwrite_a : boolean - Whether to overwrite data in a (may improve performance) - - Returns - ------- - c : array, shape (M, M) - Matrix whose upper or lower triangle contains the Cholesky factor - of `a`. Other parts of the matrix contain random data. - lower : boolean - Flag indicating whether the factor is in the lower or upper triangle - - Raises - ------ - LinAlgError - Raised if decomposition fails. - - """ - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2 or a1.shape[0] != a1.shape[1]: - raise ValueError, 'expected square matrix' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - potrf, = get_lapack_funcs(('potrf',),(a1,)) - c,info = potrf(a1,lower=lower,overwrite_a=overwrite_a,clean=0) - if info>0: raise LinAlgError, "matrix not positive definite" - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal potrf'%(-info) - return c, lower - -def cho_solve(clow, b): - """Solve a previously factored symmetric system of equations. - - The equation system is - - A x = b, A = U^H U = L L^H - - and A is real symmetric or complex Hermitian. - - Parameters - ---------- - clow : tuple (c, lower) - Cholesky factor and a flag indicating whether it is lower triangular. - The return value from cho_factor can be used. - b : array - Right-hand side of the equation system - - First input is a tuple (LorU, lower) which is the output to cho_factor. - Second input is the right-hand side. 
- - Returns - ------- - x : array - Solution to the equation system - - """ - c, lower = clow - c = asarray_chkfinite(c) - _assert_squareness(c) - b = asarray_chkfinite(b) - potrs, = get_lapack_funcs(('potrs',),(c,)) - b, info = potrs(c,b,lower) - if info < 0: - msg = "Argument %d to lapack's ?potrs() has an illegal value." % info - raise TypeError, msg - if info > 0: - msg = "Unknown error occured int ?potrs(): error code = %d" % info - raise TypeError, msg - return b - -def qr(a, overwrite_a=0, lwork=None, econ=None, mode='qr'): - """Compute QR decomposition of a matrix. - - Calculate the decomposition :lm:`A = Q R` where Q is unitary/orthogonal - and R upper triangular. - - Parameters - ---------- - a : array, shape (M, N) - Matrix to be decomposed - overwrite_a : boolean - Whether data in a is overwritten (may improve performance) - lwork : integer - Work array size, lwork >= a.shape[1]. If None or -1, an optimal size - is computed. - econ : boolean - Whether to compute the economy-size QR decomposition, making shapes - of Q and R (M, K) and (K, N) instead of (M,M) and (M,N). K=min(M,N). - Default is False. - mode : {'qr', 'r'} - Determines what information is to be returned: either both Q and R - or only R. - - Returns - ------- - (if mode == 'qr') - Q : double or complex array, shape (M, M) or (M, K) for econ==True - - (for any mode) - R : double or complex array, shape (M, N) or (K, N) for econ==True - Size K = min(M, N) - - Raises LinAlgError if decomposition fails - - Notes - ----- - This is an interface to the LAPACK routines dgeqrf, zgeqrf, - dorgqr, and zungqr. - - Examples - -------- - >>> from scipy import random, linalg, dot - >>> a = random.randn(9, 6) - >>> q, r = linalg.qr(a) - >>> allclose(a, dot(q, r)) - True - >>> q.shape, r.shape - ((9, 9), (9, 6)) - - >>> r2 = linalg.qr(a, mode='r') - >>> allclose(r, r2) - - >>> q3, r3 = linalg.qr(a, econ=True) - >>> q3.shape, r3.shape - ((9, 6), (6, 6)) - - """ - if econ is None: - econ = False - else: - warn("qr econ argument will be removed after scipy 0.7. " - "The economy transform will then be available through " - "the mode='economic' argument.", DeprecationWarning) - - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2: - raise ValueError("expected 2D array") - M, N = a1.shape - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - - geqrf, = get_lapack_funcs(('geqrf',),(a1,)) - if lwork is None or lwork == -1: - # get optimal work array - qr,tau,work,info = geqrf(a1,lwork=-1,overwrite_a=1) - lwork = work[0] - - qr,tau,work,info = geqrf(a1,lwork=lwork,overwrite_a=overwrite_a) - if info<0: - raise ValueError("illegal value in %-th argument of internal geqrf" - % -info) - - if not econ or M= a.shape[1]. If None or -1, an optimal size - is computed. 
- - Returns - ------- - Q : double or complex array, shape (M, M) - R : double or complex array, shape (M, N) - Size K = min(M, N) - - Raises LinAlgError if decomposition fails - - """ - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2: - raise ValueError, 'expected matrix' - M,N = a1.shape - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - geqrf, = get_lapack_funcs(('geqrf',),(a1,)) - if lwork is None or lwork == -1: - # get optimal work array - qr,tau,work,info = geqrf(a1,lwork=-1,overwrite_a=1) - lwork = work[0] - qr,tau,work,info = geqrf(a1,lwork=lwork,overwrite_a=overwrite_a) - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal geqrf'%(-info) - gemm, = get_blas_funcs(('gemm',),(qr,)) - t = qr.dtype.char - R = basic.triu(qr) - Q = numpy.identity(M,dtype=t) - ident = numpy.identity(M,dtype=t) - zeros = numpy.zeros - for i in range(min(M,N)): - v = zeros((M,),t) - v[i] = 1 - v[i+1:M] = qr[i+1:M,i] - H = gemm(-tau[i],v,v,1+0j,ident,trans_b=2) - Q = gemm(1,Q,H) - return Q, R - - - -def rq(a,overwrite_a=0,lwork=None): - """Compute RQ decomposition of a square real matrix. - - Calculate the decomposition :lm:`A = R Q` where Q is unitary/orthogonal - and R upper triangular. - - Parameters - ---------- - a : array, shape (M, M) - Square real matrix to be decomposed - overwrite_a : boolean - Whether data in a is overwritten (may improve performance) - lwork : integer - Work array size, lwork >= a.shape[1]. If None or -1, an optimal size - is computed. - econ : boolean - - Returns - ------- - R : double array, shape (M, N) or (K, N) for econ==True - Size K = min(M, N) - Q : double or complex array, shape (M, M) or (M, K) for econ==True - - Raises LinAlgError if decomposition fails - - """ - # TODO: implement support for non-square and complex arrays - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2: - raise ValueError, 'expected matrix' - M,N = a1.shape - if M != N: - raise ValueError, 'expected square matrix' - if issubclass(a1.dtype.type,complexfloating): - raise ValueError, 'expected real (non-complex) matrix' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - gerqf, = get_lapack_funcs(('gerqf',),(a1,)) - if lwork is None or lwork == -1: - # get optimal work array - rq,tau,work,info = gerqf(a1,lwork=-1,overwrite_a=1) - lwork = work[0] - rq,tau,work,info = gerqf(a1,lwork=lwork,overwrite_a=overwrite_a) - if info<0: raise ValueError, \ - 'illegal value in %-th argument of internal geqrf'%(-info) - gemm, = get_blas_funcs(('gemm',),(rq,)) - t = rq.dtype.char - R = basic.triu(rq) - Q = numpy.identity(M,dtype=t) - ident = numpy.identity(M,dtype=t) - zeros = numpy.zeros - - k = min(M,N) - for i in range(k): - v = zeros((M,),t) - v[N-k+i] = 1 - v[0:N-k+i] = rq[M-k+i,0:N-k+i] - H = gemm(-tau[i],v,v,1+0j,ident,trans_b=2) - Q = gemm(1,Q,H) - return R, Q - _double_precision = ['i','l','d'] -def schur(a,output='real',lwork=None,overwrite_a=0): - """Compute Schur decomposition of a matrix. - - The Schur decomposition is - - A = Z T Z^H - - where Z is unitary and T is either upper-triangular, or for real - Schur decomposition (output='real'), quasi-upper triangular. In - the quasi-triangular form, 2x2 blocks describing complex-valued - eigenvalue pairs may extrude from the diagonal. - - Parameters - ---------- - a : array, shape (M, M) - Matrix to decompose - output : {'real', 'complex'} - Construct the real or complex Schur decomposition (for real matrices). - lwork : integer - Work array size. If None or -1, it is automatically computed. 
- overwrite_a : boolean - Whether to overwrite data in a (may improve performance) - - Returns - ------- - T : array, shape (M, M) - Schur form of A. It is real-valued for the real Schur decomposition. - Z : array, shape (M, M) - An unitary Schur transformation matrix for A. - It is real-valued for the real Schur decomposition. - - See also - -------- - rsf2csf : Convert real Schur form to complex Schur form - - """ - if not output in ['real','complex','r','c']: - raise ValueError, "argument must be 'real', or 'complex'" - a1 = asarray_chkfinite(a) - if len(a1.shape) != 2 or (a1.shape[0] != a1.shape[1]): - raise ValueError, 'expected square matrix' - typ = a1.dtype.char - if output in ['complex','c'] and typ not in ['F','D']: - if typ in _double_precision: - a1 = a1.astype('D') - typ = 'D' - else: - a1 = a1.astype('F') - typ = 'F' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - gees, = get_lapack_funcs(('gees',),(a1,)) - if lwork is None or lwork == -1: - # get optimal work array - result = gees(lambda x: None,a,lwork=-1) - lwork = result[-2][0] - result = gees(lambda x: None,a,lwork=result[-2][0],overwrite_a=overwrite_a) - info = result[-1] - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal gees'%(-info) - elif info>0: raise LinAlgError, "Schur form not found. Possibly ill-conditioned." - return result[0], result[-3] - -eps = numpy.finfo(float).eps -feps = numpy.finfo(single).eps - -_array_kind = {'b':0, 'h':0, 'B': 0, 'i':0, 'l': 0, 'f': 0, 'd': 0, 'F': 1, 'D': 1} -_array_precision = {'i': 1, 'l': 1, 'f': 0, 'd': 1, 'F': 0, 'D': 1} -_array_type = [['f', 'd'], ['F', 'D']] -def _commonType(*arrays): - kind = 0 - precision = 0 - for a in arrays: - t = a.dtype.char - kind = max(kind, _array_kind[t]) - precision = max(precision, _array_precision[t]) - return _array_type[kind][precision] - -def _castCopy(type, *arrays): - cast_arrays = () - for a in arrays: - if a.dtype.char == type: - cast_arrays = cast_arrays + (a.copy(),) - else: - cast_arrays = cast_arrays + (a.astype(type),) - if len(cast_arrays) == 1: - return cast_arrays[0] - else: - return cast_arrays - -def _assert_squareness(*arrays): - for a in arrays: - if max(a.shape) != min(a.shape): - raise LinAlgError, 'Array must be square' - -def rsf2csf(T, Z): - """Convert real Schur form to complex Schur form. - - Convert a quasi-diagonal real-valued Schur form to the upper triangular - complex-valued Schur form. - - Parameters - ---------- - T : array, shape (M, M) - Real Schur form of the original matrix - Z : array, shape (M, M) - Schur transformation matrix - - Returns - ------- - T : array, shape (M, M) - Complex Schur form of the original matrix - Z : array, shape (M, M) - Schur transformation matrix corresponding to the complex form - - See also - -------- - schur : Schur decompose a matrix - """ - Z,T = map(asarray_chkfinite, (Z,T)) - if len(Z.shape) !=2 or Z.shape[0] != Z.shape[1]: - raise ValueError, "matrix must be square." - if len(T.shape) !=2 or T.shape[0] != T.shape[1]: - raise ValueError, "matrix must be square." - if T.shape[0] != Z.shape[0]: - raise ValueError, "matrices must be same dimension." 
- N = T.shape[0] - arr = numpy.array - t = _commonType(Z, T, arr([3.0],'F')) - Z, T = _castCopy(t, Z, T) - conj = numpy.conj - dot = numpy.dot - r_ = numpy.r_ - transp = numpy.transpose - for m in range(N-1,0,-1): - if abs(T[m,m-1]) > eps*(abs(T[m-1,m-1]) + abs(T[m,m])): - k = slice(m-1,m+1) - mu = eigvals(T[k,k]) - T[m,m] - r = basic.norm([mu[0], T[m,m-1]]) - c = mu[0] / r - s = T[m,m-1] / r - G = r_[arr([[conj(c),s]],dtype=t),arr([[-s,c]],dtype=t)] - Gc = conj(transp(G)) - j = slice(m-1,N) - T[k,j] = dot(G,T[k,j]) - i = slice(0,m+1) - T[i,k] = dot(T[i,k], Gc) - i = slice(0,N) - Z[i,k] = dot(Z[i,k], Gc) - T[m,m-1] = 0.0; - return T, Z - - -# Orthonormal decomposition - -def orth(A): - """Construct an orthonormal basis for the range of A using SVD - - Parameters - ---------- - A : array, shape (M, N) - - Returns - ------- - Q : array, shape (M, K) - Orthonormal basis for the range of A. - K = effective rank of A, as determined by automatic cutoff - - See also - -------- - svd : Singular value decomposition of a matrix - - """ - u,s,vh = svd(A) - M,N = A.shape - tol = max(M,N)*numpy.amax(s)*eps - num = numpy.sum(s > tol,dtype=int) - Q = u[:,:num] - return Q - -def hessenberg(a,calc_q=0,overwrite_a=0): +def hessenberg(a, calc_q=False, overwrite_a=False): """Compute Hessenberg form of a matrix. The Hessenberg decomposition is @@ -1561,39 +736,41 @@ """ a1 = asarray(a) if len(a1.shape) != 2 or (a1.shape[0] != a1.shape[1]): - raise ValueError, 'expected square matrix' - overwrite_a = overwrite_a or (_datanotshared(a1,a)) - gehrd,gebal = get_lapack_funcs(('gehrd','gebal'),(a1,)) - ba,lo,hi,pivscale,info = gebal(a,permute=1,overwrite_a = overwrite_a) - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal gebal (hessenberg)'%(-info) + raise ValueError('expected square matrix') + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + gehrd,gebal = get_lapack_funcs(('gehrd','gebal'), (a1,)) + ba, lo, hi, pivscale, info = gebal(a, permute=1, overwrite_a=overwrite_a) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal gebal ' + '(hessenberg)' % -info) n = len(a1) - lwork = calc_lwork.gehrd(gehrd.prefix,n,lo,hi) - hq,tau,info = gehrd(ba,lo=lo,hi=hi,lwork=lwork,overwrite_a=1) - if info<0: raise ValueError,\ - 'illegal value in %-th argument of internal gehrd (hessenberg)'%(-info) + lwork = calc_lwork.gehrd(gehrd.prefix, n, lo, hi) + hq, tau, info = gehrd(ba, lo=lo, hi=hi, lwork=lwork, overwrite_a=1) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal gehrd ' + '(hessenberg)' % -info) if not calc_q: - for i in range(lo,hi): - hq[i+2:hi+1,i] = 0.0 + for i in range(lo, hi): + hq[i+2:hi+1, i] = 0.0 return hq # XXX: Use ORGHR routines to compute q. 
- ger,gemm = get_blas_funcs(('ger','gemm'),(hq,)) + ger,gemm = get_blas_funcs(('ger','gemm'), (hq,)) typecode = hq.dtype.char q = None - for i in range(lo,hi): + for i in range(lo, hi): if tau[i]==0.0: continue - v = zeros(n,dtype=typecode) + v = zeros(n, dtype=typecode) v[i+1] = 1.0 - v[i+2:hi+1] = hq[i+2:hi+1,i] - hq[i+2:hi+1,i] = 0.0 - h = ger(-tau[i],v,v,a=diag(ones(n,dtype=typecode)),overwrite_a=1) + v[i+2:hi+1] = hq[i+2:hi+1, i] + hq[i+2:hi+1, i] = 0.0 + h = ger(-tau[i], v, v,a=diag(ones(n, dtype=typecode)), overwrite_a=1) if q is None: q = h else: - q = gemm(1.0,q,h) + q = gemm(1.0, q, h) if q is None: - q = diag(ones(n,dtype=typecode)) - return hq,q + q = diag(ones(n, dtype=typecode)) + return hq, q diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_qr.py python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_qr.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_qr.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_qr.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,248 @@ +"""QR decomposition functions.""" + +from warnings import warn + +import numpy +from numpy import asarray_chkfinite, complexfloating + +# Local imports +import special_matrices +from blas import get_blas_funcs +from lapack import get_lapack_funcs, find_best_lapack_type +from misc import _datanotshared + + +def qr(a, overwrite_a=False, lwork=None, econ=None, mode='qr'): + """Compute QR decomposition of a matrix. + + Calculate the decomposition :lm:`A = Q R` where Q is unitary/orthogonal + and R upper triangular. + + Parameters + ---------- + a : array, shape (M, N) + Matrix to be decomposed + overwrite_a : boolean + Whether data in a is overwritten (may improve performance) + lwork : integer + Work array size, lwork >= a.shape[1]. If None or -1, an optimal size + is computed. + econ : boolean + Whether to compute the economy-size QR decomposition, making shapes + of Q and R (M, K) and (K, N) instead of (M,M) and (M,N). K=min(M,N). + Default is False. + mode : {'qr', 'r'} + Determines what information is to be returned: either both Q and R + or only R. + + Returns + ------- + (if mode == 'qr') + Q : double or complex array, shape (M, M) or (M, K) for econ==True + + (for any mode) + R : double or complex array, shape (M, N) or (K, N) for econ==True + Size K = min(M, N) + + Raises LinAlgError if decomposition fails + + Notes + ----- + This is an interface to the LAPACK routines dgeqrf, zgeqrf, + dorgqr, and zungqr. + + Examples + -------- + >>> from scipy import random, linalg, dot + >>> a = random.randn(9, 6) + >>> q, r = linalg.qr(a) + >>> allclose(a, dot(q, r)) + True + >>> q.shape, r.shape + ((9, 9), (9, 6)) + + >>> r2 = linalg.qr(a, mode='r') + >>> allclose(r, r2) + + >>> q3, r3 = linalg.qr(a, econ=True) + >>> q3.shape, r3.shape + ((9, 6), (6, 6)) + + """ + if econ is None: + econ = False + else: + warn("qr econ argument will be removed after scipy 0.7. 
" + "The economy transform will then be available through " + "the mode='economic' argument.", DeprecationWarning) + + a1 = asarray_chkfinite(a) + if len(a1.shape) != 2: + raise ValueError("expected 2D array") + M, N = a1.shape + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + + geqrf, = get_lapack_funcs(('geqrf',), (a1,)) + if lwork is None or lwork == -1: + # get optimal work array + qr, tau, work, info = geqrf(a1, lwork=-1, overwrite_a=1) + lwork = work[0] + + qr, tau, work, info = geqrf(a1, lwork=lwork, overwrite_a=overwrite_a) + if info < 0: + raise ValueError("illegal value in %d-th argument of internal geqrf" + % -info) + if not econ or M < N: + R = special_matrices.triu(qr) + else: + R = special_matrices.triu(qr[0:N, 0:N]) + + if mode == 'r': + return R + + if find_best_lapack_type((a1,))[0] == 's' or \ + find_best_lapack_type((a1,))[0] == 'd': + gor_un_gqr, = get_lapack_funcs(('orgqr',), (qr,)) + else: + gor_un_gqr, = get_lapack_funcs(('ungqr',), (qr,)) + + if M < N: + # get optimal work array + Q, work, info = gor_un_gqr(qr[:,0:M], tau, lwork=-1, overwrite_a=1) + lwork = work[0] + Q, work, info = gor_un_gqr(qr[:,0:M], tau, lwork=lwork, overwrite_a=1) + elif econ: + # get optimal work array + Q, work, info = gor_un_gqr(qr, tau, lwork=-1, overwrite_a=1) + lwork = work[0] + Q, work, info = gor_un_gqr(qr, tau, lwork=lwork, overwrite_a=1) + else: + t = qr.dtype.char + qqr = numpy.empty((M, M), dtype=t) + qqr[:,0:N] = qr + # get optimal work array + Q, work, info = gor_un_gqr(qqr, tau, lwork=-1, overwrite_a=1) + lwork = work[0] + Q, work, info = gor_un_gqr(qqr, tau, lwork=lwork, overwrite_a=1) + + if info < 0: + raise ValueError("illegal value in %d-th argument of internal gorgqr" + % -info) + return Q, R + + + +def qr_old(a, overwrite_a=False, lwork=None): + """Compute QR decomposition of a matrix. + + Calculate the decomposition :lm:`A = Q R` where Q is unitary/orthogonal + and R upper triangular. + + Parameters + ---------- + a : array, shape (M, N) + Matrix to be decomposed + overwrite_a : boolean + Whether data in a is overwritten (may improve performance) + lwork : integer + Work array size, lwork >= a.shape[1]. If None or -1, an optimal size + is computed. + + Returns + ------- + Q : double or complex array, shape (M, M) + R : double or complex array, shape (M, N) + Size K = min(M, N) + + Raises LinAlgError if decomposition fails + + """ + a1 = asarray_chkfinite(a) + if len(a1.shape) != 2: + raise ValueError, 'expected matrix' + M,N = a1.shape + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + geqrf, = get_lapack_funcs(('geqrf',), (a1,)) + if lwork is None or lwork == -1: + # get optimal work array + qr, tau, work, info = geqrf(a1, lwork=-1, overwrite_a=1) + lwork = work[0] + qr, tau, work, info = geqrf(a1, lwork=lwork, overwrite_a=overwrite_a) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal geqrf' + % -info) + gemm, = get_blas_funcs(('gemm',), (qr,)) + t = qr.dtype.char + R = special_matrices.triu(qr) + Q = numpy.identity(M, dtype=t) + ident = numpy.identity(M, dtype=t) + zeros = numpy.zeros + for i in range(min(M, N)): + v = zeros((M,), t) + v[i] = 1 + v[i+1:M] = qr[i+1:M, i] + H = gemm(-tau[i], v, v, 1+0j, ident, trans_b=2) + Q = gemm(1, Q, H) + return Q, R + + +def rq(a, overwrite_a=False, lwork=None): + """Compute RQ decomposition of a square real matrix. + + Calculate the decomposition :lm:`A = R Q` where Q is unitary/orthogonal + and R upper triangular. 
+ + Parameters + ---------- + a : array, shape (M, M) + Square real matrix to be decomposed + overwrite_a : boolean + Whether data in a is overwritten (may improve performance) + lwork : integer + Work array size, lwork >= a.shape[1]. If None or -1, an optimal size + is computed. + econ : boolean + + Returns + ------- + R : double array, shape (M, N) or (K, N) for econ==True + Size K = min(M, N) + Q : double or complex array, shape (M, M) or (M, K) for econ==True + + Raises LinAlgError if decomposition fails + + """ + # TODO: implement support for non-square and complex arrays + a1 = asarray_chkfinite(a) + if len(a1.shape) != 2: + raise ValueError('expected matrix') + M,N = a1.shape + if M != N: + raise ValueError('expected square matrix') + if issubclass(a1.dtype.type, complexfloating): + raise ValueError('expected real (non-complex) matrix') + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + gerqf, = get_lapack_funcs(('gerqf',), (a1,)) + if lwork is None or lwork == -1: + # get optimal work array + rq, tau, work, info = gerqf(a1, lwork=-1, overwrite_a=1) + lwork = work[0] + rq, tau, work, info = gerqf(a1, lwork=lwork, overwrite_a=overwrite_a) + if info < 0: + raise ValueError('illegal value in %d-th argument of internal geqrf' + % -info) + gemm, = get_blas_funcs(('gemm',), (rq,)) + t = rq.dtype.char + R = special_matrices.triu(rq) + Q = numpy.identity(M, dtype=t) + ident = numpy.identity(M, dtype=t) + zeros = numpy.zeros + + k = min(M, N) + for i in range(k): + v = zeros((M,), t) + v[N-k+i] = 1 + v[0:N-k+i] = rq[M-k+i, 0:N-k+i] + H = gemm(-tau[i], v, v, 1+0j, ident, trans_b=2) + Q = gemm(1, Q, H) + return R, Q diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_schur.py python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_schur.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_schur.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_schur.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,167 @@ +"""Schur decomposition functions.""" + +import numpy +from numpy import asarray_chkfinite, single + +# Local imports. +import misc +from misc import LinAlgError, _datanotshared +from lapack import get_lapack_funcs +from decomp import eigvals + + +__all__ = ['schur', 'rsf2csf'] + +_double_precision = ['i','l','d'] + +def schur(a, output='real', lwork=None, overwrite_a=False): + """Compute Schur decomposition of a matrix. + + The Schur decomposition is + + A = Z T Z^H + + where Z is unitary and T is either upper-triangular, or for real + Schur decomposition (output='real'), quasi-upper triangular. In + the quasi-triangular form, 2x2 blocks describing complex-valued + eigenvalue pairs may extrude from the diagonal. + + Parameters + ---------- + a : array, shape (M, M) + Matrix to decompose + output : {'real', 'complex'} + Construct the real or complex Schur decomposition (for real matrices). + lwork : integer + Work array size. If None or -1, it is automatically computed. + overwrite_a : boolean + Whether to overwrite data in a (may improve performance) + + Returns + ------- + T : array, shape (M, M) + Schur form of A. It is real-valued for the real Schur decomposition. + Z : array, shape (M, M) + An unitary Schur transformation matrix for A. + It is real-valued for the real Schur decomposition. 
+ + See also + -------- + rsf2csf : Convert real Schur form to complex Schur form + + """ + if not output in ['real','complex','r','c']: + raise ValueError, "argument must be 'real', or 'complex'" + a1 = asarray_chkfinite(a) + if len(a1.shape) != 2 or (a1.shape[0] != a1.shape[1]): + raise ValueError, 'expected square matrix' + typ = a1.dtype.char + if output in ['complex','c'] and typ not in ['F','D']: + if typ in _double_precision: + a1 = a1.astype('D') + typ = 'D' + else: + a1 = a1.astype('F') + typ = 'F' + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + gees, = get_lapack_funcs(('gees',), (a1,)) + if lwork is None or lwork == -1: + # get optimal work array + result = gees(lambda x: None, a, lwork=-1) + lwork = result[-2][0] + result = gees(lambda x: None, a, lwork=result[-2][0], overwrite_a=overwrite_a) + info = result[-1] + if info < 0: + raise ValueError('illegal value in %d-th argument of internal gees' + % -info) + elif info > 0: + raise LinAlgError("Schur form not found. Possibly ill-conditioned.") + return result[0], result[-3] + + +eps = numpy.finfo(float).eps +feps = numpy.finfo(single).eps + +_array_kind = {'b':0, 'h':0, 'B': 0, 'i':0, 'l': 0, 'f': 0, 'd': 0, 'F': 1, 'D': 1} +_array_precision = {'i': 1, 'l': 1, 'f': 0, 'd': 1, 'F': 0, 'D': 1} +_array_type = [['f', 'd'], ['F', 'D']] + +def _commonType(*arrays): + kind = 0 + precision = 0 + for a in arrays: + t = a.dtype.char + kind = max(kind, _array_kind[t]) + precision = max(precision, _array_precision[t]) + return _array_type[kind][precision] + +def _castCopy(type, *arrays): + cast_arrays = () + for a in arrays: + if a.dtype.char == type: + cast_arrays = cast_arrays + (a.copy(),) + else: + cast_arrays = cast_arrays + (a.astype(type),) + if len(cast_arrays) == 1: + return cast_arrays[0] + else: + return cast_arrays + + +def rsf2csf(T, Z): + """Convert real Schur form to complex Schur form. + + Convert a quasi-diagonal real-valued Schur form to the upper triangular + complex-valued Schur form. 
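A minimal sketch tying together the schur()/rsf2csf() pair in this new decomp_schur.py; the 3x3 real matrix (chosen to have a complex-conjugate eigenvalue pair) is illustrative only:

    >>> import numpy as np
    >>> from scipy.linalg import schur, rsf2csf
    >>> A = np.array([[0.0, 2.0, 2.0],
    ...               [0.0, 1.0, 2.0],
    ...               [1.0, 0.0, 1.0]])
    >>> T, Z = schur(A)               # real Schur form; may contain 2x2 blocks
    >>> T2, Z2 = rsf2csf(T, Z)        # upper-triangular complex Schur form
    >>> np.allclose(Z2.dot(T2).dot(Z2.conj().T), A)
    True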
+ + Parameters + ---------- + T : array, shape (M, M) + Real Schur form of the original matrix + Z : array, shape (M, M) + Schur transformation matrix + + Returns + ------- + T : array, shape (M, M) + Complex Schur form of the original matrix + Z : array, shape (M, M) + Schur transformation matrix corresponding to the complex form + + See also + -------- + schur : Schur decompose a matrix + + """ + Z, T = map(asarray_chkfinite, (Z, T)) + if len(Z.shape) != 2 or Z.shape[0] != Z.shape[1]: + raise ValueError("matrix must be square.") + if len(T.shape) != 2 or T.shape[0] != T.shape[1]: + raise ValueError("matrix must be square.") + if T.shape[0] != Z.shape[0]: + raise ValueError("matrices must be same dimension.") + N = T.shape[0] + arr = numpy.array + t = _commonType(Z, T, arr([3.0],'F')) + Z, T = _castCopy(t, Z, T) + conj = numpy.conj + dot = numpy.dot + r_ = numpy.r_ + transp = numpy.transpose + for m in range(N-1, 0, -1): + if abs(T[m,m-1]) > eps*(abs(T[m-1,m-1]) + abs(T[m,m])): + k = slice(m-1, m+1) + mu = eigvals(T[k,k]) - T[m,m] + r = misc.norm([mu[0], T[m,m-1]]) + c = mu[0] / r + s = T[m,m-1] / r + G = r_[arr([[conj(c), s]], dtype=t), arr([[-s, c]], dtype=t)] + Gc = conj(transp(G)) + j = slice(m-1, N) + T[k,j] = dot(G, T[k,j]) + i = slice(0, m+1) + T[i,k] = dot(T[i,k], Gc) + i = slice(0, N) + Z[i,k] = dot(Z[i,k], Gc) + T[m,m-1] = 0.0; + return T, Z diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_svd.py python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_svd.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/decomp_svd.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/decomp_svd.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,173 @@ +"""SVD decomposition functions.""" + +import numpy +from numpy import asarray_chkfinite, zeros, r_, diag +from scipy.linalg import calc_lwork + +# Local imports. +from misc import LinAlgError, _datanotshared +from lapack import get_lapack_funcs + + +def svd(a, full_matrices=True, compute_uv=True, overwrite_a=False): + """Singular Value Decomposition. + + Factorizes the matrix a into two unitary matrices U and Vh and + an 1d-array s of singular values (real, non-negative) such that + a == U S Vh if S is an suitably shaped matrix of zeros whose + main diagonal is s. + + Parameters + ---------- + a : array, shape (M, N) + Matrix to decompose + full_matrices : boolean + If true, U, Vh are shaped (M,M), (N,N) + If false, the shapes are (M,K), (K,N) where K = min(M,N) + compute_uv : boolean + Whether to compute also U, Vh in addition to s (Default: true) + overwrite_a : boolean + Whether data in a is overwritten (may improve performance) + + Returns + ------- + U: array, shape (M,M) or (M,K) depending on full_matrices + s: array, shape (K,) + The singular values, sorted so that s[i] >= s[i+1]. K = min(M, N) + Vh: array, shape (N,N) or (K,N) depending on full_matrices + + For compute_uv = False, only s is returned. 
+ + Raises LinAlgError if SVD computation does not converge + + Examples + -------- + >>> from scipy import random, linalg, allclose, dot + >>> a = random.randn(9, 6) + 1j*random.randn(9, 6) + >>> U, s, Vh = linalg.svd(a) + >>> U.shape, Vh.shape, s.shape + ((9, 9), (6, 6), (6,)) + + >>> U, s, Vh = linalg.svd(a, full_matrices=False) + >>> U.shape, Vh.shape, s.shape + ((9, 6), (6, 6), (6,)) + >>> S = linalg.diagsvd(s, 6, 6) + >>> allclose(a, dot(U, dot(S, Vh))) + True + + >>> s2 = linalg.svd(a, compute_uv=False) + >>> allclose(s, s2) + True + + See also + -------- + svdvals : return singular values of a matrix + diagsvd : return the Sigma matrix, given the vector s + + """ + # A hack until full_matrices == 0 support is fixed here. + if full_matrices == 0: + import numpy.linalg + return numpy.linalg.svd(a, full_matrices=0, compute_uv=compute_uv) + a1 = asarray_chkfinite(a) + if len(a1.shape) != 2: + raise ValueError('expected matrix') + m,n = a1.shape + overwrite_a = overwrite_a or (_datanotshared(a1, a)) + gesdd, = get_lapack_funcs(('gesdd',), (a1,)) + if gesdd.module_name[:7] == 'flapack': + lwork = calc_lwork.gesdd(gesdd.prefix, m, n, compute_uv)[1] + u,s,v,info = gesdd(a1,compute_uv = compute_uv, lwork = lwork, + overwrite_a = overwrite_a) + else: # 'clapack' + raise NotImplementedError('calling gesdd from %s' % gesdd.module_name) + if info > 0: + raise LinAlgError("SVD did not converge") + if info < 0: + raise ValueError('illegal value in %d-th argument of internal gesdd' + % -info) + if compute_uv: + return u, s, v + else: + return s + +def svdvals(a, overwrite_a=False): + """Compute singular values of a matrix. + + Parameters + ---------- + a : array, shape (M, N) + Matrix to decompose + overwrite_a : boolean + Whether data in a is overwritten (may improve performance) + + Returns + ------- + s: array, shape (K,) + The singular values, sorted so that s[i] >= s[i+1]. K = min(M, N) + + Raises LinAlgError if SVD computation does not converge + + See also + -------- + svd : return the full singular value decomposition of a matrix + diagsvd : return the Sigma matrix, given the vector s + + """ + return svd(a, compute_uv=0, overwrite_a=overwrite_a) + +def diagsvd(s, M, N): + """Construct the sigma matrix in SVD from singular values and size M,N. + + Parameters + ---------- + s : array, shape (M,) or (N,) + Singular values + M : integer + N : integer + Size of the matrix whose singular values are s + + Returns + ------- + S : array, shape (M, N) + The S-matrix in the singular value decomposition + + """ + part = diag(s) + typ = part.dtype.char + MorN = len(s) + if MorN == M: + return r_['-1', part, zeros((M, N-M), typ)] + elif MorN == N: + return r_[part, zeros((M-N,N), typ)] + else: + raise ValueError("Length of s must be M or N.") + + +# Orthonormal decomposition + +def orth(A): + """Construct an orthonormal basis for the range of A using SVD + + Parameters + ---------- + A : array, shape (M, N) + + Returns + ------- + Q : array, shape (M, K) + Orthonormal basis for the range of A. 
+ K = effective rank of A, as determined by automatic cutoff + + See also + -------- + svd : Singular value decomposition of a matrix + + """ + u, s, vh = svd(A) + M, N = A.shape + eps = numpy.finfo(float).eps + tol = max(M,N) * numpy.amax(s) * eps + num = numpy.sum(s > tol, dtype=int) + Q = u[:,:num] + return Q diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/flinalg.py python-scipy-0.8.0+dfsg1/scipy/linalg/flinalg.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/flinalg.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/flinalg.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 18, 2005 by - # # Author: Pearu Peterson, March 2002 # diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/generic_flapack.pyf python-scipy-0.8.0+dfsg1/scipy/linalg/generic_flapack.pyf --- python-scipy-0.7.2+dfsg1/scipy/linalg/generic_flapack.pyf 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/generic_flapack.pyf 2010-07-26 15:48:32.000000000 +0100 @@ -56,6 +56,57 @@ end subroutine pbtrf + + subroutine pbtrs(lower, n, kd, nrhs, ab, ldab, b, ldb, info) + + ! Solve a system of linear equations A*X = B with a symmetric + ! positive definite band matrix A using the Cholesky factorization. + ! AB is the triangular factur U or L from the Cholesky factorization + ! previously computed with *PBTRF. + ! A = U^T * U, AB = U if lower = 0 + ! A = L * L^T, AB = L if lower = 1 + + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,&nrhs,ab,&ldab,b,&ldb,&info); + callprotoargument char*,int*,int*,int*,*,int*,*,int*,int* + + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) + integer intent(hide),depend(ab) :: n=shape(ab,1) + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 + integer intent(hide),depend(b) :: ldb=shape(b,0) + integer intent(hide),depend(b) :: nrhs=shape(b,1) + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 + + dimension(ldb, nrhs),intent(in,out,copy,out=x) :: b + dimension(ldab,n),intent(in) :: ab + integer intent(out) :: info + + end subroutine pbtrs + + subroutine pbtrs(lower, n, kd, nrhs, ab, ldab, b, ldb, info) + + ! Solve a system of linear equations A*X = B with a symmetric + ! positive definite band matrix A using the Cholesky factorization. + ! AB is the triangular factur U or L from the Cholesky factorization + ! previously computed with *PBTRF. + ! A = U^T * U, AB = U if lower = 0 + ! A = L * L^T, AB = L if lower = 1 + + callstatement (*f2py_func)((lower?"L":"U"),&n,&kd,&nrhs,ab,&ldab,b,&ldb,&info); + callprotoargument char*,int*,int*,int*,*,int*,*,int*,int* + + integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) + integer intent(hide),depend(ab) :: n=shape(ab,1) + integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 + integer intent(hide),depend(b) :: ldb=shape(b,0) + integer intent(hide),depend(b) :: nrhs=shape(b,1) + integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 + + dimension(ldb, nrhs),intent(in,out,copy,out=x) :: b + dimension(ldab,n),intent(in) :: ab + integer intent(out) :: info + + end subroutine pbtrs + subroutine pbsv(lower,n,kd,nrhs,ab,ldab,b,ldb,info) ! 
@@ -78,11 +129,11 @@ integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) integer intent(hide),depend(ab) :: n=shape(ab,1) integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 - integer intent(hide),depend(b) :: ldb=shape(b,1) - integer intent(hide),depend(b) :: nrhs=shape(b,0) + integer intent(hide),depend(b) :: ldb=shape(b,0) + integer intent(hide),depend(b) :: nrhs=shape(b,1) integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 - dimension(nrhs,ldb),intent(in,out,copy,out=x) :: b + dimension(ldb, nrhs),intent(in,out,copy,out=x) :: b dimension(ldab,n),intent(in,out,copy,out=c) :: ab integer intent(out) :: info @@ -110,11 +161,11 @@ integer optional,check(shape(ab,0)==ldab),depend(ab) :: ldab=shape(ab,0) integer intent(hide),depend(ab) :: n=shape(ab,1) integer intent(hide),depend(ab) :: kd=shape(ab,0)-1 - integer intent(hide),depend(b) :: ldb=shape(b,1) - integer intent(hide),depend(b) :: nrhs=shape(b,0) + integer intent(hide),depend(b) :: ldb=shape(b,0) + integer intent(hide),depend(b) :: nrhs=shape(b,1) integer optional,intent(in),check(lower==0||lower==1) :: lower = 0 - dimension(nrhs,ldb),intent(in,out,copy,out=x) :: b + dimension(ldb, nrhs),intent(in,out,copy,out=x) :: b dimension(ldab,n),intent(in,out,copy,out=c) :: ab integer intent(out) :: info @@ -173,11 +224,11 @@ callstatement { hi++; lo++; (*f2py_func)(&n,&lo,&hi,a,&n,tau,work,&lwork,&info); } callprotoargument int*,int*,int*,*,int*,*,*,int*,int* integer intent(hide),depend(a) :: n = shape(a,0) - dimension(n,n),intent(in,out,copy,out=ht),check(shape(a,0)==shape(a,1)) :: a + dimension(n,n),intent(in,out,copy,out=ht,aligned8),check(shape(a,0)==shape(a,1)) :: a integer intent(in),optional :: lo = 0 integer intent(in),optional,depend(n) :: hi = n-1 dimension(n-1),intent(out),depend(n) :: tau - dimension(lwork),intent(cahce,hide),depend(lwork) :: work + dimension(lwork),intent(cache,hide),depend(lwork) :: work integer intent(in),optional,depend(n),check(lwork>=MAX(n,1)) :: lwork = MAX(n,1) integer intent(out) :: info @@ -306,7 +357,7 @@ integer intent(hide),depend(m,n):: minmn = MIN(m,n) integer intent(hide),depend(compute_uv,minmn) :: du = (compute_uv?m:1) integer intent(hide),depend(compute_uv,n) :: dvt = (compute_uv?n:1) - dimension(m,n),intent(in,copy) :: a + dimension(m,n),intent(in,copy,aligned8) :: a dimension(minmn),intent(out),depend(minmn) :: s dimension(du,du),intent(out),depend(du) :: u dimension(dvt,dvt),intent(out),depend(dvt) :: vt @@ -431,7 +482,7 @@ integer intent(hide),depend(a):: m = shape(a,0) integer intent(hide),depend(a):: n = shape(a,1) - dimension(m,n),intent(in,out,copy,out=qr) :: a + dimension(m,n),intent(in,out,copy,out=qr,aligned8) :: a dimension(MIN(m,n)),intent(out) :: tau integer optional,intent(in),depend(n),check(lwork>=n||lwork==-1) :: lwork=3*n @@ -450,7 +501,7 @@ integer intent(hide),depend(a):: m = shape(a,0) integer intent(hide),depend(a):: n = shape(a,1) - dimension(m,n),intent(in,out,copy,out=qr) :: a + dimension(m,n),intent(in,out,copy,out=qr,aligned8) :: a dimension(MIN(m,n)),intent(out) :: tau integer optional,intent(in),depend(n),check(lwork>=n||lwork==-1) :: lwork=3*n @@ -513,7 +564,7 @@ check(compute_vr==1||compute_vr==0) compute_vr integer intent(hide),depend(a) :: n = shape(a,0) - dimension(n,n),intent(in,copy) :: a + dimension(n,n),intent(in,copy,aligned8) :: a check(shape(a,0)==shape(a,1)) :: a dimension(n),intent(out),depend(n) :: wr @@ -939,7 +990,7 @@ integer optional,intent(in),check(sort_t==0||sort_t==1) :: sort_t = 0 external select integer 
intent(hide),depend(a) :: n = shape(a,1) - intent(in,out,copy,out=t),check(shape(a,0)==shape(a,1)),dimension(n,n) :: a + intent(in,out,copy,out=t,aligned8),check(shape(a,0)==shape(a,1)),dimension(n,n) :: a integer intent(hide),depend(a) :: nrows=shape(a,0) integer intent(out) :: sdim=0 intent(out),dimension(n) :: wr @@ -1391,7 +1442,7 @@ character intent(in) :: range='A' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - real intent(in,copy),dimension(n,n) :: a + real intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n real intent(hide) :: vl=0 real intent(hide) :: vu=1 @@ -1419,7 +1470,7 @@ character intent(in) :: range='A' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - double precision intent(in,copy),dimension(n,n) :: a + double precision intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n double precision intent(hide) :: vl=0 double precision intent(hide) :: vu=1 @@ -1447,7 +1498,7 @@ character intent(in) :: range='A' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex intent(in,copy),dimension(n,n) :: a + complex intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n real intent(hide) :: vl=0 real intent(hide) :: vu=1 @@ -1477,7 +1528,7 @@ character intent(in) :: range='A' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex*16 intent(in,copy),dimension(n,n) :: a + complex*16 intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n double precision intent(hide) :: vl=0 double precision intent(hide) :: vu=1 @@ -1507,9 +1558,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - real intent(in,copy,out),dimension(n,n) :: a + real intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - real intent(in,copy),dimension(n,n) :: b + real intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n real intent(out),dimension(n),depend(n) :: w integer intent(hide) :: lwork=3*n-1 @@ -1526,9 +1577,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - double precision intent(in,copy,out),dimension(n,n) :: a + double precision intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - double precision intent(in,copy),dimension(n,n) :: b + double precision intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n double precision intent(out),dimension(n),depend(n) :: w integer intent(hide) :: lwork=3*n-1 @@ -1545,9 +1596,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex intent(in,copy,out),dimension(n,n) :: a + complex intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - complex intent(in,copy),dimension(n,n) :: b + complex intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n real intent(out),dimension(n),depend(n) :: w integer intent(hide) :: lwork=18*n-1 @@ -1565,9 +1616,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex*16 intent(in,copy,out),dimension(n,n) :: a + complex*16 intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - complex*16 intent(in,copy),dimension(n,n) :: b + complex*16 
intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n double precision intent(out),dimension(n),depend(n) :: w integer intent(hide) :: lwork=18*n-1 @@ -1588,9 +1639,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - real intent(in,copy,out),dimension(n,n) :: a + real intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - real intent(in,copy),dimension(n,n) :: b + real intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n real intent(out),dimension(n),depend(n) :: w integer intent(in),depend(n) :: lwork=1+6*n+2*n*n @@ -1609,9 +1660,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - double precision intent(in,copy,out),dimension(n,n) :: a + double precision intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - double precision intent(in,copy),dimension(n,n) :: b + double precision intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n double precision intent(out),dimension(n),depend(n) :: w integer intent(in),depend(n) :: lwork=1+6*n+2*n*n @@ -1630,9 +1681,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex intent(in,copy,out),dimension(n,n) :: a + complex intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - complex intent(in,copy),dimension(n,n) :: b + complex intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n real intent(out),dimension(n),depend(n) :: w integer intent(in),depend(n) :: lwork=2*n+n*n @@ -1653,9 +1704,9 @@ character intent(in) :: jobz='V' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex*16 intent(in,copy,out),dimension(n,n) :: a + complex*16 intent(in,copy,out,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - complex*16 intent(in,copy),dimension(n,n) :: b + complex*16 intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n double precision intent(out),dimension(n),depend(n) :: w integer intent(in),depend(n) :: lwork=2*n+n*n @@ -1679,9 +1730,9 @@ character intent(hide) :: range='I' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - real intent(in,copy),dimension(n,n) :: a + real intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - real intent(in,copy),dimension(n,n) :: b + real intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n real intent(hide) :: vl=0. real intent(hide) :: vu=0. @@ -1709,9 +1760,9 @@ character intent(hide) :: range='I' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - double precision intent(in,copy),dimension(n,n) :: a + double precision intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - double precision intent(in,copy),dimension(n,n) :: b + double precision intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n double precision intent(hide) :: vl=0. double precision intent(hide) :: vu=0. 
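The change repeated throughout these flapack hunks is the extra aligned8 attribute on the intent(copy) array arguments; judging by the test_aligned_mem* tests added to test_decomp.py further down in this diff, the intent is to have f2py copy a misaligned input to aligned storage rather than pass the raw buffer straight to LAPACK. A minimal sketch of that scenario, assuming a scipy built from this tree (it mirrors the new test_aligned_mem test rather than adding anything of its own):

import numpy as np
from scipy.linalg import eig

# Build a 10x10 float array whose data pointer sits at a 4-byte offset,
# i.e. not 8-byte aligned (same construction as test_aligned_mem below).
buf = np.arange(804, dtype=np.uint8)
z = np.frombuffer(buf.data, offset=4, count=100, dtype=float)
z.shape = 10, 10

# With the aligned8 + copy intents the wrapper hands LAPACK an aligned
# copy of z instead of the misaligned view itself.
w, v = eig(z, overwrite_a=True)

Note that the accompanying test_lapack_misaligned generator is still marked knownfailureif (ticket #1152), so the aligned copies are a targeted fix for specific wrappers, not a blanket guarantee.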
@@ -1739,9 +1790,9 @@ character intent(hide) :: range='I' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex intent(in,copy),dimension(n,n) :: a + complex intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - complex intent(in,copy),dimension(n,n) :: b + complex intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n real intent(hide) :: vl=0. real intent(hide) :: vu=0. @@ -1770,9 +1821,9 @@ character intent(hide) :: range='I' character intent(in) :: uplo='L' integer intent(hide) :: n=shape(a,0) - complex*16 intent(in,copy),dimension(n,n) :: a + complex*16 intent(in,copy,aligned8),dimension(n,n) :: a integer intent(hide),depend(n,a) :: lda=n - complex*16 intent(in,copy),dimension(n,n) :: b + complex*16 intent(in,copy,aligned8),dimension(n,n) :: b integer intent(hide),depend(n,b) :: ldb=n double precision intent(hide) :: vl=0. double precision intent(hide) :: vu=0. diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/info.py python-scipy-0.8.0+dfsg1/scipy/linalg/info.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/info.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/info.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,66 +1,134 @@ """ -Linear algebra routines -======================= +Linear Algebra +============== -Linear Algebra Basics:: - - inv --- Find the inverse of a square matrix - solve --- Solve a linear system of equations - solve_banded --- Solve a linear system of equations with a banded matrix - solveh_banded --- Solve a linear system of equations with a Hermitian or symmetric banded matrix, returning the Cholesky decomposition as well - det --- Find the determinant of a square matrix - norm --- matrix and vector norm - lstsq --- Solve linear least-squares problem - pinv --- Pseudo-inverse (Moore-Penrose) using lstsq - pinv2 --- Pseudo-inverse using svd - -Eigenvalues and Decompositions:: - - eig --- Find the eigenvalues and vectors of a square matrix - eigvals --- Find the eigenvalues of a square matrix - eig_banded --- Find the eigenvalues and vectors of a band matrix - eigvals_banded --- Find the eigenvalues of a band matrix - lu --- LU decomposition of a matrix - lu_factor --- LU decomposition returning unordered matrix and pivots - lu_solve --- solve Ax=b using back substitution with output of lu_factor - svd --- Singular value decomposition of a matrix - svdvals --- Singular values of a matrix - diagsvd --- construct matrix of singular values from output of svd - orth --- construct orthonormal basis for range of A using svd - cholesky --- Cholesky decomposition of a matrix - cholesky_banded --- Cholesky decomposition of a banded symmetric or Hermitian matrix - cho_factor --- Cholesky decomposition for use in solving linear system - cho_solve --- Solve previously factored linear system - qr --- QR decomposition of a matrix - schur --- Schur decomposition of a matrix - rsf2csf --- Real to complex schur form - hessenberg --- Hessenberg form of a matrix - -matrix Functions:: - - expm --- matrix exponential using Pade approx. - expm2 --- matrix exponential using Eigenvalue decomp. - expm3 --- matrix exponential using Taylor-series expansion - logm --- matrix logarithm - cosm --- matrix cosine - sinm --- matrix sine - tanm --- matrix tangent - coshm --- matrix hyperbolic cosine - sinhm --- matrix hyperbolic sine - tanhm --- matrix hyperbolic tangent - signm --- matrix sign - sqrtm --- matrix square root - funm --- Evaluating an arbitrary matrix function. 
- -Iterative linear systems solutions:: - - cg --- Conjugate gradient (symmetric systems only) - cgs --- Conjugate gradient squared - qmr --- Quasi-minimal residual - gmres --- Generalized minimal residual - bicg --- Bi-conjugate gradient - bicgstab --- Bi-conjugate gradient stabilized +Linear Algebra Basics: + inv: + Find the inverse of a square matrix + solve: + Solve a linear system of equations + solve_banded: + Solve a linear system of equations with a banded matrix + solveh_banded: + Solve a linear system of equations with a Hermitian or symmetric + banded matrix + det: + Find the determinant of a square matrix + norm: + matrix and vector norm + lstsq: + Solve linear least-squares problem + pinv: + Pseudo-inverse (Moore-Penrose) using lstsq + pinv2: + Pseudo-inverse using svd + +Eigenvalue Problem: + + eig: + Find the eigenvalues and vectors of a square matrix + eigvals: + Find the eigenvalues of a square matrix + eigh: + Find the eigenvalues and eigenvectors of a complex Hermitian or + real symmetric matrix. + eigvalsh: + Find the eigenvalues of a complex Hermitian or real symmetric + matrix. + eig_banded: + Find the eigenvalues and vectors of a band matrix + eigvals_banded: + Find the eigenvalues of a band matrix + +Decompositions: + + lu: + LU decomposition of a matrix + lu_factor: + LU decomposition returning unordered matrix and pivots + lu_solve: + solve Ax=b using back substitution with output of lu_factor + svd: + Singular value decomposition of a matrix + svdvals: + Singular values of a matrix + diagsvd: + construct matrix of singular values from output of svd + orth: + construct orthonormal basis for range of A using svd + cholesky: + Cholesky decomposition of a matrix + cholesky_banded: + Cholesky decomposition of a banded symmetric or Hermitian matrix + cho_factor: + Cholesky decomposition for use in solving linear system + cho_solve: + Solve previously factored linear system + cho_solve_banded: + Solve previously factored banded linear system. + qr: + QR decomposition of a matrix + schur: + Schur decomposition of a matrix + rsf2csf: + Real to complex schur form + hessenberg: + Hessenberg form of a matrix + +Matrix Functions: + + expm: + matrix exponential using Pade approx. + expm2: + matrix exponential using Eigenvalue decomp. + expm3: + matrix exponential using Taylor-series expansion + logm: + matrix logarithm + cosm: + matrix cosine + sinm: + matrix sine + tanm: + matrix tangent + coshm: + matrix hyperbolic cosine + sinhm: + matrix hyperbolic sine + tanhm: + matrix hyperbolic tangent + signm: + matrix sign + sqrtm: + matrix square root + funm: + Evaluating an arbitrary matrix function. + +Special Matrices: + + block_diag: + Construct a block diagonal matrix from submatrices. + circulant: + Circulant matrix + companion: + Companion matrix + hadamard: + Hadamard matrix of order 2^n + hankel: + Hankel matrix + kron: + Kronecker product of two arrays. + leslie: + Leslie matrix + toeplitz: + Toeplitz matrix + tri: + Construct a matrix filled with ones at and below a given diagonal. + tril: + Construct a lower-triangular matrix from a given matrix. + triu: + Construct an upper-triangular matrix from a given matrix. 
""" postpone_import = 1 diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/__init__.py python-scipy-0.8.0+dfsg1/scipy/linalg/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/__init__.py 2010-07-26 15:48:31.000000000 +0100 @@ -5,15 +5,19 @@ from info import __doc__ from linalg_version import linalg_version as __version__ +from misc import * from basic import * from decomp import * +from decomp_lu import * +from decomp_cholesky import * +from decomp_qr import * +from decomp_svd import * +from decomp_schur import * from matfuncs import * from blas import * +from special_matrices import * -from iterative import * - - -__all__ = filter(lambda s:not s.startswith('_'),dir()) +__all__ = filter(lambda s: not s.startswith('_'), dir()) from numpy.dual import register_func for k in ['norm', 'inv', 'svd', 'solve', 'det', 'eig', 'eigh', 'eigvals', diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/interface_gen.py python-scipy-0.8.0+dfsg1/scipy/linalg/interface_gen.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/interface_gen.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/interface_gen.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,14 +1,7 @@ -## Automatically adapted for scipy Oct 18, 2005 by - #!/usr/bin/env python import os -import sys - -if sys.version[:3]>='2.3': - import re -else: - import pre as re +import re from distutils.dir_util import mkpath def all_subroutines(interface_in): diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/iterative.py python-scipy-0.8.0+dfsg1/scipy/linalg/iterative.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/iterative.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/iterative.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,13 +0,0 @@ -__all__ = ['bicg','bicgstab','cg','cgs','gmres','qmr'] - -# Deprecated on January 26, 2008 - -from scipy.sparse.linalg import isolve -from numpy import deprecate - -for name in __all__: - oldfn = getattr(isolve, name) - oldname='scipy.linalg.' + name - newname='scipy.sparse.linalg.' + name - newfn = deprecate(oldfn, oldname=oldname, newname=newname) - exec(name + ' = newfn') diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/lapack.py python-scipy-0.8.0+dfsg1/scipy/linalg/lapack.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/lapack.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/lapack.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,16 +1,13 @@ -## Automatically adapted for scipy Oct 18, 2005 by - # # Author: Pearu Peterson, March 2002 # __all__ = ['get_lapack_funcs'] -import new - # The following ensures that possibly missing flavor (C or Fortran) is # replaced with the available one. If none is available, exception # is raised at the first attempt to use the resources. 
+import types import numpy @@ -97,9 +94,9 @@ func2 = getattr(m2,func_name,None) if func2 is not None: exec _colmajor_func_template % {'func_name':func_name} - func = new.function(func_code, - {'clapack_func':func2}, - func_name) + func = types.FunctionType(func_code, + {'clapack_func':func2}, + func_name) func.module_name = m2_name func.__doc__ = func2.__doc__ func.prefix = required_prefix @@ -107,6 +104,8 @@ funcs.append(func) return tuple(funcs) + + _colmajor_func_template = '''\ def %(func_name)s(*args,**kws): if "rowmajor" not in kws: diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/matfuncs.py python-scipy-0.8.0+dfsg1/scipy/linalg/matfuncs.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/matfuncs.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/matfuncs.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 18, 2005 by - # # Author: Travis Oliphant, March 2002 # @@ -13,13 +11,19 @@ isfinite, sqrt, identity, single from numpy import matrix as mat import numpy as np -from basic import solve, inv, norm, triu, all_mat -from decomp import eig, schur, rsf2csf, orth, svd + +# Local imports +from misc import norm +from basic import solve, inv +from special_matrices import triu, all_mat +from decomp import eig +from decomp_svd import orth, svd +from decomp_schur import schur, rsf2csf eps = np.finfo(float).eps feps = np.finfo(single).eps -def expm(A,q=7): +def expm(A, q=7): """Compute the matrix exponential using Pade approximation. Parameters @@ -36,11 +40,6 @@ """ A = asarray(A) - ss = True - if A.dtype.char in ['f', 'F']: - pass ## A.savespace(1) - else: - pass ## A.savespace(0) # Scale A so that norm is < 1/2 nA = norm(A,Inf) @@ -69,7 +68,6 @@ F = solve(D,N) for k in range(1,j+1): F = dot(F,F) - pass ## A.savespace(ss) return F def expm2(A): @@ -95,7 +93,7 @@ vri = inv(vr) return dot(dot(vr,diag(exp(s))),vri).astype(t) -def expm3(A,q=20): +def expm3(A, q=20): """Compute the matrix exponential using Taylor series. Parameters @@ -126,7 +124,8 @@ return eA _array_precision = {'i': 1, 'l': 1, 'f': 0, 'd': 1, 'F': 0, 'D': 1} -def toreal(arr,tol=None): + +def toreal(arr, tol=None): """Return as real array if imaginary part is small. Parameters @@ -273,7 +272,7 @@ else: return solve(coshm(A), sinhm(A)) -def funm(A,func,disp=1): +def funm(A, func, disp=True): """Evaluate a matrix function specified by a callable. Returns the value of matrix-valued function f at A. The function f @@ -348,7 +347,7 @@ else: return F, err -def logm(A,disp=1): +def logm(A, disp=True): """Compute matrix logarithm. The matrix logarithm is the inverse of expm: expm(logm(A)) == A @@ -399,7 +398,7 @@ else: return F, errest -def signm(a,disp=1): +def signm(a, disp=True): """Matrix sign function. Extension of the scalar sign(x) to matrices. @@ -476,7 +475,7 @@ else: return S0, errest -def sqrtm(A,disp=1): +def sqrtm(A, disp=True): """Matrix square root. 
Parameters diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/misc.py python-scipy-0.8.0+dfsg1/scipy/linalg/misc.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/misc.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/misc.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,21 @@ +import numpy as np +from numpy.linalg import LinAlgError + +__all__ = ['LinAlgError', 'norm'] + + +def norm(a, ord=None): + # Differs from numpy only in non-finite handling + return np.linalg.norm(np.asarray_chkfinite(a), ord=ord) +norm.__doc__ = np.linalg.norm.__doc__ + + +def _datanotshared(a1,a): + if a1 is a: + return False + else: + #try comparing data pointers + try: + return a1.__array_interface__['data'][0] != a.__array_interface__['data'][0] + except: + return True \ No newline at end of file diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/setup_atlas_version.py python-scipy-0.8.0+dfsg1/scipy/linalg/setup_atlas_version.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/setup_atlas_version.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/setup_atlas_version.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 18, 2005 by - #!/usr/bin/env python import os diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/setup.py python-scipy-0.8.0+dfsg1/scipy/linalg/setup.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/setup.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/setup.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 18, 2005 by - #!/usr/bin/env python import os diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/special_matrices.py python-scipy-0.8.0+dfsg1/scipy/linalg/special_matrices.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/special_matrices.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/special_matrices.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,539 @@ + +import math +import numpy as np + +#----------------------------------------------------------------------------- +# matrix construction functions +#----------------------------------------------------------------------------- + +def tri(N, M=None, k=0, dtype=None): + """ + Construct (N, M) matrix filled with ones at and below the k-th diagonal. + + The matrix has A[i,j] == 1 for i <= j + k + + Parameters + ---------- + N : integer + The size of the first dimension of the matrix. + M : integer or None + The size of the second dimension of the matrix. If `M` is None, + `M = N` is assumed. + k : integer + Number of subdiagonal below which matrix is filled with ones. + `k` = 0 is the main diagonal, `k` < 0 subdiagonal and `k` > 0 + superdiagonal. + dtype : dtype + Data type of the matrix. + + Returns + ------- + A : array, shape (N, M) + + Examples + -------- + >>> from scipy.linalg import tri + >>> tri(3, 5, 2, dtype=int) + array([[1, 1, 1, 0, 0], + [1, 1, 1, 1, 0], + [1, 1, 1, 1, 1]]) + >>> tri(3, 5, -1, dtype=int) + array([[0, 0, 0, 0, 0], + [1, 0, 0, 0, 0], + [1, 1, 0, 0, 0]]) + + """ + if M is None: M = N + if type(M) == type('d'): + #pearu: any objections to remove this feature? + # As tri(N,'d') is equivalent to tri(N,dtype='d') + dtype = M + M = N + m = np.greater_equal(np.subtract.outer(np.arange(N), np.arange(M)),-k) + if dtype is None: + return m + else: + return m.astype(dtype) + +def tril(m, k=0): + """Construct a copy of a matrix with elements above the k-th diagonal zeroed. 
+ + Parameters + ---------- + m : array + Matrix whose elements to return + k : integer + Diagonal above which to zero elements. + k == 0 is the main diagonal, k < 0 subdiagonal and k > 0 superdiagonal. + + Returns + ------- + A : array, shape m.shape, dtype m.dtype + + Examples + -------- + >>> from scipy.linalg import tril + >>> tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) + array([[ 0, 0, 0], + [ 4, 0, 0], + [ 7, 8, 0], + [10, 11, 12]]) + + """ + m = np.asarray(m) + out = tri(m.shape[0], m.shape[1], k=k, dtype=m.dtype.char)*m + return out + +def triu(m, k=0): + """Construct a copy of a matrix with elements below the k-th diagonal zeroed. + + Parameters + ---------- + m : array + Matrix whose elements to return + k : integer + Diagonal below which to zero elements. + k == 0 is the main diagonal, k < 0 subdiagonal and k > 0 superdiagonal. + + Returns + ------- + A : array, shape m.shape, dtype m.dtype + + Examples + -------- + >>> from scipy.linalg import tril + >>> triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) + array([[ 1, 2, 3], + [ 4, 5, 6], + [ 0, 8, 9], + [ 0, 0, 12]]) + + """ + m = np.asarray(m) + out = (1-tri(m.shape[0], m.shape[1], k-1, m.dtype.char))*m + return out + + +def toeplitz(c, r=None): + """ + Construct a Toeplitz matrix. + + The Toepliz matrix has constant diagonals, with c as its first column + and r as its first row. If r is not given, r == conjugate(c) is + assumed. + + Parameters + ---------- + c : array-like, 1D + First column of the matrix. Whatever the actual shape of `c`, it + will be converted to a 1D array. + r : array-like, 1D + First row of the matrix. If None, `r = conjugate(c)` is assumed; in + this case, if `c[0]` is real, the result is a Hermitian matrix. + `r[0]` is ignored; the first row of the returned matrix is + `[c[0], r[1:]]`. Whatever the actual shape of `r`, it will be + converted to a 1D array. + + Returns + ------- + A : array, shape (len(c), len(r)) + The Toeplitz matrix. + dtype is the same as `(c[0] + r[0]).dtype`. + + See also + -------- + circulant : circulant matrix + hankel : Hankel matrix + + Notes + ----- + The behavior when `c` or `r` is a scalar, or when `c` is complex and + `r` is None, was changed in version 0.8.0. The behavior in previous + versions was undocumented and is no longer supported. + + Examples + -------- + >>> from scipy.linalg import toeplitz + >>> toeplitz([1,2,3], [1,4,5,6]) + array([[1, 4, 5, 6], + [2, 1, 4, 5], + [3, 2, 1, 4]]) + >>> toeplitz([1.0, 2+3j, 4-1j]) + array([[ 1.+0.j, 2.-3.j, 4.+1.j], + [ 2.+3.j, 1.+0.j, 2.-3.j], + [ 4.-1.j, 2.+3.j, 1.+0.j]]) + + """ + c = np.asarray(c).ravel() + if r is None: + r = c.conjugate() + else: + r = np.asarray(r).ravel() + # Form a 1D array of values to be used in the matrix, containing a reversed + # copy of r[1:], followed by c. + vals = np.concatenate((r[-1:0:-1], c)) + a, b = np.ogrid[0:len(c), len(r)-1:-1:-1] + indx = a + b + # `indx` is a 2D array of indices into the 1D array `vals`, arranged so that + # `vals[indx]` is the Toeplitz matrix. + return vals[indx] + +def circulant(c): + """ + Construct a circulant matrix. + + Parameters + ---------- + c : array-like, 1D + First column of the matrix. + + Returns + ------- + A : array, shape (len(c), len(c)) + A circulant matrix whose first column is `c`. + + See also + -------- + toeplitz : Toeplitz matrix + hankel : Hankel matrix + + Notes + ----- + .. 
versionadded:: 0.8.0 + + Examples + -------- + >>> from scipy.linalg import circulant + >>> circulant([1, 2, 3]) + array([[1, 3, 2], + [2, 1, 3], + [3, 2, 1]]) + + """ + c = np.asarray(c).ravel() + a, b = np.ogrid[0:len(c), 0:-len(c):-1] + indx = a + b + # `indx` is a 2D array of indices into `c`, arranged so that `c[indx]` is + # the circulant matrix. + return c[indx] + +def hankel(c, r=None): + """ + Construct a Hankel matrix. + + The Hankel matrix has constant anti-diagonals, with `c` as its + first column and `r` as its last row. If `r` is not given, then + `r = zeros_like(c)` is assumed. + + Parameters + ---------- + c : array-like, 1D + First column of the matrix. Whatever the actual shape of `c`, it + will be converted to a 1D array. + r : array-like, 1D + Last row of the matrix. If None, `r = zeros_like(c)` is assumed. + `r[0]` is ignored; the last row of the returned matrix is + `[c[-1], r[1:]]`. Whatever the actual shape of `r`, it will be + converted to a 1D array. + + Returns + ------- + A : array, shape (len(c), len(r)) + The Hankel matrix. + dtype is the same as `(c[0] + r[0]).dtype`. + + See also + -------- + toeplitz : Toeplitz matrix + circulant : circulant matrix + + Examples + -------- + >>> from scipy.linalg import hankel + >>> hankel([1, 17, 99]) + array([[ 1, 17, 99], + [17, 99, 0], + [99, 0, 0]]) + >>> hankel([1,2,3,4], [4,7,7,8,9]) + array([[1, 2, 3, 4, 7], + [2, 3, 4, 7, 7], + [3, 4, 7, 7, 8], + [4, 7, 7, 8, 9]]) + + """ + c = np.asarray(c).ravel() + if r is None: + r = np.zeros_like(c) + else: + r = np.asarray(r).ravel() + # Form a 1D array of values to be used in the matrix, containing `c` + # followed by r[1:]. + vals = np.concatenate((c, r[1:])) + a, b = np.ogrid[0:len(c), 0:len(r)] + indx = a + b + # `indx` is a 2D array of indices into the 1D array `vals`, arranged so that + # `vals[indx]` is the Hankel matrix. + return vals[indx] + +def hadamard(n, dtype=int): + """ + Construct a Hadamard matrix. + + `hadamard(n)` constructs an n-by-n Hadamard matrix, using Sylvester's + construction. `n` must be a power of 2. + + Parameters + ---------- + n : int + The order of the matrix. `n` must be a power of 2. + dtype : numpy dtype + The data type of the array to be constructed. + + Returns + ------- + H : ndarray with shape (n, n) + The Hadamard matrix. + + Notes + ----- + .. versionadded:: 0.8.0 + + Examples + -------- + >>> hadamard(2, dtype=complex) + array([[ 1.+0.j, 1.+0.j], + [ 1.+0.j, -1.-0.j]]) + >>> hadamard(4) + array([[ 1, 1, 1, 1], + [ 1, -1, 1, -1], + [ 1, 1, -1, -1], + [ 1, -1, -1, 1]]) + + """ + + # This function is a slightly modified version of the + # function contributed by Ivo in ticket #675. + + if n < 1: + lg2 = 0 + else: + lg2 = int(math.log(n, 2)) + if 2 ** lg2 != n: + raise ValueError("n must be an positive integer, and n must be power of 2") + + H = np.array([[1]], dtype=dtype) + + # Sylvester's construction + for i in range(0, lg2): + H = np.vstack((np.hstack((H, H)), np.hstack((H, -H)))) + + return H + + +def leslie(f, s): + """Create a Leslie matrix. + + Parameters + ---------- + f : array-like, 1D + The "fecundity" coefficients. + s : array-like, 1D + The "survival" coefficients. The length of `s` must be one less + than the length of `f`, and it must be at least 1. + + Returns + ------- + L : ndarray, 2D + Returns a 2D numpy ndarray with shape `(n,n)`, where `n` is the + length of `f`. The array is zero except for the first row, + which is `f`, and the first subdiagonal, which is `s`. 
+ The data type of the array will be the data type of `f[0]+s[0]`. + + Notes + ----- + .. versionadded:: 0.8.0 + + Examples + -------- + >>> leslie([0.1, 2.0, 1.0, 0.1], [0.2, 0.8, 0.7]) + array([[ 0.1, 2. , 1. , 0.1], + [ 0.2, 0. , 0. , 0. ], + [ 0. , 0.8, 0. , 0. ], + [ 0. , 0. , 0.7, 0. ]]) + """ + f = np.atleast_1d(f) + s = np.atleast_1d(s) + if f.ndim != 1: + raise ValueError("Incorrect shape for f. f must be one-dimensional") + if s.ndim != 1: + raise ValueError("Incorrect shape for s. s must be one-dimensional") + if f.size != s.size + 1: + raise ValueError("Incorrect lengths for f and s. The length" + " of s must be one less than the length of f.") + if s.size == 0: + raise ValueError("The length of s must be at least 1.") + + tmp = f[0] + s[0] + n = f.size + a = np.zeros((n,n), dtype=tmp.dtype) + a[0] = f + a[range(1,n), range(0,n-1)] = s + return a + + +def all_mat(*args): + return map(np.matrix,args) + +def kron(a,b): + """Kronecker product of a and b. + + The result is the block matrix:: + + a[0,0]*b a[0,1]*b ... a[0,-1]*b + a[1,0]*b a[1,1]*b ... a[1,-1]*b + ... + a[-1,0]*b a[-1,1]*b ... a[-1,-1]*b + + Parameters + ---------- + a : array, shape (M, N) + b : array, shape (P, Q) + + Returns + ------- + A : array, shape (M*P, N*Q) + Kronecker product of a and b + + Examples + -------- + >>> from scipy import kron, array + >>> kron(array([[1,2],[3,4]]), array([[1,1,1]])) + array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]]) + + """ + if not a.flags['CONTIGUOUS']: + a = np.reshape(a, a.shape) + if not b.flags['CONTIGUOUS']: + b = np.reshape(b, b.shape) + o = np.outer(a,b) + o = o.reshape(a.shape + b.shape) + return np.concatenate(np.concatenate(o, axis=1), axis=1) + +def block_diag(*arrs): + """Create a block diagonal matrix from the provided arrays. + + Given the inputs `A`, `B` and `C`, the output will have these + arrays arranged on the diagonal:: + + [[A, 0, 0], + [0, B, 0], + [0, 0, C]] + + If all the input arrays are square, the output is known as a + block diagonal matrix. + + Parameters + ---------- + A, B, C, ... : array-like, up to 2D + Input arrays. A 1D array or array-like sequence with length n is + treated as a 2D array with shape (1,n). + + Returns + ------- + D : ndarray + Array with `A`, `B`, `C`, ... on the diagonal. `D` has the + same dtype as `A`. + + References + ---------- + .. [1] Wikipedia, "Block matrix", + http://en.wikipedia.org/wiki/Block_diagonal_matrix + + Examples + -------- + >>> A = [[1, 0], + ... [0, 1]] + >>> B = [[3, 4, 5], + ... [6, 7, 8]] + >>> C = [[7]] + >>> print(block_diag(A, B, C)) + [[1 0 0 0 0 0] + [0 1 0 0 0 0] + [0 0 3 4 5 0] + [0 0 6 7 8 0] + [0 0 0 0 0 7]] + >>> block_diag(1.0, [2, 3], [[4, 5], [6, 7]]) + array([[ 1., 0., 0., 0., 0.], + [ 0., 2., 3., 0., 0.], + [ 0., 0., 0., 4., 5.], + [ 0., 0., 0., 6., 7.]]) + + """ + if arrs == (): + arrs = ([],) + arrs = [np.atleast_2d(a) for a in arrs] + + bad_args = [k for k in range(len(arrs)) if arrs[k].ndim > 2] + if bad_args: + raise ValueError("arguments in the following positions have dimension " + "greater than 2: %s" % bad_args) + + shapes = np.array([a.shape for a in arrs]) + out = np.zeros(np.sum(shapes, axis=0), dtype=arrs[0].dtype) + + r, c = 0, 0 + for i, (rr, cc) in enumerate(shapes): + out[r:r + rr, c:c + cc] = arrs[i] + r += rr + c += cc + return out + +def companion(a): + """Create a companion matrix. + + Create the companion matrix associated with the polynomial whose + coefficients are given in `a`. + + Parameters + ---------- + a : array-like, 1D + Polynomial coefficients. 
The length of `a` must be at least two, + and `a[0]` must not be zero. + + Returns + ------- + c : ndarray + A square ndarray with shape `(n-1, n-1)`, where `n` is the length + of `a`. The first row of `c` is `-a[1:]/a[0]`, and the first + subdiagonal is all ones. The data type of the array is the same + as the data type of `1.0*a[0]`. + + Notes + ----- + .. versionadded:: 0.8.0 + + Examples + -------- + >>> companion([1, -10, 31, -30]) + array([[ 10., -31., 30.], + [ 1., 0., 0.], + [ 0., 1., 0.]]) + """ + a = np.atleast_1d(a) + + if a.ndim != 1: + raise ValueError("Incorrect shape for `a`. `a` must be one-dimensional.") + + if a.size < 2: + raise ValueError("The length of `a` must be at least 2.") + + if a[0] == 0: + raise ValueError("The first coefficient in `a` must not be zero.") + + first_row = -a[1:]/(1.0*a[0]) + n = a.size + c = np.zeros((n-1, n-1), dtype=first_row.dtype) + c[0] = first_row + c[range(1,n-1), range(0, n-2)] = 1 + return c diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_basic.py python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_basic.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_basic.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_basic.py 2010-07-26 15:48:32.000000000 +0100 @@ -19,37 +19,247 @@ python tests/test_basic.py """ -from numpy import arange, add, array, dot, zeros, identity, conjugate, transpose +import warnings + +from numpy import arange, array, dot, zeros, identity, conjugate, transpose, \ + float32, zeros_like import numpy.linalg as linalg from numpy.testing import * -from scipy.linalg import solve,inv,det,lstsq, toeplitz, hankel, tri, triu, \ - tril, pinv, pinv2, solve_banded +from scipy.linalg import solve, inv, det, lstsq, pinv, pinv2, norm,\ + solve_banded, solveh_banded, cholesky_banded def random(size): return rand(*size) -def get_mat(n): - data = arange(n) - data = add.outer(data,data) - return data class TestSolveBanded(TestCase): - def test_simple(self): - - a = [[1,20,0,0],[-30,4,6,0],[2,1,20,2],[0,-1,7,14]] - ab = [[0,20,6,2], - [1,4,20,14], - [-30,1,7,0], - [2,-1,0,0]] + def test_real(self): + a = array([[ 1.0, 20, 0, 0], + [ -30, 4, 6, 0], + [ 2, 1, 20, 2], + [ 0, -1, 7, 14]]) + ab = array([[ 0.0, 20, 6, 2], + [ 1, 4, 20, 14], + [ -30, 1, 7, 0], + [ 2, -1, 0, 0]]) l,u = 2,1 - for b in ([[1,0,0,0],[0,0,0,1],[0,1,0,0],[0,1,0,0]], - [[2,1],[-30,4],[2,3],[1,3]]): - x = solve_banded((l,u),ab,b) - assert_array_almost_equal(dot(a,x),b) + b4 = array([10.0, 0.0, 2.0, 14.0]) + b4by1 = b4.reshape(-1,1) + b4by2 = array([[ 2, 1], + [-30, 4], + [ 2, 3], + [ 1, 3]]) + b4by4 = array([[1, 0, 0, 0], + [0, 0, 0, 1], + [0, 1, 0, 0], + [0, 1, 0, 0]]) + for b in [b4, b4by1, b4by2, b4by4]: + x = solve_banded((l, u), ab, b) + assert_array_almost_equal(dot(a, x), b) + + def test_complex(self): + a = array([[ 1.0, 20, 0, 0], + [ -30, 4, 6, 0], + [ 2j, 1, 20, 2j], + [ 0, -1, 7, 14]]) + ab = array([[ 0.0, 20, 6, 2j], + [ 1, 4, 20, 14], + [ -30, 1, 7, 0], + [ 2j, -1, 0, 0]]) + l,u = 2,1 + b4 = array([10.0, 0.0, 2.0, 14.0j]) + b4by1 = b4.reshape(-1,1) + b4by2 = array([[ 2, 1], + [-30, 4], + [ 2, 3], + [ 1, 3]]) + b4by4 = array([[1, 0, 0, 0], + [0, 0, 0,1j], + [0, 1, 0, 0], + [0, 1, 0, 0]]) + for b in [b4, b4by1, b4by2, b4by4]: + x = solve_banded((l, u), ab, b) + assert_array_almost_equal(dot(a, x), b) + + def test_bad_shape(self): + ab = array([[ 0.0, 20, 6, 2], + [ 1, 4, 20, 14], + [ -30, 1, 7, 0], + [ 2, -1, 0, 0]]) + l,u = 2,1 + bad = array([1.0, 2.0, 3.0, 4.0]).reshape(-1,4) + assert_raises(ValueError, 
solve_banded, (l, u), ab, bad) + assert_raises(ValueError, solve_banded, (l, u), ab, [1.0, 2.0]) + + # Values of (l,u) are not compatible with ab. + assert_raises(ValueError, solve_banded, (1, 1), ab, [1.0, 2.0]) + + +class TestSolveHBanded(TestCase): + # solveh_banded currently has a DeprecationWarning. When the warning + # is removed in scipy 0.9, the 'ignore' filters and the test for the + # warning can be removed. + + def test_01_upper(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 1 0] [1] + # [ 1 4 1] X = [4] + # [ 0 1 4] [1] + # with the RHS as a 1D array. + ab = array([[-99, 1.0, 1.0], [4.0, 4.0, 4.0]]) + b = array([1.0, 4.0, 1.0]) + c, x = solveh_banded(ab, b) + assert_array_almost_equal(x, [0.0, 1.0, 0.0]) + # Remove the following part of this test in scipy 0.9. + a = array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]) + fac = zeros_like(a) + fac[range(3),range(3)] = c[-1] + fac[(0,1),(1,2)] = c[0,1:] + assert_array_almost_equal(a, dot(fac.T, fac)) + + def test_02_upper(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 1 0] [1 4] + # [ 1 4 1] X = [4 2] + # [ 0 1 4] [1 4] + # + ab = array([[-99, 1.0, 1.0], + [4.0, 4.0, 4.0]]) + b = array([[1.0, 4.0], + [4.0, 2.0], + [1.0, 4.0]]) + c, x = solveh_banded(ab, b) + expected = array([[0.0, 1.0], + [1.0, 0.0], + [0.0, 1.0]]) + assert_array_almost_equal(x, expected) + + def test_03_upper(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 1 0] [1] + # [ 1 4 1] X = [4] + # [ 0 1 4] [1] + # with the RHS as a 2D array with shape (3,1). + ab = array([[-99, 1.0, 1.0], [4.0, 4.0, 4.0]]) + b = array([1.0, 4.0, 1.0]).reshape(-1,1) + c, x = solveh_banded(ab, b) + assert_array_almost_equal(x, array([0.0, 1.0, 0.0]).reshape(-1,1)) + + def test_01_lower(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 1 0] [1] + # [ 1 4 1] X = [4] + # [ 0 1 4] [1] + # + ab = array([[4.0, 4.0, 4.0], + [1.0, 1.0, -99]]) + b = array([1.0, 4.0, 1.0]) + c, x = solveh_banded(ab, b, lower=True) + assert_array_almost_equal(x, [0.0, 1.0, 0.0]) + + def test_02_lower(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 1 0] [1 4] + # [ 1 4 1] X = [4 2] + # [ 0 1 4] [1 4] + # + ab = array([[4.0, 4.0, 4.0], + [1.0, 1.0, -99]]) + b = array([[1.0, 4.0], + [4.0, 2.0], + [1.0, 4.0]]) + c, x = solveh_banded(ab, b, lower=True) + expected = array([[0.0, 1.0], + [1.0, 0.0], + [0.0, 1.0]]) + assert_array_almost_equal(x, expected) + + def test_01_float32(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 1 0] [1] + # [ 1 4 1] X = [4] + # [ 0 1 4] [1] + # + ab = array([[-99, 1.0, 1.0], [4.0, 4.0, 4.0]], dtype=float32) + b = array([1.0, 4.0, 1.0], dtype=float32) + c, x = solveh_banded(ab, b) + assert_array_almost_equal(x, [0.0, 1.0, 0.0]) + + def test_02_float32(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 1 0] [1 4] + # [ 1 4 1] X = [4 2] + # [ 0 1 4] [1 4] + # + ab = array([[-99, 1.0, 1.0], + [4.0, 4.0, 4.0]], dtype=float32) + b = array([[1.0, 4.0], + [4.0, 2.0], + [1.0, 4.0]], dtype=float32) + c, x = solveh_banded(ab, b) + expected = array([[0.0, 1.0], + [1.0, 0.0], + [0.0, 1.0]]) + assert_array_almost_equal(x, expected) + + def test_01_complex(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 -j 0] [ -j] + # [ j 4 -j] X = [4-j] + # [ 0 j 4] [4+j] + # + ab = array([[-99, -1.0j, 
-1.0j], [4.0, 4.0, 4.0]]) + b = array([-1.0j, 4.0-1j, 4+1j]) + c, x = solveh_banded(ab, b) + assert_array_almost_equal(x, [0.0, 1.0, 1.0]) + + def test_02_complex(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + # Solve + # [ 4 -j 0] [ -j 4j] + # [ j 4 -j] X = [4-j -1-j] + # [ 0 j 4] [4+j 4 ] + # + ab = array([[-99, -1.0j, -1.0j], + [4.0, 4.0, 4.0]]) + b = array([[ -1j, 4.0j], + [4.0-1j, -1.0-1j], + [4.0+1j, 4.0]]) + c, x = solveh_banded(ab, b) + expected = array([[0.0, 1.0j], + [1.0, 0.0], + [1.0, 1.0]]) + assert_array_almost_equal(x, expected) + + def test_bad_shapes(self): + warnings.simplefilter('ignore', category=DeprecationWarning) + + ab = array([[-99, 1.0, 1.0], + [4.0, 4.0, 4.0]]) + b = array([[1.0, 4.0], + [4.0, 2.0]]) + assert_raises(ValueError, solveh_banded, ab, b) + assert_raises(ValueError, solveh_banded, ab, [1.0, 2.0]) + assert_raises(ValueError, solveh_banded, ab, [1.0]) + + def test_00_deprecation_warning(self): + warnings.simplefilter('error', category=DeprecationWarning) + ab = array([[-99, 1.0, 1.0], [4.0, 4.0, 4.0]]) + b = array([1.0, 4.0, 1.0]) + assert_raises(DeprecationWarning, solveh_banded, ab, b) + class TestSolve(TestCase): @@ -304,99 +514,6 @@ #XXX: check definition of res assert_array_almost_equal(x,direct_lstsq(a,b,1)) -class TestTri(TestCase): - def test_basic(self): - assert_equal(tri(4),array([[1,0,0,0], - [1,1,0,0], - [1,1,1,0], - [1,1,1,1]])) - assert_equal(tri(4,dtype='f'),array([[1,0,0,0], - [1,1,0,0], - [1,1,1,0], - [1,1,1,1]],'f')) - def test_diag(self): - assert_equal(tri(4,k=1),array([[1,1,0,0], - [1,1,1,0], - [1,1,1,1], - [1,1,1,1]])) - assert_equal(tri(4,k=-1),array([[0,0,0,0], - [1,0,0,0], - [1,1,0,0], - [1,1,1,0]])) - def test_2d(self): - assert_equal(tri(4,3),array([[1,0,0], - [1,1,0], - [1,1,1], - [1,1,1]])) - assert_equal(tri(3,4),array([[1,0,0,0], - [1,1,0,0], - [1,1,1,0]])) - def test_diag2d(self): - assert_equal(tri(3,4,k=2),array([[1,1,1,0], - [1,1,1,1], - [1,1,1,1]])) - assert_equal(tri(4,3,k=-2),array([[0,0,0], - [0,0,0], - [1,0,0], - [1,1,0]])) - -class TestTril(TestCase): - def test_basic(self): - a = (100*get_mat(5)).astype('l') - b = a.copy() - for k in range(5): - for l in range(k+1,5): - b[k,l] = 0 - assert_equal(tril(a),b) - - def test_diag(self): - a = (100*get_mat(5)).astype('f') - b = a.copy() - for k in range(5): - for l in range(k+3,5): - b[k,l] = 0 - assert_equal(tril(a,k=2),b) - b = a.copy() - for k in range(5): - for l in range(max((k-1,0)),5): - b[k,l] = 0 - assert_equal(tril(a,k=-2),b) - -class TestTriu(TestCase): - def test_basic(self): - a = (100*get_mat(5)).astype('l') - b = a.copy() - for k in range(5): - for l in range(k+1,5): - b[l,k] = 0 - assert_equal(triu(a),b) - - def test_diag(self): - a = (100*get_mat(5)).astype('f') - b = a.copy() - for k in range(5): - for l in range(max((k-1,0)),5): - b[l,k] = 0 - assert_equal(triu(a,k=2),b) - b = a.copy() - for k in range(5): - for l in range(k+3,5): - b[l,k] = 0 - assert_equal(triu(a,k=-2),b) - -class TestToeplitz(TestCase): - def test_basic(self): - y = toeplitz([1,2,3]) - assert_array_equal(y,[[1,2,3],[2,1,2],[3,2,1]]) - y = toeplitz([1,2,3],[1,4,5]) - assert_array_equal(y,[[1,4,5],[2,1,4],[3,2,1]]) - -class TestHankel(TestCase): - def test_basic(self): - y = hankel([1,2,3]) - assert_array_equal(y,[[1,2,3],[2,3,0],[3,0,0]]) - y = hankel([1,2,3],[3,4,5]) - assert_array_equal(y,[[1,2,3],[2,3,4],[3,4,5]]) class TestPinv(TestCase): @@ -425,5 +542,10 @@ a_pinv2 = pinv2(a) assert_array_almost_equal(a_pinv,a_pinv2) +class TestNorm(object): + def 
test_zero_norm(self): + assert_equal(norm([1,0,3], 0), 2) + assert_equal(norm([1,2,3], 0), 3) + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_blas.py python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_blas.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_blas.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_blas.py 2010-07-26 15:48:32.000000000 +0100 @@ -5,11 +5,9 @@ __usage__ = """ Build linalg: - python setup_linalg.py build + python setup.py build Run tests if scipy is installed: - python -c 'import scipy;scipy.linalg.test()' -Run tests if linalg is not installed: - python tests/test_blas.py [] + python -c 'import scipy;scipy.linalg.test()' """ import math @@ -184,29 +182,5 @@ assert_array_almost_equal(f(3j,[3-4j],[-4]),[[-48-36j]]) assert_array_almost_equal(f(3j,[3-4j],[-4],3,[5j]),[-48-21j]) -class TestBLAS(TestCase): - - def test_fblas(self): - if hasattr(fblas,'empty_module'): - print """ -**************************************************************** -WARNING: fblas module is empty. ------------ -See scipy/INSTALL.txt for troubleshooting. -**************************************************************** -""" - def test_cblas(self): - if hasattr(cblas,'empty_module'): - print """ -**************************************************************** -WARNING: cblas module is empty ------------ -See scipy/INSTALL.txt for troubleshooting. -Notes: -* If atlas library is not found by numpy/distutils/system_info.py, - then scipy uses fblas instead of cblas. -**************************************************************** -""" - if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_decomp_cholesky.py python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_decomp_cholesky.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_decomp_cholesky.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_decomp_cholesky.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,140 @@ + + +from numpy.testing import TestCase, assert_array_almost_equal + +from numpy import array, transpose, dot, conjugate, zeros_like +from numpy.random import rand +from scipy.linalg import cholesky, cholesky_banded, cho_solve_banded + + +def random(size): + return rand(*size) + + +class TestCholesky(TestCase): + + def test_simple(self): + a = [[8,2,3],[2,9,3],[3,3,6]] + c = cholesky(a) + assert_array_almost_equal(dot(transpose(c),c),a) + c = transpose(c) + a = dot(c,transpose(c)) + assert_array_almost_equal(cholesky(a,lower=1),c) + + def test_simple_complex(self): + m = array([[3+1j,3+4j,5],[0,2+2j,2+7j],[0,0,7+4j]]) + a = dot(transpose(conjugate(m)),m) + c = cholesky(a) + a1 = dot(transpose(conjugate(c)),c) + assert_array_almost_equal(a,a1) + c = transpose(c) + a = dot(c,transpose(conjugate(c))) + assert_array_almost_equal(cholesky(a,lower=1),c) + + def test_random(self): + n = 20 + for k in range(2): + m = random([n,n]) + for i in range(n): + m[i,i] = 20*(.1+m[i,i]) + a = dot(transpose(m),m) + c = cholesky(a) + a1 = dot(transpose(c),c) + assert_array_almost_equal(a,a1) + c = transpose(c) + a = dot(c,transpose(c)) + assert_array_almost_equal(cholesky(a,lower=1),c) + + def test_random_complex(self): + n = 20 + for k in range(2): + m = random([n,n])+1j*random([n,n]) + for i in range(n): + m[i,i] = 20*(.1+abs(m[i,i])) + a = dot(transpose(conjugate(m)),m) + c = cholesky(a) + a1 = dot(transpose(conjugate(c)),c) + assert_array_almost_equal(a,a1) + 
c = transpose(c) + a = dot(c,transpose(conjugate(c))) + assert_array_almost_equal(cholesky(a,lower=1),c) + + +class TestCholeskyBanded(TestCase): + """Tests for cholesky_banded() and cho_solve_banded.""" + + def test_upper_real(self): + # Symmetric positive definite banded matrix `a` + a = array([[4.0, 1.0, 0.0, 0.0], + [1.0, 4.0, 0.5, 0.0], + [0.0, 0.5, 4.0, 0.2], + [0.0, 0.0, 0.2, 4.0]]) + # Banded storage form of `a`. + ab = array([[-1.0, 1.0, 0.5, 0.2], + [4.0, 4.0, 4.0, 4.0]]) + c = cholesky_banded(ab, lower=False) + ufac = zeros_like(a) + ufac[range(4),range(4)] = c[-1] + ufac[(0,1,2),(1,2,3)] = c[0,1:] + assert_array_almost_equal(a, dot(ufac.T, ufac)) + + b = array([0.0, 0.5, 4.2, 4.2]) + x = cho_solve_banded((c, False), b) + assert_array_almost_equal(x, [0.0, 0.0, 1.0, 1.0]) + + def test_upper_complex(self): + # Hermitian positive definite banded matrix `a` + a = array([[4.0, 1.0, 0.0, 0.0], + [1.0, 4.0, 0.5, 0.0], + [0.0, 0.5, 4.0, -0.2j], + [0.0, 0.0, 0.2j, 4.0]]) + # Banded storage form of `a`. + ab = array([[-1.0, 1.0, 0.5, -0.2j], + [4.0, 4.0, 4.0, 4.0]]) + c = cholesky_banded(ab, lower=False) + ufac = zeros_like(a) + ufac[range(4),range(4)] = c[-1] + ufac[(0,1,2),(1,2,3)] = c[0,1:] + assert_array_almost_equal(a, dot(ufac.conj().T, ufac)) + + b = array([0.0, 0.5, 4.0-0.2j, 0.2j + 4.0]) + x = cho_solve_banded((c, False), b) + assert_array_almost_equal(x, [0.0, 0.0, 1.0, 1.0]) + + def test_lower_real(self): + # Symmetric positive definite banded matrix `a` + a = array([[4.0, 1.0, 0.0, 0.0], + [1.0, 4.0, 0.5, 0.0], + [0.0, 0.5, 4.0, 0.2], + [0.0, 0.0, 0.2, 4.0]]) + # Banded storage form of `a`. + ab = array([[4.0, 4.0, 4.0, 4.0], + [1.0, 0.5, 0.2, -1.0]]) + c = cholesky_banded(ab, lower=True) + lfac = zeros_like(a) + lfac[range(4),range(4)] = c[0] + lfac[(1,2,3),(0,1,2)] = c[1,:3] + assert_array_almost_equal(a, dot(lfac, lfac.T)) + + b = array([0.0, 0.5, 4.2, 4.2]) + x = cho_solve_banded((c, True), b) + assert_array_almost_equal(x, [0.0, 0.0, 1.0, 1.0]) + + def test_lower_complex(self): + # Hermitian positive definite banded matrix `a` + a = array([[4.0, 1.0, 0.0, 0.0], + [1.0, 4.0, 0.5, 0.0], + [0.0, 0.5, 4.0, -0.2j], + [0.0, 0.0, 0.2j, 4.0]]) + # Banded storage form of `a`. 
+ ab = array([[4.0, 4.0, 4.0, 4.0], + [1.0, 0.5, 0.2j, -1.0]]) + c = cholesky_banded(ab, lower=True) + lfac = zeros_like(a) + lfac[range(4),range(4)] = c[0] + lfac[(1,2,3),(0,1,2)] = c[1,:3] + assert_array_almost_equal(a, dot(lfac, lfac.conj().T)) + + b = array([0.0, 0.5j, 3.8j, 3.8]) + x = cho_solve_banded((c, True), b) + assert_array_almost_equal(x, [0.0, 0.0, 1.0j, 1.0]) diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_decomp.py python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_decomp.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_decomp.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_decomp.py 2010-07-26 15:48:32.000000000 +0100 @@ -14,10 +14,12 @@ python tests/test_decomp.py """ -from numpy.testing import * +import numpy as np +from numpy.testing import TestCase, assert_equal, assert_array_almost_equal, \ + assert_array_equal, assert_raises, run_module_suite, dec -from scipy.linalg import eig,eigvals,lu,svd,svdvals,cholesky,qr, \ - schur,rsf2csf, lu_solve,lu_factor,solve,diagsvd,hessenberg,rq, \ +from scipy.linalg import eig, eigvals, lu, svd, svdvals, cholesky, qr, \ + schur, rsf2csf, lu_solve, lu_factor, solve, diagsvd, hessenberg, rq, \ eig_banded, eigvals_banded, eigh from scipy.linalg.flapack import dgbtrf, dgbtrs, zgbtrf, zgbtrs, \ dsbev, dsbevd, dsbevx, zhbevd, zhbevx @@ -204,6 +206,18 @@ if all(isfinite(res[:, i])): assert_array_almost_equal(res[:, i], 0) + def test_not_square_error(self): + """Check that passing a non-square array raises a ValueError.""" + A = np.arange(6).reshape(3,2) + assert_raises(ValueError, eig, A) + + def test_shape_mismatch(self): + """Check that passing arrays of with different shapes raises a ValueError.""" + A = identity(2) + B = np.arange(9.0).reshape(3,3) + assert_raises(ValueError, eig, A, B) + assert_raises(ValueError, eig, B, A) + class TestEigBanded(TestCase): def __init__(self, *args): @@ -777,54 +791,6 @@ def test_simple(self): assert_array_almost_equal(diagsvd([1,0,0],3,3),[[1,0,0],[0,0,0],[0,0,0]]) -class TestCholesky(TestCase): - - def test_simple(self): - a = [[8,2,3],[2,9,3],[3,3,6]] - c = cholesky(a) - assert_array_almost_equal(dot(transpose(c),c),a) - c = transpose(c) - a = dot(c,transpose(c)) - assert_array_almost_equal(cholesky(a,lower=1),c) - - def test_simple_complex(self): - m = array([[3+1j,3+4j,5],[0,2+2j,2+7j],[0,0,7+4j]]) - a = dot(transpose(conjugate(m)),m) - c = cholesky(a) - a1 = dot(transpose(conjugate(c)),c) - assert_array_almost_equal(a,a1) - c = transpose(c) - a = dot(c,transpose(conjugate(c))) - assert_array_almost_equal(cholesky(a,lower=1),c) - - def test_random(self): - n = 20 - for k in range(2): - m = random([n,n]) - for i in range(n): - m[i,i] = 20*(.1+m[i,i]) - a = dot(transpose(m),m) - c = cholesky(a) - a1 = dot(transpose(c),c) - assert_array_almost_equal(a,a1) - c = transpose(c) - a = dot(c,transpose(c)) - assert_array_almost_equal(cholesky(a,lower=1),c) - - def test_random_complex(self): - n = 20 - for k in range(2): - m = random([n,n])+1j*random([n,n]) - for i in range(n): - m[i,i] = 20*(.1+abs(m[i,i])) - a = dot(transpose(conjugate(m)),m) - c = cholesky(a) - a1 = dot(transpose(conjugate(c)),c) - assert_array_almost_equal(a,a1) - c = transpose(c) - a = dot(c,transpose(conjugate(c))) - assert_array_almost_equal(cholesky(a,lower=1),c) - class TestQR(TestCase): @@ -1053,5 +1019,93 @@ assert_equal(_datanotshared(A,M2),True) +def test_aligned_mem_float(): + """Check linalg works with non-aligned memory""" + # Allocate 402 bytes of memory (allocated on 
boundary) + a = arange(402, dtype=np.uint8) + + # Create an array with boundary offset 4 + z = np.frombuffer(a.data, offset=2, count=100, dtype=float32) + z.shape = 10, 10 + + eig(z, overwrite_a=True) + eig(z.T, overwrite_a=True) + + +def test_aligned_mem(): + """Check linalg works with non-aligned memory""" + # Allocate 804 bytes of memory (allocated on boundary) + a = arange(804, dtype=np.uint8) + + # Create an array with boundary offset 4 + z = np.frombuffer(a.data, offset=4, count=100, dtype=float) + z.shape = 10, 10 + + eig(z, overwrite_a=True) + eig(z.T, overwrite_a=True) + +def test_aligned_mem_complex(): + """Check that complex objects don't need to be completely aligned""" + # Allocate 1608 bytes of memory (allocated on boundary) + a = zeros(1608, dtype=np.uint8) + + # Create an array with boundary offset 8 + z = np.frombuffer(a.data, offset=8, count=100, dtype=complex) + z.shape = 10, 10 + + eig(z, overwrite_a=True) + # This does not need special handling + eig(z.T, overwrite_a=True) + +def check_lapack_misaligned(func, args, kwargs): + args = list(args) + for i in range(len(args)): + a = args[:] + if isinstance(a[i],np.ndarray): + # Try misaligning a[i] + aa = np.zeros(a[i].size*a[i].dtype.itemsize+8, dtype=np.uint8) + aa = np.frombuffer(aa.data, offset=4, count=a[i].size, dtype=a[i].dtype) + aa.shape = a[i].shape + aa[...] = a[i] + a[i] = aa + func(*a,**kwargs) + if len(a[i].shape)>1: + a[i] = a[i].T + func(*a,**kwargs) + + +@dec.knownfailureif(True, "Ticket #1152, triggers a segfault in rare cases.") +def test_lapack_misaligned(): + M = np.eye(10,dtype=float) + R = np.arange(100) + R.shape = 10,10 + S = np.arange(20000,dtype=np.uint8) + S = np.frombuffer(S.data, offset=4, count=100, dtype=np.float) + S.shape = 10, 10 + b = np.ones(10) + v = np.ones(3,dtype=float) + LU, piv = lu_factor(S) + for (func, args, kwargs) in [ + (eig,(S,),dict(overwrite_a=True)), # crash + (eigvals,(S,),dict(overwrite_a=True)), # no crash + (lu,(S,),dict(overwrite_a=True)), # no crash + (lu_factor,(S,),dict(overwrite_a=True)), # no crash + (lu_solve,((LU,piv),b),dict(overwrite_b=True)), + (solve,(S,b),dict(overwrite_a=True,overwrite_b=True)), + (svd,(M,),dict(overwrite_a=True)), # no crash + (svd,(R,),dict(overwrite_a=True)), # no crash + (svd,(S,),dict(overwrite_a=True)), # crash + (svdvals,(S,),dict()), # no crash + (svdvals,(S,),dict(overwrite_a=True)), #crash + (cholesky,(M,),dict(overwrite_a=True)), # no crash + (qr,(S,),dict(overwrite_a=True)), # crash + (rq,(S,),dict(overwrite_a=True)), # crash + (hessenberg,(S,),dict(overwrite_a=True)), # crash + (schur,(S,),dict(overwrite_a=True)), # crash + ]: + yield check_lapack_misaligned, func, args, kwargs +# not properly tested +# cholesky, rsf2csf, lu_solve, solve, eig_banded, eigvals_banded, eigh, diagsvd + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_lapack.py python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_lapack.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_lapack.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_lapack.py 2010-07-26 15:48:32.000000000 +0100 @@ -45,25 +45,14 @@ def test_flapack(self): if hasattr(flapack,'empty_module'): - print """ -**************************************************************** -WARNING: flapack module is empty ------------ -See scipy/INSTALL.txt for troubleshooting. 
-**************************************************************** -""" + #flapack module is empty + pass + def test_clapack(self): if hasattr(clapack,'empty_module'): - print """ -**************************************************************** -WARNING: clapack module is empty ------------ -See scipy/INSTALL.txt for troubleshooting. -Notes: -* If atlas library is not found by numpy/distutils/system_info.py, - then scipy uses flapack instead of clapack. -**************************************************************** -""" + #clapack module is empty + pass + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_matfuncs.py python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_matfuncs.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_matfuncs.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_matfuncs.py 2010-07-26 15:48:32.000000000 +0100 @@ -31,7 +31,7 @@ def test_defective1(self): a = array([[0.0,1,0,0],[1,0,1,0],[0,0,0,1],[0,0,1,0]]) - r = signm(a) + r = signm(a, disp=False) #XXX: what would be the correct result? def test_defective2(self): @@ -41,7 +41,7 @@ [-10.0,6.0,-20.0,-18.0,-2.0], [-9.6,9.6,-25.5,-15.4,-2.0], [9.8,-4.8,18.0,18.2,2.0])) - r = signm(a) + r = signm(a, disp=False) #XXX: what would be the correct result? def test_defective3(self): @@ -52,7 +52,7 @@ [ 0., 0., 0., 0., 3., 10., 0.], [ 0., 0., 0., 0., 0., -2., 25.], [ 0., 0., 0., 0., 0., 0., -3.]]) - r = signm(a) + r = signm(a, disp=False) #XXX: what would be the correct result? class TestLogM(TestCase): @@ -66,7 +66,8 @@ [ 0., 0., 0., 0., 0., -2., 25.], [ 0., 0., 0., 0., 0., 0., -3.]]) m = (identity(7)*3.1+0j)-a - logm(m) + logm(m, disp=False) + #XXX: what would be the correct result? class TestSqrtM(TestCase): @@ -83,7 +84,7 @@ [0,0,se,0], [0,0,0,1]]) assert_array_almost_equal(dot(sa,sa),a) - esa = sqrtm(a) + esa = sqrtm(a, disp=False)[0] assert_array_almost_equal(dot(esa,esa),a) class TestExpM(TestCase): diff -Nru python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_special_matrices.py python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_special_matrices.py --- python-scipy-0.7.2+dfsg1/scipy/linalg/tests/test_special_matrices.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linalg/tests/test_special_matrices.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,268 @@ +"""Tests for functions in special_matrices.py.""" + +from numpy import arange, add, array, eye, all, copy +from numpy.testing import * + +from scipy.linalg import toeplitz, hankel, circulant, hadamard, leslie, \ + companion, tri, triu, tril, kron, block_diag + + +def get_mat(n): + data = arange(n) + data = add.outer(data,data) + return data + + +class TestTri(TestCase): + def test_basic(self): + assert_equal(tri(4),array([[1,0,0,0], + [1,1,0,0], + [1,1,1,0], + [1,1,1,1]])) + assert_equal(tri(4,dtype='f'),array([[1,0,0,0], + [1,1,0,0], + [1,1,1,0], + [1,1,1,1]],'f')) + def test_diag(self): + assert_equal(tri(4,k=1),array([[1,1,0,0], + [1,1,1,0], + [1,1,1,1], + [1,1,1,1]])) + assert_equal(tri(4,k=-1),array([[0,0,0,0], + [1,0,0,0], + [1,1,0,0], + [1,1,1,0]])) + def test_2d(self): + assert_equal(tri(4,3),array([[1,0,0], + [1,1,0], + [1,1,1], + [1,1,1]])) + assert_equal(tri(3,4),array([[1,0,0,0], + [1,1,0,0], + [1,1,1,0]])) + def test_diag2d(self): + assert_equal(tri(3,4,k=2),array([[1,1,1,0], + [1,1,1,1], + [1,1,1,1]])) + assert_equal(tri(4,3,k=-2),array([[0,0,0], + [0,0,0], + [1,0,0], + [1,1,0]])) + +class TestTril(TestCase): + def test_basic(self): + 
a = (100*get_mat(5)).astype('l') + b = a.copy() + for k in range(5): + for l in range(k+1,5): + b[k,l] = 0 + assert_equal(tril(a),b) + + def test_diag(self): + a = (100*get_mat(5)).astype('f') + b = a.copy() + for k in range(5): + for l in range(k+3,5): + b[k,l] = 0 + assert_equal(tril(a,k=2),b) + b = a.copy() + for k in range(5): + for l in range(max((k-1,0)),5): + b[k,l] = 0 + assert_equal(tril(a,k=-2),b) + + +class TestTriu(TestCase): + def test_basic(self): + a = (100*get_mat(5)).astype('l') + b = a.copy() + for k in range(5): + for l in range(k+1,5): + b[l,k] = 0 + assert_equal(triu(a),b) + + def test_diag(self): + a = (100*get_mat(5)).astype('f') + b = a.copy() + for k in range(5): + for l in range(max((k-1,0)),5): + b[l,k] = 0 + assert_equal(triu(a,k=2),b) + b = a.copy() + for k in range(5): + for l in range(k+3,5): + b[l,k] = 0 + assert_equal(triu(a,k=-2),b) + + +class TestToeplitz(TestCase): + + def test_basic(self): + y = toeplitz([1,2,3]) + assert_array_equal(y,[[1,2,3],[2,1,2],[3,2,1]]) + y = toeplitz([1,2,3],[1,4,5]) + assert_array_equal(y,[[1,4,5],[2,1,4],[3,2,1]]) + + def test_complex_01(self): + data = (1.0 + arange(3.0)) * (1.0 + 1.0j) + x = copy(data) + t = toeplitz(x) + # Calling toeplitz should not change x. + assert_array_equal(x, data) + # According to the docstring, x should be the first column of t. + col0 = t[:,0] + assert_array_equal(col0, data) + assert_array_equal(t[0,1:], data[1:].conj()) + + def test_scalar_00(self): + """Scalar arguments still produce a 2D array.""" + t = toeplitz(10) + assert_array_equal(t, [[10]]) + t = toeplitz(10, 20) + assert_array_equal(t, [[10]]) + + def test_scalar_01(self): + c = array([1,2,3]) + t = toeplitz(c, 1) + assert_array_equal(t, [[1],[2],[3]]) + + def test_scalar_02(self): + c = array([1,2,3]) + t = toeplitz(c, array(1)) + assert_array_equal(t, [[1],[2],[3]]) + + def test_scalar_03(self): + c = array([1,2,3]) + t = toeplitz(c, array([1])) + assert_array_equal(t, [[1],[2],[3]]) + + def test_scalar_04(self): + r = array([10,2,3]) + t = toeplitz(1, r) + assert_array_equal(t, [[1,2,3]]) + + +class TestHankel(TestCase): + def test_basic(self): + y = hankel([1,2,3]) + assert_array_equal(y, [[1,2,3], [2,3,0], [3,0,0]]) + y = hankel([1,2,3], [3,4,5]) + assert_array_equal(y, [[1,2,3], [2,3,4], [3,4,5]]) + + +class TestCirculant(TestCase): + def test_basic(self): + y = circulant([1,2,3]) + assert_array_equal(y, [[1,3,2], [2,1,3], [3,2,1]]) + + +class TestHadamard(TestCase): + + def test_basic(self): + + y = hadamard(1) + assert_array_equal(y, [[1]]) + + y = hadamard(2, dtype=float) + assert_array_equal(y, [[1.0, 1.0], [1.0, -1.0]]) + + y = hadamard(4) + assert_array_equal(y, [[1,1,1,1], [1,-1,1,-1], [1,1,-1,-1], [1,-1,-1,1]]) + + assert_raises(ValueError, hadamard, 0) + assert_raises(ValueError, hadamard, 5) + + +class TestLeslie(TestCase): + + def test_bad_shapes(self): + assert_raises(ValueError, leslie, [[1,1],[2,2]], [3,4,5]) + assert_raises(ValueError, leslie, [3,4,5], [[1,1],[2,2]]) + assert_raises(ValueError, leslie, [1,2], [1,2]) + assert_raises(ValueError, leslie, [1], []) + + def test_basic(self): + a = leslie([1, 2, 3], [0.25, 0.5]) + expected = array([ + [1.0, 2.0, 3.0], + [0.25, 0.0, 0.0], + [0.0, 0.5, 0.0]]) + assert_array_equal(a, expected) + + +class TestCompanion(TestCase): + + def test_bad_shapes(self): + assert_raises(ValueError, companion, [[1,1],[2,2]]) + assert_raises(ValueError, companion, [0,4,5]) + assert_raises(ValueError, companion, [1]) + assert_raises(ValueError, companion, []) + + def test_basic(self): + 
c = companion([1, 2, 3]) + expected = array([ + [-2.0, -3.0], + [ 1.0, 0.0]]) + assert_array_equal(c, expected) + + c = companion([2.0, 5.0, -10.0]) + expected = array([ + [-2.5, 5.0], + [ 1.0, 0.0]]) + assert_array_equal(c, expected) + + +class TestBlockDiag: + def test_basic(self): + x = block_diag(eye(2), [[1,2], [3,4], [5,6]], [[1, 2, 3]]) + assert all(x == [[1, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 2, 0, 0, 0], + [0, 0, 3, 4, 0, 0, 0], + [0, 0, 5, 6, 0, 0, 0], + [0, 0, 0, 0, 1, 2, 3]]) + + def test_dtype(self): + x = block_diag([[1.5]]) + assert_equal(x.dtype, float) + + x = block_diag([[True]]) + assert_equal(x.dtype, bool) + + def test_scalar_and_1d_args(self): + a = block_diag(1) + assert_equal(a.shape, (1,1)) + assert_array_equal(a, [[1]]) + + a = block_diag([2,3], 4) + assert_array_equal(a, [[2, 3, 0], [0, 0, 4]]) + + def test_bad_arg(self): + assert_raises(ValueError, block_diag, [[[1]]]) + + def test_no_args(self): + a = block_diag() + assert_equal(a.ndim, 2) + assert_equal(a.nbytes, 0) + + +class TestKron: + + def test_basic(self): + + a = kron(array([[1, 2], [3, 4]]), array([[1, 1, 1]])) + assert_array_equal(a, array([[1, 1, 1, 2, 2, 2], + [3, 3, 3, 4, 4, 4]])) + + m1 = array([[1, 2], [3, 4]]) + m2 = array([[10], [11]]) + a = kron(m1, m2) + expected = array([[ 10, 20 ], + [ 11, 22 ], + [ 30, 40 ], + [ 33, 44 ]]) + assert_array_equal(a, expected) + + +if __name__ == "__main__": + run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/linsolve/__init__.py python-scipy-0.8.0+dfsg1/scipy/linsolve/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/linsolve/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/linsolve/__init__.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,6 +0,0 @@ - -from warnings import warn - -warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) - -from scipy.sparse.linalg.dsolve import * diff -Nru python-scipy-0.7.2+dfsg1/scipy/maxentropy/info.py python-scipy-0.8.0+dfsg1/scipy/maxentropy/info.py --- python-scipy-0.7.2+dfsg1/scipy/maxentropy/info.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/maxentropy/info.py 2010-07-26 15:48:32.000000000 +0100 @@ -2,20 +2,27 @@ Routines for fitting maximum entropy models =========================================== -Contains two classes for fitting maximum entropy models subject to linear -constraints on the expectations of arbitrary feature statistics. One -class, "model", is for small discrete sample spaces, using explicit -summation. The other, "bigmodel", is for sample spaces that are either -continuous (and perhaps high-dimensional) or discrete but too large to -sum over, and uses importance sampling. conditional Monte Carlo methods. +Contains two classes for fitting maximum entropy models (also known as +"exponential family" models) subject to linear constraints on the expectations +of arbitrary feature statistics. One class, "model", is for small discrete sample +spaces, using explicit summation. The other, "bigmodel", is for sample spaces +that are either continuous (and perhaps high-dimensional) or discrete but too +large to sum over, and uses importance sampling. conditional Monte Carlo +methods. The maximum entropy model has exponential form - p(x) = exp(theta^T . f_vec(x)) / Z(theta). +.. + p(x) = exp(theta^T f(x)) / Z(theta) + +.. 
math:: + \\renewcommand{\\v}[1]{\\mathbf{#1}} + p( \\v{x} ) = \\exp \\left( {\\v{\\theta}^\\mathsf{T} \\vec{f}( \\v{x} ) + \\over Z(\\v{\\theta}) } \\right) with a real parameter vector theta of the same length as the feature -statistic f_vec. For more background, see, for example, Cover and -Thomas (1991), Elements of Information Theory. +statistic f(x), For more background, see, for example, Cover and +Thomas (1991), *Elements of Information Theory*. See the file bergerexample.py for a walk-through of how to use these routines when the sample space is small enough to be enumerated. @@ -25,6 +32,7 @@ Copyright: Ed Schofield, 2003-2006 License: BSD-style (see LICENSE.txt in main source directory) + """ postpone_import = 1 diff -Nru python-scipy-0.7.2+dfsg1/scipy/maxentropy/__init__.py python-scipy-0.8.0+dfsg1/scipy/maxentropy/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/maxentropy/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/maxentropy/__init__.py 2010-07-26 15:48:32.000000000 +0100 @@ -1,10 +1,3 @@ -"""Maximum entropy modelling tools - -Author: Ed Schofield -Copyright: 2003-2006 -License: BSD-style. See LICENSE.txt in the scipy source directory. -""" - from info import __doc__ from maxentropy import * diff -Nru python-scipy-0.7.2+dfsg1/scipy/maxentropy/maxentropy.py python-scipy-0.8.0+dfsg1/scipy/maxentropy/maxentropy.py --- python-scipy-0.7.2+dfsg1/scipy/maxentropy/maxentropy.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/maxentropy/maxentropy.py 2010-07-26 15:48:32.000000000 +0100 @@ -788,15 +788,16 @@ class conditionalmodel(model): - """A conditional maximum-entropy (exponential-form) model p(x|w) on a + """ + A conditional maximum-entropy (exponential-form) model p(x|w) on a discrete sample space. This is useful for classification problems: given the context w, what is the probability of each class x? - The form of such a model is + The form of such a model is:: p(x | w) = exp(theta . f(w, x)) / Z(w; theta) - where Z(w; theta) is a normalization term equal to + where Z(w; theta) is a normalization term equal to:: Z(w; theta) = sum_x exp(theta . f(w, x)). @@ -804,11 +805,11 @@ the constructor as the parameter 'samplespace'. Such a model form arises from maximizing the entropy of a conditional - model p(x | w) subject to the constraints: + model p(x | w) subject to the constraints:: K_i = E f_i(W, X) - where the expectation is with respect to the distribution + where the expectation is with respect to the distribution:: q(w) p(x | w) @@ -818,7 +819,7 @@ x) with respect to the empirical distribution. This method minimizes the Lagrangian dual L of the entropy, which is - defined for conditional models as + defined for conditional models as:: L(theta) = sum_w q(w) log Z(w; theta) - sum_{w,x} q(w,x) [theta . f(w,x)] @@ -827,8 +828,10 @@ entire sample space, since q(w,x) = 0 for all w,x not in the training set. - The partial derivatives of L are: + The partial derivatives of L are:: + dL / dtheta_i = K_i - E f_i(X, Y) + where the expectation is as defined above. """ diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/common.py python-scipy-0.8.0+dfsg1/scipy/misc/common.py --- python-scipy-0.7.2+dfsg1/scipy/misc/common.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/misc/common.py 2010-07-26 15:48:32.000000000 +0100 @@ -38,16 +38,36 @@ return where(n>=0,vals,0) -def factorial2(n,exact=0): - """n!! = special.gamma(n/2+1)*2**((m+1)/2)/sqrt(pi) n odd - = 2**(n) * n! 
n even - - If exact==0, then floating point precision is used, otherwise - exact long integer is computed. - - Notes: - - Array argument accepted only for exact=0 case. - - If n<0, the return value is 0. +def factorial2(n, exact=False): + """Double factorial. + + This is the factorial with every second value is skipped, i.e., + ``7!! = 7 * 5 * 3 * 1``. It can be approximated numerically as:: + + n!! = special.gamma(n/2+1)*2**((m+1)/2)/sqrt(pi) n odd + = 2**(n/2) * (n/2)! n even + + Parameters + ---------- + n : int, array-like + Calculate ``n!!``. Arrays are only supported with `exact` set + to False. If ``n < 0``, the return value is 0. + exact : bool, optional + The result can be approximated rapidly using the gamma-formula + above (default). If `exact` is set to True, calculate the + answer exactly using integer arithmetic. + + Returns + ------- + nff : float or int + Double factorial of `n`, as an int or a float depending on + `exact`. + + References + ---------- + .. [1] Wikipedia, "Double Factorial", + http://en.wikipedia.org/wiki/Factorial#Double_factorial + """ if exact: if n < -1: @@ -73,8 +93,30 @@ return vals def factorialk(n,k,exact=1): - """n(!!...!) = multifactorial of order k - k times + """ + n(!!...!) = multifactorial of order k + k times + + + Parameters + ---------- + n : int, array-like + Calculate multifactorial. Arrays are only supported with exact + set to False. If n < 0, the return value is 0. + exact : bool, optional + If exact is set to True, calculate the answer exactly using + integer arithmetic. + + Returns + ------- + val : int + Multi factorial of n. + + Raises + ------ + NotImplementedError + Raises when exact is False + """ if exact: if n < 1-k: @@ -90,14 +132,29 @@ def comb(N,k,exact=0): - """Combinations of N things taken k at a time. + """ + Combinations of N things taken k at a time. - If exact==0, then floating point precision is used, otherwise - exact long integer is computed. + Parameters + ---------- + N : int, array + Nunmber of things. + k : int, array + Numner of elements taken. + exact : int, optional + If exact is 0, then floating point precision is used, otherwise + exact long integer is computed. + + Returns + ------- + val : int, array + The total number of combinations. + + Notes + ----- + - Array arguments accepted only for exact=0 case. + - If k > N, N < 0, or k < 0, then a 0 is returned. - Notes: - - Array arguments accepted only for exact=0 case. - - If k > N, N < 0, or k < 0, then a 0 is returned. """ if exact: if (k > N) or (N < 0) or (k < 0): @@ -117,13 +174,17 @@ return where(cond, vals, 0.0) def central_diff_weights(Np,ndiv=1): - """Return weights for an Np-point central derivative of order ndiv - assuming equally-spaced function points. + """ + Return weights for an Np-point central derivative of order ndiv + assuming equally-spaced function points. - If weights are in the vector w, then - derivative is w[0] * f(x-ho*dx) + ... + w[-1] * f(x+h0*dx) + If weights are in the vector w, then + derivative is w[0] * f(x-ho*dx) + ... + w[-1] * f(x+h0*dx) + + Notes + ----- + Can be inaccurate for large number of points. - Can be inaccurate for large number of points. """ assert (Np >= ndiv+1), "Number of points must be at least the derivative order + 1." assert (Np % 2 == 1), "Odd-number of points only." @@ -138,13 +199,31 @@ return w def derivative(func,x0,dx=1.0,n=1,args=(),order=3): - """Given a function, use a central difference formula with spacing dx to - compute the nth derivative at x0. 
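# Editorial sketch, not part of the patch: the scipy.misc helpers whose
# docstrings are rewritten above; exact=True/1 switches from the gamma-based
# floating-point formula to exact integer arithmetic.
from scipy.misc import comb, factorial2, derivative

print(comb(5, 2, exact=1))                        # 10
print(factorial2(7, exact=True))                  # 7*5*3*1 = 105
print(derivative(lambda x: x**3, 1.0, dx=1e-3))   # ~3.0, central difference of order 3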
+ """ + Find the n-th derivative of a function at point x0. + + Given a function, use a central difference formula with spacing `dx` to + compute the n-th derivative at `x0`. - order is the number of points to use and must be odd. + Parameters + ---------- + func : function + Input function. + x0 : float + The point at which nth derivative is found. + dx : int, optional + Spacing. + n : int, optional + Order of the derivative. Default is 1. + args : tuple, optional + Arguments + order : int, optional + Number of points to use, must be odd. + + Notes + ----- + Decreasing the step size too small can result in round-off error. - Warning: Decreasing the step size too small can result in - round-off error. """ assert (order >= n+1), "Number of points must be at least the derivative order + 1." assert (order % 2 == 1), "Odd number of points only." diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/doccer.py python-scipy-0.8.0+dfsg1/scipy/misc/doccer.py --- python-scipy-0.7.2+dfsg1/scipy/misc/doccer.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/misc/doccer.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,135 @@ +''' Utilities to allow inserting docstring fragments for common +parameters into function and method docstrings''' + +import sys + +def docformat(docstring, docdict=None): + ''' Fill a function docstring from variables in dictionary + + Adapt the indent of the inserted docs + + Parameters + ---------- + docstring : string + docstring from function, possibly with dict formatting strings + docdict : dict + dictionary with keys that match the dict formatting strings + and values that are docstring fragments to be inserted. The + indentation of the inserted docstrings is set to match the + minimum indentation of the ``docstring`` by adding this + indentation to all lines of the inserted string, except the + first + + Returns + ------- + outstring : string + string with requested ``docdict`` strings inserted + + Examples + -------- + >>> docformat(' Test string with %(value)s', {'value':'inserted value'}) + ' Test string with inserted value' + >>> docstring = 'First line\\n Second line\\n %(value)s' + >>> inserted_string = "indented\\nstring" + >>> docdict = {'value': inserted_string} + >>> docformat(docstring, docdict) + 'First line\\n Second line\\n indented\\n string' + ''' + if not docstring: + return docstring + if docdict is None: + docdict = {} + if not docdict: + return docstring + lines = docstring.expandtabs().splitlines() + # Find the minimum indent of the main docstring, after first line + if len(lines) < 2: + icount = 0 + else: + icount = indentcount_lines(lines[1:]) + indent = ' ' * icount + # Insert this indent to dictionary docstrings + indented = {} + for name, dstr in docdict.items(): + lines = dstr.expandtabs().splitlines() + try: + newlines = [lines[0]] + for line in lines[1:]: + newlines.append(indent+line) + indented[name] = '\n'.join(newlines) + except IndexError: + indented[name] = dstr + return docstring % indented + + +def indentcount_lines(lines): + ''' Minumum indent for all lines in line list + + >>> lines = [' one', ' two', ' three'] + >>> indentcount_lines(lines) + 1 + >>> lines = [] + >>> indentcount_lines(lines) + 0 + >>> lines = [' one'] + >>> indentcount_lines(lines) + 1 + >>> indentcount_lines([' ']) + 0 + ''' + indentno = sys.maxint + for line in lines: + stripped = line.lstrip() + if stripped: + indentno = min(indentno, len(line) - len(stripped)) + if indentno == sys.maxint: + return 0 + return indentno + + +def filldoc(docdict, 
unindent_params=True): + ''' Return docstring decorator using docdict variable dictionary + + Parameters + ---------- + docdict : dictionary + dictionary containing name, docstring fragment pairs + unindent_params : {False, True}, boolean, optional + If True, strip common indentation from all parameters in + docdict + + Returns + ------- + decfunc : function + decorator that applies dictionary to input function docstring + + ''' + if unindent_params: + docdict = unindent_dict(docdict) + def decorate(f): + f.__doc__ = docformat(f.__doc__, docdict) + return f + return decorate + + +def unindent_dict(docdict): + ''' Unindent all strings in a docdict ''' + can_dict = {} + for name, dstr in docdict.items(): + can_dict[name] = unindent_string(dstr) + return can_dict + + +def unindent_string(docstring): + ''' Set docstring to minimum indent for all lines, including first + + >>> unindent_string(' two') + 'two' + >>> unindent_string(' two\\n three') + 'two\\n three' + ''' + lines = docstring.expandtabs().splitlines() + icount = indentcount_lines(lines) + if icount == 0: + return docstring + return '\n'.join([line[icount:] for line in lines]) diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/helpmod.py python-scipy-0.8.0+dfsg1/scipy/misc/helpmod.py --- python-scipy-0.7.2+dfsg1/scipy/misc/helpmod.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/misc/helpmod.py 2010-07-26 15:48:32.000000000 +0100 @@ -3,6 +3,10 @@ import sys import pydoc +import warnings +warnings.warn('The helpmod module is deprecated. It will be removed from SciPy in version 0.9.', + DeprecationWarning) + __all__ = ['info','source'] # NOTE: pydoc defines a help function which works simliarly to this diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/limits.py python-scipy-0.8.0+dfsg1/scipy/misc/limits.py --- python-scipy-0.7.2+dfsg1/scipy/misc/limits.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/misc/limits.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,37 +0,0 @@ -""" Machine limits for Float32 and Float64. 
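# Editorial sketch, not part of the patch: the scipy.misc.doccer utilities
# added above, splicing a shared fragment into docstrings directly and via
# the filldoc decorator.
from scipy.misc import doccer

docdict = {'value': 'inserted value'}
print(doccer.docformat('  Test string with %(value)s', docdict))
# -> '  Test string with inserted value'

@doccer.filldoc(docdict)
def f():
    """Docstring with %(value)s"""
print(f.__doc__)                                  # Docstring with inserted value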
-""" - -import warnings -warnings.warn('limits module is deprecated, please use numpy.finfo instead', - DeprecationWarning) - - -__all__ = ['float_epsilon','float_tiny','float_min', - 'float_max','float_precision','float_resolution', - 'single_epsilon','single_tiny','single_min','single_max', - 'single_precision','single_resolution', - 'double_epsilon','double_tiny','double_min','double_max', - 'double_precision','double_resolution'] - - -from numpy import finfo, single, float_ - -single_epsilon = finfo(single).eps -single_tiny = finfo(single).tiny -single_max = finfo(single).max -single_min = -single_max -single_precision = finfo(single).precision -single_resolution = finfo(single).resolution - -double_epsilon = float_epsilon = finfo(float_).eps -double_tiny = float_tiny = finfo(float_).tiny -double_max = float_max = finfo(float_).max -double_min = float_min = -float_max -double_precision = float_precision = finfo(float_).precision -double_resolution = float_resolution = finfo(float_).resolution - -if __name__ == '__main__': - print 'single epsilon:',single_epsilon - print 'single tiny:',single_tiny - print 'float epsilon:',float_epsilon - print 'float tiny:',float_tiny diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/pexec.py python-scipy-0.8.0+dfsg1/scipy/misc/pexec.py --- python-scipy-0.7.2+dfsg1/scipy/misc/pexec.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/misc/pexec.py 2010-07-26 15:48:32.000000000 +0100 @@ -12,6 +12,11 @@ import Queue import traceback +import warnings +warnings.warn('The pexec module is deprecated. It will be removed from SciPy in version 0.9.', + DeprecationWarning) + + class ParallelExec(threading.Thread): """ Create a thread of parallel execution. """ diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/pilutil.py python-scipy-0.8.0+dfsg1/scipy/misc/pilutil.py --- python-scipy-0.7.2+dfsg1/scipy/misc/pilutil.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/misc/pilutil.py 2010-07-26 15:48:32.000000000 +0100 @@ -15,6 +15,22 @@ # Returns a byte-scaled image def bytescale(data, cmin=None, cmax=None, high=255, low=0): + """ + Parameters + ---------- + im : PIL image + Input image. + flatten : bool + If true, convert the output to grey-scale + + Returns + ------- + img_array : ndarray + The different colour bands/channels are stored in the + third dimension, such that a grey-image is MxN, an + RGB-image MxNx3 and an RGBA-image MxNx4. + + """ if data.dtype == uint8: return data high = high - low @@ -25,39 +41,72 @@ return bytedata + cast[uint8](low) def imread(name,flatten=0): - """Read an image file from a filename. + """ + Read an image file from a filename. - Optional arguments: + Parameters + ---------- + name : str + The file name to be read. + flatten : bool, optional + If True, flattens the color layers into a single gray-scale layer. + + Returns + ------- + : nd_array + The array obtained by reading image. + + Notes + ----- + The image is flattened by calling convert('F') on + the resulting image object. - - flatten (0): if true, the image is flattened by calling convert('F') on - the resulting image object. This flattens the color layers into a single - grayscale layer. """ im = Image.open(name) return fromimage(im,flatten=flatten) def imsave(name, arr): - """Save an array to an image file. + """ + Save an array to an image file. + + Parameters + ---------- + im : PIL image + Input image. + + flatten : bool + If true, convert the output to grey-scale. 
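# Editorial sketch, not part of the patch (requires PIL and a writable /tmp):
# round-trip an array through imsave/imread as documented above; flatten=1
# collapses the result to a single grey-scale layer.
import numpy as np
from scipy.misc import imsave, imread

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
imsave('/tmp/pilutil_example.png', img)
back = imread('/tmp/pilutil_example.png', flatten=1)
print(back.shape)                                 # (10, 10)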
+ + Returns + ------- + img_array : ndarray + The different colour bands/channels are stored in the + third dimension, such that a grey-image is MxN, an + RGB-image MxNx3 and an RGBA-image MxNx4. + """ im = toimage(arr) im.save(name) return def fromimage(im, flatten=0): - """Return a copy of a PIL image as a numpy array. + """ + Return a copy of a PIL image as a numpy array. - :Parameters: - im : PIL image - Input image. - flatten : bool - If true, convert the output to grey-scale. - - :Returns: - img_array : ndarray - The different colour bands/channels are stored in the - third dimension, such that a grey-image is MxN, an - RGB-image MxNx3 and an RGBA-image MxNx4. + Parameters + ---------- + im : PIL image + Input image. + flatten : bool + If true, convert the output to grey-scale. + + Returns + ------- + img_array : ndarray + The different colour bands/channels are stored in the + third dimension, such that a grey-image is MxN, an + RGB-image MxNx3 and an RGBA-image MxNx4. """ if not Image.isImageType(im): @@ -171,12 +220,33 @@ return image def imrotate(arr,angle,interp='bilinear'): - """Rotate an image counter-clockwise by angle degrees. + """ + Rotate an image counter-clockwise by angle degrees. + + Parameters + ---------- + arr : nd_array + Input array of image to be rotated. + angle : float + The angle of rotation. + interp : str, optional + Interpolation + + + Returns + ------- + : nd_array + The rotated array of image. + + Notes + ----- Interpolation methods can be: - 'nearest' : for nearest neighbor - 'bilinear' : for bilinear - 'cubic' or 'bicubic' : for bicubic + * 'nearest' : for nearest neighbor + * 'bilinear' : for bilinear + * 'cubic' : cubic + * 'bicubic' : for bicubic + """ arr = asarray(arr) func = {'nearest':0,'bilinear':2,'bicubic':3,'cubic':3} @@ -215,11 +285,25 @@ raise RuntimeError('Could not execute image viewer.') def imresize(arr,size): - """Resize an image. + """ + Resize an image. + + Parameters + ---------- + arr : nd_array + The array of image to be resized. + + size : int, float or tuple + * int - Percentage of current size. + * float - Fraction of current size. + * tuple - Size of the output image. + + Returns + ------- + + : nd_array + The resized array of image. - If size is an integer it is a percentage of current size. - If size is a float it is a fraction of current size. - If size is a tuple it is the size of the output image. """ im = toimage(arr) ts = type(size) @@ -234,11 +318,29 @@ def imfilter(arr,ftype): - """Simple filtering of an image. + """ + Simple filtering of an image. + + Parameters + ---------- + arr : ndarray + The array of Image in which the filter is to be applied. + ftype : str + The filter that has to be applied. Legal values are: + 'blur', 'contour', 'detail', 'edge_enhance', 'edge_enhance_more', + 'emboss', 'find_edges', 'smooth', 'smooth_more', 'sharpen'. + + Returns + ------- + res : nd_array + The array with filter applied. + + Raises + ------ + ValueError + *Unknown filter type.* . If the filter you are trying + to apply is unsupported. 
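# Editorial sketch, not part of the patch (requires PIL): the resize/rotate/
# filter helpers documented above.  For imresize, an int is a percentage, a
# float a fraction, and a tuple an explicit output shape.
import numpy as np
from scipy.misc import imresize, imrotate, imfilter

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
print(imresize(img, 200).shape)                   # (20, 20): 200 per cent
print(imrotate(img, 45, interp='bilinear').shape) # (10, 10)
print(imfilter(img, 'blur').shape)                # (10, 10)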
- type can be: - 'blur', 'contour', 'detail', 'edge_enhance', 'edge_enhance_more', - 'emboss', 'find_edges', 'smooth', 'smooth_more', 'sharpen' """ _tdict = {'blur':ImageFilter.BLUR, 'contour':ImageFilter.CONTOUR, diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/ppimport.py python-scipy-0.8.0+dfsg1/scipy/misc/ppimport.py --- python-scipy-0.7.2+dfsg1/scipy/misc/ppimport.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/misc/ppimport.py 2010-07-26 15:48:32.000000000 +0100 @@ -15,6 +15,10 @@ import types import traceback +import warnings +warnings.warn('The ppimport module is deprecated. It will be removed from SciPy in version 0.9.', + DeprecationWarning) + DEBUG=0 _ppimport_is_enabled = 1 @@ -143,7 +147,7 @@ if p_name=='__main__': p_dir = '' fullname = name - elif '__path__' in _frame.f_locals: + elif '__path__' in p_frame.f_locals: # python package p_path = p_frame.f_locals['__path__'] p_dir = p_path[0] @@ -243,7 +247,7 @@ if location != 'sys.path': from numpy.testing import Tester - self.__dict__['test'] = Tester(os,path.dirname(location)).test + self.__dict__['test'] = Tester(os.path.dirname(location)).test # install loader sys.modules[name] = self diff -Nru python-scipy-0.7.2+dfsg1/scipy/misc/tests/test_doccer.py python-scipy-0.8.0+dfsg1/scipy/misc/tests/test_doccer.py --- python-scipy-0.7.2+dfsg1/scipy/misc/tests/test_doccer.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/misc/tests/test_doccer.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,89 @@ +''' Some tests for the documenting decorator and support functions ''' + +import numpy as np + +from numpy.testing import assert_equal, assert_raises + +from nose.tools import assert_true + +from scipy.misc import doccer + +docstring = \ +"""Docstring + %(strtest1)s + %(strtest2)s + %(strtest3)s +""" +param_doc1 = \ +"""Another test + with some indent""" + +param_doc2 = \ +"""Another test, one line""" + +param_doc3 = \ +""" Another test + with some indent""" + +doc_dict = {'strtest1':param_doc1, + 'strtest2':param_doc2, + 'strtest3':param_doc3} + +filled_docstring = \ +"""Docstring + Another test + with some indent + Another test, one line + Another test + with some indent +""" + + +def test_unindent(): + yield assert_equal, doccer.unindent_string(param_doc1), param_doc1 + yield assert_equal, doccer.unindent_string(param_doc2), param_doc2 + yield assert_equal, doccer.unindent_string(param_doc3), param_doc1 + + +def test_unindent_dict(): + d2 = doccer.unindent_dict(doc_dict) + yield assert_equal, d2['strtest1'], doc_dict['strtest1'] + yield assert_equal, d2['strtest2'], doc_dict['strtest2'] + yield assert_equal, d2['strtest3'], doc_dict['strtest1'] + + +def test_docformat(): + udd = doccer.unindent_dict(doc_dict) + formatted = doccer.docformat(docstring, udd) + yield assert_equal, formatted, filled_docstring + single_doc = 'Single line doc %(strtest1)s' + formatted = doccer.docformat(single_doc, doc_dict) + # Note - initial indent of format string does not + # affect subsequent indent of inserted parameter + yield assert_equal, formatted, """Single line doc Another test + with some indent""" + + +def test_decorator(): + # with unindentation of parameters + decorator = doccer.filldoc(doc_dict, True) + @decorator + def func(): + """ Docstring + %(strtest3)s + """ + yield assert_equal, func.__doc__, """ Docstring + Another test + with some indent + """ + # without unindentation of parameters + decorator = doccer.filldoc(doc_dict, False) + @decorator + def func(): + """ Docstring + %(strtest3)s + 
""" + yield assert_equal, func.__doc__, """ Docstring + Another test + with some indent + """ diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/doccer.py python-scipy-0.8.0+dfsg1/scipy/ndimage/doccer.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/doccer.py 2010-03-20 09:20:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/doccer.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,132 +0,0 @@ -''' Utilities to allow inserting docstring fragments for common -parameters into function and method docstrings''' - -import sys - -def docformat(docstring, docdict=None): - ''' Fill a function docstring from variables in dictionary - - Adapt the indent of the inserted docs - - Parameters - ---------- - docstring : string - docstring from function, possibly with dict formatting strings - docdict : dict - dictionary with keys that match the dict formatting strings - and values that are docstring fragments to be inserted. The - indentation of the inserted docstrings is set to match the - minimum indentation of the ``docstring`` by adding this - indentation to all lines of the inserted string, except the - first - - Returns - ------- - outstring : string - string with requested ``docdict`` strings inserted - - Examples - -------- - >>> docformat(' Test string with %(value)s', {'value':'inserted value'}) - ' Test string with inserted value' - >>> docstring = 'First line\\n Second line\\n %(value)s' - >>> inserted_string = "indented\\nstring" - >>> docdict = {'value': inserted_string} - >>> docformat(docstring, docdict) - 'First line\\n Second line\\n indented\\n string' - ''' - if not docstring: - return docstring - if docdict is None: - docdict = {} - if not docdict: - return docstring - lines = docstring.expandtabs().splitlines() - # Find the minimum indent of the main docstring, after first line - if len(lines) < 2: - icount = 0 - else: - icount = indentcount_lines(lines[1:]) - indent = ' ' * icount - # Insert this indent to dictionary docstrings - indented = {} - for name, dstr in docdict.items(): - lines = dstr.expandtabs().splitlines() - newlines = [lines[0]] - for line in lines[1:]: - newlines.append(indent+line) - indented[name] = '\n'.join(newlines) - return docstring % indented - - -def indentcount_lines(lines): - ''' Minumum indent for all lines in line list - - >>> lines = [' one', ' two', ' three'] - >>> indentcount_lines(lines) - 1 - >>> lines = [] - >>> indentcount_lines(lines) - 0 - >>> lines = [' one'] - >>> indentcount_lines(lines) - 1 - >>> indentcount_lines([' ']) - 0 - ''' - indentno = sys.maxint - for line in lines: - stripped = line.lstrip() - if stripped: - indentno = min(indentno, len(line) - len(stripped)) - if indentno == sys.maxint: - return 0 - return indentno - - -def filldoc(docdict, unindent_params=True): - ''' Return docstring decorator using docdict variable dictionary - - Parameters - ---------- - docdict : dictionary - dictionary containing name, docstring fragment pairs - unindent_params : {False, True}, boolean, optional - If True, strip common indentation from all parameters in - docdict - - Returns - ------- - decfunc : function - decorator that applies dictionary to input function docstring - - ''' - if unindent_params: - docdict = unindent_dict(docdict) - def decorate(f): - f.__doc__ = docformat(f.__doc__, docdict) - return f - return decorate - - -def unindent_dict(docdict): - ''' Unindent all strings in a docdict ''' - can_dict = {} - for name, dstr in docdict.items(): - can_dict[name] = unindent_string(dstr) - return can_dict - - -def 
unindent_string(docstring): - ''' Set docstring to minimum indent for all lines, including first - - >>> unindent_string(' two') - 'two' - >>> unindent_string(' two\\n three') - 'two\\n three' - ''' - lines = docstring.expandtabs().splitlines() - icount = indentcount_lines(lines) - if icount == 0: - return docstring - return '\n'.join([line[icount:] for line in lines]) diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/filters.py python-scipy-0.8.0+dfsg1/scipy/ndimage/filters.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/filters.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/filters.py 2010-07-26 15:48:32.000000000 +0100 @@ -32,7 +32,7 @@ import numpy import _ni_support import _nd_image -import doccer +from scipy.misc import doccer _input_doc = \ """input : array-like @@ -499,19 +499,35 @@ @docfiller def correlate(input, weights, output = None, mode = 'reflect', cval = 0.0, origin = 0): - """Multi-dimensional correlation. + """ + Multi-dimensional correlation. The array is correlated with the given kernel. Parameters ---------- - %(input)s + input : array-like + input array to filter weights : ndarray array of weights, same number of dimensions as input - %(output)s - %(mode)s - %(cval)s - %(origin)s + output : array, optional + The ``output`` parameter passes an array in which to store the + filter output. + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The ``mode`` parameter determines how the array borders are + handled, where ``cval`` is the value when mode is equal to + 'constant'. Default is 'reflect' + cval : scalar, optional + Value to fill past edges of input if ``mode`` is 'constant'. Default + is 0.0 + origin : scalar, optional + The ``origin`` parameter controls the placement of the filter. + Default 0 + + See Also + -------- + convolve : Convolve an image with a kernel. + """ return _correlate_or_convolve(input, weights, output, mode, cval, origin, False) @@ -520,19 +536,36 @@ @docfiller def convolve(input, weights, output = None, mode = 'reflect', cval = 0.0, origin = 0): - """Multi-dimensional convolution. + """ + Multi-dimensional convolution. The array is convolved with the given kernel. Parameters ---------- - %(input)s + input : array-like + input array to filter weights : ndarray array of weights, same number of dimensions as input - %(output)s - %(mode)s - %(cval)s - %(origin)s + output : array, optional + The ``output`` parameter passes an array in which to store the + filter output. + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The ``mode`` parameter determines how the array borders are + handled, where ``cval`` is the value when mode is equal to + 'constant'. Default is 'reflect' + cval : scalar, optional + Value to fill past edges of input if ``mode`` is 'constant'. Default + is 0.0 + origin : scalar, optional + The ``origin`` parameter controls the placement of the filter. + Default 0 + + See Also + -------- + + correlate : Correlate an image with a kernel. + """ return _correlate_or_convolve(input, weights, output, mode, cval, origin, True) @@ -859,16 +892,40 @@ @docfiller def median_filter(input, size = None, footprint = None, output = None, mode = "reflect", cval = 0.0, origin = 0): - """Calculates a multi-dimensional median filter. + """ + Calculates a multi-dimensional median filter. 
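# Editorial sketch, not part of the patch: correlate versus convolve with the
# 'constant' boundary mode and cval documented above; convolve applies the
# mirrored kernel.
import numpy as np
from scipy import ndimage

a = np.array([[1., 2., 0., 0.],
              [5., 3., 0., 4.],
              [0., 0., 0., 7.],
              [9., 3., 0., 0.]])
k = np.array([[1., 0.],
              [0., 1.]])
print(ndimage.correlate(a, k, mode='constant', cval=0.0))
print(ndimage.convolve(a, k, mode='constant', cval=0.0))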
Parameters ---------- - %(input)s - %(size_foot)s - %(output)s - %(mode)s - %(cval)s - %(origin)s + input : array-like + input array to filter + size : scalar or tuple, optional + See footprint, below + footprint : array, optional + Either ``size`` or ``footprint`` must be defined. ``size`` gives + the shape that is taken from the input array, at every element + position, to define the input to the filter function. + ``footprint`` is a boolean array that specifies (implicitly) a + shape, but also which of the elements within this shape will get + passed to the filter function. Thus ``size=(n,m)`` is equivalent + to ``footprint=np.ones((n,m))``. We adjust ``size`` to the number + of dimensions of the input array, so that, if the input array is + shape (10,10,10), and ``size`` is 2, then the actual size used is + (2,2,2). + output : array, optional + The ``output`` parameter passes an array in which to store the + filter output. + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The ``mode`` parameter determines how the array borders are + handled, where ``cval`` is the value when mode is equal to + 'constant'. Default is 'reflect' + cval : scalar, optional + Value to fill past edges of input if ``mode`` is 'constant'. Default + is 0.0 + origin : scalar, optional + The ``origin`` parameter controls the placement of the filter. + Default 0 + """ return _rank_filter(input, 0, size, footprint, output, mode, cval, origin, 'median') diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/fourier.py python-scipy-0.8.0+dfsg1/scipy/ndimage/fourier.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/fourier.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/fourier.py 2010-07-26 15:48:32.000000000 +0100 @@ -58,7 +58,7 @@ if input.dtype.type in [numpy.complex64, numpy.complex128]: output = numpy.zeros(input.shape, dtype = input.dtype) else: - output = numpy.zeros(input.shape, dtype = numpy.Complex64) + output = numpy.zeros(input.shape, dtype = numpy.complex128) return_value = output elif type(output) is types.TypeType: if output not in [numpy.complex64, numpy.complex128]: @@ -72,15 +72,38 @@ return output, return_value def fourier_gaussian(input, sigma, n = -1, axis = -1, output = None): - """Multi-dimensional Gaussian fourier filter. + """ + Multi-dimensional Gaussian fourier filter. The array is multiplied with the fourier transform of a Gaussian - kernel. If the parameter n is negative, then the input is assumed to be - the result of a complex fft. If n is larger or equal to zero, the input - is assumed to be the result of a real fft, and n gives the length of - the of the array before transformation along the the real transform - direction. The axis of the real transform is given by the axis - parameter. + kernel. + + Parameters + ---------- + input : array_like + The input array. + sigma : float or sequence + The sigma of the Gaussian kernel. If a float, `sigma` is the same for + all axes. If a sequence, `sigma` has to contain one value for each + axis. + n : int, optional + If `n` is negative (default), then the input is assumed to be the + result of a complex fft. + If `n` is larger than or equal to zero, the input is assumed to be the + result of a real fft, and `n` gives the length of the array before + transformation along the real transform direction. + axis : int, optional + The axis of the real transform. + output : ndarray, optional + If given, the result of filtering the input is placed in this array. + None is returned in this case. 
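# Editorial sketch, not part of the patch: size versus footprint for
# median_filter, matching the equivalence spelled out in the docstring above
# (size=(n, m) behaves like footprint=np.ones((n, m))).
import numpy as np
from scipy import ndimage

x = np.arange(25, dtype=float).reshape(5, 5)
a = ndimage.median_filter(x, size=3)
b = ndimage.median_filter(x, footprint=np.ones((3, 3)))
print(np.array_equal(a, b))                       # True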
+ + Returns + ------- + return_value : ndarray or None + The filtered input. If `output` is given as a parameter, None is + returned. + """ input = numpy.asarray(input) output, return_value = _get_output_fourier(output, input) @@ -94,15 +117,38 @@ return return_value def fourier_uniform(input, size, n = -1, axis = -1, output = None): - """Multi-dimensional Uniform fourier filter. + """ + Multi-dimensional uniform fourier filter. The array is multiplied with the fourier transform of a box of given - sizes. If the parameter n is negative, then the input is assumed to be - the result of a complex fft. If n is larger or equal to zero, the input - is assumed to be the result of a real fft, and n gives the length of - the of the array before transformation along the the real transform - direction. The axis of the real transform is given by the axis - parameter. + size. + + Parameters + ---------- + input : array_like + The input array. + size : float or sequence + The size of the box used for filtering. + If a float, `size` is the same for all axes. If a sequence, `size` has + to contain one value for each axis. + n : int, optional + If `n` is negative (default), then the input is assumed to be the + result of a complex fft. + If `n` is larger than or equal to zero, the input is assumed to be the + result of a real fft, and `n` gives the length of the array before + transformation along the real transform direction. + axis : int, optional + The axis of the real transform. + output : ndarray, optional + If given, the result of filtering the input is placed in this array. + None is returned in this case. + + Returns + ------- + return_value : ndarray or None + The filtered input. If `output` is given as a parameter, None is + returned. + """ input = numpy.asarray(input) output, return_value = _get_output_fourier(output, input) @@ -115,16 +161,42 @@ return return_value def fourier_ellipsoid(input, size, n = -1, axis = -1, output = None): - """Multi-dimensional ellipsoid fourier filter. + """ + Multi-dimensional ellipsoid fourier filter. The array is multiplied with the fourier transform of a ellipsoid of - given sizes. If the parameter n is negative, then the input is assumed - to be the result of a complex fft. If n is larger or equal to zero, the - input is assumed to be the result of a real fft, and n gives the length - of the of the array before transformation along the the real transform - direction. The axis of the real transform is given by the axis - parameter. This function is implemented for arrays of - rank 1, 2, or 3. + given sizes. + + Parameters + ---------- + input : array_like + The input array. + size : float or sequence + The size of the box used for filtering. + If a float, `size` is the same for all axes. If a sequence, `size` has + to contain one value for each axis. + n : int, optional + If `n` is negative (default), then the input is assumed to be the + result of a complex fft. + If `n` is larger than or equal to zero, the input is assumed to be the + result of a real fft, and `n` gives the length of the array before + transformation along the real transform direction. + axis : int, optional + The axis of the real transform. + output : ndarray, optional + If given, the result of filtering the input is placed in this array. + None is returned in this case. + + Returns + ------- + return_value : ndarray or None + The filtered input. If `output` is given as a parameter, None is + returned. + + Notes + ----- + This function is implemented for arrays of rank 1, 2, or 3. 
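# Editorial sketch, not part of the patch: applying fourier_gaussian to the
# output of a real fft, using the n/axis convention described above (n is the
# pre-transform length along the real-transform axis).
import numpy as np
from scipy import ndimage

img = np.zeros((32, 32))
img[16, 16] = 1.0
F = np.fft.rfft2(img)                             # real fft along the last axis
G = ndimage.fourier_gaussian(F, sigma=2.0, n=img.shape[-1])
smooth = np.fft.irfft2(G, s=img.shape)
print(round(smooth.sum(), 6))                     # 1.0: the DC term is preserved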
+ """ input = numpy.asarray(input) output, return_value = _get_output_fourier(output, input) @@ -137,16 +209,38 @@ return return_value def fourier_shift(input, shift, n = -1, axis = -1, output = None): - """Multi-dimensional fourier shift filter. + """ + Multi-dimensional fourier shift filter. - The array is multiplied with the fourier transform of a shift operation - If the parameter n is negative, then the input is assumed to be the - result of a complex fft. If n is larger or equal to zero, the input is - assumed to be the result of a real fft, and n gives the length of the - of the array before transformation along the the real transform - direction. The axis of the real transform is given by the axis - parameter. - """ + The array is multiplied with the fourier transform of a shift operation. + + Parameters + ---------- + input : array_like + The input array. + shift : float or sequence + The size of the box used for filtering. + If a float, `shift` is the same for all axes. If a sequence, `shift` + has to contain one value for each axis. + n : int, optional + If `n` is negative (default), then the input is assumed to be the + result of a complex fft. + If `n` is larger than or equal to zero, the input is assumed to be the + result of a real fft, and `n` gives the length of the array before + transformation along the real transform direction. + axis : int, optional + The axis of the real transform. + output : ndarray, optional + If given, the result of shifting the input is placed in this array. + None is returned in this case. + + Returns + ------- + return_value : ndarray or None + The shifted input. If `output` is given as a parameter, None is + returned. + + """ input = numpy.asarray(input) output, return_value = _get_output_fourier_complex(output, input) axis = _ni_support._check_axis(axis, input.ndim) diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/__init__.py python-scipy-0.8.0+dfsg1/scipy/ndimage/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/__init__.py 2010-07-26 15:48:32.000000000 +0100 @@ -34,6 +34,12 @@ from interpolation import * from measurements import * from morphology import * +from io import * + +# doccer is moved to scipy.misc in scipy 0.8 +from scipy.misc import doccer +doccer = numpy.deprecate(doccer, old_name='doccer', + new_name='scipy.misc.doccer') from info import __doc__ __version__ = '2.0' diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/interpolation.py python-scipy-0.8.0+dfsg1/scipy/ndimage/interpolation.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/interpolation.py 2010-03-20 09:20:24.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/interpolation.py 2010-07-26 15:48:32.000000000 +0100 @@ -39,10 +39,33 @@ def spline_filter1d(input, order = 3, axis = -1, output = numpy.float64, output_type = None): - """Calculates a one-dimensional spline filter along the given axis. + """ + Calculates a one-dimensional spline filter along the given axis. The lines of the array along the given axis are filtered by a spline filter. The order of the spline must be >= 2 and <= 5. + + Parameters + ---------- + input : array_like + The input array. + order : int, optional + The order of the spline, default is 3. + axis : int, optional + The axis along which the spline filter is applied. Default is the last + axis. + output : ndarray or dtype, optional + The array in which to place the output, or the dtype of the returned + array. Default is `numpy.float64`. 
+ output_type : dtype, optional + DEPRECATED, DO NOT USE. If used, a RuntimeError is raised. + + Returns + ------- + return_value : ndarray or None + The filtered input. If `output` is given as a parameter, None is + returned. + """ if order < 0 or order > 5: raise RuntimeError, 'spline order not supported' @@ -61,13 +84,23 @@ def spline_filter(input, order = 3, output = numpy.float64, output_type = None): - """Multi-dimensional spline filter. + """ + Multi-dimensional spline filter. + + For more details, see `spline_filter1d`. - Note: The multi-dimensional filter is implemented as a sequence of + See Also + -------- + spline_filter1d + + Notes + ----- + The multi-dimensional filter is implemented as a sequence of one-dimensional spline filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision. + """ if order < 2 or order > 5: raise RuntimeError, 'spline order not supported' @@ -88,37 +121,71 @@ output_type = None, output = None, order = 3, mode = 'constant', cval = 0.0, prefilter = True, extra_arguments = (), extra_keywords = {}): - """Apply an arbritrary geometric transform. + """ + Apply an arbritrary geometric transform. The given mapping function is used to find, for each point in the output, the corresponding coordinates in the input. The value of the input at those coordinates is determined by spline interpolation of the requested order. - mapping must be a callable object that accepts a tuple of length - equal to the output array rank and returns the corresponding input - coordinates as a tuple of length equal to the input array - rank. Points outside the boundaries of the input are filled - according to the given mode ('constant', 'nearest', 'reflect' or - 'wrap'). The output shape can optionally be given. If not given, - it is equal to the input shape. The parameter prefilter determines - if the input is pre-filtered before interpolation (necessary for - spline interpolation of order > 1). If False it is assumed that - the input is already filtered. The extra_arguments and - extra_keywords arguments can be used to provide extra arguments - and keywords that are passed to the mapping function at each call. + Parameters + ---------- + input : array_like + The input array. + mapping : callable + A callable object that accepts a tuple of length equal to the output + array rank, and returns the corresponding input coordinates as a tuple + of length equal to the input array rank. + output_shape : tuple of ints + Shape tuple. + output : ndarray or dtype, optional + The array in which to place the output, or the dtype of the returned + array. + output_type : dtype, optional + DEPRECATED, DO NOT USE. If used, a RuntimeError is raised. + order : int, optional + The order of the spline interpolation, default is 3. + The order has to be in the range 0-5. + mode : str, optional + Points outside the boundaries of the input are filled according + to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). + Default is 'constant'. + cval : scalar, optional + Value used for points outside the boundaries of the input if + ``mode='constant'``. Default is 0.0 + prefilter : bool, optional + The parameter prefilter determines if the input is pre-filtered with + `spline_filter` before interpolation (necessary for spline + interpolation of order > 1). If False, it is assumed that the input is + already filtered. 
Default is True. + extra_arguments : tuple, optional + Extra arguments passed to `mapping`. + extra_keywords : dict, optional + Extra keywords passed to `mapping`. - Example + Returns ------- - >>> a = arange(12.).reshape((4,3)) - >>> def shift_func(output_coordinates): - ... return (output_coordinates[0]-0.5, output_coordinates[1]-0.5) + return_value : ndarray or None + The filtered input. If `output` is given as a parameter, None is + returned. + + See Also + -------- + map_coordinates, affine_transform, spline_filter1d + + Examples + -------- + >>> a = np.arange(12.).reshape((4, 3)) + >>> def shift_func(output_coords): + ... return (output_coords[0] - 0.5, output_coords[1] - 0.5) ... - >>> print geometric_transform(a,shift_func) - array([[ 0. , 0. , 0. ], - [ 0. , 1.3625, 2.7375], - [ 0. , 4.8125, 6.1875], - [ 0. , 8.2625, 9.6375]]) + >>> sp.ndimage.geometric_transform(a, shift_func) + array([[ 0. , 0. , 0. ], + [ 0. , 1.362, 2.738], + [ 0. , 4.812, 6.187], + [ 0. , 8.263, 9.637]]) + """ if order < 0 or order > 5: raise RuntimeError, 'spline order not supported' @@ -159,38 +226,35 @@ Parameters ---------- input : ndarray - The input array + The input array. coordinates : array_like - The coordinates at which `input` is evaluated. - output_type : deprecated - Use `output` instead. - output : dtype, optional - If the output has to have a certain type, specify the dtype. - The default behavior is for the output to have the same type - as `input`. + The coordinates at which `input` is evaluated. + output : ndarray or dtype, optional + The array in which to place the output, or the dtype of the returned + array. + output_type : dtype, optional + DEPRECATED, DO NOT USE. If used, a RuntimeError is raised. order : int, optional - The order of the spline interpolation, default is 3. - The order has to be in the range 0-5. + The order of the spline interpolation, default is 3. + The order has to be in the range 0-5. mode : str, optional - Points outside the boundaries of the input are filled according - to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). - Default is 'constant'. + Points outside the boundaries of the input are filled according + to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). + Default is 'constant'. cval : scalar, optional - Value used for points outside the boundaries of the input if - `mode='constant`. Default is 0.0 + Value used for points outside the boundaries of the input if + ``mode='constant'``. Default is 0.0 prefilter : bool, optional - The parameter prefilter determines if the input is - pre-filtered with `spline_filter`_ before interpolation - (necessary for spline interpolation of order > 1). - If False, it is assumed that the input is already filtered. + The parameter prefilter determines if the input is pre-filtered with + `spline_filter` before interpolation (necessary for spline + interpolation of order > 1). If False, it is assumed that the input is + already filtered. Default is True. Returns ------- return_value : ndarray - The result of transforming the input. The shape of the - output is derived from that of `coordinates` by dropping - the first axis. - + The result of transforming the input. The shape of the output is + derived from that of `coordinates` by dropping the first axis. 
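# Editorial sketch, not part of the patch: evaluating map_coordinates at two
# fractional positions, complementing the geometric_transform example shown
# in the docstring above.
import numpy as np
from scipy import ndimage

a = np.arange(12.).reshape((4, 3))
coords = [[0.5, 2.0],                             # row coordinates of the sample points
          [0.5, 1.0]]                             # column coordinates
print(ndimage.map_coordinates(a, coords, order=1))   # [ 2.  7.]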
See Also -------- @@ -199,8 +263,8 @@ Examples -------- >>> import scipy.ndimage - >>> a = np.arange(12.).reshape((4,3)) - >>> print a + >>> a = np.arange(12.).reshape((4, 3)) + >>> a array([[ 0., 1., 2.], [ 3., 4., 5.], [ 6., 7., 8.], @@ -248,22 +312,58 @@ def affine_transform(input, matrix, offset = 0.0, output_shape = None, output_type = None, output = None, order = 3, mode = 'constant', cval = 0.0, prefilter = True): - """Apply an affine transformation. + """ + Apply an affine transformation. The given matrix and offset are used to find for each point in the output the corresponding coordinates in the input by an affine transformation. The value of the input at those coordinates is determined by spline interpolation of the requested order. Points outside the boundaries of the input are filled according to the given - mode. The output shape can optionally be given. If not given it is - equal to the input shape. The parameter prefilter determines if the - input is pre-filtered before interpolation, if False it is assumed - that the input is already filtered. - - The matrix must be two-dimensional or can also be given as a - one-dimensional sequence or array. In the latter case, it is - assumed that the matrix is diagonal. A more efficient algorithms - is then applied that exploits the separability of the problem. + mode. + + Parameters + ---------- + input : ndarray + The input array. + matrix : ndarray + The matrix must be two-dimensional or can also be given as a + one-dimensional sequence or array. In the latter case, it is assumed + that the matrix is diagonal. A more efficient algorithms is then + applied that exploits the separability of the problem. + offset : float or sequence, optional + The offset into the array where the transform is applied. If a float, + `offset` is the same for each axis. If a sequence, `offset` should + contain one value for each axis. + output_shape : tuple of ints, optional + Shape tuple. + output : ndarray or dtype, optional + The array in which to place the output, or the dtype of the returned + array. + output_type : dtype, optional + DEPRECATED, DO NOT USE. If used, a RuntimeError is raised. + order : int, optional + The order of the spline interpolation, default is 3. + The order has to be in the range 0-5. + mode : str, optional + Points outside the boundaries of the input are filled according + to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). + Default is 'constant'. + cval : scalar, optional + Value used for points outside the boundaries of the input if + ``mode='constant'``. Default is 0.0 + prefilter : bool, optional + The parameter prefilter determines if the input is pre-filtered with + `spline_filter` before interpolation (necessary for spline + interpolation of order > 1). If False, it is assumed that the input is + already filtered. Default is True. + + Returns + ------- + return_value : ndarray or None + The transformed input. If `output` is given as a parameter, None is + returned. + """ if order < 0 or order > 5: raise RuntimeError, 'spline order not supported' @@ -307,13 +407,47 @@ def shift(input, shift, output_type = None, output = None, order = 3, mode = 'constant', cval = 0.0, prefilter = True): - """Shift an array. + """ + Shift an array. + + The array is shifted using spline interpolation of the requested order. + Points outside the boundaries of the input are filled according to the + given mode. + + Parameters + ---------- + input : ndarray + The input array. 
+ shift : float or sequence, optional + The shift along the axes. If a float, `shift` is the same for each + axis. If a sequence, `shift` should contain one value for each axis. + output : ndarray or dtype, optional + The array in which to place the output, or the dtype of the returned + array. + output_type : dtype, optional + DEPRECATED, DO NOT USE. If used, a RuntimeError is raised. + order : int, optional + The order of the spline interpolation, default is 3. + The order has to be in the range 0-5. + mode : str, optional + Points outside the boundaries of the input are filled according + to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). + Default is 'constant'. + cval : scalar, optional + Value used for points outside the boundaries of the input if + ``mode='constant'``. Default is 0.0 + prefilter : bool, optional + The parameter prefilter determines if the input is pre-filtered with + `spline_filter` before interpolation (necessary for spline + interpolation of order > 1). If False, it is assumed that the input is + already filtered. Default is True. + + Returns + ------- + return_value : ndarray or None + The shifted input. If `output` is given as a parameter, None is + returned. - The array is shifted using spline interpolation of the requested - order. Points outside the boundaries of the input are filled according - to the given mode. The parameter prefilter determines if the input is - pre-filtered before interpolation, if False it is assumed that the - input is already filtered. """ if order < 0 or order > 5: raise RuntimeError, 'spline order not supported' @@ -340,13 +474,45 @@ def zoom(input, zoom, output_type = None, output = None, order = 3, mode = 'constant', cval = 0.0, prefilter = True): - """Zoom an array. + """ + Zoom an array. The array is zoomed using spline interpolation of the requested order. - Points outside the boundaries of the input are filled according to the - given mode. The parameter prefilter determines if the input is pre- - filtered before interpolation, if False it is assumed that the input - is already filtered. + + Parameters + ---------- + input : ndarray + The input array. + zoom : float or sequence, optional + The zoom factor along the axes. If a float, `zoom` is the same for each + axis. If a sequence, `zoom` should contain one value for each axis. + output : ndarray or dtype, optional + The array in which to place the output, or the dtype of the returned + array. + output_type : dtype, optional + DEPRECATED, DO NOT USE. If used, a RuntimeError is raised. + order : int, optional + The order of the spline interpolation, default is 3. + The order has to be in the range 0-5. + mode : str, optional + Points outside the boundaries of the input are filled according + to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). + Default is 'constant'. + cval : scalar, optional + Value used for points outside the boundaries of the input if + ``mode='constant'``. Default is 0.0 + prefilter : bool, optional + The parameter prefilter determines if the input is pre-filtered with + `spline_filter` before interpolation (necessary for spline + interpolation of order > 1). If False, it is assumed that the input is + already filtered. Default is True. + + Returns + ------- + return_value : ndarray or None + The zoomed input. If `output` is given as a parameter, None is + returned. 
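
A brief usage sketch for the `shift` and `zoom` signatures documented above; the values are illustrative only and are not taken from the patch:

    >>> import numpy as np
    >>> from scipy import ndimage
    >>> a = np.arange(9.).reshape((3, 3))
    >>> ndimage.shift(a, (1, 0), order=0)        # rows move down by one, cval=0.0 fills the border
    array([[ 0.,  0.,  0.],
           [ 0.,  1.,  2.],
           [ 3.,  4.,  5.]])
    >>> ndimage.zoom(a, 2, order=0).shape        # every axis is scaled by the zoom factor
    (6, 6)
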
+ """ if order < 0 or order > 5: raise RuntimeError, 'spline order not supported' @@ -384,16 +550,51 @@ def rotate(input, angle, axes = (1, 0), reshape = True, output_type = None, output = None, order = 3, mode = 'constant', cval = 0.0, prefilter = True): - """Rotate an array. + """ + Rotate an array. The array is rotated in the plane defined by the two axes given by the - axes parameter using spline interpolation of the requested order. The - angle is given in degrees. Points outside the boundaries of the input - are filled according to the given mode. If reshape is true, the output - shape is adapted so that the input array is contained completely in - the output. The parameter prefilter determines if the input is pre- - filtered before interpolation, if False it is assumed that the input - is already filtered. + `axes` parameter using spline interpolation of the requested order. + + Parameters + ---------- + input : ndarray + The input array. + angle : float + The rotation angle in degrees. + axes : tuple of 2 ints, optional + The two axes that define the plane of rotation. Default is the first + two axes. + reshape : bool, optional + If `reshape` is true, the output shape is adapted so that the input + array is contained completely in the output. Default is True. + output : ndarray or dtype, optional + The array in which to place the output, or the dtype of the returned + array. + output_type : dtype, optional + DEPRECATED, DO NOT USE. If used, a RuntimeError is raised. + order : int, optional + The order of the spline interpolation, default is 3. + The order has to be in the range 0-5. + mode : str, optional + Points outside the boundaries of the input are filled according + to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). + Default is 'constant'. + cval : scalar, optional + Value used for points outside the boundaries of the input if + ``mode='constant'``. Default is 0.0 + prefilter : bool, optional + The parameter prefilter determines if the input is pre-filtered with + `spline_filter` before interpolation (necessary for spline + interpolation of order > 1). If False, it is assumed that the input is + already filtered. Default is True. + + Returns + ------- + return_value : ndarray or None + The rotated input. If `output` is given as a parameter, None is + returned. + """ input = numpy.asarray(input) axes = list(axes) diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/io.py python-scipy-0.8.0+dfsg1/scipy/ndimage/io.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/io.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/io.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,41 @@ +__all__ = ['imread'] + +from numpy import array + +def imread(fname, flatten=False): + """ + Load an image from file. + + Parameters + ---------- + fname : str + Image file name, e.g. ``test.jpg``. + flatten : bool, optional + If true, convert the output to grey-scale. Default is False. + + Returns + ------- + img_array : ndarray + The different colour bands/channels are stored in the + third dimension, such that a grey-image is MxN, an + RGB-image MxNx3 and an RGBA-image MxNx4. + + Raises + ------ + ImportError + If the Python Imaging Library (PIL) can not be imported. + + """ + try: + from PIL import Image + except ImportError: + raise ImportError("Could not import the Python Imaging Library (PIL)" + " required to load image files. 
Please refer to" + " http://pypi.python.org/pypi/PIL/ for installation" + " instructions.") + + im = Image.open(fname) + if flatten: + im = im.convert('F') + return array(im) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/measurements.py python-scipy-0.8.0+dfsg1/scipy/ndimage/measurements.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/measurements.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/measurements.py 2010-07-26 15:48:32.000000000 +0100 @@ -31,19 +31,114 @@ import types import math import numpy +import numpy as np import _ni_support import _nd_image import morphology +import time def label(input, structure = None, output = None): - """Label an array of objects. + """ + Label features in an array. + + Parameters + ---------- + + input : array_like + An array-like object to be labeled. Any non-zero values in `input` are + counted as features and zero values are considered the background. + + + structure : array_like, optional + A structuring element that defines feature connections. + + `structure` must be symmetric. If no structuring element is provided, + one is automatically generated with a squared connectivity equal to + one. + + That is, for a 2D `input` array, the default structuring element is:: + + [[0,1,0], + [1,1,1], + [0,1,0]] + + + output : (None, data-type, array_like), optional + If `output` is a data type, it specifies the type of the resulting + labeled feature array + + If `output` is an array-like object, then `output` will be updated + with the labeled features from this function + + Returns + ------- + labeled_array : array_like + An array-like object where each unique feature has a unique value + + num_features : int + + + If `output` is None or a data type, this function returns a tuple, + (`labeled_array`, `num_features`). + + If `output` is an array, then it will be updated with values in + `labeled_array` and only `num_features` will be returned by this function. + + + See Also + -------- + find_objects : generate a list of slices for the labeled features (or + objects); useful for finding features' position or + dimensions + + Examples + -------- + + Create an image with some features, then label it using the default + (cross-shaped) structuring element: + + >>> a = array([[0,0,1,1,0,0], + ... [0,0,0,1,0,0], + ... [1,1,0,0,1,0], + ... [0,0,0,1,0,0]]) + >>> labeled_array, num_features = label(a) + + Each of the 4 features are labeled with a different integer: + + >>> print num_features + 4 + >>> print labeled_array + array([[0, 0, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0], + [2, 2, 0, 0, 3, 0], + [0, 0, 0, 4, 0, 0]]) + + Generate a structuring element that will consider features connected even + if they touch diagonally: + + >>> s = generate_binary_structure(2,2) + + or, + + >>> s = [[1,1,1], + [1,1,1], + [1,1,1]] + + Label the image using the new structuring element: + + >>> labeled_array, num_features = label(a, structure=s) + + Show the 2 labeled features (note that features 1, 3, and 4 from above are + now considered a single feature): + + >>> print num_features + 2 + >>> print labeled_array + array([[0, 0, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0], + [2, 2, 0, 0, 1, 0], + [0, 0, 0, 1, 0, 0]]) - The structure that defines the object connections must be - symmetric. If no structuring element is provided an element is - generated with a squared connectivity equal to one. This function - returns a tuple consisting of the array of labels and the number - of objects found. 
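
The `find_objects` function listed under See Also pairs naturally with `label`; the following sketch is illustrative only and assumes the labeling behaviour documented above:

    >>> import numpy as np
    >>> from scipy import ndimage
    >>> a = np.array([[0, 1, 1, 0],
    ...               [0, 0, 0, 0],
    ...               [1, 1, 0, 0]])
    >>> labeled, num = ndimage.label(a)
    >>> num
    2
    >>> ndimage.find_objects(labeled)            # one tuple of slices per labeled feature
    [(slice(0, 1, None), slice(1, 3, None)), (slice(2, 3, None), slice(0, 2, None))]
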
If an output array is provided only the number of - objects found is returned. """ input = numpy.asarray(input) if numpy.iscomplexobj(input): @@ -87,17 +182,188 @@ max_label = input.max() return _nd_image.find_objects(input, max_label) -def sum(input, labels=None, index=None): - """Calculate the sum of the values of the array. +def labeled_comprehension(input, labels, index, func, out_dtype, default, pass_positions=False): + '''Roughly equivalent to [func(input[labels == i]) for i in index]. + + Special cases: + - index a scalar: returns a single value + - index is None: returns func(inputs[labels > 0]) + + func will be called with linear indices as a second argument if + pass_positions is True. + ''' + + as_scalar = numpy.isscalar(index) + input = numpy.asarray(input) + + if pass_positions: + positions = numpy.arange(input.size).reshape(input.shape) + + if labels is None: + if index is not None: + raise ValueError, "index without defined labels" + if not pass_positions: + return func(input.ravel()) + else: + return func(input.ravel(), positions.ravel()) + + try: + input, labels = numpy.broadcast_arrays(input, labels) + except ValueError: + raise ValueError, "input and labels must have the same shape (excepting dimensions with width 1)" + + if index is None: + if not pass_positions: + return func(input[labels > 0]) + else: + return func(input[labels > 0], positions[labels > 0]) + + index = numpy.atleast_1d(index) + if np.any(index.astype(labels.dtype).astype(index.dtype) != index): + raise ValueError, "Cannot convert index values from <%s> to <%s> (labels' type) without loss of precision"%(index.dtype, labels.dtype) + index = index.astype(labels.dtype) + + # optimization: find min/max in index, and select those parts of labels, input, and positions + lo = index.min() + hi = index.max() + mask = (labels >= lo) & (labels <= hi) + + # this also ravels the arrays + labels = labels[mask] + input = input[mask] + if pass_positions: + positions = positions[mask] + + # sort everything by labels + label_order = labels.argsort() + labels = labels[label_order] + input = input[label_order] + if pass_positions: + positions = positions[label_order] + + index_order = index.argsort() + sorted_index = index[index_order] + + def do_map(inputs, output): + '''labels must be sorted''' + + nlabels = labels.size + nidx = sorted_index.size + + # Find boundaries for each stretch of constant labels + # This could be faster, but we already paid N log N to sort labels. 
+ lo = numpy.searchsorted(labels, sorted_index, side='left') + hi = numpy.searchsorted(labels, sorted_index, side='right') + + for i, l, h in zip(range(nidx), lo, hi): + if l == h: + continue + idx = sorted_index[i] + output[i] = func(*[inp[l:h] for inp in inputs]) + + temp = numpy.empty(index.shape, out_dtype) + temp[:] = default + if not pass_positions: + do_map([input], temp) + else: + do_map([input, positions], temp) + output = numpy.zeros(index.shape, out_dtype) + output[index_order] = temp + + if as_scalar: + output = output[0] + + return output + +def _stats(input, labels = None, index = None, do_sum2=False): + '''returns count, sum, and optionally sum^2 by label''' + + def single_group(vals): + if do_sum2: + return vals.size, vals.sum(), (vals * vals.conjugate()).sum() + else: + return vals.size, vals.sum() + + if labels is None: + return single_group(input) + + # ensure input and labels match sizes + input, labels = numpy.broadcast_arrays(input, labels) + + if index is None: + return single_group(input[labels > 0]) + + if numpy.isscalar(index): + return single_group(input[labels == index]) + + # remap labels to unique integers if necessary, or if the largest + # label is larger than the number of values. + if ((not numpy.issubdtype(labels.dtype, numpy.int)) or + (labels.min() < 0) or (labels.max() > labels.size)): + unique_labels, new_labels = numpy.unique1d(labels, return_inverse=True) + + counts = numpy.bincount(new_labels) + sums = numpy.bincount(new_labels, weights=input.ravel()) + if do_sum2: + sums2 = numpy.bincount(new_labels, weights=(input * input.conjugate()).ravel()) + + idxs = numpy.searchsorted(unique_labels, index) + # make all of idxs valid + idxs[idxs >= unique_labels.size] = 0 + found = (unique_labels[idxs] == index) + else: + # labels are an integer type, and there aren't too many, so + # call bincount directly. + counts = numpy.bincount(labels.ravel()) + sums = numpy.bincount(labels.ravel(), weights=input.ravel()) + if do_sum2: + sums2 = numpy.bincount(labels.ravel(), weights=(input * input.conjugate()).ravel()) + + # make sure all index values are valid + idxs = numpy.asanyarray(index, numpy.int).copy() + found = (idxs >= 0) & (idxs < counts.size) + idxs[~ found] = 0 + + counts = counts[idxs] + counts[~ found] = 0 + sums = sums[idxs] + sums[~ found] = 0 + if not do_sum2: + return (counts, sums) + sums2 = sums2[idxs] + sums2[~ found] = 0 + return (counts, sums, sums2) + + +def sum(input, labels = None, index = None): + """ + Calculate the sum of the values of the array. + + Parameters + ---------- + + input : array_like + Values of `input` inside the regions defined by `labels` + are summed together. + + labels : array of integers, same shape as input + Assign labels to the values of the array. + + index : scalar or array + A single label number or a sequence of label numbers of + the objects to be measured. + + Returns + ------- + + output : list + A list of the sums of the values of `input` inside the regions + defined by `labels`. + + See also + -------- - :Parameters: - labels : array of integers, same shape as input - Assign labels to the values of the array. - - index : scalar or array - A single label number or a sequence of label numbers of - the objects to be measured. If index is None, all - values are used where 'labels' is larger than zero. 
+ mean Examples -------- @@ -108,247 +374,383 @@ [1.0, 5.0] """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - if index is not None: - T = getattr(index,'dtype',numpy.int32) - if T not in [numpy.int8, numpy.int16, numpy.int32, - numpy.uint8, numpy.uint16, numpy.bool]: - raise ValueError("Invalid index type") - index = numpy.asarray(index,dtype=T) - return _nd_image.statistics(input, labels, index, 0) - + count, sum = _stats(input, labels, index) + return sum def mean(input, labels = None, index = None): - """Calculate the mean of the values of the array. + """Calculate the mean of the values of an array at labels. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. + Labels must be None or an array that can be broadcast to the input. + + Index must be None, a single label or sequence of labels. If + None, the mean for all values where label is greater than 0 is + calculated. """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - return _nd_image.statistics(input, labels, index, 1) + count, sum = _stats(input, labels, index) + return sum / numpy.asanyarray(count).astype(numpy.float) def variance(input, labels = None, index = None): - """Calculate the variance of the values of the array. + """Calculate the variance of the values of an array at labels. + + Labels must be None or an array of the same dimensions as the input. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. + Index must be None, a single label or sequence of labels. If + none, all values where label is greater than zero are used. """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - return _nd_image.statistics(input, labels, index, 2) + count, sum, sum2 = _stats(input, labels, index, do_sum2=True) + mean = sum / numpy.asanyarray(count).astype(numpy.float) + mean2 = sum2 / numpy.asanyarray(count).astype(numpy.float) + + return mean2 - (mean * mean.conjugate()) def standard_deviation(input, labels = None, index = None): - """Calculate the standard deviation of the values of the array. + """Calculate the standard deviation of the values of an array at labels. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. - """ - var = variance(input, labels, index) - if (isinstance(var, types.ListType)): - return [math.sqrt(x) for x in var] - else: - return math.sqrt(var) + Labels must be None or an array of the same dimensions as the input. 
+ Index must be None, a single label or sequence of labels. If + none, all values where label is greater than zero are used. + """ -def minimum(input, labels = None, index = None): - """Calculate the minimum of the values of the array. + return numpy.sqrt(variance(input, labels, index)) + +def _select(input, labels = None, index = None, find_min=False, find_max=False, find_min_positions=False, find_max_positions=False): + '''returns min, max, or both, plus positions if requested''' + + + find_positions = find_min_positions or find_max_positions + positions = None + if find_positions: + positions = numpy.arange(input.size).reshape(input.shape) + + def single_group(vals, positions): + result = [] + if find_min: + result += [vals.min()] + if find_min_positions: + result += [positions[vals == vals.min()][0]] + if find_max: + result += [vals.max()] + if find_max_positions: + result += [positions[vals == vals.max()][0]] + return result + + if labels is None: + return single_group(input, positions) + + # ensure input and labels match sizes + input, labels = numpy.broadcast_arrays(input, labels) + + if index is None: + mask = (labels > 0) + masked_positions = None + if find_positions: + masked_positions = positions[mask] + return single_group(input[mask], masked_positions) + + if numpy.isscalar(index): + mask = (labels == index) + masked_positions = None + if find_positions: + masked_positions = positions[mask] + return single_group(input[mask], masked_positions) + + order = input.ravel().argsort() + input = input.ravel()[order] + labels = labels.ravel()[order] + if find_positions: + positions = positions.ravel()[order] + + # remap labels to unique integers if necessary, or if the largest + # label is larger than the number of values. + if ((not numpy.issubdtype(labels.dtype, numpy.int)) or + (labels.min() < 0) or (labels.max() > labels.size)): + # remap labels, and indexes + unique_labels, labels = numpy.unique1d(labels, return_inverse=True) + idxs = numpy.searchsorted(unique_labels, index) + + # make all of idxs valid + idxs[idxs >= unique_labels.size] = 0 + found = (unique_labels[idxs] == index) + else: + # labels are an integer type, and there aren't too many. + idxs = numpy.asanyarray(index, numpy.int).copy() + found = (idxs >= 0) & (idxs <= labels.max()) + + idxs[~ found] = labels.max() + 1 + + result = [] + if find_min: + mins = numpy.zeros(labels.max() + 2, input.dtype) + mins[labels[::-1]] = input[::-1] + result += [mins[idxs]] + if find_min_positions: + minpos = numpy.zeros(labels.max() + 2) + minpos[labels[::-1]] = positions[::-1] + result += [minpos[idxs]] + if find_max: + maxs = numpy.zeros(labels.max() + 2, input.dtype) + maxs[labels] = input + result += [maxs[idxs]] + if find_max_positions: + maxpos = numpy.zeros(labels.max() + 2) + maxpos[labels] = positions + result += [maxpos[idxs]] + return result - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. +def minimum(input, labels = None, index = None): """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) + Calculate the minimum of the values of an array over labeled regions. 
- if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - return _nd_image.statistics(input, labels, index, 3) + Parameters + ---------- + input: array-like + Array-like of values. For each region specified by `labels`, the + minimal values of `input` over the region is computed. + + labels: array-like, optional + An array-like of integers marking different regions over which the + minimum value of `input` is to be computed. `labels` must have the + same shape as `input`. If `labels` is not specified, the minimum + over the whole array is returned. + + index: array-like, optional + A list of region labels that are taken into account for computing the + minima. If index is None, the minimum over all elements where `labels` + is non-zero is returned. + + Returns + ------- + output : float or list of floats + List of minima of `input` over the regions determined by `labels` and + whose index is in `index`. If `index` or `labels` are not specified, a + float is returned: the minimal value of `input` if `labels` is None, + and the minimal value of elements where `labels` is greater than zero + if `index` is None. -def maximum(input, labels=None, index=None): - """Return the maximum input value. + See also + -------- - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. + label, maximum, minimum_position, extrema, sum, mean, variance, + standard_deviation - """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - return _nd_image.statistics(input, labels, index, 4) - - -def _index_to_position(index, shape): - """Convert a linear index to a position""" - if len(shape) > 0: - pos = [] - stride = numpy.multiply.reduce(shape) - for size in shape: - stride = stride // size - pos.append(index // stride) - index -= pos[-1] * stride - return tuple(pos) - else: - return 0 + Notes + ----- + + The function returns a Python list and not a Numpy array, use + `np.array` to convert the list to an array. + + Examples + -------- + + >>> a = np.array([[1, 2, 0, 0], + ... [5, 3, 0, 4], + ... [0, 0, 0, 7], + ... [9, 3, 0, 0]]) + >>> labels, labels_nb = ndimage.label(a) + >>> labels + array([[1, 1, 0, 0], + [1, 1, 0, 2], + [0, 0, 0, 2], + [3, 3, 0, 0]]) + >>> ndimage.minimum(a, labels=labels, index=np.arange(1, labels_nb + 1)) + [1.0, 4.0, 3.0] + >>> ndimage.minimum(a) + 0.0 + >>> ndimage.minimum(a, labels=labels) + 1.0 + + """ + return _select(input, labels, index, find_min=True)[0] + +def maximum(input, labels = None, index = None): + """ + Calculate the maximum of the values of an array over labeled regions. + + Parameters + ---------- + input : array_like + Array-like of values. For each region specified by `labels`, the + maximal values of `input` over the region is computed. + labels : array_like, optional + An array of integers marking different regions over which the + maximum value of `input` is to be computed. `labels` must have the + same shape as `input`. If `labels` is not specified, the maximum + over the whole array is returned. + index : array_like, optional + A list of region labels that are taken into account for computing the + maxima. 
If index is None, the maximum over all elements where `labels` + is non-zero is returned. + + Returns + ------- + output : float or list of floats + List of maxima of `input` over the regions determined by `labels` and + whose index is in `index`. If `index` or `labels` are not specified, a + float is returned: the maximal value of `input` if `labels` is None, + and the maximal value of elements where `labels` is greater than zero + if `index` is None. + + See also + -------- + label, minimum, maximum_position, extrema, sum, mean, variance, + standard_deviation + + Notes + ----- + The function returns a Python list and not a Numpy array, use + `np.array` to convert the list to an array. + + Examples + -------- + >>> a = np.arange(16).reshape((4,4)) + >>> a + array([[ 0, 1, 2, 3], + [ 4, 5, 6, 7], + [ 8, 9, 10, 11], + [12, 13, 14, 15]]) + >>> labels = np.zeros_like(a) + >>> labels[:2,:2] = 1 + >>> labels[2:, 1:3] = 2 + >>> labels + array([[1, 1, 0, 0], + [1, 1, 0, 0], + [0, 2, 2, 0], + [0, 2, 2, 0]]) + >>> from scipy import ndimage + >>> ndimage.maximum(a) + 15.0 + >>> ndimage.maximum(a, labels=labels, index=[1,2]) + [5.0, 14.0] + >>> ndimage.maximum(a, labels=labels) + 14.0 + + >>> b = np.array([[1, 2, 0, 0], + [5, 3, 0, 4], + [0, 0, 0, 7], + [9, 3, 0, 0]]) + >>> labels, labels_nb = ndimage.label(b) + >>> labels + array([[1, 1, 0, 0], + [1, 1, 0, 2], + [0, 0, 0, 2], + [3, 3, 0, 0]]) + >>> ndimage.maximum(b, labels=labels, index=np.arange(1, labels_nb + 1)) + [5.0, 7.0, 9.0] + + """ + return _select(input, labels, index, find_max=True)[0] def minimum_position(input, labels = None, index = None): - """Find the position of the minimum of the values of the array. + """Find the positions of the minimums of the values of an array at labels. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. + Labels must be None or an array of the same dimensions as the input. + + Index must be None, a single label or sequence of labels. If + none, all values where label is greater than zero are used. """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - pos = _nd_image.statistics(input, labels, index, 5) - if (isinstance(pos, types.ListType)): - return [_index_to_position(x, input.shape) for x in pos] - else: - return _index_to_position(pos, input.shape) + + dims = numpy.array(numpy.asarray(input).shape) + # see numpy.unravel_index to understand this line. + dim_prod = numpy.cumprod([1] + list(dims[:0:-1]))[::-1] + + result = _select(input, labels, index, find_min_positions=True)[0] + if numpy.isscalar(result): + return tuple((result // dim_prod) % dims) + + return [tuple(v) for v in (result.reshape(-1, 1) // dim_prod) % dims] def maximum_position(input, labels = None, index = None): - """Find the position of the maximum of the values of the array. + """Find the positions of the maximums of the values of an array at labels. + + Labels must be None or an array of the same dimensions as the input. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. 
+ Index must be None, a single label or sequence of labels. If + none, all values where label is greater than zero are used. """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - pos = _nd_image.statistics(input, labels, index, 6) - if (isinstance(pos, types.ListType)): - return [_index_to_position(x, input.shape) for x in pos] - else: - return _index_to_position(pos, input.shape) + + dims = numpy.array(numpy.asarray(input).shape) + # see numpy.unravel_index to understand this line. + dim_prod = numpy.cumprod([1] + list(dims[:0:-1]))[::-1] + + result = _select(input, labels, index, find_max_positions=True)[0] + if numpy.isscalar(result): + return tuple((result // dim_prod) % dims) + + return [tuple(v) for v in (result.reshape(-1, 1) // dim_prod) % dims] def extrema(input, labels = None, index = None): - """Calculate the minimum, the maximum and their positions of the - values of the array. + """Calculate the minimums and maximums of the values of an array + at labels, along with their positions. + + Labels must be None or an array of the same dimensions as the input. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. + Index must be None, a single label or sequence of labels. If + none, all values where label is greater than zero are used. + + Returns: minimums, maximums, min_positions, max_positions """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' + + dims = numpy.array(numpy.asarray(input).shape) + # see numpy.unravel_index to understand this line. + dim_prod = numpy.cumprod([1] + list(dims[:0:-1]))[::-1] + minimums, min_positions, maximums, max_positions = _select(input, labels, index, + find_min=True, find_max=True, + find_min_positions=True, find_max_positions=True) - min, max, minp, maxp = _nd_image.statistics(input, labels, index, 7) - if (isinstance(minp, types.ListType)): - minp = [_index_to_position(x, input.shape) for x in minp] - maxp = [_index_to_position(x, input.shape) for x in maxp] - else: - minp = _index_to_position(minp, input.shape) - maxp = _index_to_position(maxp, input.shape) - return min, max, minp, maxp + if numpy.isscalar(minimums): + return minimums, maximums, tuple((min_positions // dim_prod) % dims), tuple((max_positions // dim_prod) % dims) + + min_positions = [tuple(v) for v in (min_positions.reshape(-1, 1) // dim_prod) % dims] + max_positions = [tuple(v) for v in (max_positions.reshape(-1, 1) // dim_prod) % dims] + + return minimums, maximums, min_positions, max_positions def center_of_mass(input, labels = None, index = None): - """Calculate the center of mass of of the array. + """Calculate the center of mass of the values of an array at labels. + + Labels must be None or an array of the same dimensions as the input. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. 
+ Index must be None, a single label or sequence of labels. If + none, all values where label is greater than zero are used. """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - return _nd_image.center_of_mass(input, labels, index) + normalizer = sum(input, labels, index) + grids = numpy.ogrid[[slice(0, i) for i in input.shape]] + + results = [sum(input * grids[dir].astype(float), labels, index) / normalizer for dir in range(input.ndim)] + + if numpy.isscalar(results[0]): + return tuple(results) + + return [tuple(v) for v in numpy.array(results).T] def histogram(input, min, max, bins, labels = None, index = None): - """Calculate a histogram of of the array. + """Calculate the histogram of the values of an array at labels. + + Labels must be None or an array of the same dimensions as the input. - The histogram is defined by its minimum and maximum value and the + The histograms are defined by the minimum and maximum values and the number of bins. - The index parameter is a single label number or a sequence of - label numbers of the objects to be measured. If index is None, all - values are used where labels is larger than zero. + Index must be None, a single label or sequence of labels. If + none, all values where label is greater than zero are used. """ - input = numpy.asarray(input) - if numpy.iscomplexobj(input): - raise TypeError, 'Complex type not supported' - if labels is not None: - labels = numpy.asarray(labels) - labels = _broadcast(labels, input.shape) - - if labels.shape != input.shape: - raise RuntimeError, 'input and labels shape are not equal' - if bins < 1: - raise RuntimeError, 'number of bins must be >= 1' - if min >= max: - raise RuntimeError, 'min must be < max' - return _nd_image.histogram(input, min, max, bins, labels, index) + + _bins = numpy.linspace(min, max, bins + 1) + + def _hist(vals): + return numpy.histogram(vals, _bins)[0] + + return labeled_comprehension(input, labels, index, _hist, object, None, pass_positions=False) def watershed_ift(input, markers, structure = None, output = None): """Apply watershed from markers using a iterative forest transform @@ -396,23 +798,3 @@ output, return_value = _ni_support._get_output(output, input) _nd_image.watershed_ift(input, markers, structure, output) return return_value - -def _broadcast(arr, sshape): - """Return broadcast view of arr, else return None.""" - ashape = arr.shape - return_value = numpy.zeros(sshape, arr.dtype) - # Just return arr if they have the same shape - if sshape == ashape: - return arr - srank = len(sshape) - arank = len(ashape) - - aslices = [] - sslices = [] - for i in range(arank): - aslices.append(slice(0, ashape[i], 1)) - - for i in range(srank): - sslices.append(slice(0, sshape[i], 1)) - return_value[sslices] = arr[aslices] - return return_value diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/morphology.py python-scipy-0.8.0+dfsg1/scipy/ndimage/morphology.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/morphology.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/morphology.py 2010-07-26 15:48:32.000000000 +0100 @@ -41,11 +41,60 @@ return bool(structure[coor]) def iterate_structure(structure, iterations, origin = None): - """Iterate a structure by dilating it with itself. 
+ """ + Iterate a structure by dilating it with itself. + + Parameters + ---------- + + structure : array_like + Structuring element (an array of bools, for example), to be dilated with + itself. + + iterations : int + number of dilations performed on the structure with itself + + origin : optional + If origin is None, only the iterated structure is returned. If + not, a tuple of the iterated structure and the modified origin is + returned. + + + Returns + ------- + + output: ndarray of bools + A new structuring element obtained by dilating `structure` + (`iterations` - 1) times with itself. + + See also + -------- + + generate_binary_structure + + Examples + -------- + + >>> struct = ndimage.generate_binary_structure(2, 1) + >>> struct.astype(int) + array([[0, 1, 0], + [1, 1, 1], + [0, 1, 0]]) + >>> ndimage.iterate_structure(struct, 2).astype(int) + array([[0, 0, 1, 0, 0], + [0, 1, 1, 1, 0], + [1, 1, 1, 1, 1], + [0, 1, 1, 1, 0], + [0, 0, 1, 0, 0]]) + >>> ndimage.iterate_structure(struct, 3).astype(int) + array([[0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0]]) - If origin is None, only the iterated structure is returned. If - not, a tuple of the iterated structure and the modified origin is - returned. """ structure = numpy.asarray(structure) if iterations < 2: @@ -66,10 +115,93 @@ return out, origin def generate_binary_structure(rank, connectivity): - """Generate a binary structure for binary morphological operations. + """ + Generate a binary structure for binary morphological operations. + + Parameters + ---------- + + rank : int + Number of dimensions of the array to which the structuring element + will be applied, as returned by `np.ndim`. + + connectivity : int + `connectivity` determines which elements of the output array belong + to the structure, i.e. are considered as neighbors of the central + element. Elements up to a squared distance of `connectivity` from + the center are considered neighbors. `connectivity` may range from 1 + (no diagonal elements are neighbors) to `rank` (all elements are + neighbors). + + + Returns + ------- + + output : ndarray of bools + Structuring element which may be used for binary morphological + operations, with `rank` dimensions and all dimensions equal to 3. + + See also + -------- + + iterate_structure, binary_dilation, binary_erosion + + + Notes + ----- + + `generate_binary_structure` can only create structuring elements with + dimensions equal to 3, i.e. minimal dimensions. For larger structuring + elements, that are useful e.g. for eroding large objects, one may either + use `iterate_structure`, or create directly custom arrays with + numpy functions such as `numpy.ones`. 
+ + Examples + -------- + + >>> struct = ndimage.generate_binary_structure(2, 1) + >>> struct + array([[False, True, False], + [ True, True, True], + [False, True, False]], dtype=bool) + >>> a = np.zeros((5,5)) + >>> a[2, 2] = 1 + >>> a + array([[ 0., 0., 0., 0., 0.], + [ 0., 0., 0., 0., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 0., 0., 0., 0.], + [ 0., 0., 0., 0., 0.]]) + >>> b = ndimage.binary_dilation(a, structure=struct).astype(a.dtype) + >>> b + array([[ 0., 0., 0., 0., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 1., 1., 1., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 0., 0., 0., 0.]]) + >>> ndimage.binary_dilation(b, structure=struct).astype(a.dtype) + array([[ 0., 0., 1., 0., 0.], + [ 0., 1., 1., 1., 0.], + [ 1., 1., 1., 1., 1.], + [ 0., 1., 1., 1., 0.], + [ 0., 0., 1., 0., 0.]]) + >>> struct = ndimage.generate_binary_structure(2, 2) + >>> struct + array([[ True, True, True], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> struct = ndimage.generate_binary_structure(3, 1) + >>> struct # no diagonal elements + array([[[False, False, False], + [False, True, False], + [False, False, False]], + [[False, True, False], + [ True, True, True], + [False, True, False]], + [[False, False, False], + [False, True, False], + [False, False, False]]], dtype=bool) - The inputs are the rank of the array to which the structure will - be applied and the square of the connectivity of the structure. """ if connectivity < 1: connectivity = 1 @@ -159,33 +291,231 @@ def binary_erosion(input, structure = None, iterations = 1, mask = None, output = None, border_value = 0, origin = 0, brute_force = False): - """Multi-dimensional binary erosion with the given structure. + """ + Multi-dimensional binary erosion with a given structuring element. + + Binary erosion is a mathematical morphology operation used for image + processing. + + Parameters + ---------- + + input : array_like + Binary image to be eroded. Non-zero (True) elements form + the subset to be eroded. + + structure : array_like, optional + Structuring element used for the erosion. Non-zero elements are + considered True. If no structuring element is provided, an element + is generated with a square connectivity equal to one. + + iterations : {int, float}, optional + The erosion is repeated `iterations` times (one, by default). + If iterations is less than 1, the erosion is repeated until the + result does not change anymore. + + mask : array_like, optional + If a mask is given, only those elements with a True value at + the corresponding mask element are modified at each iteration. + + output : ndarray, optional + Array of the same shape as input, into which the output is placed. + By default, a new array is created. + + origin: int or tuple of ints, optional + Placement of the filter, by default 0. + + border_value: int (cast to 0 or 1) + Value at the border in the output array. + + + Returns + ------- + + out: ndarray of bools + Erosion of the input by the structuring element. + + + See also + -------- + + grey_erosion, binary_dilation, binary_closing, binary_opening, + generate_binary_structure + + Notes + ----- + + Erosion [1]_ is a mathematical morphology operation [2]_ that uses a + structuring element for shrinking the shapes in an image. The binary + erosion of an image by a structuring element is the locus of the points + where a superimposition of the structuring element centered on the point + is entirely contained in the set of non-zero elements of the image. + + References + ---------- + + .. 
[1] http://en.wikipedia.org/wiki/Erosion_%28morphology%29 + + .. [2] http://en.wikipedia.org/wiki/Mathematical_morphology + + Examples + -------- + + >>> a = np.zeros((7,7), dtype=np.int) + >>> a[1:6, 2:5] = 1 + >>> a + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.binary_erosion(a).astype(a.dtype) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> #Erosion removes objects smaller than the structure + >>> ndimage.binary_erosion(a, structure=np.ones((5,5))).astype(a.dtype) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) - An output array can optionally be provided. The origin parameter - controls the placement of the filter. If no structuring element is - provided an element is generated with a squared connectivity equal - to one. The border_value parameter gives the value of the array - outside the border. The erosion operation is repeated iterations - times. If iterations is less than 1, the erosion is repeated until - the result does not change anymore. If a mask is given, only those - elements with a true value at the corresponding mask element are - modified at each iteration. """ return _binary_erosion(input, structure, iterations, mask, output, border_value, origin, 0, brute_force) def binary_dilation(input, structure = None, iterations = 1, mask = None, output = None, border_value = 0, origin = 0, brute_force = False): - """Multi-dimensional binary dilation with the given structure. + """ + Multi-dimensional binary dilation with the given structuring element. + + + Parameters + ---------- + + input : array_like + Binary array_like to be dilated. Non-zero (True) elements form + the subset to be dilated. + + structure : array_like, optional + Structuring element used for the dilation. Non-zero elements are + considered True. If no structuring element is provided an element + is generated with a square connectivity equal to one. + + iterations : {int, float}, optional + The dilation is repeated `iterations` times (one, by default). + If iterations is less than 1, the dilation is repeated until the + result does not change anymore. + + mask : array_like, optional + If a mask is given, only those elements with a True value at + the corresponding mask element are modified at each iteration. + + + output : ndarray, optional + Array of the same shape as input, into which the output is placed. + By default, a new array is created. + + origin : int or tuple of ints, optional + Placement of the filter, by default 0. + + border_value : int (cast to 0 or 1) + Value at the border in the output array. + + + Returns + ------- + + out : ndarray of bools + Dilation of the input by the structuring element. + + + See also + -------- + + grey_dilation, binary_erosion, binary_closing, binary_opening, + generate_binary_structure + + Notes + ----- + + Dilation [1]_ is a mathematical morphology operation [2]_ that uses a + structuring element for expanding the shapes in an image. The binary + dilation of an image by a structuring element is the locus of the points + covered by the structuring element, when its center lies within the + non-zero points of the image. 
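
The definition in the note can be checked with a short sketch, assuming only the `binary_dilation` and `generate_binary_structure` behaviour documented here: dilating a single centered point reproduces the structuring element.

    >>> import numpy as np
    >>> from scipy import ndimage
    >>> point = np.zeros((3, 3), dtype=bool)
    >>> point[1, 1] = True
    >>> struct = ndimage.generate_binary_structure(2, 1)
    >>> np.array_equal(ndimage.binary_dilation(point, structure=struct), struct)
    True
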
+ + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Dilation_%28morphology%29 + + .. [2] http://en.wikipedia.org/wiki/Mathematical_morphology + + Examples + -------- + + >>> a = np.zeros((5, 5)) + >>> a[2, 2] = 1 + >>> a + array([[ 0., 0., 0., 0., 0.], + [ 0., 0., 0., 0., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 0., 0., 0., 0.], + [ 0., 0., 0., 0., 0.]]) + >>> ndimage.binary_dilation(a) + array([[False, False, False, False, False], + [False, False, True, False, False], + [False, True, True, True, False], + [False, False, True, False, False], + [False, False, False, False, False]], dtype=bool) + >>> ndimage.binary_dilation(a).astype(a.dtype) + array([[ 0., 0., 0., 0., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 1., 1., 1., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 0., 0., 0., 0.]]) + >>> # 3x3 structuring element with connectivity 1, used by default + >>> struct1 = ndimage.generate_binary_structure(2, 1) + >>> struct1 + array([[False, True, False], + [ True, True, True], + [False, True, False]], dtype=bool) + >>> # 3x3 structuring element with connectivity 2 + >>> struct2 = ndimage.generate_binary_structure(2, 2) + >>> struct2 + array([[ True, True, True], + [ True, True, True], + [ True, True, True]], dtype=bool) + >>> ndimage.binary_dilation(a, structure=struct1).astype(a.dtype) + array([[ 0., 0., 0., 0., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 1., 1., 1., 0.], + [ 0., 0., 1., 0., 0.], + [ 0., 0., 0., 0., 0.]]) + >>> ndimage.binary_dilation(a, structure=struct2).astype(a.dtype) + array([[ 0., 0., 0., 0., 0.], + [ 0., 1., 1., 1., 0.], + [ 0., 1., 1., 1., 0.], + [ 0., 1., 1., 1., 0.], + [ 0., 0., 0., 0., 0.]]) + >>> ndimage.binary_dilation(a, structure=struct1,\\ + ... iterations=2).astype(a.dtype) + array([[ 0., 0., 1., 0., 0.], + [ 0., 1., 1., 1., 0.], + [ 1., 1., 1., 1., 1.], + [ 0., 1., 1., 1., 0.], + [ 0., 0., 1., 0., 0.]]) - An output array can optionally be provided. The origin parameter - controls the placement of the filter. If no structuring element is - provided an element is generated with a squared connectivity equal - to one. The dilation operation is repeated iterations times. If - iterations is less than 1, the dilation is repeated until the - result does not change anymore. If a mask is given, only those - elements with a true value at the corresponding mask element are - modified at each iteration. """ input = numpy.asarray(input) if structure is None: @@ -204,13 +534,109 @@ def binary_opening(input, structure = None, iterations = 1, output = None, origin = 0): - """Multi-dimensional binary opening with the given structure. + """ + Multi-dimensional binary opening with the given structuring element. + + The *opening* of an input image by a structuring element is the + *dilation* of the *erosion* of the image by the structuring element. + + Parameters + ---------- + + input : array_like + Binary array_like to be opened. Non-zero (True) elements form + the subset to be opened. + + structure : array_like, optional + Structuring element used for the opening. Non-zero elements are + considered True. If no structuring element is provided an element + is generated with a square connectivity equal to one (i.e., only + nearest neighbors are connected to the center, diagonally-connected + elements are not considered neighbors). + + iterations : {int, float}, optional + The erosion step of the opening, then the dilation step are each + repeated `iterations` times (one, by default). 
If `iterations` is + less than 1, each operation is repeated until the result does + not change anymore. + + output : ndarray, optional + Array of the same shape as input, into which the output is placed. + By default, a new array is created. + + origin : int or tuple of ints, optional + Placement of the filter, by default 0. + + Returns + ------- + + out : ndarray of bools + Opening of the input by the structuring element. + + + See also + -------- + + grey_opening, binary_closing, binary_erosion, binary_dilation, + generate_binary_structure + + Notes + ----- + + *Opening* [1]_ is a mathematical morphology operation [2]_ that + consists in the succession of an erosion and a dilation of the + input with the same structuring element. Opening therefore removes + objects smaller than the structuring element. + + Together with *closing* (`binary_closing`), opening can be used for + noise removal. + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Opening_%28morphology%29 + + .. [2] http://en.wikipedia.org/wiki/Mathematical_morphology + + Examples + -------- + + >>> a = np.zeros((5,5), dtype=np.int) + >>> a[1:4, 1:4] = 1; a[4, 4] = 1 + >>> a + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 1]]) + >>> # Opening removes small objects + >>> ndimage.binary_opening(a, structure=np.ones((3,3))).astype(np.int) + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]]) + >>> # Opening can also smooth corners + >>> ndimage.binary_opening(a).astype(np.int) + array([[0, 0, 0, 0, 0], + [0, 0, 1, 0, 0], + [0, 1, 1, 1, 0], + [0, 0, 1, 0, 0], + [0, 0, 0, 0, 0]]) + >>> # Opening is the dilation of the erosion of the input + >>> ndimage.binary_erosion(a).astype(np.int) + array([[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 1, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0]]) + >>> ndimage.binary_dilation(ndimage.binary_erosion(a)).astype(np.int) + array([[0, 0, 0, 0, 0], + [0, 0, 1, 0, 0], + [0, 1, 1, 1, 0], + [0, 0, 1, 0, 0], + [0, 0, 0, 0, 0]]) - An output array can optionally be provided. The origin parameter - controls the placement of the filter. If no structuring element is - provided an element is generated with a squared connectivity equal - to one. The iterations parameter gives the number of times the - erosions and then the dilations are done. """ input = numpy.asarray(input) if structure is None: @@ -224,13 +650,132 @@ def binary_closing(input, structure = None, iterations = 1, output = None, origin = 0): - """Multi-dimensional binary closing with the given structure. + """ + Multi-dimensional binary closing with the given structuring element. + + The *closing* of an input image by a structuring element is the + *erosion* of the *dilation* of the image by the structuring element. + + Parameters + ---------- + + input : array_like + Binary array_like to be closed. Non-zero (True) elements form + the subset to be closed. + + structure : array_like, optional + Structuring element used for the closing. Non-zero elements are + considered True. If no structuring element is provided an element + is generated with a square connectivity equal to one (i.e., only + nearest neighbors are connected to the center, diagonally-connected + elements are not considered neighbors). + + iterations : {int, float}, optional + The dilation step of the closing, then the erosion step are each + repeated `iterations` times (one, by default). 
If iterations is + less than 1, each operations is repeated until the result does + not change anymore. + + output : ndarray, optional + Array of the same shape as input, into which the output is placed. + By default, a new array is created. + + origin : int or tuple of ints, optional + Placement of the filter, by default 0. + + Returns + ------- + + out : ndarray of bools + Closing of the input by the structuring element. + + + See also + -------- + + grey_closing, binary_opening, binary_dilation, binary_erosion, + generate_binary_structure + + Notes + ----- + + *Closing* [1]_ is a mathematical morphology operation [2]_ that + consists in the succession of a dilation and an erosion of the + input with the same structuring element. Closing therefore fills + holes smaller than the structuring element. + + Together with *opening* (`binary_opening`), closing can be used for + noise removal. + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Closing_%28morphology%29 + + .. [2] http://en.wikipedia.org/wiki/Mathematical_morphology + + Examples + -------- + + >>> a = np.zeros((5,5), dtype=np.int) + >>> a[1:-1, 1:-1] = 1; a[2,2] = 0 + >>> a + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 0, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]]) + >>> # Closing removes small holes + >>> ndimage.binary_closing(a).astype(np.int) + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]]) + >>> # Closing is the erosion of the dilation of the input + >>> ndimage.binary_dilation(a).astype(np.int) + array([[0, 1, 1, 1, 0], + [1, 1, 1, 1, 1], + [1, 1, 1, 1, 1], + [1, 1, 1, 1, 1], + [0, 1, 1, 1, 0]]) + >>> ndimage.binary_erosion(ndimage.binary_dilation(a)).astype(np.int) + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]]) + + + >>> a = np.zeros((7,7), dtype=np.int) + >>> a[1:6, 2:5] = 1; a[1:3,3] = 0 + >>> a + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 1, 0, 0], + [0, 0, 1, 0, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> # In addition to removing holes, closing can also + >>> # coarsen boundaries with fine hollows. + >>> ndimage.binary_closing(a).astype(np.int) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.binary_closing(a, structure=np.ones((2,2))).astype(np.int) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) - An output array can optionally be provided. The origin parameter - controls the placement of the filter. If no structuring element is - provided an element is generated with a squared connectivity equal - to one. The iterations parameter gives the number of times the - dilations and then the erosions are done. """ input = numpy.asarray(input) if structure is None: @@ -244,15 +789,104 @@ def binary_hit_or_miss(input, structure1 = None, structure2 = None, output = None, origin1 = 0, origin2 = None): - """Multi-dimensional binary hit-or-miss transform. + """ + Multi-dimensional binary hit-or-miss transform. + + The hit-or-miss transform finds the locations of a given pattern + inside the input image. + + Parameters + ---------- + + input : array_like (cast to booleans) + Binary image where a pattern is to be detected. 
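As a side note, the hit-or-miss transform described here can be written as the intersection of two erosions: `structure1` eroded against the foreground and `structure2` eroded against the background. The sketch below is only an illustration of that textbook decomposition (it reuses the 7x7 example array from the Examples section further down; `binary_hit_or_miss` itself also handles origins and border effects, so the plain decomposition is not guaranteed to match at the image edges):

    import numpy as np
    from scipy import ndimage

    a = np.zeros((7, 7), dtype=int)
    a[1, 1] = 1
    a[2:4, 2:4] = 1
    a[4:6, 4:6] = 1

    structure1 = np.array([[1, 0, 0], [0, 1, 1], [0, 1, 1]])
    structure2 = np.logical_not(structure1)   # the default second structure

    # Textbook hit-or-miss: structure1 must fit the foreground and
    # structure2 must fit the background at the same location.
    hits = ndimage.binary_erosion(a, structure1)
    misses = ndimage.binary_erosion(np.logical_not(a), structure2)
    textbook = np.logical_and(hits, misses)

    assert np.array_equal(textbook,
                          ndimage.binary_hit_or_miss(a, structure1=structure1))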
+ + structure1 : array_like (cast to booleans), optional + Part of the structuring element to be fitted to the foreground + (non-zero elements) of `input`. If no value is provided, a + structure of square connectivity 1 is chosen. + + structure2 : array_like (cast to booleans), optional + Second part of the structuring element that has to miss completely + the foreground. If no value is provided, the complementary of + `structure1` is taken. + + output : ndarray, optional + Array of the same shape as input, into which the output is placed. + By default, a new array is created. + + origin1 : int or tuple of ints, optional + Placement of the first part of the structuring element `structure1`, + by default 0 for a centered structure. + + origin2 : int or tuple of ints, optional + Placement of the second part of the structuring element `structure2`, + by default 0 for a centered structure. If a value is provided for + `origin1` and not for `origin2`, then `origin2` is set to `origin1`. + + Returns + ------- + + output : ndarray + Hit-or-miss transform of `input` with the given structuring + element (`structure1`, `structure2`). + + See also + -------- + + ndimage.morphology, binary_erosion + + + Notes + ----- + + + + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Hit-or-miss_transform + + Examples + -------- + + >>> a = np.zeros((7,7), dtype=np.int) + >>> a[1, 1] = 1; a[2:4, 2:4] = 1; a[4:6, 4:6] = 1 + >>> a + array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> structure1 = np.array([[1, 0, 0], [0, 1, 1], [0, 1, 1]]) + >>> structure1 + array([[1, 0, 0], + [0, 1, 1], + [0, 1, 1]]) + >>> # Find the matches of structure1 in the array a + >>> ndimage.binary_hit_or_miss(a, structure1=structure1).astype(np.int) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> # Change the origin of the filter + >>> # origin1=1 is equivalent to origin1=(1,1) here + >>> ndimage.binary_hit_or_miss(a, structure1=structure1,\\ + ... origin1=1).astype(np.int) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0], + [0, 0, 0, 0, 0, 0, 0]]) - An output array can optionally be provided. The origin parameters - controls the placement of the structuring elements. If the first - structuring element is not given one is generated with a squared - connectivity equal to one. If the second structuring element is - not provided, it set equal to the inverse of the first structuring - element. If the origin for the second structure is equal to None - it is set equal to the origin of the first. """ input = numpy.asarray(input) if structure1 is None: @@ -279,28 +913,218 @@ def binary_propagation(input, structure = None, mask = None, output = None, border_value = 0, origin = 0): - """Multi-dimensional binary propagation with the given structure. + """ + Multi-dimensional binary propagation with the given structuring element. + + + Parameters + ---------- + + input : array_like + Binary image to be propagated inside `mask`. + + structure : array_like + Structuring element used in the successive dilations. The output + may depend on the structuring element, especially if `mask` has + several connex components. 
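For intuition, the propagation amounts to repeating a masked dilation until nothing changes any more. The loop below is a rough, illustrative sketch of that fixed-point view (the `seed` and `mask` arrays are made up for the example; the function itself reaches the same result in a single call, as the Notes further down explain):

    import numpy as np
    from scipy import ndimage

    seed = np.zeros((8, 8), dtype=bool)
    seed[2, 2] = True
    mask = np.zeros((8, 8), dtype=bool)
    mask[1:4, 1:4] = True
    mask[6:8, 6:8] = True

    # Dilate repeatedly, letting only elements inside the mask change,
    # until a fixed point is reached.
    grown = seed.copy()
    while True:
        step = ndimage.binary_dilation(grown, mask=mask)
        if np.array_equal(step, grown):
            break
        grown = step

    # Only the connected component of the mask containing the seed fills up.
    assert np.array_equal(grown, ndimage.binary_propagation(seed, mask=mask))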
If no structuring element is + provided, an element is generated with a squared connectivity equal + to one. + + mask : array_like + Binary mask defining the region into which `input` is allowed to + propagate. - An output array can optionally be provided. The origin parameter - controls the placement of the filter. If no structuring element is - provided an element is generated with a squared connectivity equal - to one. If a mask is given, only those elements with a true value at - the corresponding mask element are. + output : ndarray, optional + Array of the same shape as input, into which the output is placed. + By default, a new array is created. + + origin : int or tuple of ints, optional + Placement of the filter, by default 0. + + Returns + ------- + + ouput : ndarray + Binary propagation of `input` inside `mask`. + + Notes + ----- This function is functionally equivalent to calling binary_dilation with the number of iterations less then one: iterative dilation until the result does not change anymore. + + The succession of an erosion and propagation inside the original image + can be used instead of an *opening* for deleting small objects while + keeping the contours of larger objects untouched. + + References + ---------- + + .. [1] http://cmm.ensmp.fr/~serra/cours/pdf/en/ch6en.pdf, slide 15. + + .. [2] http://www.qi.tnw.tudelft.nl/Courses/FIP/noframes/fip-Morpholo.html#Heading102 + + Examples + -------- + + >>> input = np.zeros((8, 8), dtype=np.int) + >>> input[2, 2] = 1 + >>> mask = np.zeros((8, 8), dtype=np.int) + >>> mask[1:4, 1:4] = mask[4, 4] = mask[6:8, 6:8] = 1 + >>> input + array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]]) + >>> mask + array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 1, 1], + [0, 0, 0, 0, 0, 0, 1, 1]]) + >>> ndimage.binary_propagation(input, mask=mask).astype(np.int) + array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.binary_propagation(input, mask=mask,\\ + ... 
structure=np.ones((3,3))).astype(np.int) + array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]]) + + >>> # Comparison between opening and erosion+propagation + >>> a = np.zeros((6,6), dtype=np.int) + >>> a[2:5, 2:5] = 1; a[0, 0] = 1; a[5, 5] = 1 + >>> a + array([[1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 1]]) + >>> ndimage.binary_opening(a).astype(np.int) + array([[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0], + [0, 0, 1, 1, 1, 0], + [0, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0]]) + >>> b = ndimage.binary_erosion(a) + >>> b.astype(int) + array([[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0]]) + >>> ndimage.binary_propagation(b, mask=a).astype(np.int) + array([[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0]]) + """ return binary_dilation(input, structure, -1, mask, output, border_value, origin) def binary_fill_holes(input, structure = None, output = None, origin = 0): - """Fill the holes in binary objects. + """ + Fill the holes in binary objects. + + + Parameters + ---------- + + input: array_like + n-dimensional binary array with holes to be filled + + structure: array_like, optional + Structuring element used in the computation; large-size elements + make computations faster but may miss holes separated from the + background by thin regions. The default element (with a square + connectivity equal to one) yields the intuitive result where all + holes in the input have been filled. + + output: ndarray, optional + Array of the same shape as input, into which the output is placed. + By default, a new array is created. + + origin: int, tuple of ints, optional + Position of the structuring element. + + Returns + ------- + + out: ndarray + Transformation of the initial image `input` where holes have been + filled. + + See also + -------- + + binary_dilation, binary_propagation, label + + Notes + ----- + + The algorithm used in this function consists in invading the complementary + of the shapes in `input` from the outer boundary of the image, + using binary dilations. Holes are not connected to the boundary and are + therefore not invaded. The result is the complementary subset of the + invaded region. + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Mathematical_morphology + + + Examples + -------- + + >>> a = np.zeros((5, 5), dtype=int) + >>> a[1:4, 1:4] = 1 + >>> a[2,2] = 0 + >>> a + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 0, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]]) + >>> ndimage.binary_fill_holes(a).astype(int) + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]]) + >>> # Too big structuring element + >>> ndimage.binary_fill_holes(a, structure=np.ones((5,5))).astype(int) + array([[0, 0, 0, 0, 0], + [0, 1, 1, 1, 0], + [0, 1, 0, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]]) - An output array can optionally be provided. The origin parameter - controls the placement of the filter. If no structuring element is - provided an element is generated with a squared connectivity equal - to one. 
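The Notes above can be pictured concretely: flood the background from the image border and keep everything the flood never reaches. The sketch below is illustrative only (it is not the function's own code, which follows) and reproduces the first docstring example:

    import numpy as np
    from scipy import ndimage

    a = np.zeros((5, 5), dtype=int)
    a[1:4, 1:4] = 1
    a[2, 2] = 0                      # a ring with a one-pixel hole

    background = np.logical_not(a)
    seed = np.zeros_like(background)
    seed[0, :] = seed[-1, :] = seed[:, 0] = seed[:, -1] = True
    seed &= background               # background pixels on the image border

    # Flood the background from the border; the hole is never reached.
    outside = ndimage.binary_propagation(seed, mask=background)
    filled = np.logical_not(outside)

    assert np.array_equal(filled, ndimage.binary_fill_holes(a))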
""" mask = numpy.logical_not(input) tmp = numpy.zeros(mask.shape, bool) @@ -316,13 +1140,123 @@ def grey_erosion(input, size = None, footprint = None, structure = None, output = None, mode = "reflect", cval = 0.0, origin = 0): - """Calculate a grey values erosion. + """ + Calculate a greyscale erosion, using either a structuring element, + or a footprint corresponding to a flat structuring element. + + Grayscale erosion is a mathematical morphology operation. For the + simple case of a full and flat structuring element, it can be viewed + as a minimum filter over a sliding window. + + Parameters + ---------- + + input : array_like + Array over which the grayscale erosion is to be computed. + + size : tuple of ints + Shape of a flat and full structuring element used for the + grayscale erosion. Optional if `footprint` is provided. + + footprint : array of ints, optional + Positions of non-infinite elements of a flat structuring element + used for the grayscale erosion. Non-zero values give the set of + neighbors of the center over which the minimum is chosen. + + structure : array of ints, optional + Structuring element used for the grayscale erosion. `structure` + may be a non-flat structuring element. + + output : array, optional + An array used for storing the ouput of the erosion may be provided. + + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The `mode` parameter determines how the array borders are + handled, where `cval` is the value when mode is equal to + 'constant'. Default is 'reflect' + + cval : scalar, optional + Value to fill past edges of input if `mode` is 'constant'. Default + is 0.0. + + origin : scalar, optional + The `origin` parameter controls the placement of the filter. + Default 0 + + + Returns + ------- + + output : ndarray + Grayscale erosion of `input`. + + See also + -------- + + binary_erosion, grey_dilation, grey_opening, grey_closing + + generate_binary_structure + + ndimage.minimum_filter + + Notes + ----- + + The grayscale erosion of an image input by a structuring element s defined + over a domain E is given by: + + (input+s)(x) = min {input(y) - s(x-y), for y in E} + + In particular, for structuring elements defined as + s(y) = 0 for y in E, the grayscale erosion computes the minimum of the + input image inside a sliding window defined by E. + + Grayscale erosion [1]_ is a *mathematical morphology* operation [2]_. + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Erosion_%28morphology%29 + + .. 
[2] http://en.wikipedia.org/wiki/Mathematical_morphology + + Examples + -------- + + >>> a = np.zeros((7,7), dtype=np.int) + >>> a[1:6, 1:6] = 3 + >>> a[4,4] = 2; a[2,3] = 1 + >>> a + array([[0, 0, 0, 0, 0, 0, 0], + [0, 3, 3, 3, 3, 3, 0], + [0, 3, 3, 1, 3, 3, 0], + [0, 3, 3, 3, 3, 3, 0], + [0, 3, 3, 3, 2, 3, 0], + [0, 3, 3, 3, 3, 3, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.grey_erosion(a, size=(3,3)) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 3, 2, 2, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> footprint = ndimage.generate_binary_structure(2, 1) + >>> footprint + array([[False, True, False], + [ True, True, True], + [False, True, False]], dtype=bool) + >>> # Diagonally-connected elements are not considered neighbors + >>> ndimage.grey_erosion(a, size=(3,3), footprint=footprint) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 3, 1, 2, 0, 0], + [0, 0, 3, 2, 2, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) - Either a size or a footprint, or the structure must be provided. An - output array can optionally be provided. The origin parameter - controls the placement of the filter. The mode parameter - determines how the array borders are handled, where cval is the - value when mode is equal to 'constant'. """ return filters._min_or_max_filter(input, size, footprint, structure, output, mode, cval, origin, 1) @@ -330,13 +1264,139 @@ def grey_dilation(input, size = None, footprint = None, structure = None, output = None, mode = "reflect", cval = 0.0, origin = 0): - """Calculate a grey values dilation. + """ + Calculate a greyscale dilation, using either a structuring element, + or a footprint corresponding to a flat structuring element. + + Grayscale dilation is a mathematical morphology operation. For the + simple case of a full and flat structuring element, it can be viewed + as a maximum filter over a sliding window. + + Parameters + ---------- + + input : array_like + Array over which the grayscale dilation is to be computed. + + size : tuple of ints + Shape of a flat and full structuring element used for the + grayscale dilation. Optional if `footprint` is provided. + + footprint : array of ints, optional + Positions of non-infinite elements of a flat structuring element + used for the grayscale dilation. Non-zero values give the set of + neighbors of the center over which the maximum is chosen. + + structure : array of ints, optional + Structuring element used for the grayscale dilation. `structure` + may be a non-flat structuring element. + + output : array, optional + An array used for storing the ouput of the dilation may be provided. + + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The `mode` parameter determines how the array borders are + handled, where `cval` is the value when mode is equal to + 'constant'. Default is 'reflect' + + cval : scalar, optional + Value to fill past edges of input if `mode` is 'constant'. Default + is 0.0. + + origin : scalar, optional + The `origin` parameter controls the placement of the filter. + Default 0 + + + Returns + ------- + + output : ndarray + Grayscale dilation of `input`. 
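Because a flat, full structuring element turns the dilation into a moving maximum, the `size=` form can be cross-checked against `ndimage.maximum_filter` (and the erosion against `ndimage.minimum_filter`). A small consistency sketch, using an arbitrary random array purely for illustration:

    import numpy as np
    from scipy import ndimage

    a = np.random.randint(0, 10, (6, 6))

    # Flat 3x3 structuring element: greyscale dilation is a moving maximum,
    # greyscale erosion a moving minimum, over the same 3x3 window.
    assert np.array_equal(ndimage.grey_dilation(a, size=(3, 3)),
                          ndimage.maximum_filter(a, size=3))
    assert np.array_equal(ndimage.grey_erosion(a, size=(3, 3)),
                          ndimage.minimum_filter(a, size=3))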
+ + See also + -------- + + binary_dilation, grey_erosion, grey_closing, grey_opening + + generate_binary_structure + + ndimage.maximum_filter + + Notes + ----- + + The grayscale dilation of an image input by a structuring element s defined + over a domain E is given by: + + (input+s)(x) = max {input(y) + s(x-y), for y in E} + + In particular, for structuring elements defined as + s(y) = 0 for y in E, the grayscale dilation computes the maximum of the + input image inside a sliding window defined by E. + + Grayscale dilation [1]_ is a *mathematical morphology* operation [2]_. + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Dilation_%28morphology%29 + + .. [2] http://en.wikipedia.org/wiki/Mathematical_morphology + + + Examples + -------- + + >>> a = np.zeros((7,7), dtype=np.int) + >>> a[2:5, 2:5] = 1 + >>> a[4,4] = 2; a[2,3] = 3 + >>> a + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 3, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 2, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.grey_dilation(a, size=(3,3)) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 3, 3, 3, 1, 0], + [0, 1, 3, 3, 3, 1, 0], + [0, 1, 3, 3, 3, 2, 0], + [0, 1, 1, 2, 2, 2, 0], + [0, 1, 1, 2, 2, 2, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.grey_dilation(a, footprint=np.ones((3,3))) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 3, 3, 3, 1, 0], + [0, 1, 3, 3, 3, 1, 0], + [0, 1, 3, 3, 3, 2, 0], + [0, 1, 1, 2, 2, 2, 0], + [0, 1, 1, 2, 2, 2, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> s = ndimage.generate_binary_structure(2,1) + >>> s + array([[False, True, False], + [ True, True, True], + [False, True, False]], dtype=bool) + >>> ndimage.grey_dilation(a, footprint=s) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 3, 1, 0, 0], + [0, 1, 3, 3, 3, 1, 0], + [0, 1, 1, 3, 2, 1, 0], + [0, 1, 1, 2, 2, 2, 0], + [0, 0, 1, 1, 2, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.grey_dilation(a, size=(3,3), structure=np.ones((3,3))) + array([[1, 1, 1, 1, 1, 1, 1], + [1, 2, 4, 4, 4, 2, 1], + [1, 2, 4, 4, 4, 2, 1], + [1, 2, 4, 4, 4, 3, 1], + [1, 2, 2, 3, 3, 3, 1], + [1, 2, 2, 3, 3, 3, 1], + [1, 1, 1, 1, 1, 1, 1]]) - Either a size or a footprint, or the structure must be - provided. An output array can optionally be provided. The origin - parameter controls the placement of the filter. The mode parameter - determines how the array borders are handled, where cval is the - value when mode is equal to 'constant'. """ if structure is not None: structure = numpy.asarray(structure) @@ -362,13 +1422,92 @@ def grey_opening(input, size = None, footprint = None, structure = None, output = None, mode = "reflect", cval = 0.0, origin = 0): - """Multi-dimensional grey valued opening. + """ + Multi-dimensional greyscale opening. + + A greyscale opening consists in the succession of a greyscale erosion, + and a greyscale dilation. + + Parameters + ---------- + + input : array_like + Array over which the grayscale opening is to be computed. + + size : tuple of ints + Shape of a flat and full structuring element used for the + grayscale opening. Optional if `footprint` is provided. + + footprint : array of ints, optional + Positions of non-infinite elements of a flat structuring element + used for the grayscale opening. + + structure : array of ints, optional + Structuring element used for the grayscale opening. `structure` + may be a non-flat structuring element. + + output : array, optional + An array used for storing the ouput of the opening may be provided. 
+ + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The `mode` parameter determines how the array borders are + handled, where `cval` is the value when mode is equal to + 'constant'. Default is 'reflect' + + cval : scalar, optional + Value to fill past edges of input if `mode` is 'constant'. Default + is 0.0. + + origin : scalar, optional + The `origin` parameter controls the placement of the filter. + Default 0 + + Returns + ------- + + output : ndarray + Result of the grayscale opening of `input` with `structure`. + + See also + -------- + + binary_opening, grey_dilation, grey_erosion, grey_closing + + generate_binary_structure + + Notes + ----- + + The action of a grayscale opening with a flat structuring element amounts + to smoothen high local maxima, whereas binary opening erases small objects. + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Mathematical_morphology + + + Examples + -------- + + >>> a = np.arange(36).reshape((6,6)) + >>> a[3, 3] = 50 + >>> a + array([[ 0, 1, 2, 3, 4, 5], + [ 6, 7, 8, 9, 10, 11], + [12, 13, 14, 15, 16, 17], + [18, 19, 20, 50, 22, 23], + [24, 25, 26, 27, 28, 29], + [30, 31, 32, 33, 34, 35]]) + >>> ndimage.grey_opening(a, size=(3,3)) + array([[ 0, 1, 2, 3, 4, 4], + [ 6, 7, 8, 9, 10, 10], + [12, 13, 14, 15, 16, 16], + [18, 19, 20, 22, 22, 22], + [24, 25, 26, 27, 28, 28], + [24, 25, 26, 27, 28, 28]]) + >>> # Note that the local maximum a[3,3] has disappeared - Either a size or a footprint, or the structure must be provided. An - output array can optionally be provided. The origin parameter - controls the placement of the filter. The mode parameter - determines how the array borders are handled, where cval is the - value when mode is equal to 'constant'. """ tmp = grey_erosion(input, size, footprint, structure, None, mode, cval, origin) @@ -378,13 +1517,92 @@ def grey_closing(input, size = None, footprint = None, structure = None, output = None, mode = "reflect", cval = 0.0, origin = 0): - """Multi-dimensional grey valued closing. + """ + Multi-dimensional greyscale closing. + + A greyscale closing consists in the succession of a greyscale dilation, + and a greyscale erosion. + + Parameters + ---------- + + input : array_like + Array over which the grayscale closing is to be computed. + + size : tuple of ints + Shape of a flat and full structuring element used for the + grayscale closing. Optional if `footprint` is provided. + + footprint : array of ints, optional + Positions of non-infinite elements of a flat structuring element + used for the grayscale closing. + + structure : array of ints, optional + Structuring element used for the grayscale closing. `structure` + may be a non-flat structuring element. + + output : array, optional + An array used for storing the ouput of the closing may be provided. + + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The `mode` parameter determines how the array borders are + handled, where `cval` is the value when mode is equal to + 'constant'. Default is 'reflect' + + cval : scalar, optional + Value to fill past edges of input if `mode` is 'constant'. Default + is 0.0. + + origin : scalar, optional + The `origin` parameter controls the placement of the filter. + Default 0 + + Returns + ------- + + output : ndarray + Result of the grayscale closing of `input` with `structure`. 
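Since greyscale opening and closing are plain compositions of the erosion and dilation documented above, a quick consistency check can be written directly in terms of `grey_erosion` and `grey_dilation`. The small arange-based array is only a toy example:

    import numpy as np
    from scipy import ndimage

    a = np.arange(36).reshape((6, 6))
    a[3, 3] = 0

    opened = ndimage.grey_dilation(ndimage.grey_erosion(a, size=(3, 3)),
                                   size=(3, 3))
    closed = ndimage.grey_erosion(ndimage.grey_dilation(a, size=(3, 3)),
                                  size=(3, 3))

    assert np.array_equal(opened, ndimage.grey_opening(a, size=(3, 3)))
    assert np.array_equal(closed, ndimage.grey_closing(a, size=(3, 3)))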
+ + See also + -------- + + binary_closing, grey_dilation, grey_erosion, grey_opening + + generate_binary_structure + + Notes + ----- + + The action of a grayscale closing with a flat structuring element amounts + to smoothen deep local minima, whereas binary closing fills small holes. + + References + ---------- + + .. [1] http://en.wikipedia.org/wiki/Mathematical_morphology + + + Examples + -------- + + >>> a = np.arange(36).reshape((6,6)) + >>> a[3,3] = 0 + >>> a + array([[ 0, 1, 2, 3, 4, 5], + [ 6, 7, 8, 9, 10, 11], + [12, 13, 14, 15, 16, 17], + [18, 19, 20, 0, 22, 23], + [24, 25, 26, 27, 28, 29], + [30, 31, 32, 33, 34, 35]]) + >>> ndimage.grey_closing(a, size=(3,3)) + array([[ 7, 7, 8, 9, 10, 11], + [ 7, 7, 8, 9, 10, 11], + [13, 13, 14, 15, 16, 17], + [19, 19, 20, 20, 22, 23], + [25, 25, 26, 27, 28, 29], + [31, 31, 32, 33, 34, 35]]) + >>> # Note that the local minimum a[3,3] has disappeared - Either a size or a footprint, or the structure must be provided. An - output array can optionally be provided. The origin parameter - controls the placement of the filter. The mode parameter - determines how the array borders are handled, where cval is the - value when mode is equal to 'constant'. """ tmp = grey_dilation(input, size, footprint, structure, None, mode, cval, origin) @@ -395,13 +1613,120 @@ def morphological_gradient(input, size = None, footprint = None, structure = None, output = None, mode = "reflect", cval = 0.0, origin = 0): - """Multi-dimensional morphological gradient. + """ + Multi-dimensional morphological gradient. + + The morphological gradient is calculated as the difference between a + dilation and an erosion of the input with a given structuring element. + + + Parameters + ---------- + + input : array_like + Array over which to compute the morphlogical gradient. + + size : tuple of ints + Shape of a flat and full structuring element used for the + mathematical morphology operations. Optional if `footprint` + is provided. A larger `size` yields a more blurred gradient. + + footprint : array of ints, optional + Positions of non-infinite elements of a flat structuring element + used for the morphology operations. Larger footprints + give a more blurred morphological gradient. + + structure : array of ints, optional + Structuring element used for the morphology operations. + `structure` may be a non-flat structuring element. + + output : array, optional + An array used for storing the ouput of the morphological gradient + may be provided. + + mode : {'reflect','constant','nearest','mirror', 'wrap'}, optional + The `mode` parameter determines how the array borders are + handled, where `cval` is the value when mode is equal to + 'constant'. Default is 'reflect' + + cval : scalar, optional + Value to fill past edges of input if `mode` is 'constant'. Default + is 0.0. + + origin : scalar, optional + The `origin` parameter controls the placement of the filter. + Default 0 + + Returns + ------- + + output : ndarray + Morphological gradient of `input`. + + See also + -------- + + grey_dilation, grey_erosion + + ndimage.gaussian_gradient_magnitude + + Notes + ----- + + For a flat structuring element, the morphological gradient + computed at a given point corresponds to the maximal difference + between elements of the input among the elements covered by the + structuring element centered on the point. + + References + ---------- + + .. 
[1] http://en.wikipedia.org/wiki/Mathematical_morphology + + Examples + -------- + + >>> a = np.zeros((7,7), dtype=np.int) + >>> a[2:5, 2:5] = 1 + >>> ndimage.morphological_gradient(a, size=(3,3)) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> # The morphological gradient is computed as the difference + >>> # between a dilation and an erosion + >>> ndimage.grey_dilation(a, size=(3,3)) -\\ + ... ndimage.grey_erosion(a, size=(3,3)) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> a = np.zeros((7,7), dtype=np.int) + >>> a[2:5, 2:5] = 1 + >>> a[4,4] = 2; a[2,3] = 3 + >>> a + array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 3, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 2, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]]) + >>> ndimage.morphological_gradient(a, size=(3,3)) + array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 3, 3, 3, 1, 0], + [0, 1, 3, 3, 3, 1, 0], + [0, 1, 3, 2, 3, 2, 0], + [0, 1, 1, 2, 2, 2, 0], + [0, 1, 1, 2, 2, 2, 0], + [0, 0, 0, 0, 0, 0, 0]]) - Either a size or a footprint, or the structure must be provided. An - output array can optionally be provided. The origin parameter - controls the placement of the filter. The mode parameter - determines how the array borders are handled, where cval is the - value when mode is equal to 'constant'. """ tmp = grey_dilation(input, size, footprint, structure, None, mode, cval, origin) @@ -470,13 +1795,26 @@ def black_tophat(input, size = None, footprint = None, structure = None, output = None, mode = "reflect", cval = 0.0, origin = 0): - """Multi-dimensional black tophat filter. + """ + Multi-dimensional black tophat filter. Either a size or a footprint, or the structure must be provided. An output array can optionally be provided. The origin parameter controls the placement of the filter. The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. + + See also + -------- + + grey_opening, grey_closing + + References + ---------- + + .. [1] http://cmm.ensmp.fr/Micromorph/course/sld011.htm, and following slides + .. [2] http://en.wikipedia.org/wiki/Top-hat_transform + """ tmp = grey_dilation(input, size, footprint, structure, None, mode, cval, origin) diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/setup.py python-scipy-0.8.0+dfsg1/scipy/ndimage/setup.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/setup.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/setup.py 2010-07-26 15:48:32.000000000 +0100 @@ -12,6 +12,7 @@ "src/ni_measure.c", "src/ni_morphology.c","src/ni_support.c"], include_dirs=['src']+[get_include()], + extra_compile_args=['-Wall'], ) config.add_data_dir('tests') diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/src/nd_image.c python-scipy-0.8.0+dfsg1/scipy/ndimage/src/nd_image.c --- python-scipy-0.7.2+dfsg1/scipy/ndimage/src/nd_image.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/src/nd_image.c 2010-07-26 15:48:32.000000000 +0100 @@ -731,399 +731,6 @@ return PyErr_Occurred() ? 
NULL : Py_BuildValue(""); } -static int _NI_GetIndices(PyObject* indices_object, - maybelong** result_indices, maybelong* min_label, - maybelong* max_label, maybelong* n_results) -{ - maybelong *indices = NULL, n_indices, ii; - - if (indices_object == Py_None) { - *min_label = -1; - *n_results = 1; - } else { - n_indices = NI_ObjectToLongSequenceAndLength(indices_object, &indices); - if (n_indices < 0) - goto exit; - if (n_indices < 1) { - PyErr_SetString(PyExc_RuntimeError, "no correct indices provided"); - goto exit; - } else { - *min_label = *max_label = indices[0]; - if (*min_label < 0) { - PyErr_SetString(PyExc_RuntimeError, - "negative indices not allowed"); - goto exit; - } - for(ii = 1; ii < n_indices; ii++) { - if (indices[ii] < 0) { - PyErr_SetString(PyExc_RuntimeError, - "negative indices not allowed"); - goto exit; - } - if (indices[ii] < *min_label) - *min_label = indices[ii]; - if (indices[ii] > *max_label) - *max_label = indices[ii]; - } - *result_indices = (maybelong*)malloc((*max_label - *min_label + 1) * - sizeof(maybelong)); - if (!*result_indices) { - PyErr_NoMemory(); - goto exit; - } - for(ii = 0; ii < *max_label - *min_label + 1; ii++) - (*result_indices)[ii] = -1; - *n_results = 0; - for(ii = 0; ii < n_indices; ii++) { - if ((*result_indices)[indices[ii] - *min_label] >= 0) { - PyErr_SetString(PyExc_RuntimeError, "duplicate index"); - goto exit; - } - (*result_indices)[indices[ii] - *min_label] = ii; - ++(*n_results); - } - } - } - exit: - if (indices) - free(indices); - return PyErr_Occurred() == NULL; -} - - -PyObject* _NI_BuildMeasurementResultArrayObject(maybelong n_results, - PyArrayObject** values) -{ - PyObject *result = NULL; - if (n_results > 1) { - result = PyList_New(n_results); - if (result) { - maybelong ii; - for(ii = 0; ii < n_results; ii++) { - PyList_SET_ITEM(result, ii, (PyObject*)values[ii]); - Py_XINCREF(values[ii]); - } - } - } else { - result = (PyObject*)values[0]; - Py_XINCREF(values[0]); - } - return result; -} - - -PyObject* _NI_BuildMeasurementResultDouble(maybelong n_results, - double* values) -{ - PyObject *result = NULL; - if (n_results > 1) { - result = PyList_New(n_results); - if (result) { - int ii; - for(ii = 0; ii < n_results; ii++) { - PyObject* val = PyFloat_FromDouble(values[ii]); - if (!val) { - Py_XDECREF(result); - return NULL; - } - PyList_SET_ITEM(result, ii, val); - } - } - } else { - result = Py_BuildValue("d", values[0]); - } - return result; -} - - -PyObject* _NI_BuildMeasurementResultDoubleTuple(maybelong n_results, - int tuple_size, double* values) -{ - PyObject *result = NULL; - maybelong ii; - int jj; - - if (n_results > 1) { - result = PyList_New(n_results); - if (result) { - for(ii = 0; ii < n_results; ii++) { - PyObject* val = PyTuple_New(tuple_size); - if (!val) { - Py_XDECREF(result); - return NULL; - } - for(jj = 0; jj < tuple_size; jj++) { - maybelong idx = jj + ii * tuple_size; - PyTuple_SetItem(val, jj, PyFloat_FromDouble(values[idx])); - if (PyErr_Occurred()) { - Py_XDECREF(result); - return NULL; - } - } - PyList_SET_ITEM(result, ii, val); - } - } - } else { - result = PyTuple_New(tuple_size); - if (result) { - for(ii = 0; ii < tuple_size; ii++) { - PyTuple_SetItem(result, ii, PyFloat_FromDouble(values[ii])); - if (PyErr_Occurred()) { - Py_XDECREF(result); - return NULL; - } - } - } - } - return result; -} - - -PyObject* _NI_BuildMeasurementResultInt(maybelong n_results, - maybelong* values) -{ - PyObject *result = NULL; - if (n_results > 1) { - result = PyList_New(n_results); - if (result) { - maybelong 
ii; - for(ii = 0; ii < n_results; ii++) { - PyObject* val = PyInt_FromLong(values[ii]); - if (!val) { - Py_XDECREF(result); - return NULL; - } - PyList_SET_ITEM(result, ii, val); - } - } - } else { - result = Py_BuildValue("l", values[0]); - } - return result; -} - - -static PyObject *Py_Statistics(PyObject *obj, PyObject *args) -{ - PyArrayObject *input = NULL, *labels = NULL; - PyObject *indices_object, *result = NULL; - PyObject *res1 = NULL, *res2 = NULL, *res3 = NULL, *res4 = NULL; - double *dresult1 = NULL, *dresult2 = NULL; - maybelong *lresult1 = NULL, *lresult2 = NULL; - maybelong min_label, max_label, *result_indices = NULL, n_results, ii; - int type; - - if (!PyArg_ParseTuple(args, "O&O&Oi", NI_ObjectToInputArray, &input, - NI_ObjectToOptionalInputArray, &labels, &indices_object, &type)) - goto exit; - - if (!_NI_GetIndices(indices_object, &result_indices, &min_label, - &max_label, &n_results)) - goto exit; - - if (type >= 0 && type <= 7) { - dresult1 = (double*)malloc(n_results * sizeof(double)); - if (!dresult1) { - PyErr_NoMemory(); - goto exit; - } - } - if (type == 2 || type == 7) { - dresult2 = (double*)malloc(n_results * sizeof(double)); - if (!dresult2) { - PyErr_NoMemory(); - goto exit; - } - } - if (type == 1 || type == 2 || (type >= 5 && type <= 7)) { - lresult1 = (maybelong*)malloc(n_results * sizeof(maybelong)); - if (!lresult1) { - PyErr_NoMemory(); - goto exit; - } - } - if (type == 7) { - lresult2 = (maybelong*)malloc(n_results * sizeof(maybelong)); - if (!lresult2) { - PyErr_NoMemory(); - goto exit; - } - } - switch(type) { - case 0: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, dresult1, NULL, NULL, NULL, NULL, NULL, NULL)) - goto exit; - result = _NI_BuildMeasurementResultDouble(n_results, dresult1); - break; - case 1: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, dresult1, lresult1, NULL, NULL, NULL, NULL, NULL)) - goto exit; - for(ii = 0; ii < n_results; ii++) - dresult1[ii] = lresult1[ii] > 0 ? 
dresult1[ii] / lresult1[ii] : 0.0; - - result = _NI_BuildMeasurementResultDouble(n_results, dresult1); - break; - case 2: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, dresult1, lresult1, dresult2, NULL, NULL, NULL, NULL)) - goto exit; - result = _NI_BuildMeasurementResultDouble(n_results, dresult2); - break; - case 3: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, NULL, NULL, NULL, dresult1, NULL, NULL, NULL)) - goto exit; - result = _NI_BuildMeasurementResultDouble(n_results, dresult1); - break; - case 4: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, NULL, NULL, NULL, NULL, dresult1, NULL, NULL)) - goto exit; - result = _NI_BuildMeasurementResultDouble(n_results, dresult1); - break; - case 5: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, NULL, NULL, NULL, dresult1, NULL, lresult1, NULL)) - goto exit; - result = _NI_BuildMeasurementResultInt(n_results, lresult1); - break; - case 6: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, NULL, NULL, NULL, NULL, dresult1, NULL, lresult1)) - goto exit; - result = _NI_BuildMeasurementResultInt(n_results, lresult1); - break; - case 7: - if (!NI_Statistics(input, labels, min_label, max_label, result_indices, - n_results, NULL, NULL, NULL, dresult1, dresult2, - lresult1, lresult2)) - goto exit; - res1 = _NI_BuildMeasurementResultDouble(n_results, dresult1); - res2 = _NI_BuildMeasurementResultDouble(n_results, dresult2); - res3 = _NI_BuildMeasurementResultInt(n_results, lresult1); - res4 = _NI_BuildMeasurementResultInt(n_results, lresult2); - if (!res1 || !res2 || !res3 || !res4) - goto exit; - result = Py_BuildValue("OOOO", res1, res2, res3, res4); - break; - default: - PyErr_SetString(PyExc_RuntimeError, "operation not supported"); - goto exit; - } - - exit: - Py_XDECREF(input); - Py_XDECREF(labels); - if (result_indices) - free(result_indices); - if (dresult1) - free(dresult1); - if (dresult2) - free(dresult2); - if (lresult1) - free(lresult1); - if (lresult2) - free(lresult2); - return result; -} - - -static PyObject *Py_CenterOfMass(PyObject *obj, PyObject *args) -{ - PyArrayObject *input = NULL, *labels = NULL; - PyObject *indices_object, *result = NULL; - double *center_of_mass = NULL; - maybelong min_label, max_label, *result_indices = NULL, n_results; - - if (!PyArg_ParseTuple(args, "O&O&O", NI_ObjectToInputArray, &input, - NI_ObjectToOptionalInputArray, &labels, &indices_object)) - goto exit; - - if (!_NI_GetIndices(indices_object, &result_indices, &min_label, - &max_label, &n_results)) - goto exit; - - center_of_mass = (double*)malloc(input->nd * n_results * - sizeof(double)); - if (!center_of_mass) { - PyErr_NoMemory(); - goto exit; - } - - if (!NI_CenterOfMass(input, labels, min_label, max_label, - result_indices, n_results, center_of_mass)) - goto exit; - - result = _NI_BuildMeasurementResultDoubleTuple(n_results, input->nd, - center_of_mass); - - exit: - Py_XDECREF(input); - Py_XDECREF(labels); - if (result_indices) - free(result_indices); - if (center_of_mass) - free(center_of_mass); - return result; -} - -static PyObject *Py_Histogram(PyObject *obj, PyObject *args) -{ - PyArrayObject *input = NULL, *labels = NULL, **histograms = NULL; - PyObject *indices_object, *result = NULL; - maybelong min_label, max_label, *result_indices = NULL, n_results; - maybelong jj, nbins; - long nbins_in; - double min, max; - - if (!PyArg_ParseTuple(args, 
"O&ddlO&O", NI_ObjectToInputArray, &input, - &min, &max, &nbins_in, NI_ObjectToOptionalInputArray, - &labels, &indices_object)) - goto exit; - nbins = nbins_in; - - if (!_NI_GetIndices(indices_object, &result_indices, &min_label, - &max_label, &n_results)) - goto exit; - - /* Set all pointers to NULL, so that freeing the memory */ - /* doesn't cause problems. */ - histograms = (PyArrayObject**)calloc(input->nd * n_results, - sizeof(PyArrayObject*)); - if (!histograms) { - PyErr_NoMemory(); - goto exit; - } - for(jj = 0; jj < n_results; jj++) { - histograms[jj] = NA_NewArray(NULL, tInt32, 1, &nbins); - if (!histograms[jj]) { - PyErr_NoMemory(); - goto exit; - } - } - - if (!NI_Histogram(input, labels, min_label, max_label, result_indices, - n_results, histograms, min, max, nbins)) - goto exit; - - result = _NI_BuildMeasurementResultArrayObject(n_results, histograms); - - exit: - Py_XDECREF(input); - Py_XDECREF(labels); - if (result_indices) - free(result_indices); - if (histograms) { - for(jj = 0; jj < n_results; jj++) { - Py_XDECREF(histograms[jj]); - } - free(histograms); - } - return result; -} - static PyObject *Py_DistanceTransformBruteForce(PyObject *obj, PyObject *args) { @@ -1293,12 +900,6 @@ METH_VARARGS, NULL}, {"watershed_ift", (PyCFunction)Py_WatershedIFT, METH_VARARGS, NULL}, - {"statistics", (PyCFunction)Py_Statistics, - METH_VARARGS, NULL}, - {"center_of_mass", (PyCFunction)Py_CenterOfMass, - METH_VARARGS, NULL}, - {"histogram", (PyCFunction)Py_Histogram, - METH_VARARGS, NULL}, {"distance_transform_bf", (PyCFunction)Py_DistanceTransformBruteForce, METH_VARARGS, NULL}, {"distance_transform_op", (PyCFunction)Py_DistanceTransformOnePass, diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/src/ni_support.c python-scipy-0.8.0+dfsg1/scipy/ndimage/src/ni_support.c --- python-scipy-0.7.2+dfsg1/scipy/ndimage/src/ni_support.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/src/ni_support.c 2010-07-26 15:48:32.000000000 +0100 @@ -168,6 +168,8 @@ switch (mode) { case NI_EXTEND_WRAP: + /* deal with situation where data is shorter than needed + for filling the line */ nextend = size1 / length; rextend = size1 - nextend * length; l1 = line + size1 + length - rextend; Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/dots.png and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/dots.png differ Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/slice112.raw and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/slice112.raw differ diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/test_doccer.py python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/test_doccer.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/test_doccer.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/test_doccer.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,89 +0,0 @@ -''' Some tests for the documenting decorator and support functions ''' - -import numpy as np - -from numpy.testing import assert_equal, assert_raises - -from nose.tools import assert_true - -import scipy.ndimage.doccer as sndd - -docstring = \ -"""Docstring - %(strtest1)s - %(strtest2)s - %(strtest3)s -""" -param_doc1 = \ -"""Another test - with some indent""" - -param_doc2 = \ -"""Another test, one line""" - -param_doc3 = \ -""" Another test - with some indent""" - -doc_dict = {'strtest1':param_doc1, - 'strtest2':param_doc2, - 'strtest3':param_doc3} - -filled_docstring = \ -"""Docstring - Another test - with some indent - 
Another test, one line - Another test - with some indent -""" - - -def test_unindent(): - yield assert_equal, sndd.unindent_string(param_doc1), param_doc1 - yield assert_equal, sndd.unindent_string(param_doc2), param_doc2 - yield assert_equal, sndd.unindent_string(param_doc3), param_doc1 - - -def test_unindent_dict(): - d2 = sndd.unindent_dict(doc_dict) - yield assert_equal, d2['strtest1'], doc_dict['strtest1'] - yield assert_equal, d2['strtest2'], doc_dict['strtest2'] - yield assert_equal, d2['strtest3'], doc_dict['strtest1'] - - -def test_docformat(): - udd = sndd.unindent_dict(doc_dict) - formatted = sndd.docformat(docstring, udd) - yield assert_equal, formatted, filled_docstring - single_doc = 'Single line doc %(strtest1)s' - formatted = sndd.docformat(single_doc, doc_dict) - # Note - initial indent of format string does not - # affect subsequent indent of inserted parameter - yield assert_equal, formatted, """Single line doc Another test - with some indent""" - - -def test_decorator(): - # with unindentation of parameters - decorator = sndd.filldoc(doc_dict, True) - @decorator - def func(): - """ Docstring - %(strtest3)s - """ - yield assert_equal, func.__doc__, """ Docstring - Another test - with some indent - """ - # without unindentation of parameters - decorator = sndd.filldoc(doc_dict, False) - @decorator - def func(): - """ Docstring - %(strtest3)s - """ - yield assert_equal, func.__doc__, """ Docstring - Another test - with some indent - """ diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/test_io.py python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/test_io.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/test_io.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/test_io.py 2010-07-26 15:48:32.000000000 +0100 @@ -0,0 +1,22 @@ +from numpy.testing import * +import scipy.ndimage as ndi + +import os + +try: + from PIL import Image + pil_missing = False +except ImportError: + pil_missing = True + +@dec.skipif(pil_missing, msg="The Python Image Library could not be found.") +def test_imread(): + lp = os.path.join(os.path.dirname(__file__), 'dots.png') + img = ndi.imread(lp) + assert_array_equal(img.shape, (300, 420, 3)) + + img = ndi.imread(lp, flatten=True) + assert_array_equal(img.shape, (300, 420)) + +if __name__ == "__main__": + run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/test_ndimage.py python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/test_ndimage.py --- python-scipy-0.7.2+dfsg1/scipy/ndimage/tests/test_ndimage.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/ndimage/tests/test_ndimage.py 2010-07-26 15:48:32.000000000 +0100 @@ -30,6 +30,7 @@ import math import numpy +import numpy as np from numpy import fft from numpy.testing import * import scipy.ndimage as ndimage @@ -48,6 +49,8 @@ a = numpy.asarray(a, numpy.complex128) b = numpy.asarray(b, numpy.complex128) t = ((a.real - b.real)**2).sum() + ((a.imag - b.imag)**2).sum() + if (a.dtype == numpy.object or b.dtype == numpy.object): + t = sum([diff(c,d)**2 for c,d in zip(a,b)]) else: a = numpy.asarray(a) a = a.astype(numpy.float64) @@ -2777,14 +2780,7 @@ input = numpy.array([[1, 2], [3, 4]], type) output = ndimage.sum(input, labels = labels, index = [4, 8, 2]) - self.failUnless(output == [4.0, 0.0, 5.0]) - - def test_sum13(self): - "sum 13" - input = numpy.array([1,2,3,4]) - labels = numpy.array([0,0,0,0]) - index = numpy.array([0],numpy.uint64) - self.failUnlessRaises(ValueError,ndimage.sum,input,labels,index) + 
self.failUnless(numpy.all(output == [4.0, 0.0, 5.0])) def test_mean01(self): "mean 1" @@ -2817,7 +2813,8 @@ input = numpy.array([[1, 2], [3, 4]], type) output = ndimage.mean(input, labels = labels, index = [4, 8, 2]) - self.failUnless(output == [4.0, 0.0, 2.5]) + self.failUnless(numpy.all(output[[0,2]] == [4.0, 2.5]) and + numpy.isnan(output[1])) def test_minimum01(self): "minimum 1" @@ -2850,7 +2847,7 @@ input = numpy.array([[1, 2], [3, 4]], type) output = ndimage.minimum(input, labels = labels, index = [2, 3, 8]) - self.failUnless(output == [2.0, 4.0, 0.0]) + self.failUnless(numpy.all(output == [2.0, 4.0, 0.0])) def test_maximum01(self): "maximum 1" @@ -2883,7 +2880,7 @@ input = numpy.array([[1, 2], [3, 4]], type) output = ndimage.maximum(input, labels = labels, index = [2, 3, 8]) - self.failUnless(output == [3.0, 4.0, 0.0]) + self.failUnless(numpy.all(output == [3.0, 4.0, 0.0])) def test_maximum05(self): "Ticket #501" @@ -2895,7 +2892,7 @@ for type in self.types: input = numpy.array([], type) output = ndimage.variance(input) - self.failUnless(float(output) == 0.0) + self.failUnless(numpy.isnan(output)) def test_variance02(self): "variance 2" @@ -2909,13 +2906,13 @@ for type in self.types: input = numpy.array([1, 3], type) output = ndimage.variance(input) - self.failUnless(output == 2.0) + self.failUnless(output == 1.0) def test_variance04(self): "variance 4" input = numpy.array([1, 0], bool) output = ndimage.variance(input) - self.failUnless(output == 0.5) + self.failUnless(output == 0.25) def test_variance05(self): "variance 5" @@ -2923,7 +2920,7 @@ for type in self.types: input = numpy.array([1, 3, 8], type) output = ndimage.variance(input, labels, 2) - self.failUnless(output == 2.0) + self.failUnless(output == 1.0) def test_variance06(self): "variance 6" @@ -2931,14 +2928,14 @@ for type in self.types: input = numpy.array([1, 3, 8, 10, 8], type) output = ndimage.variance(input, labels, [2, 3, 4]) - self.failUnless(output == [2.0, 2.0, 0.0]) + self.failUnless(numpy.all(output == [1.0, 1.0, 0.0])) def test_standard_deviation01(self): "standard deviation 1" for type in self.types: input = numpy.array([], type) output = ndimage.standard_deviation(input) - self.failUnless(float(output) == 0.0) + self.failUnless(numpy.isnan(output)) def test_standard_deviation02(self): "standard deviation 2" @@ -2952,13 +2949,13 @@ for type in self.types: input = numpy.array([1, 3], type) output = ndimage.standard_deviation(input) - self.failUnless(output == math.sqrt(2.0)) + self.failUnless(output == math.sqrt(1.0)) def test_standard_deviation04(self): "standard deviation 4" input = numpy.array([1, 0], bool) output = ndimage.standard_deviation(input) - self.failUnless(output == math.sqrt(0.5)) + self.failUnless(output == 0.5) def test_standard_deviation05(self): "standard deviation 5" @@ -2966,7 +2963,7 @@ for type in self.types: input = numpy.array([1, 3, 8], type) output = ndimage.standard_deviation(input, labels, 2) - self.failUnless(output == math.sqrt(2.0)) + self.failUnless(output == 1.0) def test_standard_deviation06(self): "standard deviation 6" @@ -2975,8 +2972,7 @@ input = numpy.array([1, 3, 8, 10, 8], type) output = ndimage.standard_deviation(input, labels, [2, 3, 4]) - self.failUnless(output == [math.sqrt(2.0), math.sqrt(2.0), - 0.0]) + self.failUnless(np.all(output == [1.0, 1.0, 0.0])) def test_minimum_position01(self): "minimum position 1" @@ -3041,7 +3037,7 @@ [1, 5, 1, 1]], type) output = ndimage.minimum_position(input, labels, [2, 3]) - self.failUnless(output == [(0, 1), (1, 2)]) + 
self.failUnless(output[0] == (0, 1) and output[1] == (1, 2)) def test_maximum_position01(self): "maximum position 1" @@ -3098,7 +3094,7 @@ [1, 5, 1, 1]], type) output = ndimage.maximum_position(input, labels, [1, 2]) - self.failUnless(output == [(0, 0), (1, 1)]) + self.failUnless(output[0] == (0, 0) and output[1] == (1, 1)) def test_extrema01(self): "extrema 1" @@ -3148,8 +3144,10 @@ labels = labels, index = [2, 3, 8]) output5 = ndimage.maximum_position(input, labels = labels, index = [2, 3, 8]) - self.failUnless(output1 == (output2, output3, output4, - output5)) + self.failUnless(numpy.all(output1[0] == output2)) + self.failUnless(numpy.all(output1[1] == output3)) + self.failUnless(numpy.all(output1[2] == output4)) + self.failUnless(numpy.all(output1[3] == output5)) def test_extrema04(self): "extrema 4" @@ -3165,8 +3163,10 @@ [1, 2]) output5 = ndimage.maximum_position(input, labels, [1, 2]) - self.failUnless(output1 == (output2, output3, output4, - output5)) + self.failUnless(numpy.all(output1[0] == output2)) + self.failUnless(numpy.all(output1[1] == output3)) + self.failUnless(numpy.all(output1[2] == output4)) + self.failUnless(numpy.all(output1[3] == output5)) def test_center_of_mass01(self): "center of mass 1" @@ -3260,7 +3260,7 @@ def test_histogram02(self): "histogram 2" labels = [1, 1, 1, 1, 2, 2, 2, 2] - true = [0, 2, 0, 1, 0] + true = [0, 2, 0, 1, 1] input = numpy.array([1, 1, 3, 4, 3, 3, 3, 3]) output = ndimage.histogram(input, 0, 4, 5, labels, 1) e = diff(true, output) @@ -3269,7 +3269,7 @@ def test_histogram03(self): "histogram 3" labels = [1, 0, 1, 1, 2, 2, 2, 2] - true1 = [0, 1, 0, 1, 0] + true1 = [0, 1, 0, 1, 1] true2 = [0, 0, 0, 3, 0] input = numpy.array([1, 1, 3, 4, 3, 5, 3, 3]) output = ndimage.histogram(input, 0, 4, 5, labels, (1,2)) diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/anneal.py python-scipy-0.8.0+dfsg1/scipy/optimize/anneal.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/anneal.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/anneal.py 2010-07-26 15:48:33.000000000 +0100 @@ -160,62 +160,67 @@ Schedule is a schedule class implementing the annealing schedule. Available ones are 'fast', 'cauchy', 'boltzmann' - Inputs: + Parameters + ---------- + func : callable f(x, *args) + Function to be optimized. + x0 : ndarray + Initial guess. + args : tuple + Extra parameters to `func`. + schedule : base_schedule + Annealing schedule to use (a class). + full_output : bool + Whether to return optional outputs. + T0 : float + Initial Temperature (estimated as 1.2 times the largest + cost-function deviation over random points in the range). + Tf : float + Final goal temperature. + maxeval : int + Maximum function evaluations. + maxaccept : int + Maximum changes to accept. + maxiter : int + Maximum cooling iterations. + learn_rate : float + Scale constant for adjusting guesses. + boltzmann : float + Boltzmann constant in acceptance test + (increase for less stringent test at each temperature). + feps : float + Stopping relative error tolerance for the function value in + last four coolings. + quench, m, n : float + Parameters to alter fast_sa schedule. + lower, upper : float or ndarray + Lower and upper bounds on `x`. + dwell : int + The number of times to search the space at each temperature. + + Outputs + ------- + xmin : ndarray + Point giving smallest value found. 
+ retval : int + Flag indicating stopping condition:: - func -- Function to be optimized - x0 -- Parameters to be optimized over - args -- Extra parameters to function - schedule -- Annealing schedule to use (a class) - full_output -- Return optional outputs - T0 -- Initial Temperature (estimated as 1.2 times the largest - cost-function deviation over random points in the range) - Tf -- Final goal temperature - maxeval -- Maximum function evaluations - maxaccept -- Maximum changes to accept - maxiter -- Maximum cooling iterations - learn_rate -- scale constant for adjusting guesses - boltzmann -- Boltzmann constant in acceptance test - (increase for less stringent test at each temperature). - feps -- Stopping relative error tolerance for the function value in - last four coolings. - quench, m, n -- Parameters to alter fast_sa schedule - lower, upper -- lower and upper bounds on x0 (scalar or array). - dwell -- The number of times to search the space at each temperature. - - Outputs: (xmin, {Jmin, T, feval, iters, accept,} retval) - - xmin -- Point giving smallest value found - retval -- Flag indicating stopping condition: 0 : Cooled to global optimum 1 : Cooled to final temperature 2 : Maximum function evaluations 3 : Maximum cooling iterations reached 4 : Maximum accepted query locations reached - Jmin -- Minimum value of function found - T -- final temperature - feval -- Number of function evaluations - iters -- Number of cooling iterations - accept -- Number of tests accepted. - - See also: - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - leastsq -- nonlinear least squares minimizer - - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers - - anneal, brute -- global optimizers - - fminbound, brent, golden, bracket -- local scalar minimizers - - fsolve -- n-dimenstional root-finding - - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding - - fixed_point -- scalar fixed-point finder + Jmin : float + Minimum value of function found. + T : float + Final temperature. + feval : int + Number of function evaluations. + iters : int + Number of cooling iterations. + accept : int + Number of tests accepted. """ x0 = asarray(x0) diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/cobyla.py python-scipy-0.8.0+dfsg1/scipy/optimize/cobyla.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/cobyla.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/cobyla.py 2010-07-26 15:48:33.000000000 +0100 @@ -14,60 +14,40 @@ iprint=1, maxfun=1000): """ Minimize a function using the Constrained Optimization BY Linear - Approximation (COBYLA) method + Approximation (COBYLA) method. - Arguments: - - func -- function to minimize. Called as func(x, *args) - - x0 -- initial guess to minimum - - cons -- a sequence of functions that all must be >=0 (a single function - if only 1 constraint) - - args -- extra arguments to pass to function - - consargs -- extra arguments to pass to constraints (default of None means - use same extra arguments as those passed to func). - Use () for no extra arguments. - - rhobeg -- reasonable initial changes to the variables - - rhoend -- final accuracy in the optimization (not precisely guaranteed) - - iprint -- controls the frequency of output: 0 (no output),1,2,3 - - maxfun -- maximum number of function evaluations. 
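A minimal calling sketch for fmin_cobyla may help alongside the argument list; the quadratic objective and the single linear constraint are made up purely to show the calling convention:

    from scipy.optimize import fmin_cobyla

    def objective(x):
        return x[0] ** 2 + x[1] ** 2

    def constraint(x):
        return x[0] + x[1] - 1.0     # feasible when x[0] + x[1] >= 1

    x_opt = fmin_cobyla(objective, [1.0, 1.0], [constraint],
                        rhoend=1e-7, iprint=0)
    # x_opt should end up close to (0.5, 0.5).

As described above, each constraint function must return a value that is >= 0 at feasible points.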
- - - Returns: - - x -- the minimum - - See also: - - scikits.openopt, which offers a unified syntax to call this and other solvers - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - leastsq -- nonlinear least squares minimizer - - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers - - anneal, brute -- global optimizers - - fminbound, brent, golden, bracket -- local scalar minimizers - - fsolve -- n-dimenstional root-finding - - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding - - fixed_point -- scalar fixed-point finder + Parameters + ---------- + func : callable f(x, *args) + Function to minimize. + x0 : ndarray + Initial guess. + cons : sequence + Constraint functions; must all be ``>=0`` (a single function + if only 1 constraint). + args : tuple + Extra arguments to pass to function. + consargs : tuple + Extra arguments to pass to constraint functions (default of None means + use same extra arguments as those passed to func). + Use ``()`` for no extra arguments. + rhobeg : + Reasonable initial changes to the variables. + rhoend : + Final accuracy in the optimization (not precisely guaranteed). + iprint : {0, 1, 2, 3} + Controls the frequency of output; 0 implies no output. + maxfun : int + Maximum number of function evaluations. + + Returns + ------- + x : ndarray + The argument that minimises `f`. """ err = "cons must be a sequence of callable functions or a single"\ - " callable function." + " callable function." try: m = len(cons) except TypeError: @@ -92,7 +72,7 @@ k += 1 return f - xopt = _cobyla.minimize(calcfc, m=m, x=copy(x0), rhobeg=rhobeg, rhoend=rhoend, - iprint=iprint, maxfun=maxfun) + xopt = _cobyla.minimize(calcfc, m=m, x=copy(x0), rhobeg=rhobeg, + rhoend=rhoend, iprint=iprint, maxfun=maxfun) return xopt diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/info.py python-scipy-0.8.0+dfsg1/scipy/optimize/info.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/info.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/info.py 2010-07-26 15:48:33.000000000 +0100 @@ -88,6 +88,15 @@ line_search -- Return a step that satisfies the strong Wolfe conditions. check_grad -- Check the supplied derivative using finite difference techniques. + +Related Software:: + + OpenOpt -- A BSD-licensed optimisation framework (see http://openopt.org), + which includes a number of constrained and unconstrained + solvers from and beyond scipy.optimize module, + unified text and graphical output of convergence + and automatic differentiation. + """ postpone_import = 1 diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/lbfgsb.py python-scipy-0.8.0+dfsg1/scipy/optimize/lbfgsb.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/lbfgsb.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/lbfgsb.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 07, 2005 by convertcode.py - ## License for the Python wrapper ## ============================== @@ -40,57 +38,58 @@ """ Minimize a function func using the L-BFGS-B algorithm. - Arguments: - - func -- function to minimize. Called as func(x, *args) - - x0 -- initial guess to minimum - - fprime -- gradient of func. If None, then func returns the function - value and the gradient ( f, g = func(x, *args) ), unless - approx_grad is True then func returns only f. 
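A minimal sketch of `fmin_cobyla` as documented in the rewritten docstring above, minimising x**2 + y**2 subject to the single constraint x + y >= 1 (the test function, starting point and tolerance check are illustrative only):

>>> from scipy.optimize import fmin_cobyla
>>> objective = lambda x: x[0]**2 + x[1]**2
>>> constraint = lambda x: x[0] + x[1] - 1.0          # must be >= 0 at the solution
>>> xopt = fmin_cobyla(objective, [1.0, 1.0], cons=[constraint],
...                    rhobeg=0.5, rhoend=1e-6, iprint=0)
>>> abs(xopt[0] - 0.5) < 1e-3 and abs(xopt[1] - 0.5) < 1e-3
True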
- Called as fprime(x, *args) - - args -- arguments to pass to function - - approx_grad -- if true, approximate the gradient numerically and func returns - only function value. - - bounds -- a list of (min, max) pairs for each element in x, defining - the bounds on that parameter. Use None for one of min or max - when there is no bound in that direction - - m -- the maximum number of variable metric corrections - used to define the limited memory matrix. (the limited memory BFGS - method does not store the full hessian but uses this many terms in an - approximation to it). - - factr -- The iteration stops when - (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr*epsmch - - where epsmch is the machine precision, which is automatically - generated by the code. Typical values for factr: 1e12 for - low accuracy; 1e7 for moderate accuracy; 10.0 for extremely - high accuracy. - - pgtol -- The iteration will stop when - max{|proj g_i | i = 1, ..., n} <= pgtol - where pg_i is the ith component of the projected gradient. - - epsilon -- step size used when approx_grad is true, for numerically - calculating the gradient - - iprint -- controls the frequency of output. <0 means no output. + Parameters + ---------- + func : callable f(x, *args) + Function to minimise. + x0 : ndarray + Initial guess. + fprime : callable fprime(x, *args) + The gradient of `func`. If None, then `func` returns the function + value and the gradient (``f, g = func(x, *args)``), unless + `approx_grad` is True in which case `func` returns only ``f``. + args : tuple + Arguments to pass to `func` and `fprime`. + approx_grad : bool + Whether to approximate the gradient numerically (in which case + `func` returns only the function value). + bounds : list + ``(min, max)`` pairs for each element in ``x``, defining + the bounds on that parameter. Use None for one of ``min`` or + ``max`` when there is no bound in that direction. + m : int + The maximum number of variable metric corrections + used to define the limited memory matrix. (The limited memory BFGS + method does not store the full hessian but uses this many terms in an + approximation to it.) + factr : float + The iteration stops when + ``(f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps``, + where ``eps`` is the machine precision, which is automatically + generated by the code. Typical values for `factr` are: 1e12 for + low accuracy; 1e7 for moderate accuracy; 10.0 for extremely + high accuracy. + pgtol : float + The iteration will stop when + ``max{|proj g_i | i = 1, ..., n} <= pgtol`` + where ``pg_i`` is the i-th component of the projected gradient. + epsilon : float + Step size used when `approx_grad` is True, for numerically + calculating the gradient + iprint : int + Controls the frequency of output. ``iprint < 0`` means no output. + maxfun : int + Maximum number of function evaluations. + + Returns + ------- + x : ndarray + Estimated position of the minimum. + f : float + Value of `func` at the minimum. + d : dict + Information dictionary. - maxfun -- maximum number of function evaluations. - - - Returns: - x, f, d = fmin_lbfgs_b(func, x0, ...) - - x -- position of the minimum - f -- value of func at the minimum - d -- dictionary of information from routine d['warnflag'] is 0 if converged, 1 if too many function evaluations, @@ -99,16 +98,19 @@ d['funcalls'] is the number of function calls made. - License of L-BFGS-B (Fortran code) - ================================== + Notes + ----- - The version included here (in fortran code) is 2.1 (released in 1997). 
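A minimal sketch of `fmin_l_bfgs_b` using the parameters and return values described in the hunk above; the quadratic test function and bounds are illustrative assumptions, not part of this patch:

>>> import numpy as np
>>> from scipy.optimize import fmin_l_bfgs_b
>>> def func(x):                                   # quadratic with minimum at (1.0, 2.5)
...     return (x[0] - 1.0)**2 + (x[1] - 2.5)**2
>>> x, f, d = fmin_l_bfgs_b(func, x0=np.zeros(2), approx_grad=True,
...                         bounds=[(0.0, 2.0), (0.0, 2.0)])
>>> d['warnflag']                                  # 0 means converged, see the dict keys above
0
>>> np.allclose(x, [1.0, 2.0], atol=1e-3)          # second component clipped at its upper bound
True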
It was - written by Ciyou Zhu, Richard Byrd, and Jorge Nocedal . It - carries the following condition for use: - - This software is freely available, but we expect that all publications - describing work using this software , or all commercial products using it, - quote at least one of the references given below. + License of L-BFGS-B (Fortran code): + + The version included here (in fortran code) is 2.1 (released in + 1997). It was written by Ciyou Zhu, Richard Byrd, and Jorge Nocedal + . It carries the following condition for use: + + This software is freely available, but we expect that all + publications describing work using this software, or all + commercial products using it, quote at least one of the references + given below. References * R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound @@ -118,26 +120,6 @@ FORTRAN routines for large scale bound constrained optimization (1997), ACM Transactions on Mathematical Software, Vol 23, Num. 4, pp. 550 - 560. - See also: - scikits.openopt, which offers a unified syntax to call this and other solvers - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - leastsq -- nonlinear least squares minimizer - - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers - - anneal, brute -- global optimizers - - fminbound, brent, golden, bracket -- local scalar minimizers - - fsolve -- n-dimenstional root-finding - - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding - - fixed_point -- scalar fixed-point finder - """ n = len(x0) diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/linesearch.py python-scipy-0.8.0+dfsg1/scipy/optimize/linesearch.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/linesearch.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/linesearch.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,4 +1,3 @@ -## Automatically adapted for scipy Oct 07, 2005 by convertcode.py from scipy.optimize import minpack2 import numpy diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/__minpack.h python-scipy-0.8.0+dfsg1/scipy/optimize/__minpack.h --- python-scipy-0.7.2+dfsg1/scipy/optimize/__minpack.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/__minpack.h 2010-07-26 15:48:32.000000000 +0100 @@ -19,12 +19,24 @@ */ #if defined(NO_APPEND_FORTRAN) +#if defined(UPPERCASE_FORTRAN) +/* nothing to do in that case */ +#else #define CHKDER chkder #define HYBRD hybrd #define HYBRJ hybrj #define LMDIF lmdif #define LMDER lmder #define LMSTR lmstr +#endif +#else +#if defined(UPPERCASE_FORTRAN) +#define CHKDER CHKDER_ +#define HYBRD HYBRD_ +#define HYBRJ HYBRJ_ +#define LMDIF LMDIF_ +#define LMDER LMDER_ +#define LMSTR LMSTR_ #else #define CHKDER chkder_ #define HYBRD hybrd_ @@ -33,6 +45,7 @@ #define LMDER lmder_ #define LMSTR lmstr_ #endif +#endif extern void CHKDER(int*,int*,double*,double*,double*,int*,double*,double*,int*,double*); extern void HYBRD(void*,int*,double*,double*,double*,int*,int*,int*,double*,double*,int*,double*,int*,int*,int*,double*,int*,double*,int*,double*,double*,double*,double*,double*); diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/minpack.py python-scipy-0.8.0+dfsg1/scipy/optimize/minpack.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/minpack.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/minpack.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,12 +1,13 @@ +import warnings import _minpack from numpy import atleast_1d, dot, take, triu, shape, eye, \ 
transpose, zeros, product, greater, array, \ - all, where, isscalar, asarray + all, where, isscalar, asarray, inf error = _minpack.error -__all__ = ['fsolve', 'leastsq', 'newton', 'fixed_point','bisection'] +__all__ = ['fsolve', 'leastsq', 'fixed_point', 'bisection', 'curve_fit'] def check_func(thefunc, x0, args, numinputs, output_shape=None): res = atleast_1d(thefunc(*((x0[:numinputs],)+args))) @@ -15,98 +16,103 @@ if len(output_shape) > 1: if output_shape[1] == 1: return shape(res) - raise TypeError, "There is a mismatch between the input and output shape of %s." % thefunc.func_name + msg = "There is a mismatch between the input and output " \ + "shape of %s." % thefunc.func_name + raise TypeError(msg) return shape(res) -def fsolve(func,x0,args=(),fprime=None,full_output=0,col_deriv=0,xtol=1.49012e-8,maxfev=0,band=None,epsfcn=0.0,factor=100,diag=None, warning=True): - """Find the roots of a function. - - Description: +def fsolve(func, x0, args=(), fprime=None, full_output=0, + col_deriv=0, xtol=1.49012e-8, maxfev=0, band=None, + epsfcn=0.0, factor=100, diag=None, warning=True): + """ + Find the roots of a function. Return the roots of the (non-linear) equations defined by - func(x)=0 given a starting estimate. - - Inputs: - - func -- A Python function or method which takes at least one - (possibly vector) argument. - x0 -- The starting estimate for the roots of func(x)=0. - args -- Any extra arguments to func are placed in this tuple. - fprime -- A function or method to compute the Jacobian of func with - derivatives across the rows. If this is None, the - Jacobian will be estimated. - full_output -- non-zero to return the optional outputs. - col_deriv -- non-zero to specify that the Jacobian function - computes derivatives down the columns (faster, because - there is no transpose operation). - warning -- True to print a warning message when the call is - unsuccessful; False to suppress the warning message. - Outputs: (x, {infodict, ier, mesg}) - - x -- the solution (or the result of the last iteration for an - unsuccessful call. - - infodict -- a dictionary of optional outputs with the keys: - 'nfev' : the number of function calls - 'njev' : the number of jacobian calls - 'fvec' : the function evaluated at the output - 'fjac' : the orthogonal matrix, q, produced by the - QR factorization of the final approximate - Jacobian matrix, stored column wise. - 'r' : upper triangular matrix produced by QR - factorization of same matrix. - 'qtf' : the vector (transpose(q) * fvec). - ier -- an integer flag. If it is equal to 1 the solution was - found. If it is not equal to 1, the solution was not - found and the following message gives more information. - mesg -- a string message giving information about the cause of - failure. - - Extended Inputs: - - xtol -- The calculation will terminate if the relative error - between two consecutive iterates is at most xtol. - maxfev -- The maximum number of calls to the function. If zero, - then 100*(N+1) is the maximum where N is the number - of elements in x0. - band -- If set to a two-sequence containing the number of sub- - and superdiagonals within the band of the Jacobi matrix, - the Jacobi matrix is considered banded (only for fprime=None). - epsfcn -- A suitable step length for the forward-difference - approximation of the Jacobian (for fprime=None). If - epsfcn is less than the machine precision, it is assumed - that the relative errors in the functions are of - the order of the machine precision. 
- factor -- A parameter determining the initial step bound - (factor * || diag * x||). Should be in interval (0.1,100). - diag -- A sequency of N positive entries that serve as a - scale factors for the variables. - - Remarks: - - "fsolve" is a wrapper around MINPACK's hybrd and hybrj algorithms. - - See also: - - scikits.openopt, which offers a unified syntax to call this and other solvers - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - leastsq -- nonlinear least squares minimizer - - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers + ``func(x) = 0`` given a starting estimate. - anneal, brute -- global optimizers + Parameters + ---------- + func : callable f(x, *args) + A function that takes at least one (possibly vector) argument. + x0 : ndarray + The starting estimate for the roots of ``func(x) = 0``. + args : tuple + Any extra arguments to `func`. + fprime : callable(x) + A function to compute the Jacobian of `func` with derivatives + across the rows. By default, the Jacobian will be estimated. + full_output : bool + If True, return optional outputs. + col_deriv : bool + Specify whether the Jacobian function computes derivatives down + the columns (faster, because there is no transpose operation). + warning : bool + Whether to print a warning message when the call is unsuccessful. + This option is deprecated, use the warnings module instead. - fminbound, brent, golden, bracket -- local scalar minimizers - - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding + Returns + ------- + x : ndarray + The solution (or the result of the last iteration for + an unsuccessful call). + infodict : dict + A dictionary of optional outputs with the keys:: + + * 'nfev': number of function calls + * 'njev': number of Jacobian calls + * 'fvec': function evaluated at the output + * 'fjac': the orthogonal matrix, q, produced by the QR + factorization of the final approximate Jacobian + matrix, stored column wise + * 'r': upper triangular matrix produced by QR factorization of same + matrix + * 'qtf': the vector (transpose(q) * fvec) + + ier : int + An integer flag. Set to 1 if a solution was found, otherwise refer + to `mesg` for more information. + mesg : str + If no solution is found, `mesg` details the cause of failure. + + Other Parameters + ---------------- + xtol : float + The calculation will terminate if the relative error between two + consecutive iterates is at most `xtol`. + maxfev : int + The maximum number of calls to the function. If zero, then + ``100*(N+1)`` is the maximum where N is the number of elements + in `x0`. + band : tuple + If set to a two-sequence containing the number of sub- and + super-diagonals within the band of the Jacobi matrix, the + Jacobi matrix is considered banded (only for ``fprime=None``). + epsfcn : float + A suitable step length for the forward-difference + approximation of the Jacobian (for ``fprime=None``). If + `epsfcn` is less than the machine precision, it is assumed + that the relative errors in the functions are of the order of + the machine precision. + factor : float + A parameter determining the initial step bound + (``factor * || diag * x||``). Should be in the interval + ``(0.1, 100)``. + diag : sequence + N positive entries that serve as a scale factors for the + variables. + + Notes + ----- + ``fsolve`` is a wrapper around MINPACK's hybrd and hybrj algorithms. 
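A minimal sketch of the `fsolve` call and the `full_output` return values documented above, on a small two-equation system (the system and starting point are illustrative only):

>>> import numpy as np
>>> from scipy.optimize import fsolve
>>> def system(x):                      # roots of x + y = 3, x*y = 2 near (1, 2)
...     return [x[0] + x[1] - 3.0, x[0]*x[1] - 2.0]
>>> x, infodict, ier, mesg = fsolve(system, x0=[0.5, 2.5], full_output=True)
>>> ier                                 # 1 signals that a solution was found
1
>>> np.allclose(x, [1.0, 2.0])
True
>>> 'nfev' in infodict and 'fvec' in infodict    # optional outputs listed above
True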
- fixed_point -- scalar and vector fixed-point finder + From scipy 0.8.0 `fsolve` returns an array of size one instead of a scalar + when solving for a single parameter. """ + if not warning : + msg = "The warning keyword is deprecated. Use the warnings module." + warnings.warn(msg, DeprecationWarning) x0 = array(x0,ndmin=1) n = len(x0) if type(args) != type(()): args = (args,) @@ -119,33 +125,40 @@ ml,mu = band[:2] if (maxfev == 0): maxfev = 200*(n+1) - retval = _minpack._hybrd(func,x0,args,full_output,xtol,maxfev,ml,mu,epsfcn,factor,diag) + retval = _minpack._hybrd(func, x0, args, full_output, xtol, + maxfev, ml, mu, epsfcn, factor, diag) else: check_func(Dfun,x0,args,n,(n,n)) if (maxfev == 0): maxfev = 100*(n+1) - retval = _minpack._hybrj(func,Dfun,x0,args,full_output,col_deriv,xtol,maxfev,factor,diag) + retval = _minpack._hybrj(func, Dfun, x0, args, full_output, + col_deriv, xtol, maxfev, factor,diag) errors = {0:["Improper input parameters were entered.",TypeError], - 1:["The solution converged.",None], - 2:["The number of calls to function has reached maxfev = %d." % maxfev, ValueError], - 3:["xtol=%f is too small, no further improvement in the approximate\n solution is possible." % xtol, ValueError], - 4:["The iteration is not making good progress, as measured by the \n improvement from the last five Jacobian evaluations.", ValueError], - 5:["The iteration is not making good progress, as measured by the \n improvement from the last ten iterations.", ValueError], + 1:["The solution converged.", None], + 2:["The number of calls to function has " + "reached maxfev = %d." % maxfev, ValueError], + 3:["xtol=%f is too small, no further improvement " + "in the approximate\n solution " + "is possible." % xtol, ValueError], + 4:["The iteration is not making good progress, as measured " + "by the \n improvement from the last five " + "Jacobian evaluations.", ValueError], + 5:["The iteration is not making good progress, " + "as measured by the \n improvement from the last " + "ten iterations.", ValueError], 'unknown': ["An error occurred.", TypeError]} info = retval[-1] # The FORTRAN return value if (info != 1 and not full_output): if info in [2,3,4,5]: - if warning: print "Warning: " + errors[info][0] + msg = errors[info][0] + warnings.warn(msg, RuntimeWarning) else: try: - raise errors[info][1], errors[info][0] + raise errors[info][1](errors[info][0]) except KeyError: - raise errors['unknown'][1], errors['unknown'][0] - - if n == 1: - retval = (retval[0][0],) + retval[1:] + raise errors['unknown'][1](errors['unknown'][0]) if full_output: try: @@ -156,118 +169,116 @@ return retval[0] -def leastsq(func,x0,args=(),Dfun=None,full_output=0,col_deriv=0,ftol=1.49012e-8,xtol=1.49012e-8,gtol=0.0,maxfev=0,epsfcn=0.0,factor=100,diag=None,warning=True): +def leastsq(func, x0, args=(), Dfun=None, full_output=0, + col_deriv=0, ftol=1.49012e-8, xtol=1.49012e-8, + gtol=0.0, maxfev=0, epsfcn=0.0, factor=100, diag=None,warning=True): """Minimize the sum of squares of a set of equations. - Description: - - Return the point which minimizes the sum of squares of M - (non-linear) equations in N unknowns given a starting estimate, x0, - using a modification of the Levenberg-Marquardt algorithm. - - x = arg min(sum(func(y)**2,axis=0)) - y - - Inputs: - - func -- A Python function or method which takes at least one - (possibly length N vector) argument and returns M - floating point numbers. - x0 -- The starting estimate for the minimization. - args -- Any extra arguments to func are placed in this tuple. 
- Dfun -- A function or method to compute the Jacobian of func with - derivatives across the rows. If this is None, the - Jacobian will be estimated. - full_output -- non-zero to return all optional outputs. - col_deriv -- non-zero to specify that the Jacobian function - computes derivatives down the columns (faster, because - there is no transpose operation). - warning -- True to print a warning message when the call is - unsuccessful; False to suppress the warning message. - - Outputs: (x, {cov_x, infodict, mesg}, ier) - - x -- the solution (or the result of the last iteration for an - unsuccessful call. - - cov_x -- uses the fjac and ipvt optional outputs to construct an - estimate of the covariance matrix of the solution. - None if a singular matrix encountered (indicates - infinite covariance in some direction). - infodict -- a dictionary of optional outputs with the keys: - 'nfev' : the number of function calls - 'fvec' : the function evaluated at the output - 'fjac' : A permutation of the R matrix of a QR - factorization of the final approximate - Jacobian matrix, stored column wise. - Together with ipvt, the covariance of the - estimate can be approximated. - 'ipvt' : an integer array of length N which defines - a permutation matrix, p, such that - fjac*p = q*r, where r is upper triangular - with diagonal elements of nonincreasing - magnitude. Column j of p is column ipvt(j) - of the identity matrix. - 'qtf' : the vector (transpose(q) * fvec). - mesg -- a string message giving information about the cause of failure. - ier -- an integer flag. If it is equal to 1, 2, 3 or 4, the - solution was found. Otherwise, the solution was not - found. In either case, the optional output variable 'mesg' - gives more information. - - - Extended Inputs: - - ftol -- Relative error desired in the sum of squares. - xtol -- Relative error desired in the approximate solution. - gtol -- Orthogonality desired between the function vector - and the columns of the Jacobian. - maxfev -- The maximum number of calls to the function. If zero, - then 100*(N+1) is the maximum where N is the number - of elements in x0. - epsfcn -- A suitable step length for the forward-difference - approximation of the Jacobian (for Dfun=None). If - epsfcn is less than the machine precision, it is assumed - that the relative errors in the functions are of - the order of the machine precision. - factor -- A parameter determining the initial step bound - (factor * || diag * x||). Should be in interval (0.1,100). - diag -- A sequency of N positive entries that serve as a - scale factors for the variables. - - Remarks: - - "leastsq" is a wrapper around MINPACK's lmdif and lmder algorithms. - - See also: - - scikits.openopt, which offers a unified syntax to call this and other solvers - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers + :: - anneal, brute -- global optimizers + x = arg min(sum(func(y)**2,axis=0)) + y - fminbound, brent, golden, bracket -- local scalar minimizers + Parameters + ---------- + func : callable + should take at least one (possibly length N vector) argument and + returns M floating point numbers. + x0 : ndarray + The starting estimate for the minimization. + args : tuple + Any extra arguments to func are placed in this tuple. + Dfun : callable + A function or method to compute the Jacobian of func with derivatives + across the rows. If this is None, the Jacobian will be estimated. 
+ full_output : bool + non-zero to return all optional outputs. + col_deriv : bool + non-zero to specify that the Jacobian function computes derivatives + down the columns (faster, because there is no transpose operation). + ftol : float + Relative error desired in the sum of squares. + xtol : float + Relative error desired in the approximate solution. + gtol : float + Orthogonality desired between the function vector and the columns of + the Jacobian. + maxfev : int + The maximum number of calls to the function. If zero, then 100*(N+1) is + the maximum where N is the number of elements in x0. + epsfcn : float + A suitable step length for the forward-difference approximation of the + Jacobian (for Dfun=None). If epsfcn is less than the machine precision, + it is assumed that the relative errors in the functions are of the + order of the machine precision. + factor : float + A parameter determining the initial step bound + (``factor * || diag * x||``). Should be in interval ``(0.1, 100)``. + diag : sequence + N positive entries that serve as a scale factors for the variables. + warning : bool + Whether to print a warning message when the call is unsuccessful. + Deprecated, use the warnings module instead. - fsolve -- n-dimenstional root-finding + Returns + ------- + x : ndarray + The solution (or the result of the last iteration for an unsuccessful + call). + cov_x : ndarray + Uses the fjac and ipvt optional outputs to construct an + estimate of the jacobian around the solution. ``None`` if a + singular matrix encountered (indicates very flat curvature in + some direction). This matrix must be multiplied by the + residual standard deviation to get the covariance of the + parameter estimates -- see curve_fit. + infodict : dict + a dictionary of optional outputs with the keys:: + + - 'nfev' : the number of function calls + - 'fvec' : the function evaluated at the output + - 'fjac' : A permutation of the R matrix of a QR + factorization of the final approximate + Jacobian matrix, stored column wise. + Together with ipvt, the covariance of the + estimate can be approximated. + - 'ipvt' : an integer array of length N which defines + a permutation matrix, p, such that + fjac*p = q*r, where r is upper triangular + with diagonal elements of nonincreasing + magnitude. Column j of p is column ipvt(j) + of the identity matrix. + - 'qtf' : the vector (transpose(q) * fvec). + mesg : str + A string message giving information about the cause of failure. + ier : int + An integer flag. If it is equal to 1, 2, 3 or 4, the solution was + found. Otherwise, the solution was not found. In either case, the + optional output variable 'mesg' gives more information. - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding + Notes + ----- + "leastsq" is a wrapper around MINPACK's lmdif and lmder algorithms. - fixed_point -- scalar and vector fixed-point finder + From scipy 0.8.0 `leastsq` returns an array of size one instead of a scalar + when solving for a single parameter. """ + if not warning : + msg = "The warning keyword is deprecated. Use the warnings module." 
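A minimal sketch of `leastsq` with the return values documented above, fitting a straight line to noiseless data (the data and residual function are illustrative; passing `full_output=1` additionally returns the `cov_x` described above):

>>> import numpy as np
>>> from scipy.optimize import leastsq
>>> xdata = np.linspace(0.0, 1.0, 20)
>>> ydata = 3.0*xdata + 1.0                        # exact line: slope 3, offset 1
>>> residuals = lambda p, x, y: p[0]*x + p[1] - y  # M = 20 residuals, N = 2 parameters
>>> popt, ier = leastsq(residuals, x0=[1.0, 0.0], args=(xdata, ydata))
>>> ier in (1, 2, 3, 4)                            # success flags, see `ier` above
True
>>> np.allclose(popt, [3.0, 1.0])
True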
+ warnings.warn(msg, DeprecationWarning) x0 = array(x0,ndmin=1) n = len(x0) if type(args) != type(()): args = (args,) m = check_func(func,x0,args,n)[0] + if n>m: + raise TypeError('Improper input: N=%s must not exceed M=%s' % (n,m)) if Dfun is None: if (maxfev == 0): maxfev = 200*(n+1) - retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag) + retval = _minpack._lmdif(func, x0, args, full_output, + ftol, xtol, gtol, + maxfev, epsfcn, factor, diag) else: if col_deriv: check_func(Dfun,x0,args,n,(n,m)) @@ -278,48 +289,154 @@ retval = _minpack._lmder(func,Dfun,x0,args,full_output,col_deriv,ftol,xtol,gtol,maxfev,factor,diag) errors = {0:["Improper input parameters.", TypeError], - 1:["Both actual and predicted relative reductions in the sum of squares\n are at most %f" % ftol, None], - 2:["The relative error between two consecutive iterates is at most %f" % xtol, None], - 3:["Both actual and predicted relative reductions in the sum of squares\n are at most %f and the relative error between two consecutive iterates is at \n most %f" % (ftol,xtol), None], - 4:["The cosine of the angle between func(x) and any column of the\n Jacobian is at most %f in absolute value" % gtol, None], - 5:["Number of calls to function has reached maxfev = %d." % maxfev, ValueError], - 6:["ftol=%f is too small, no further reduction in the sum of squares\n is possible.""" % ftol, ValueError], - 7:["xtol=%f is too small, no further improvement in the approximate\n solution is possible." % xtol, ValueError], - 8:["gtol=%f is too small, func(x) is orthogonal to the columns of\n the Jacobian to machine precision." % gtol, ValueError], + 1:["Both actual and predicted relative reductions " + "in the sum of squares\n are at most %f" % ftol, None], + 2:["The relative error between two consecutive " + "iterates is at most %f" % xtol, None], + 3:["Both actual and predicted relative reductions in " + "the sum of squares\n are at most %f and the " + "relative error between two consecutive " + "iterates is at \n most %f" % (ftol,xtol), None], + 4:["The cosine of the angle between func(x) and any " + "column of the\n Jacobian is at most %f in " + "absolute value" % gtol, None], + 5:["Number of calls to function has reached " + "maxfev = %d." % maxfev, ValueError], + 6:["ftol=%f is too small, no further reduction " + "in the sum of squares\n is possible.""" % ftol, ValueError], + 7:["xtol=%f is too small, no further improvement in " + "the approximate\n solution is possible." % xtol, ValueError], + 8:["gtol=%f is too small, func(x) is orthogonal to the " + "columns of\n the Jacobian to machine " + "precision." 
% gtol, ValueError], 'unknown':["Unknown error.", TypeError]} info = retval[-1] # The FORTRAN return value if (info not in [1,2,3,4] and not full_output): if info in [5,6,7,8]: - if warning: print "Warning: " + errors[info][0] + warnings.warn(errors[info][0], RuntimeWarning) else: try: - raise errors[info][1], errors[info][0] + raise errors[info][1](errors[info][0]) except KeyError: - raise errors['unknown'][1], errors['unknown'][0] - - if n == 1: - retval = (retval[0][0],) + retval[1:] + raise errors['unknown'][1](errors['unknown'][0]) mesg = errors[info][0] if full_output: - from numpy.dual import inv - from numpy.linalg import LinAlgError - perm = take(eye(n),retval[1]['ipvt']-1,0) - r = triu(transpose(retval[1]['fjac'])[:n,:]) - R = dot(r, perm) - try: - cov_x = inv(dot(transpose(R),R)) - except LinAlgError: - cov_x = None + cov_x = None + if info in [1,2,3,4]: + from numpy.dual import inv + from numpy.linalg import LinAlgError + perm = take(eye(n),retval[1]['ipvt']-1,0) + r = triu(transpose(retval[1]['fjac'])[:n,:]) + R = dot(r, perm) + try: + cov_x = inv(dot(transpose(R),R)) + except LinAlgError: + pass return (retval[0], cov_x) + retval[1:-1] + (mesg,info) else: return (retval[0], info) +def _general_function(params, xdata, ydata, function): + return function(xdata, *params) - ydata + +def _weighted_general_function(params, xdata, ydata, function, weights): + return weights * (function(xdata, *params) - ydata) + +def curve_fit(f, xdata, ydata, p0=None, sigma=None, **kw): + """ + Use non-linear least squares to fit a function, f, to data. + + Assumes ``ydata = f(xdata, *params) + eps`` + + Parameters + ---------- + f : callable + The model function, f(x, ...). It must take the independent + variable as the first argument and the parameters to fit as + separate remaining arguments. + xdata : An N-length sequence or an (k,N)-shaped array + for functions with k predictors. + The independent variable where the data is measured. + ydata : N-length sequence + The dependent data --- nominally f(xdata, ...) + p0 : None, scalar, or M-length sequence + Initial guess for the parameters. If None, then the initial + values will all be 1 (if the number of parameters for the function + can be determined using introspection, otherwise a ValueError + is raised). + sigma : None or N-length sequence + If not None, it represents the standard-deviation of ydata. + This vector, if given, will be used as weights in the + least-squares problem. + + + Returns + ------- + popt : array + Optimal values for the parameters so that the sum of the squared error + of ``f(xdata, *popt) - ydata`` is minimized + pcov : 2d array + The estimated covariance of popt. The diagonals provide the variance + of the parameter estimate. + + Notes + ----- + The algorithm uses the Levenburg-Marquardt algorithm: + scipy.optimize.leastsq. Additional keyword arguments are passed directly + to that algorithm. + + Examples + -------- + >>> import numpy as np + >>> from scipy.optimize import curve_fit + >>> def func(x, a, b, c): + ... return a*np.exp(-b*x) + c + + >>> x = np.linspace(0,4,50) + >>> y = func(x, 2.5, 1.3, 0.5) + >>> yn = y + 0.2*np.random.normal(size=len(x)) + + >>> popt, pcov = curve_fit(func, x, yn) + + """ + if p0 is None or isscalar(p0): + # determine number of parameters by inspecting the function + import inspect + args, varargs, varkw, defaults = inspect.getargspec(f) + if len(args) < 2: + msg = "Unable to determine number of fit parameters." 
+ raise ValueError(msg) + if p0 is None: + p0 = 1.0 + p0 = [p0]*(len(args)-1) + + args = (xdata, ydata, f) + if sigma is None: + func = _general_function + else: + func = _weighted_general_function + args += (1.0/asarray(sigma),) + res = leastsq(func, p0, args=args, full_output=1, **kw) + (popt, pcov, infodict, errmsg, ier) = res + + if ier not in [1,2,3,4]: + msg = "Optimal parameters not found: " + errmsg + raise RuntimeError(msg) + + if (len(ydata) > len(p0)) and pcov is not None: + s_sq = (func(popt, *args)**2).sum()/(len(ydata)-len(p0)) + pcov = pcov * s_sq + else: + pcov = inf + + return popt, pcov def check_gradient(fcn,Dfcn,x0,args=(),col_deriv=0): """Perform a simple check on the gradient for correctness. + """ x = atleast_1d(x0) @@ -348,67 +465,101 @@ return (good,err) -# Netwon-Raphson method +# Newton-Raphson method def newton(func, x0, fprime=None, args=(), tol=1.48e-8, maxiter=50): - """Given a function of a single variable and a starting point, - find a nearby zero using Newton-Raphson. - - fprime is the derivative of the function. If not given, the - Secant method is used. - - See also: - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - leastsq -- nonlinear least squares minimizer + """Find a zero using the Newton-Raphson or secant method. - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers + Find a zero of the function `func` given a nearby starting point `x0`. + The Newton-Rapheson method is used if the derivative `fprime` of `func` + is provided, otherwise the secant method is used. + + Parameters + ---------- + func : function + The function whose zero is wanted. It must be a function of a + single variable of the form f(x,a,b,c...), where a,b,c... are extra + arguments that can be passed in the `args` parameter. + x0 : float + An initial estimate of the zero that should be somewhere near the + actual zero. + fprime : {None, function}, optional + The derivative of the function when available and convenient. If it + is None, then the secant method is used. The default is None. + args : tuple, optional + Extra arguments to be used in the function call. + tol : float, optional + The allowable error of the zero value. + maxiter : int, optional + Maximum number of iterations. - anneal, brute -- global optimizers - - fminbound, brent, golden, bracket -- local scalar minimizers - - fsolve -- n-dimenstional root-finding - - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding + Returns + ------- + zero : float + Estimated location where function is zero. - fixed_point -- scalar and vector fixed-point finder + See Also + -------- + brentq, brenth, ridder, bisect -- find zeroes in one dimension. + fsolve -- find zeroes in n dimensions. + + Notes + ----- + The convergence rate of the Newton-Rapheson method is quadratic while + that of the secant method is somewhat less. This means that if the + function is well behaved the actual error in the estimated zero is + approximatly the square of the requested tolerance up to roundoff + error. However, the stopping criterion used here is the step size and + there is no quarantee that a zero has been found. Consequently the + result should be verified. Safer algorithms are brentq, brenth, ridder, + and bisect, but they all require that the root first be bracketed in an + interval where the function changes sign. The brentq algorithm is + recommended for general use in one dimemsional problems when such an + interval has been found. 
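A minimal sketch of `newton` as described above, with and without an analytic derivative; the quadratic and starting point are illustrative, and the top-level `scipy.optimize.newton` name is used (the minpack copy is deprecated in favour of zeros.newton in the code that follows):

>>> from scipy.optimize import newton
>>> f = lambda x: x**2 - 2.0
>>> fprime = lambda x: 2.0*x
>>> root = newton(f, 1.0, fprime=fprime)       # Newton-Raphson, using fprime
>>> abs(root - 2.0**0.5) < 1e-6
True
>>> root = newton(f, 1.0)                      # without fprime the secant method is used
>>> abs(root - 2.0**0.5) < 1e-6
True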
""" + msg = "minpack.newton is moving to zeros.newton" + warnings.warn(msg, DeprecationWarning) if fprime is not None: + # Newton-Rapheson method p0 = x0 for iter in range(maxiter): - myargs = (p0,)+args + myargs = (p0,) + args fval = func(*myargs) - fpval = fprime(*myargs) - if fpval == 0: - print "Warning: zero-derivative encountered." + fder = fprime(*myargs) + if fder == 0: + msg = "derivative was zero." + warnings.warn(msg, RuntimeWarning) return p0 p = p0 - func(*myargs)/fprime(*myargs) - if abs(p-p0) < tol: + if abs(p - p0) < tol: return p p0 = p - else: # Secant method + else: + # Secant method p0 = x0 - p1 = x0*(1+1e-4) - q0 = func(*((p0,)+args)) - q1 = func(*((p1,)+args)) + if x0 >= 0: + p1 = x0*(1 + 1e-4) + 1e-4 + else: + p1 = x0*(1 + 1e-4) - 1e-4 + q0 = func(*((p0,) + args)) + q1 = func(*((p1,) + args)) for iter in range(maxiter): if q1 == q0: if p1 != p0: - print "Tolerance of %s reached" % (p1-p0) - return (p1+p0)/2.0 + msg = "Tolerance of %s reached" % (p1 - p0) + warnings.warn(msg, RuntimeWarning) + return (p1 + p0)/2.0 else: - p = p1 - q1*(p1-p0)/(q1-q0) - if abs(p-p1) < tol: + p = p1 - q1*(p1 - p0)/(q1 - q0) + if abs(p - p1) < tol: return p p0 = p1 q0 = q1 p1 = p - q1 = func(*((p1,)+args)) - raise RuntimeError, "Failed to converge after %d iterations, value is %s" % (maxiter,p) + q1 = func(*((p1,) + args)) + msg = "Failed to converge after %d iterations, value is %s" % (maxiter, p) + raise RuntimeError(msg) # Steffensen's Method using Aitken's Del^2 convergence acceleration. @@ -432,23 +583,6 @@ >>> fixed_point(func, [1.2, 1.3], args=(c1,c2)) array([ 1.4920333 , 1.37228132]) - See also: - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - leastsq -- nonlinear least squares minimizer - - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers - - anneal, brute -- global optimizers - - fminbound, brent, golden, bracket -- local scalar minimizers - - fsolve -- n-dimenstional root-finding - - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding - """ if not isscalar(x0): x0 = asarray(x0) @@ -479,39 +613,26 @@ if relerr < xtol: return p p0 = p - raise RuntimeError, "Failed to converge after %d iterations, value is %s" % (maxiter,p) + msg = "Failed to converge after %d iterations, value is %s" % (maxiter, p) + raise RuntimeError(msg) def bisection(func, a, b, args=(), xtol=1e-10, maxiter=400): """Bisection root-finding method. Given a function and an interval with func(a) * func(b) < 0, find the root between a and b. 
- See also: - - fmin, fmin_powell, fmin_cg, - fmin_bfgs, fmin_ncg -- multivariate local optimizers - leastsq -- nonlinear least squares minimizer - - fmin_l_bfgs_b, fmin_tnc, - fmin_cobyla -- constrained multivariate optimizers - - anneal, brute -- global optimizers - - fminbound, brent, golden, bracket -- local scalar minimizers - - fsolve -- n-dimenstional root-finding - - brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding - - fixed_point -- scalar and vector fixed-point finder - """ + msg = "minpack.bisection is deprecated, use zeros.bisect instead" + warnings.warn(msg, DeprecationWarning) + i = 1 eva = func(a,*args) evb = func(b,*args) - assert (eva*evb < 0), "Must start with interval with func(a) * func(b) <0" - while i<=maxiter: - dist = (b-a)/2.0 + if eva*evb >= 0: + msg = "Must start with interval where func(a) * func(b) < 0" + raise ValueError(msg) + while i <= maxiter: + dist = (b - a)/2.0 p = a + dist if dist < xtol: return p @@ -524,5 +645,5 @@ eva = ev else: b = p - print "Warning: Method failed after %d iterations." % maxiter - return p + msg = "Failed to converge after %d iterations, value is %s" % (maxiter, p) + raise RuntimeError(msg) diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/nnls/nnls.f python-scipy-0.8.0+dfsg1/scipy/optimize/nnls/nnls.f --- python-scipy-0.7.2+dfsg1/scipy/optimize/nnls/nnls.f 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/nnls/nnls.f 2010-07-26 15:48:33.000000000 +0100 @@ -1,477 +1,477 @@ -C SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) -C -C Algorithm NNLS: NONNEGATIVE LEAST SQUARES -C -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 15, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. -c -C GIVEN AN M BY N MATRIX, A, AND AN M-VECTOR, B, COMPUTE AN -C N-VECTOR, X, THAT SOLVES THE LEAST SQUARES PROBLEM -C -C A * X = B SUBJECT TO X .GE. 0 -C ------------------------------------------------------------------ -c Subroutine Arguments -c -C A(),MDA,M,N MDA IS THE FIRST DIMENSIONING PARAMETER FOR THE -C ARRAY, A(). ON ENTRY A() CONTAINS THE M BY N -C MATRIX, A. ON EXIT A() CONTAINS -C THE PRODUCT MATRIX, Q*A , WHERE Q IS AN -C M BY M ORTHOGONAL MATRIX GENERATED IMPLICITLY BY -C THIS SUBROUTINE. -C B() ON ENTRY B() CONTAINS THE M-VECTOR, B. ON EXIT B() CON- -C TAINS Q*B. -C X() ON ENTRY X() NEED NOT BE INITIALIZED. ON EXIT X() WILL -C CONTAIN THE SOLUTION VECTOR. -C RNORM ON EXIT RNORM CONTAINS THE EUCLIDEAN NORM OF THE -C RESIDUAL VECTOR. -C W() AN N-ARRAY OF WORKING SPACE. ON EXIT W() WILL CONTAIN -C THE DUAL SOLUTION VECTOR. W WILL SATISFY W(I) = 0. -C FOR ALL I IN SET P AND W(I) .LE. 0. FOR ALL I IN SET Z -C ZZ() AN M-ARRAY OF WORKING SPACE. -C INDEX() AN INTEGER WORKING ARRAY OF LENGTH AT LEAST N. -C ON EXIT THE CONTENTS OF THIS ARRAY DEFINE THE SETS -C P AND Z AS FOLLOWS.. -C -C INDEX(1) THRU INDEX(NSETP) = SET P. -C INDEX(IZ1) THRU INDEX(IZ2) = SET Z. -C IZ1 = NSETP + 1 = NPP1 -C IZ2 = N -C MODE THIS IS A SUCCESS-FAILURE FLAG WITH THE FOLLOWING -C MEANINGS. -C 1 THE SOLUTION HAS BEEN COMPUTED SUCCESSFULLY. -C 2 THE DIMENSIONS OF THE PROBLEM ARE BAD. -C EITHER M .LE. 0 OR N .LE. 0. -C 3 ITERATION COUNT EXCEEDED. MORE THAN 3*N ITERATIONS. 
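The problem stated in the comments above (A*X = B subject to X >= 0) is what the Python-level wrapper solves; a sketch using `scipy.optimize.nnls`, assuming that wrapper is available (it is not part of this hunk, which only touches the Fortran source):

>>> import numpy as np
>>> from scipy.optimize import nnls
>>> A = np.array([[1.0, 0.0],
...               [1.0, 0.0],
...               [0.0, 1.0]])
>>> b = np.array([2.0, 1.0, 1.0])
>>> x, rnorm = nnls(A, b)                     # x minimises ||A*x - b|| subject to x >= 0
>>> np.allclose(x, [1.5, 1.0])
True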
-C -C ------------------------------------------------------------------ - SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) -C ------------------------------------------------------------------ - integer I, II, IP, ITER, ITMAX, IZ, IZ1, IZ2, IZMAX, J, JJ, JZ, L - integer M, MDA, MODE,N, NPP1, NSETP, RTNKEY -c integer INDEX(N) -c double precision A(MDA,N), B(M), W(N), X(N), ZZ(M) - integer INDEX(*) - double precision A(MDA,*), B(*), W(*), X(*), ZZ(*) - double precision ALPHA, ASAVE, CC, DIFF, DUMMY, FACTOR, RNORM - double precision SM, SS, T, TEMP, TWO, UNORM, UP, WMAX - double precision ZERO, ZTEST - parameter(FACTOR = 0.01d0) - parameter(TWO = 2.0d0, ZERO = 0.0d0) -C ------------------------------------------------------------------ - MODE=1 - IF (M .le. 0 .or. N .le. 0) then - MODE=2 - RETURN - endif - ITER=0 - ITMAX=3*N -C -C INITIALIZE THE ARRAYS INDEX() AND X(). -C - DO 20 I=1,N - X(I)=ZERO - 20 INDEX(I)=I -C - IZ2=N - IZ1=1 - NSETP=0 - NPP1=1 -C ****** MAIN LOOP BEGINS HERE ****** - 30 CONTINUE -C QUIT IF ALL COEFFICIENTS ARE ALREADY IN THE SOLUTION. -C OR IF M COLS OF A HAVE BEEN TRIANGULARIZED. -C - IF (IZ1 .GT.IZ2.OR.NSETP.GE.M) GO TO 350 -C -C COMPUTE COMPONENTS OF THE DUAL (NEGATIVE GRADIENT) VECTOR W(). -C - DO 50 IZ=IZ1,IZ2 - J=INDEX(IZ) - SM=ZERO - DO 40 L=NPP1,M - 40 SM=SM+A(L,J)*B(L) - W(J)=SM - 50 continue -C FIND LARGEST POSITIVE W(J). - 60 continue - WMAX=ZERO - DO 70 IZ=IZ1,IZ2 - J=INDEX(IZ) - IF (W(J) .gt. WMAX) then - WMAX=W(J) - IZMAX=IZ - endif - 70 CONTINUE -C -C IF WMAX .LE. 0. GO TO TERMINATION. -C THIS INDICATES SATISFACTION OF THE KUHN-TUCKER CONDITIONS. -C - IF (WMAX .le. ZERO) go to 350 - IZ=IZMAX - J=INDEX(IZ) -C -C THE SIGN OF W(J) IS OK FOR J TO BE MOVED TO SET P. -C BEGIN THE TRANSFORMATION AND CHECK NEW DIAGONAL ELEMENT TO AVOID -C NEAR LINEAR DEPENDENCE. -C - ASAVE=A(NPP1,J) - CALL H12 (1,NPP1,NPP1+1,M,A(1,J),1,UP,DUMMY,1,1,0) - UNORM=ZERO - IF (NSETP .ne. 0) then - DO 90 L=1,NSETP - 90 UNORM=UNORM+A(L,J)**2 - endif - UNORM=sqrt(UNORM) - IF (DIFF(UNORM+ABS(A(NPP1,J))*FACTOR,UNORM) .gt. ZERO) then -C -C COL J IS SUFFICIENTLY INDEPENDENT. COPY B INTO ZZ, UPDATE ZZ -C AND SOLVE FOR ZTEST ( = PROPOSED NEW VALUE FOR X(J) ). -C - DO 120 L=1,M - 120 ZZ(L)=B(L) - CALL H12 (2,NPP1,NPP1+1,M,A(1,J),1,UP,ZZ,1,1,1) - ZTEST=ZZ(NPP1)/A(NPP1,J) -C -C SEE IF ZTEST IS POSITIVE -C - IF (ZTEST .gt. ZERO) go to 140 - endif -C -C REJECT J AS A CANDIDATE TO BE MOVED FROM SET Z TO SET P. -C RESTORE A(NPP1,J), SET W(J)=0., AND LOOP BACK TO TEST DUAL -C COEFFS AGAIN. -C - A(NPP1,J)=ASAVE - W(J)=ZERO - GO TO 60 -C -C THE INDEX J=INDEX(IZ) HAS BEEN SELECTED TO BE MOVED FROM -C SET Z TO SET P. UPDATE B, UPDATE INDICES, APPLY HOUSEHOLDER -C TRANSFORMATIONS TO COLS IN NEW SET Z, ZERO SUBDIAGONAL ELTS IN -C COL J, SET W(J)=0. -C - 140 continue - DO 150 L=1,M - 150 B(L)=ZZ(L) -C - INDEX(IZ)=INDEX(IZ1) - INDEX(IZ1)=J - IZ1=IZ1+1 - NSETP=NPP1 - NPP1=NPP1+1 -C - IF (IZ1 .le. IZ2) then - DO 160 JZ=IZ1,IZ2 - JJ=INDEX(JZ) - CALL H12 (2,NSETP,NPP1,M,A(1,J),1,UP,A(1,JJ),1,MDA,1) - 160 continue - endif -C - IF (NSETP .ne. M) then - DO 180 L=NPP1,M - 180 A(L,J)=ZERO - endif -C - W(J)=ZERO -C SOLVE THE TRIANGULAR SYSTEM. -C STORE THE SOLUTION TEMPORARILY IN ZZ(). - RTNKEY = 1 - GO TO 400 - 200 CONTINUE -C -C ****** SECONDARY LOOP BEGINS HERE ****** -C -C ITERATION COUNTER. -C - 210 continue - ITER=ITER+1 - IF (ITER .gt. ITMAX) then - MODE=3 - write (*,'(/a)') ' NNLS quitting on iteration count.' - GO TO 350 - endif -C -C SEE IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE. 
-C IF NOT COMPUTE ALPHA. -C - ALPHA=TWO - DO 240 IP=1,NSETP - L=INDEX(IP) - IF (ZZ(IP) .le. ZERO) then - T=-X(L)/(ZZ(IP)-X(L)) - IF (ALPHA .gt. T) then - ALPHA=T - JJ=IP - endif - endif - 240 CONTINUE -C -C IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE THEN ALPHA WILL -C STILL = 2. IF SO EXIT FROM SECONDARY LOOP TO MAIN LOOP. -C - IF (ALPHA.EQ.TWO) GO TO 330 -C -C OTHERWISE USE ALPHA WHICH WILL BE BETWEEN 0. AND 1. TO -C INTERPOLATE BETWEEN THE OLD X AND THE NEW ZZ. -C - DO 250 IP=1,NSETP - L=INDEX(IP) - X(L)=X(L)+ALPHA*(ZZ(IP)-X(L)) - 250 continue -C -C MODIFY A AND B AND THE INDEX ARRAYS TO MOVE COEFFICIENT I -C FROM SET P TO SET Z. -C - I=INDEX(JJ) - 260 continue - X(I)=ZERO -C - IF (JJ .ne. NSETP) then - JJ=JJ+1 - DO 280 J=JJ,NSETP - II=INDEX(J) - INDEX(J-1)=II - CALL G1 (A(J-1,II),A(J,II),CC,SS,A(J-1,II)) - A(J,II)=ZERO - DO 270 L=1,N - IF (L.NE.II) then -c -c Apply procedure G2 (CC,SS,A(J-1,L),A(J,L)) -c - TEMP = A(J-1,L) - A(J-1,L) = CC*TEMP + SS*A(J,L) - A(J,L) =-SS*TEMP + CC*A(J,L) - endif - 270 CONTINUE -c -c Apply procedure G2 (CC,SS,B(J-1),B(J)) -c - TEMP = B(J-1) - B(J-1) = CC*TEMP + SS*B(J) - B(J) =-SS*TEMP + CC*B(J) - 280 continue - endif -c - NPP1=NSETP - NSETP=NSETP-1 - IZ1=IZ1-1 - INDEX(IZ1)=I -C -C SEE IF THE REMAINING COEFFS IN SET P ARE FEASIBLE. THEY SHOULD -C BE BECAUSE OF THE WAY ALPHA WAS DETERMINED. -C IF ANY ARE INFEASIBLE IT IS DUE TO ROUND-OFF ERROR. ANY -C THAT ARE NONPOSITIVE WILL BE SET TO ZERO -C AND MOVED FROM SET P TO SET Z. -C - DO 300 JJ=1,NSETP - I=INDEX(JJ) - IF (X(I) .le. ZERO) go to 260 - 300 CONTINUE -C -C COPY B( ) INTO ZZ( ). THEN SOLVE AGAIN AND LOOP BACK. -C - DO 310 I=1,M - 310 ZZ(I)=B(I) - RTNKEY = 2 - GO TO 400 - 320 CONTINUE - GO TO 210 -C ****** END OF SECONDARY LOOP ****** -C - 330 continue - DO 340 IP=1,NSETP - I=INDEX(IP) - 340 X(I)=ZZ(IP) -C ALL NEW COEFFS ARE POSITIVE. LOOP BACK TO BEGINNING. - GO TO 30 -C -C ****** END OF MAIN LOOP ****** -C -C COME TO HERE FOR TERMINATION. -C COMPUTE THE NORM OF THE FINAL RESIDUAL VECTOR. -C - 350 continue - SM=ZERO - IF (NPP1 .le. M) then - DO 360 I=NPP1,M - 360 SM=SM+B(I)**2 - else - DO 380 J=1,N - 380 W(J)=ZERO - endif - RNORM=sqrt(SM) - RETURN -C -C THE FOLLOWING BLOCK OF CODE IS USED AS AN INTERNAL SUBROUTINE -C TO SOLVE THE TRIANGULAR SYSTEM, PUTTING THE SOLUTION IN ZZ(). -C - 400 continue - DO 430 L=1,NSETP - IP=NSETP+1-L - IF (L .ne. 1) then - DO 410 II=1,IP - ZZ(II)=ZZ(II)-A(II,JJ)*ZZ(IP+1) - 410 continue - endif - JJ=INDEX(IP) - ZZ(IP)=ZZ(IP)/A(IP,JJ) - 430 continue - go to (200, 320), RTNKEY - END - - - double precision FUNCTION DIFF(X,Y) -c -c Function used in tests that depend on machine precision. -c -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 7, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. -C - double precision X, Y - DIFF=X-Y - RETURN - END - - -C SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) -C -C CONSTRUCTION AND/OR APPLICATION OF A SINGLE -C HOUSEHOLDER TRANSFORMATION.. Q = I + U*(U**T)/B -C -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 12, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. 
-C ------------------------------------------------------------------ -c Subroutine Arguments -c -C MODE = 1 OR 2 Selects Algorithm H1 to construct and apply a -c Householder transformation, or Algorithm H2 to apply a -c previously constructed transformation. -C LPIVOT IS THE INDEX OF THE PIVOT ELEMENT. -C L1,M IF L1 .LE. M THE TRANSFORMATION WILL BE CONSTRUCTED TO -C ZERO ELEMENTS INDEXED FROM L1 THROUGH M. IF L1 GT. M -C THE SUBROUTINE DOES AN IDENTITY TRANSFORMATION. -C U(),IUE,UP On entry with MODE = 1, U() contains the pivot -c vector. IUE is the storage increment between elements. -c On exit when MODE = 1, U() and UP contain quantities -c defining the vector U of the Householder transformation. -c on entry with MODE = 2, U() and UP should contain -c quantities previously computed with MODE = 1. These will -c not be modified during the entry with MODE = 2. -C C() ON ENTRY with MODE = 1 or 2, C() CONTAINS A MATRIX WHICH -c WILL BE REGARDED AS A SET OF VECTORS TO WHICH THE -c HOUSEHOLDER TRANSFORMATION IS TO BE APPLIED. -c ON EXIT C() CONTAINS THE SET OF TRANSFORMED VECTORS. -C ICE STORAGE INCREMENT BETWEEN ELEMENTS OF VECTORS IN C(). -C ICV STORAGE INCREMENT BETWEEN VECTORS IN C(). -C NCV NUMBER OF VECTORS IN C() TO BE TRANSFORMED. IF NCV .LE. 0 -C NO OPERATIONS WILL BE DONE ON C(). -C ------------------------------------------------------------------ - SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) -C ------------------------------------------------------------------ - integer I, I2, I3, I4, ICE, ICV, INCR, IUE, J - integer L1, LPIVOT, M, MODE, NCV - double precision B, C(*), CL, CLINV, ONE, SM -c double precision U(IUE,M) - double precision U(IUE,*) - double precision UP - parameter(ONE = 1.0d0) -C ------------------------------------------------------------------ - IF (0.GE.LPIVOT.OR.LPIVOT.GE.L1.OR.L1.GT.M) RETURN - CL=abs(U(1,LPIVOT)) - IF (MODE.EQ.2) GO TO 60 -C ****** CONSTRUCT THE TRANSFORMATION. ****** - DO 10 J=L1,M - 10 CL=MAX(abs(U(1,J)),CL) - IF (CL) 130,130,20 - 20 CLINV=ONE/CL - SM=(U(1,LPIVOT)*CLINV)**2 - DO 30 J=L1,M - 30 SM=SM+(U(1,J)*CLINV)**2 - CL=CL*SQRT(SM) - IF (U(1,LPIVOT)) 50,50,40 - 40 CL=-CL - 50 UP=U(1,LPIVOT)-CL - U(1,LPIVOT)=CL - GO TO 70 -C ****** APPLY THE TRANSFORMATION I+U*(U**T)/B TO C. ****** -C - 60 IF (CL) 130,130,70 - 70 IF (NCV.LE.0) RETURN - B= UP*U(1,LPIVOT) -C B MUST BE NONPOSITIVE HERE. IF B = 0., RETURN. -C - IF (B) 80,130,130 - 80 B=ONE/B - I2=1-ICV+ICE*(LPIVOT-1) - INCR=ICE*(L1-LPIVOT) - DO 120 J=1,NCV - I2=I2+ICV - I3=I2+INCR - I4=I3 - SM=C(I2)*UP - DO 90 I=L1,M - SM=SM+C(I3)*U(1,I) - 90 I3=I3+ICE - IF (SM) 100,120,100 - 100 SM=SM*B - C(I2)=C(I2)+SM*UP - DO 110 I=L1,M - C(I4)=C(I4)+SM*U(1,I) - 110 I4=I4+ICE - 120 CONTINUE - 130 RETURN - END - - - - SUBROUTINE G1 (A,B,CTERM,STERM,SIG) -c -C COMPUTE ORTHOGONAL ROTATION MATRIX.. -c -c The original version of this code was developed by -c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory -c 1973 JUN 12, and published in the book -c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. -c Revised FEB 1995 to accompany reprinting of the book by SIAM. -C -C COMPUTE.. MATRIX (C, S) SO THAT (C, S)(A) = (SQRT(A**2+B**2)) -C (-S,C) (-S,C)(B) ( 0 ) -C COMPUTE SIG = SQRT(A**2+B**2) -C SIG IS COMPUTED LAST TO ALLOW FOR THE POSSIBILITY THAT -C SIG MAY BE IN THE SAME LOCATION AS A OR B . 
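The G1 routine described above builds the 2x2 rotation that zeroes the second component of a vector; a simplified NumPy sketch of the same computation (purely illustrative and independent of the Fortran, which additionally guards against overflow by branching on the larger of |A| and |B|):

>>> import numpy as np
>>> def g1(a, b):
...     # rotation (c, s; -s, c) mapping (a, b) to (sqrt(a**2 + b**2), 0)
...     sig = np.hypot(a, b)
...     c, s = (1.0, 0.0) if sig == 0.0 else (a/sig, b/sig)
...     return c, s, sig
>>> c, s, sig = g1(3.0, 4.0)
>>> np.allclose([c*3.0 + s*4.0, -s*3.0 + c*4.0], [5.0, 0.0])
True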
-C ------------------------------------------------------------------ - double precision A, B, CTERM, ONE, SIG, STERM, XR, YR, ZERO - parameter(ONE = 1.0d0, ZERO = 0.0d0) -C ------------------------------------------------------------------ - if (abs(A) .gt. abs(B)) then - XR=B/A - YR=sqrt(ONE+XR**2) - CTERM=sign(ONE/YR,A) - STERM=CTERM*XR - SIG=abs(A)*YR - RETURN - endif - - if (B .ne. ZERO) then - XR=A/B - YR=sqrt(ONE+XR**2) - STERM=sign(ONE/YR,B) - CTERM=STERM*XR - SIG=abs(B)*YR - RETURN - endif - - SIG=ZERO - CTERM=ZERO - STERM=ONE - RETURN - END +C SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) +C +C Algorithm NNLS: NONNEGATIVE LEAST SQUARES +C +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 15, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +c +C GIVEN AN M BY N MATRIX, A, AND AN M-VECTOR, B, COMPUTE AN +C N-VECTOR, X, THAT SOLVES THE LEAST SQUARES PROBLEM +C +C A * X = B SUBJECT TO X .GE. 0 +C ------------------------------------------------------------------ +c Subroutine Arguments +c +C A(),MDA,M,N MDA IS THE FIRST DIMENSIONING PARAMETER FOR THE +C ARRAY, A(). ON ENTRY A() CONTAINS THE M BY N +C MATRIX, A. ON EXIT A() CONTAINS +C THE PRODUCT MATRIX, Q*A , WHERE Q IS AN +C M BY M ORTHOGONAL MATRIX GENERATED IMPLICITLY BY +C THIS SUBROUTINE. +C B() ON ENTRY B() CONTAINS THE M-VECTOR, B. ON EXIT B() CON- +C TAINS Q*B. +C X() ON ENTRY X() NEED NOT BE INITIALIZED. ON EXIT X() WILL +C CONTAIN THE SOLUTION VECTOR. +C RNORM ON EXIT RNORM CONTAINS THE EUCLIDEAN NORM OF THE +C RESIDUAL VECTOR. +C W() AN N-ARRAY OF WORKING SPACE. ON EXIT W() WILL CONTAIN +C THE DUAL SOLUTION VECTOR. W WILL SATISFY W(I) = 0. +C FOR ALL I IN SET P AND W(I) .LE. 0. FOR ALL I IN SET Z +C ZZ() AN M-ARRAY OF WORKING SPACE. +C INDEX() AN INTEGER WORKING ARRAY OF LENGTH AT LEAST N. +C ON EXIT THE CONTENTS OF THIS ARRAY DEFINE THE SETS +C P AND Z AS FOLLOWS.. +C +C INDEX(1) THRU INDEX(NSETP) = SET P. +C INDEX(IZ1) THRU INDEX(IZ2) = SET Z. +C IZ1 = NSETP + 1 = NPP1 +C IZ2 = N +C MODE THIS IS A SUCCESS-FAILURE FLAG WITH THE FOLLOWING +C MEANINGS. +C 1 THE SOLUTION HAS BEEN COMPUTED SUCCESSFULLY. +C 2 THE DIMENSIONS OF THE PROBLEM ARE BAD. +C EITHER M .LE. 0 OR N .LE. 0. +C 3 ITERATION COUNT EXCEEDED. MORE THAN 3*N ITERATIONS. +C +C ------------------------------------------------------------------ + SUBROUTINE NNLS (A,MDA,M,N,B,X,RNORM,W,ZZ,INDEX,MODE) +C ------------------------------------------------------------------ + integer I, II, IP, ITER, ITMAX, IZ, IZ1, IZ2, IZMAX, J, JJ, JZ, L + integer M, MDA, MODE,N, NPP1, NSETP, RTNKEY +c integer INDEX(N) +c double precision A(MDA,N), B(M), W(N), X(N), ZZ(M) + integer INDEX(*) + double precision A(MDA,*), B(*), W(*), X(*), ZZ(*) + double precision ALPHA, ASAVE, CC, DIFF, DUMMY, FACTOR, RNORM + double precision SM, SS, T, TEMP, TWO, UNORM, UP, WMAX + double precision ZERO, ZTEST + parameter(FACTOR = 0.01d0) + parameter(TWO = 2.0d0, ZERO = 0.0d0) +C ------------------------------------------------------------------ + MODE=1 + IF (M .le. 0 .or. N .le. 0) then + MODE=2 + RETURN + endif + ITER=0 + ITMAX=3*N +C +C INITIALIZE THE ARRAYS INDEX() AND X(). +C + DO 20 I=1,N + X(I)=ZERO + 20 INDEX(I)=I +C + IZ2=N + IZ1=1 + NSETP=0 + NPP1=1 +C ****** MAIN LOOP BEGINS HERE ****** + 30 CONTINUE +C QUIT IF ALL COEFFICIENTS ARE ALREADY IN THE SOLUTION. 
+C OR IF M COLS OF A HAVE BEEN TRIANGULARIZED. +C + IF (IZ1 .GT.IZ2.OR.NSETP.GE.M) GO TO 350 +C +C COMPUTE COMPONENTS OF THE DUAL (NEGATIVE GRADIENT) VECTOR W(). +C + DO 50 IZ=IZ1,IZ2 + J=INDEX(IZ) + SM=ZERO + DO 40 L=NPP1,M + 40 SM=SM+A(L,J)*B(L) + W(J)=SM + 50 continue +C FIND LARGEST POSITIVE W(J). + 60 continue + WMAX=ZERO + DO 70 IZ=IZ1,IZ2 + J=INDEX(IZ) + IF (W(J) .gt. WMAX) then + WMAX=W(J) + IZMAX=IZ + endif + 70 CONTINUE +C +C IF WMAX .LE. 0. GO TO TERMINATION. +C THIS INDICATES SATISFACTION OF THE KUHN-TUCKER CONDITIONS. +C + IF (WMAX .le. ZERO) go to 350 + IZ=IZMAX + J=INDEX(IZ) +C +C THE SIGN OF W(J) IS OK FOR J TO BE MOVED TO SET P. +C BEGIN THE TRANSFORMATION AND CHECK NEW DIAGONAL ELEMENT TO AVOID +C NEAR LINEAR DEPENDENCE. +C + ASAVE=A(NPP1,J) + CALL H12 (1,NPP1,NPP1+1,M,A(1,J),1,UP,DUMMY,1,1,0) + UNORM=ZERO + IF (NSETP .ne. 0) then + DO 90 L=1,NSETP + 90 UNORM=UNORM+A(L,J)**2 + endif + UNORM=sqrt(UNORM) + IF (DIFF(UNORM+ABS(A(NPP1,J))*FACTOR,UNORM) .gt. ZERO) then +C +C COL J IS SUFFICIENTLY INDEPENDENT. COPY B INTO ZZ, UPDATE ZZ +C AND SOLVE FOR ZTEST ( = PROPOSED NEW VALUE FOR X(J) ). +C + DO 120 L=1,M + 120 ZZ(L)=B(L) + CALL H12 (2,NPP1,NPP1+1,M,A(1,J),1,UP,ZZ,1,1,1) + ZTEST=ZZ(NPP1)/A(NPP1,J) +C +C SEE IF ZTEST IS POSITIVE +C + IF (ZTEST .gt. ZERO) go to 140 + endif +C +C REJECT J AS A CANDIDATE TO BE MOVED FROM SET Z TO SET P. +C RESTORE A(NPP1,J), SET W(J)=0., AND LOOP BACK TO TEST DUAL +C COEFFS AGAIN. +C + A(NPP1,J)=ASAVE + W(J)=ZERO + GO TO 60 +C +C THE INDEX J=INDEX(IZ) HAS BEEN SELECTED TO BE MOVED FROM +C SET Z TO SET P. UPDATE B, UPDATE INDICES, APPLY HOUSEHOLDER +C TRANSFORMATIONS TO COLS IN NEW SET Z, ZERO SUBDIAGONAL ELTS IN +C COL J, SET W(J)=0. +C + 140 continue + DO 150 L=1,M + 150 B(L)=ZZ(L) +C + INDEX(IZ)=INDEX(IZ1) + INDEX(IZ1)=J + IZ1=IZ1+1 + NSETP=NPP1 + NPP1=NPP1+1 +C + IF (IZ1 .le. IZ2) then + DO 160 JZ=IZ1,IZ2 + JJ=INDEX(JZ) + CALL H12 (2,NSETP,NPP1,M,A(1,J),1,UP,A(1,JJ),1,MDA,1) + 160 continue + endif +C + IF (NSETP .ne. M) then + DO 180 L=NPP1,M + 180 A(L,J)=ZERO + endif +C + W(J)=ZERO +C SOLVE THE TRIANGULAR SYSTEM. +C STORE THE SOLUTION TEMPORARILY IN ZZ(). + RTNKEY = 1 + GO TO 400 + 200 CONTINUE +C +C ****** SECONDARY LOOP BEGINS HERE ****** +C +C ITERATION COUNTER. +C + 210 continue + ITER=ITER+1 + IF (ITER .gt. ITMAX) then + MODE=3 + write (*,'(/a)') ' NNLS quitting on iteration count.' + GO TO 350 + endif +C +C SEE IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE. +C IF NOT COMPUTE ALPHA. +C + ALPHA=TWO + DO 240 IP=1,NSETP + L=INDEX(IP) + IF (ZZ(IP) .le. ZERO) then + T=-X(L)/(ZZ(IP)-X(L)) + IF (ALPHA .gt. T) then + ALPHA=T + JJ=IP + endif + endif + 240 CONTINUE +C +C IF ALL NEW CONSTRAINED COEFFS ARE FEASIBLE THEN ALPHA WILL +C STILL = 2. IF SO EXIT FROM SECONDARY LOOP TO MAIN LOOP. +C + IF (ALPHA.EQ.TWO) GO TO 330 +C +C OTHERWISE USE ALPHA WHICH WILL BE BETWEEN 0. AND 1. TO +C INTERPOLATE BETWEEN THE OLD X AND THE NEW ZZ. +C + DO 250 IP=1,NSETP + L=INDEX(IP) + X(L)=X(L)+ALPHA*(ZZ(IP)-X(L)) + 250 continue +C +C MODIFY A AND B AND THE INDEX ARRAYS TO MOVE COEFFICIENT I +C FROM SET P TO SET Z. +C + I=INDEX(JJ) + 260 continue + X(I)=ZERO +C + IF (JJ .ne. 
NSETP) then + JJ=JJ+1 + DO 280 J=JJ,NSETP + II=INDEX(J) + INDEX(J-1)=II + CALL G1 (A(J-1,II),A(J,II),CC,SS,A(J-1,II)) + A(J,II)=ZERO + DO 270 L=1,N + IF (L.NE.II) then +c +c Apply procedure G2 (CC,SS,A(J-1,L),A(J,L)) +c + TEMP = A(J-1,L) + A(J-1,L) = CC*TEMP + SS*A(J,L) + A(J,L) =-SS*TEMP + CC*A(J,L) + endif + 270 CONTINUE +c +c Apply procedure G2 (CC,SS,B(J-1),B(J)) +c + TEMP = B(J-1) + B(J-1) = CC*TEMP + SS*B(J) + B(J) =-SS*TEMP + CC*B(J) + 280 continue + endif +c + NPP1=NSETP + NSETP=NSETP-1 + IZ1=IZ1-1 + INDEX(IZ1)=I +C +C SEE IF THE REMAINING COEFFS IN SET P ARE FEASIBLE. THEY SHOULD +C BE BECAUSE OF THE WAY ALPHA WAS DETERMINED. +C IF ANY ARE INFEASIBLE IT IS DUE TO ROUND-OFF ERROR. ANY +C THAT ARE NONPOSITIVE WILL BE SET TO ZERO +C AND MOVED FROM SET P TO SET Z. +C + DO 300 JJ=1,NSETP + I=INDEX(JJ) + IF (X(I) .le. ZERO) go to 260 + 300 CONTINUE +C +C COPY B( ) INTO ZZ( ). THEN SOLVE AGAIN AND LOOP BACK. +C + DO 310 I=1,M + 310 ZZ(I)=B(I) + RTNKEY = 2 + GO TO 400 + 320 CONTINUE + GO TO 210 +C ****** END OF SECONDARY LOOP ****** +C + 330 continue + DO 340 IP=1,NSETP + I=INDEX(IP) + 340 X(I)=ZZ(IP) +C ALL NEW COEFFS ARE POSITIVE. LOOP BACK TO BEGINNING. + GO TO 30 +C +C ****** END OF MAIN LOOP ****** +C +C COME TO HERE FOR TERMINATION. +C COMPUTE THE NORM OF THE FINAL RESIDUAL VECTOR. +C + 350 continue + SM=ZERO + IF (NPP1 .le. M) then + DO 360 I=NPP1,M + 360 SM=SM+B(I)**2 + else + DO 380 J=1,N + 380 W(J)=ZERO + endif + RNORM=sqrt(SM) + RETURN +C +C THE FOLLOWING BLOCK OF CODE IS USED AS AN INTERNAL SUBROUTINE +C TO SOLVE THE TRIANGULAR SYSTEM, PUTTING THE SOLUTION IN ZZ(). +C + 400 continue + DO 430 L=1,NSETP + IP=NSETP+1-L + IF (L .ne. 1) then + DO 410 II=1,IP + ZZ(II)=ZZ(II)-A(II,JJ)*ZZ(IP+1) + 410 continue + endif + JJ=INDEX(IP) + ZZ(IP)=ZZ(IP)/A(IP,JJ) + 430 continue + go to (200, 320), RTNKEY + END + + + double precision FUNCTION DIFF(X,Y) +c +c Function used in tests that depend on machine precision. +c +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 7, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +C + double precision X, Y + DIFF=X-Y + RETURN + END + + +C SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) +C +C CONSTRUCTION AND/OR APPLICATION OF A SINGLE +C HOUSEHOLDER TRANSFORMATION.. Q = I + U*(U**T)/B +C +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 12, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +C ------------------------------------------------------------------ +c Subroutine Arguments +c +C MODE = 1 OR 2 Selects Algorithm H1 to construct and apply a +c Householder transformation, or Algorithm H2 to apply a +c previously constructed transformation. +C LPIVOT IS THE INDEX OF THE PIVOT ELEMENT. +C L1,M IF L1 .LE. M THE TRANSFORMATION WILL BE CONSTRUCTED TO +C ZERO ELEMENTS INDEXED FROM L1 THROUGH M. IF L1 GT. M +C THE SUBROUTINE DOES AN IDENTITY TRANSFORMATION. +C U(),IUE,UP On entry with MODE = 1, U() contains the pivot +c vector. IUE is the storage increment between elements. +c On exit when MODE = 1, U() and UP contain quantities +c defining the vector U of the Householder transformation. 
+c on entry with MODE = 2, U() and UP should contain +c quantities previously computed with MODE = 1. These will +c not be modified during the entry with MODE = 2. +C C() ON ENTRY with MODE = 1 or 2, C() CONTAINS A MATRIX WHICH +c WILL BE REGARDED AS A SET OF VECTORS TO WHICH THE +c HOUSEHOLDER TRANSFORMATION IS TO BE APPLIED. +c ON EXIT C() CONTAINS THE SET OF TRANSFORMED VECTORS. +C ICE STORAGE INCREMENT BETWEEN ELEMENTS OF VECTORS IN C(). +C ICV STORAGE INCREMENT BETWEEN VECTORS IN C(). +C NCV NUMBER OF VECTORS IN C() TO BE TRANSFORMED. IF NCV .LE. 0 +C NO OPERATIONS WILL BE DONE ON C(). +C ------------------------------------------------------------------ + SUBROUTINE H12 (MODE,LPIVOT,L1,M,U,IUE,UP,C,ICE,ICV,NCV) +C ------------------------------------------------------------------ + integer I, I2, I3, I4, ICE, ICV, INCR, IUE, J + integer L1, LPIVOT, M, MODE, NCV + double precision B, C(*), CL, CLINV, ONE, SM +c double precision U(IUE,M) + double precision U(IUE,*) + double precision UP + parameter(ONE = 1.0d0) +C ------------------------------------------------------------------ + IF (0.GE.LPIVOT.OR.LPIVOT.GE.L1.OR.L1.GT.M) RETURN + CL=abs(U(1,LPIVOT)) + IF (MODE.EQ.2) GO TO 60 +C ****** CONSTRUCT THE TRANSFORMATION. ****** + DO 10 J=L1,M + 10 CL=MAX(abs(U(1,J)),CL) + IF (CL) 130,130,20 + 20 CLINV=ONE/CL + SM=(U(1,LPIVOT)*CLINV)**2 + DO 30 J=L1,M + 30 SM=SM+(U(1,J)*CLINV)**2 + CL=CL*SQRT(SM) + IF (U(1,LPIVOT)) 50,50,40 + 40 CL=-CL + 50 UP=U(1,LPIVOT)-CL + U(1,LPIVOT)=CL + GO TO 70 +C ****** APPLY THE TRANSFORMATION I+U*(U**T)/B TO C. ****** +C + 60 IF (CL) 130,130,70 + 70 IF (NCV.LE.0) RETURN + B= UP*U(1,LPIVOT) +C B MUST BE NONPOSITIVE HERE. IF B = 0., RETURN. +C + IF (B) 80,130,130 + 80 B=ONE/B + I2=1-ICV+ICE*(LPIVOT-1) + INCR=ICE*(L1-LPIVOT) + DO 120 J=1,NCV + I2=I2+ICV + I3=I2+INCR + I4=I3 + SM=C(I2)*UP + DO 90 I=L1,M + SM=SM+C(I3)*U(1,I) + 90 I3=I3+ICE + IF (SM) 100,120,100 + 100 SM=SM*B + C(I2)=C(I2)+SM*UP + DO 110 I=L1,M + C(I4)=C(I4)+SM*U(1,I) + 110 I4=I4+ICE + 120 CONTINUE + 130 RETURN + END + + + + SUBROUTINE G1 (A,B,CTERM,STERM,SIG) +c +C COMPUTE ORTHOGONAL ROTATION MATRIX.. +c +c The original version of this code was developed by +c Charles L. Lawson and Richard J. Hanson at Jet Propulsion Laboratory +c 1973 JUN 12, and published in the book +c "SOLVING LEAST SQUARES PROBLEMS", Prentice-HalL, 1974. +c Revised FEB 1995 to accompany reprinting of the book by SIAM. +C +C COMPUTE.. MATRIX (C, S) SO THAT (C, S)(A) = (SQRT(A**2+B**2)) +C (-S,C) (-S,C)(B) ( 0 ) +C COMPUTE SIG = SQRT(A**2+B**2) +C SIG IS COMPUTED LAST TO ALLOW FOR THE POSSIBILITY THAT +C SIG MAY BE IN THE SAME LOCATION AS A OR B . +C ------------------------------------------------------------------ + double precision A, B, CTERM, ONE, SIG, STERM, XR, YR, ZERO + parameter(ONE = 1.0d0, ZERO = 0.0d0) +C ------------------------------------------------------------------ + if (abs(A) .gt. abs(B)) then + XR=B/A + YR=sqrt(ONE+XR**2) + CTERM=sign(ONE/YR,A) + STERM=CTERM*XR + SIG=abs(A)*YR + RETURN + endif + + if (B .ne. 
ZERO) then + XR=A/B + YR=sqrt(ONE+XR**2) + STERM=sign(ONE/YR,B) + CTERM=STERM*XR + SIG=abs(B)*YR + RETURN + endif + + SIG=ZERO + CTERM=ZERO + STERM=ONE + RETURN + END diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/nnls.py python-scipy-0.8.0+dfsg1/scipy/optimize/nnls.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/nnls.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/nnls.py 2010-07-26 15:48:33.000000000 +0100 @@ -2,19 +2,26 @@ from numpy import asarray_chkfinite, zeros, double def nnls(A,b): - """ - Solve || Ax - b ||_2 -> min with x>=0 - - Inputs: - A -- matrix as above - b -- vector as above - - Outputs: - x -- solution vector - rnorm -- residual || Ax-b ||_2 - + """ + Solve ``argmin_x || Ax - b ||_2`` for ``x>=0``. - wrapper around NNLS.F code below nnls/ directory + Parameters + ---------- + A : ndarray + Matrix ``A`` as shown above. + b : ndarray + Right-hand side vector. + + Returns + ------- + x : ndarray + Solution vector. + rnorm : float + The residual, ``|| Ax-b ||_2``. + + Notes + ----- + This is a wrapper for ``NNLS.F``. """ @@ -24,7 +31,7 @@ raise ValueError, "expected matrix" if len(b.shape)!=1: raise ValueError, "expected vector" - + m,n = A.shape if m != b.shape[0]: diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/optimize.py python-scipy-0.8.0+dfsg1/scipy/optimize/optimize.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/optimize.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/optimize.py 2010-07-26 15:48:33.000000000 +0100 @@ -41,6 +41,12 @@ m = asarray(m) return numpy.minimum.reduce(m,axis) +def is_array_scalar(x): + """Test whether `x` is either a scalar or an array scalar. + + """ + return len(atleast_1d(x) == 1) + abs = absolute import __builtin__ pymin = __builtin__.min @@ -101,55 +107,55 @@ full_output=0, disp=1, retall=0, callback=None): """Minimize a function using the downhill simplex algorithm. - :Parameters: - - func : callable func(x,*args) - The objective function to be minimized. - x0 : ndarray - Initial guess. - args : tuple - Extra arguments passed to func, i.e. ``f(x,*args)``. - callback : callable - Called after each iteration, as callback(xk), where xk is the - current parameter vector. - - :Returns: (xopt, {fopt, iter, funcalls, warnflag}) - - xopt : ndarray - Parameter that minimizes function. - fopt : float - Value of function at minimum: ``fopt = func(xopt)``. - iter : int - Number of iterations performed. - funcalls : int - Number of function calls made. - warnflag : int - 1 : Maximum number of function evaluations made. - 2 : Maximum number of iterations reached. - allvecs : list - Solution at each iteration. - - *Other Parameters*: - - xtol : float - Relative error in xopt acceptable for convergence. - ftol : number - Relative error in func(xopt) acceptable for convergence. - maxiter : int - Maximum number of iterations to perform. - maxfun : number - Maximum number of function evaluations to make. - full_output : bool - Set to True if fval and warnflag outputs are desired. - disp : bool - Set to True to print convergence messages. - retall : bool - Set to True to return list of solutions at each iteration. - - :Notes: + Parameters + ---------- + func : callable func(x,*args) + The objective function to be minimized. + x0 : ndarray + Initial guess. + args : tuple + Extra arguments passed to func, i.e. ``f(x,*args)``. + callback : callable + Called after each iteration, as callback(xk), where xk is the + current parameter vector. 
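A usage sketch for the rewritten ``scipy.optimize.nnls`` wrapper documented in the hunk above, assuming only the public API; the matrix and right-hand side are illustrative, not taken from the package::

    import numpy as np
    from scipy.optimize import nnls

    # Overdetermined 4x2 system whose unconstrained least-squares solution
    # has a negative second coefficient; NNLS clamps it to zero instead.
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [0.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([2.0, 1.0, -1.0, 0.5])

    x, rnorm = nnls(A, b)
    print(x)      # nonnegative solution vector
    print(rnorm)  # the residual || A x - b ||_2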
+ + Returns + ------- + xopt : ndarray + Parameter that minimizes function. + fopt : float + Value of function at minimum: ``fopt = func(xopt)``. + iter : int + Number of iterations performed. + funcalls : int + Number of function calls made. + warnflag : int + 1 : Maximum number of function evaluations made. + 2 : Maximum number of iterations reached. + allvecs : list + Solution at each iteration. + + Other Parameters + ---------------- + xtol : float + Relative error in xopt acceptable for convergence. + ftol : number + Relative error in func(xopt) acceptable for convergence. + maxiter : int + Maximum number of iterations to perform. + maxfun : number + Maximum number of function evaluations to make. + full_output : bool + Set to True if fval and warnflag outputs are desired. + disp : bool + Set to True to print convergence messages. + retall : bool + Set to True to return list of solutions at each iteration. - Uses a Nelder-Mead simplex algorithm to find the minimum of - function of one or more variables. + Notes + ----- + Uses a Nelder-Mead simplex algorithm to find the minimum of + a function of one or more variables. """ fcalls, func = wrap_function(func, args) @@ -340,10 +346,12 @@ phi_rec = phi0 a_rec = 0 while 1: - # interpolate to find a trial step length between a_lo and a_hi - # Need to choose interpolation here. Use cubic interpolation and then if the - # result is within delta * dalpha or outside of the interval bounded by a_lo or a_hi - # then use quadratic interpolation, if the result is still too close, then use bisection + # interpolate to find a trial step length between a_lo and + # a_hi Need to choose interpolation here. Use cubic + # interpolation and then if the result is within delta * + # dalpha or outside of the interval bounded by a_lo or a_hi + # then use quadratic interpolation, if the result is still too + # close, then use bisection dalpha = a_hi-a_lo; if dalpha < 0: a,b = a_hi,a_lo @@ -351,10 +359,11 @@ # minimizer of cubic interpolant # (uses phi_lo, derphi_lo, phi_hi, and the most recent value of phi) - # if the result is too close to the end points (or out of the interval) - # then use quadratic interpolation with phi_lo, derphi_lo and phi_hi - # if the result is stil too close to the end points (or out of the interval) - # then use bisection + # if the result is too close to the end points (or out of + # the interval) then use quadratic interpolation with + # phi_lo, derphi_lo and phi_hi + # if the result is stil too close to the end points (or + # out of the interval) then use bisection if (i > 0): cchk = delta1*dalpha @@ -406,42 +415,42 @@ args=(), c1=1e-4, c2=0.9, amax=50): """Find alpha that satisfies strong Wolfe conditions. - :Parameters: + Parameters + ---------- + f : callable f(x,*args) + Objective function. + myfprime : callable f'(x,*args) + Objective function gradient (can be None). + xk : ndarray + Starting point. + pk : ndarray + Search direction. + gfk : ndarray + Gradient value for x=xk (xk being the current parameter + estimate). + args : tuple + Additional arguments passed to objective function. + c1 : float + Parameter for Armijo condition rule. + c2 : float + Parameter for curvature condition rule. + + Returns + ------- + alpha0 : float + Alpha for which ``x_new = x0 + alpha * pk``. + fc : int + Number of function evaluations made. + gc : int + Number of gradient evaluations made. - f : callable f(x,*args) - Objective function. - myfprime : callable f'(x,*args) - Objective function gradient (can be None). 
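The converted ``fmin`` docstring above describes the Nelder-Mead interface; a minimal sketch of a call, assuming only the public ``scipy.optimize`` API (the Rosenbrock test function is an illustration, not part of the patch)::

    import numpy as np
    from scipy.optimize import fmin

    def rosen(x):
        # Rosenbrock function, a standard unconstrained test problem.
        return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    xopt, fopt, niter, funcalls, warnflag = fmin(rosen, x0, xtol=1e-8,
                                                 full_output=True, disp=False)
    print(xopt)  # close to np.ones(5), the global minimum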
- xk : ndarray - Starting point. - pk : ndarray - Search direction. - gfk : ndarray - Gradient value for x=xk (xk being the current parameter - estimate). - args : tuple - Additional arguments passed to objective function. - c1 : float - Parameter for Armijo condition rule. - c2 : float - Parameter for curvature condition rule. - - :Returns: - - alpha0 : float - Alpha for which ``x_new = x0 + alpha * pk``. - fc : int - Number of function evaluations made. - gc : int - Number of gradient evaluations made. - - :Notes: - - Uses the line search algorithm to enforce strong Wolfe - conditions. See Wright and Nocedal, 'Numerical Optimization', - 1999, pg. 59-60. + Notes + ----- + Uses the line search algorithm to enforce strong Wolfe + conditions. See Wright and Nocedal, 'Numerical Optimization', + 1999, pg. 59-60. - For the zoom phase it uses an algorithm by [...]. + For the zoom phase it uses an algorithm by [...]. """ @@ -627,69 +636,65 @@ retall=0, callback=None): """Minimize a function using the BFGS algorithm. - :Parameters: - - f : callable f(x,*args) - Objective function to be minimized. - x0 : ndarray - Initial guess. - fprime : callable f'(x,*args) - Gradient of f. - args : tuple - Extra arguments passed to f and fprime. - gtol : float - Gradient norm must be less than gtol before succesful termination. - norm : float - Order of norm (Inf is max, -Inf is min) - epsilon : int or ndarray - If fprime is approximated, use this value for the step size. - callback : callable - An optional user-supplied function to call after each - iteration. Called as callback(xk), where xk is the - current parameter vector. - - :Returns: (xopt, {fopt, gopt, Hopt, func_calls, grad_calls, warnflag}, ) - - xopt : ndarray - Parameters which minimize f, i.e. f(xopt) == fopt. - fopt : float - Minimum value. - gopt : ndarray - Value of gradient at minimum, f'(xopt), which should be near 0. - Bopt : ndarray - Value of 1/f''(xopt), i.e. the inverse hessian matrix. - func_calls : int - Number of function_calls made. - grad_calls : int - Number of gradient calls made. - warnflag : integer - 1 : Maximum number of iterations exceeded. - 2 : Gradient and/or function calls not changing. - allvecs : list - Results at each iteration. Only returned if retall is True. - - *Other Parameters*: - maxiter : int - Maximum number of iterations to perform. - full_output : bool - If True,return fopt, func_calls, grad_calls, and warnflag - in addition to xopt. - disp : bool - Print convergence message if True. - retall : bool - Return a list of results at each iteration if True. - - :Notes: - - Optimize the function, f, whose gradient is given by fprime - using the quasi-Newton method of Broyden, Fletcher, Goldfarb, - and Shanno (BFGS) See Wright, and Nocedal 'Numerical - Optimization', 1999, pg. 198. + Parameters + ---------- + f : callable f(x,*args) + Objective function to be minimized. + x0 : ndarray + Initial guess. + fprime : callable f'(x,*args) + Gradient of f. + args : tuple + Extra arguments passed to f and fprime. + gtol : float + Gradient norm must be less than gtol before succesful termination. + norm : float + Order of norm (Inf is max, -Inf is min) + epsilon : int or ndarray + If fprime is approximated, use this value for the step size. + callback : callable + An optional user-supplied function to call after each + iteration. Called as callback(xk), where xk is the + current parameter vector. + + Returns + ------- + xopt : ndarray + Parameters which minimize f, i.e. f(xopt) == fopt. + fopt : float + Minimum value. 
+ gopt : ndarray + Value of gradient at minimum, f'(xopt), which should be near 0. + Bopt : ndarray + Value of 1/f''(xopt), i.e. the inverse hessian matrix. + func_calls : int + Number of function_calls made. + grad_calls : int + Number of gradient calls made. + warnflag : integer + 1 : Maximum number of iterations exceeded. + 2 : Gradient and/or function calls not changing. + allvecs : list + Results at each iteration. Only returned if retall is True. + + Other Parameters + ---------------- + maxiter : int + Maximum number of iterations to perform. + full_output : bool + If True,return fopt, func_calls, grad_calls, and warnflag + in addition to xopt. + disp : bool + Print convergence message if True. + retall : bool + Return a list of results at each iteration if True. - *See Also*: - - scikits.openopt : SciKit which offers a unified syntax to call - this and other solvers. + Notes + ----- + Optimize the function, f, whose gradient is given by fprime + using the quasi-Newton method of Broyden, Fletcher, Goldfarb, + and Shanno (BFGS) See Wright, and Nocedal 'Numerical + Optimization', 1999, pg. 198. """ x0 = asarray(x0).squeeze() @@ -801,61 +806,63 @@ maxiter=None, full_output=0, disp=1, retall=0, callback=None): """Minimize a function using a nonlinear conjugate gradient algorithm. - :Parameters: - f : callable f(x,*args) - Objective function to be minimized. - x0 : ndarray - Initial guess. - fprime : callable f'(x,*args) - Function which computes the gradient of f. - args : tuple - Extra arguments passed to f and fprime. - gtol : float - Stop when norm of gradient is less than gtol. - norm : float - Order of vector norm to use. -Inf is min, Inf is max. - epsilon : float or ndarray - If fprime is approximated, use this value for the step - size (can be scalar or vector). - callback : callable - An optional user-supplied function, called after each - iteration. Called as callback(xk), where xk is the - current parameter vector. - - :Returns: (xopt, {fopt, func_calls, grad_calls, warnflag}, {allvecs}) - - xopt : ndarray - Parameters which minimize f, i.e. f(xopt) == fopt. - fopt : float - Minimum value found, f(xopt). - func_calls : int - The number of function_calls made. - grad_calls : int - The number of gradient calls made. - warnflag : int - 1 : Maximum number of iterations exceeded. - 2 : Gradient and/or function calls not changing. - allvecs : ndarray - If retall is True (see other parameters below), then this - vector containing the result at each iteration is returned. - - *Other Parameters*: - maxiter : int - Maximum number of iterations to perform. - full_output : bool - If True then return fopt, func_calls, grad_calls, and - warnflag in addition to xopt. - disp : bool - Print convergence message if True. - retall : bool - return a list of results at each iteration if True. - - :Notes: - - Optimize the function, f, whose gradient is given by fprime - using the nonlinear conjugate gradient algorithm of Polak and - Ribiere See Wright, and Nocedal 'Numerical Optimization', - 1999, pg. 120-122. + Parameters + ---------- + f : callable f(x,*args) + Objective function to be minimized. + x0 : ndarray + Initial guess. + fprime : callable f'(x,*args) + Function which computes the gradient of f. + args : tuple + Extra arguments passed to f and fprime. + gtol : float + Stop when norm of gradient is less than gtol. + norm : float + Order of vector norm to use. -Inf is min, Inf is max. 
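To make the ``fprime`` and ``callback`` parameters of ``fmin_bfgs`` described above concrete, a minimal sketch; the objective, gradient and tolerances are illustrative::

    import numpy as np
    from scipy.optimize import fmin_bfgs

    def f(x):
        return (x[0] - 3.0)**2 + 4.0 * (x[1] + 1.0)**2

    def grad(x):
        # Analytic gradient of f, passed as fprime.
        return np.array([2.0 * (x[0] - 3.0), 8.0 * (x[1] + 1.0)])

    iterates = []
    xopt = fmin_bfgs(f, np.zeros(2), fprime=grad, gtol=1e-8,
                     callback=iterates.append, disp=False)
    print(xopt)           # approximately [3., -1.]
    print(len(iterates))  # one callback(xk) invocation per iteration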
+ epsilon : float or ndarray + If fprime is approximated, use this value for the step + size (can be scalar or vector). + callback : callable + An optional user-supplied function, called after each + iteration. Called as callback(xk), where xk is the + current parameter vector. + + Returns + ------- + xopt : ndarray + Parameters which minimize f, i.e. f(xopt) == fopt. + fopt : float + Minimum value found, f(xopt). + func_calls : int + The number of function_calls made. + grad_calls : int + The number of gradient calls made. + warnflag : int + 1 : Maximum number of iterations exceeded. + 2 : Gradient and/or function calls not changing. + allvecs : ndarray + If retall is True (see other parameters below), then this + vector containing the result at each iteration is returned. + + Other Parameters + ---------------- + maxiter : int + Maximum number of iterations to perform. + full_output : bool + If True then return fopt, func_calls, grad_calls, and + warnflag in addition to xopt. + disp : bool + Print convergence message if True. + retall : bool + Return a list of results at each iteration if True. + + Notes + ----- + Optimize the function, f, whose gradient is given by fprime + using the nonlinear conjugate gradient algorithm of Polak and + Ribiere. See Wright & Nocedal, 'Numerical Optimization', + 1999, pg. 120-122. """ x0 = asarray(x0).flatten() @@ -955,72 +962,72 @@ callback=None): """Minimize a function using the Newton-CG method. - :Parameters: + Parameters + ---------- + f : callable f(x,*args) + Objective function to be minimized. + x0 : ndarray + Initial guess. + fprime : callable f'(x,*args) + Gradient of f. + fhess_p : callable fhess_p(x,p,*args) + Function which computes the Hessian of f times an + arbitrary vector, p. + fhess : callable fhess(x,*args) + Function to compute the Hessian matrix of f. + args : tuple + Extra arguments passed to f, fprime, fhess_p, and fhess + (the same set of extra arguments is supplied to all of + these functions). + epsilon : float or ndarray + If fhess is approximated, use this value for the step size. + callback : callable + An optional user-supplied function which is called after + each iteration. Called as callback(xk), where xk is the + current parameter vector. + + Returns + ------- + xopt : ndarray + Parameters which minimizer f, i.e. ``f(xopt) == fopt``. + fopt : float + Value of the function at xopt, i.e. ``fopt = f(xopt)``. + fcalls : int + Number of function calls made. + gcalls : int + Number of gradient calls made. + hcalls : int + Number of hessian calls made. + warnflag : int + Warnings generated by the algorithm. + 1 : Maximum number of iterations exceeded. + allvecs : list + The result at each iteration, if retall is True (see below). + + Other Parameters + ---------------- + avextol : float + Convergence is assumed when the average relative error in + the minimizer falls below this amount. + maxiter : int + Maximum number of iterations to perform. + full_output : bool + If True, return the optional outputs. + disp : bool + If True, print convergence message. + retall : bool + If True, return a list of results at each iteration. - f : callable f(x,*args) - Objective function to be minimized. - x0 : ndarray - Initial guess. - fprime : callable f'(x,*args) - Gradient of f. - fhess_p : callable fhess_p(x,p,*args) - Function which computes the Hessian of f times an - arbitrary vector, p. - fhess : callable fhess(x,*args) - Function to compute the Hessian matrix of f. 
- args : tuple - Extra arguments passed to f, fprime, fhess_p, and fhess - (the same set of extra arguments is supplied to all of - these functions). - epsilon : float or ndarray - If fhess is approximated, use this value for the step size. - callback : callable - An optional user-supplied function which is called after - each iteration. Called as callback(xk), where xk is the - current parameter vector. - - :Returns: (xopt, {fopt, fcalls, gcalls, hcalls, warnflag},{allvecs}) - - xopt : ndarray - Parameters which minimizer f, i.e. ``f(xopt) == fopt``. - fopt : float - Value of the function at xopt, i.e. ``fopt = f(xopt)``. - fcalls : int - Number of function calls made. - gcalls : int - Number of gradient calls made. - hcalls : int - Number of hessian calls made. - warnflag : int - Warnings generated by the algorithm. - 1 : Maximum number of iterations exceeded. - allvecs : list - The result at each iteration, if retall is True (see below). - - *Other Parameters*: - - avextol : float - Convergence is assumed when the average relative error in - the minimizer falls below this amount. - maxiter : int - Maximum number of iterations to perform. - full_output : bool - If True, return the optional outputs. - disp : bool - If True, print convergence message. - retall : bool - If True, return a list of results at each iteration. - - :Notes: - 1. scikits.openopt offers a unified syntax to call this and other solvers. - 2. Only one of `fhess_p` or `fhess` need to be given. If `fhess` - is provided, then `fhess_p` will be ignored. If neither `fhess` - nor `fhess_p` is provided, then the hessian product will be - approximated using finite differences on `fprime`. `fhess_p` - must compute the hessian times an arbitrary vector. If it is not - given, finite-differences on `fprime` are used to compute - it. See Wright, and Nocedal 'Numerical Optimization', 1999, - pg. 140. + Notes + ----- + Only one of `fhess_p` or `fhess` need to be given. If `fhess` + is provided, then `fhess_p` will be ignored. If neither `fhess` + nor `fhess_p` is provided, then the hessian product will be + approximated using finite differences on `fprime`. `fhess_p` + must compute the hessian times an arbitrary vector. If it is not + given, finite-differences on `fprime` are used to compute + it. See Wright & Nocedal, 'Numerical Optimization', 1999, + pg. 140. """ x0 = asarray(x0).flatten() @@ -1066,7 +1073,7 @@ # check curvature Ap = asarray(Ap).squeeze() # get rid of matrices... curv = numpy.dot(psupi,Ap) - if curv == 0.0: + if 0 <= curv <= 3*numpy.finfo(numpy.float64).eps: break elif curv < 0: if (i > 0): @@ -1132,58 +1139,55 @@ full_output=0, disp=1): """Bounded minimization for scalar functions. - :Parameters: - - func : callable f(x,*args) - Objective function to be minimized (must accept and return scalars). - x1, x2 : float or array scalar - The optimization bounds. - args : tuple - Extra arguments passed to function. - xtol : float - The convergence tolerance. - maxfun : int - Maximum number of function evaluations allowed. - full_output : bool - If True, return optional outputs. - disp : int - If non-zero, print messages. - 0 : no message printing. - 1 : non-convergence notification messages only. - 2 : print a message on convergence too. - 3 : print iteration results. - - - :Returns: (xopt, {fval, ierr, numfunc}) - - xopt : ndarray - Parameters (over given interval) which minimize the - objective function. - fval : number - The function value at the minimum point. 
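The ``fmin_ncg`` notes above explain that either ``fhess`` or ``fhess_p`` may supply second-order information; a minimal sketch with an explicit Hessian, which also exercises the Newton-CG inner loop whose zero-curvature test was broadened to a small epsilon band in the hunk above (the quadratic model is illustrative)::

    import numpy as np
    from scipy.optimize import fmin_ncg

    def f(x):
        return 0.5 * (3.0 * x[0]**2 + 2.0 * x[0] * x[1] + 3.0 * x[1]**2) - x[0]

    def grad(x):
        return np.array([3.0 * x[0] + x[1] - 1.0, x[0] + 3.0 * x[1]])

    def hess(x):
        # Constant, positive-definite Hessian of the quadratic above.
        return np.array([[3.0, 1.0], [1.0, 3.0]])

    xopt = fmin_ncg(f, np.zeros(2), fprime=grad, fhess=hess, disp=False)
    print(xopt)  # solves H x = [1, 0]: approximately [0.375, -0.125]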
- ierr : int - An error flag (0 if converged, 1 if maximum number of - function calls reached). - numfunc : int - The number of function calls made. - - - :Notes: - - Finds a local minimizer of the scalar function `func` in the - interval x1 < xopt < x2 using Brent's method. (See `brent` - for auto-bracketing). + Parameters + ---------- + func : callable f(x,*args) + Objective function to be minimized (must accept and return scalars). + x1, x2 : float or array scalar + The optimization bounds. + args : tuple + Extra arguments passed to function. + xtol : float + The convergence tolerance. + maxfun : int + Maximum number of function evaluations allowed. + full_output : bool + If True, return optional outputs. + disp : int + If non-zero, print messages. + 0 : no message printing. + 1 : non-convergence notification messages only. + 2 : print a message on convergence too. + 3 : print iteration results. + + + Returns + ------- + xopt : ndarray + Parameters (over given interval) which minimize the + objective function. + fval : number + The function value at the minimum point. + ierr : int + An error flag (0 if converged, 1 if maximum number of + function calls reached). + numfunc : int + The number of function calls made. + Notes + ----- + Finds a local minimizer of the scalar function `func` in the + interval x1 < xopt < x2 using Brent's method. (See `brent` + for auto-bracketing). """ # Test bounds are of correct form - x1 = atleast_1d(x1) - x2 = atleast_1d(x2) - if len(x1) != 1 or len(x2) != 1: - raise ValueError, "Optimisation bounds must be scalars" \ - " or length 1 arrays" + + if not (is_array_scalar(x1) and is_array_scalar(x2)): + raise ValueError("Optimisation bounds must be scalars" + " or array scalars.") if x1 > x2: - raise ValueError, "The lower bound exceeds the upper bound." + raise ValueError("The lower bound exceeds the upper bound.") flag = 0 header = ' Func-count x f(x) Procedure' @@ -1324,7 +1328,8 @@ if brack is None: xa,xb,xc,fa,fb,fc,funcalls = bracket(func, args=args) elif len(brack) == 2: - xa,xb,xc,fa,fb,fc,funcalls = bracket(func, xa=brack[0], xb=brack[1], args=args) + xa,xb,xc,fa,fb,fc,funcalls = bracket(func, xa=brack[0], + xb=brack[1], args=args) elif len(brack) == 3: xa,xb,xc = brack if (xa > xc): # swap so xa < xc can be assumed @@ -1336,7 +1341,8 @@ assert ((fb f(xb) < f(xc). It doesn't always mean that obtained solution will satisfy xa<=x<=xb - :Parameters: - - func : callable f(x,*args) - Objective function to minimize. - xa, xb : float - Bracketing interval. - args : tuple - Additional arguments (if present), passed to `func`. - grow_limit : float - Maximum grow limit. - maxiter : int - Maximum number of iterations to perform. - - :Returns: xa, xb, xc, fa, fb, fc, funcalls - - xa, xb, xc : float - Bracket. - fa, fb, fc : float - Objective function values in bracket. - funcalls : int - Number of function evaluations made. + Parameters + ---------- + func : callable f(x,*args) + Objective function to minimize. + xa, xb : float + Bracketing interval. + args : tuple + Additional arguments (if present), passed to `func`. + grow_limit : float + Maximum grow limit. + maxiter : int + Maximum number of iterations to perform. + + Returns + ------- + xa, xb, xc : float + Bracket. + fa, fb, fc : float + Objective function values in bracket. + funcalls : int + Number of function evaluations made. """ _gold = 1.618034 @@ -1660,63 +1665,62 @@ direc=None): """Minimize a function using modified Powell's method. 
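One observation on the hunks above: ``is_array_scalar`` is written as ``len(atleast_1d(x) == 1)``, which returns the element count (truthy for any non-empty input) rather than the boolean ``len(atleast_1d(x)) == 1`` that its name and the new ``fminbound`` guard appear to intend. A minimal sketch of the documented bound-checking behaviour, with illustrative values::

    import numpy as np
    from scipy.optimize import fminbound

    f = lambda x: (x - 1.5)**2 - 0.8

    xmin = fminbound(f, 1.0, 5.0)           # plain float bounds
    xmin_0d = fminbound(f, 1, np.array(5))  # array scalar bound, matching the
                                            # new test_fminbound_scalar case
    print(xmin)     # approximately 1.5
    print(xmin_0d)  # approximately 1.5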
- :Parameters: + Parameters + ---------- + func : callable f(x,*args) + Objective function to be minimized. + x0 : ndarray + Initial guess. + args : tuple + Eextra arguments passed to func. + callback : callable + An optional user-supplied function, called after each + iteration. Called as ``callback(xk)``, where ``xk`` is the + current parameter vector. + direc : ndarray + Initial direction set. + + Returns + ------- + xopt : ndarray + Parameter which minimizes `func`. + fopt : number + Value of function at minimum: ``fopt = func(xopt)``. + direc : ndarray + Current direction set. + iter : int + Number of iterations. + funcalls : int + Number of function calls made. + warnflag : int + Integer warning flag: + 1 : Maximum number of function evaluations. + 2 : Maximum number of iterations. + allvecs : list + List of solutions at each iteration. + + Other Parameters + ---------------- + xtol : float + Line-search error tolerance. + ftol : float + Relative error in ``func(xopt)`` acceptable for convergence. + maxiter : int + Maximum number of iterations to perform. + maxfun : int + Maximum number of function evaluations to make. + full_output : bool + If True, fopt, xi, direc, iter, funcalls, and + warnflag are returned. + disp : bool + If True, print convergence messages. + retall : bool + If True, return a list of the solution at each iteration. - func : callable f(x,*args) - Objective function to be minimized. - x0 : ndarray - Initial guess. - args : tuple - Eextra arguments passed to func. - callback : callable - An optional user-supplied function, called after each - iteration. Called as ``callback(xk)``, where ``xk`` is the - current parameter vector. - direc : ndarray - Initial direction set. - - :Returns: (xopt, {fopt, xi, direc, iter, funcalls, warnflag}, {allvecs}) - - xopt : ndarray - Parameter which minimizes `func`. - fopt : number - Value of function at minimum: ``fopt = func(xopt)``. - direc : ndarray - Current direction set. - iter : int - Number of iterations. - funcalls : int - Number of function calls made. - warnflag : int - Integer warning flag: - 1 : Maximum number of function evaluations. - 2 : Maximum number of iterations. - allvecs : list - List of solutions at each iteration. - - *Other Parameters*: - - xtol : float - Line-search error tolerance. - ftol : float - Relative error in ``func(xopt)`` acceptable for convergence. - maxiter : int - Maximum number of iterations to perform. - maxfun : int - Maximum number of function evaluations to make. - full_output : bool - If True, fopt, xi, direc, iter, funcalls, and - warnflag are returned. - disp : bool - If True, print convergence messages. - retall : bool - If True, return a list of the solution at each iteration. - - - :Notes: - - Uses a modification of Powell's method to find the minimum of - a function of N variables. + Notes + ----- + Uses a modification of Powell's method to find the minimum of + a function of N variables. """ # we need to use a mutable object here that we can update in the @@ -1830,36 +1834,36 @@ def brute(func, ranges, args=(), Ns=20, full_output=0, finish=fmin): """Minimize a function over a given range by brute force. - :Parameters: + Parameters + ---------- + func : callable ``f(x,*args)`` + Objective function to be minimized. + ranges : tuple + Each element is a tuple of parameters or a slice object to + be handed to ``numpy.mgrid``. + args : tuple + Extra arguments passed to function. + Ns : int + Default number of samples, if those are not provided. 
+ full_output : bool + If True, return the evaluation grid. + + Returns + ------- + x0 : ndarray + Value of arguments to `func`, giving minimum over the grid. + fval : int + Function value at minimum. + grid : tuple + Representation of the evaluation grid. It has the same + length as x0. + Jout : ndarray + Function values over grid: ``Jout = func(*grid)``. - func : callable ``f(x,*args)`` - Objective function to be minimized. - ranges : tuple - Each element is a tuple of parameters or a slice object to - be handed to ``numpy.mgrid``. - args : tuple - Extra arguments passed to function. - Ns : int - Default number of samples, if those are not provided. - full_output : bool - If True, return the evaluation grid. - - :Returns: (x0, fval, {grid, Jout}) - - x0 : ndarray - Value of arguments to `func`, giving minimum over the grid. - fval : int - Function value at minimum. - grid : tuple - Representation of the evaluation grid. It has the same - length as x0. - Jout : ndarray - Function values over grid: ``Jout = func(*grid)``. - - :Notes: - - Find the minimum of a function evaluated on a grid given by - the tuple ranges. + Notes + ----- + Find the minimum of a function evaluated on a grid given by + the tuple ranges. """ N = len(ranges) diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/slsqp/slsqp_optmz.f python-scipy-0.8.0+dfsg1/scipy/optimize/slsqp/slsqp_optmz.f --- python-scipy-0.7.2+dfsg1/scipy/optimize/slsqp/slsqp_optmz.f 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/slsqp/slsqp_optmz.f 2010-07-26 15:48:33.000000000 +0100 @@ -1,3 +1,35 @@ +C +C ALGORITHM 733, COLLECTED ALGORITHMS FROM ACM. +C TRANSACTIONS ON MATHEMATICAL SOFTWARE, +C VOL. 20, NO. 3, SEPTEMBER, 1994, PP. 262-281. +C http://doi.acm.org/10.1145/192115.192124 +C +C +C http://permalink.gmane.org/gmane.comp.python.scientific.devel/6725 +C ------ +C From: Deborah Cotton +C Date: Fri, 14 Sep 2007 12:35:55 -0500 +C Subject: RE: Algorithm License requested +C To: Alan Isaac +C +C Prof. Issac, +C +C In that case, then because the author consents to [the ACM] releasing +C the code currently archived at http://www.netlib.org/toms/733 under the +C BSD license, the ACM hereby releases this code under the BSD license. +C +C Regards, +C +C Deborah Cotton, Copyright & Permissions +C ACM Publications +C 2 Penn Plaza, Suite 701** +C New York, NY 10121-0701 +C permissions@acm.org +C 212.869.7440 ext. 652 +C Fax. 212.869.0481 +C ------ +C + ************************************************************************ * optimizer * ************************************************************************ diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/slsqp.py python-scipy-0.8.0+dfsg1/scipy/optimize/slsqp.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/slsqp.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/slsqp.py 2010-07-26 15:48:33.000000000 +0100 @@ -17,20 +17,28 @@ _epsilon = sqrt(finfo(float).eps) def approx_jacobian(x,func,epsilon,*args): - """Approximate the Jacobian matrix of callable function func + """Approximate the Jacobian matrix of a callable function. 
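A sketch of the ``brute`` interface documented above, with ``ranges`` given as slice objects handed to ``numpy.mgrid``; the objective and grid spacing are illustrative::

    import numpy as np
    from scipy.optimize import brute

    def f(z):
        x, y = z
        return (x - 0.2)**2 + (y + 0.7)**2 + 0.3 * np.sin(5.0 * x)

    # One slice per parameter; step 0.05 over each interval.
    ranges = (slice(-1.0, 1.0, 0.05), slice(-2.0, 2.0, 0.05))
    x0, fval, grid, Jout = brute(f, ranges, full_output=True)
    print(x0)          # grid minimum, polished by the default finish=fmin
    print(Jout.shape)  # function values over the whole evaluation grid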
- *Parameters*: - x - The state vector at which the Jacobian matrix is desired - func - A vector-valued function of the form f(x,*args) - epsilon - The peturbation used to determine the partial derivatives - *args - Additional arguments passed to func - - *Returns*: - An array of dimensions (lenf, lenx) where lenf is the length - of the outputs of func, and lenx is the number of - - *Notes*: - The approximation is done using forward differences + Parameters + ---------- + x : array_like + The state vector at which to compute the Jacobian matrix. + func : callable f(x, *args) + The vector-valued function. + epsilon : float\ + The peturbation used to determine the partial derivatives. + *args : tuple + Additional arguments passed to func. + + Returns + ------- + An array of dimensions ``(lenf, lenx)`` where ``lenf`` is the length + of the outputs of `func`, and ``lenx`` is the number of elements in + `x`. + + Notes + ----- + The approximation is done using forward differences. """ x0 = asfarray(x) @@ -44,8 +52,6 @@ return jac.transpose() - - def fmin_slsqp( func, x0 , eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None, bounds = [], fprime = None, fprime_eqcons=None, fprime_ieqcons=None, args = (), iter = 100, acc = 1.0E-6, @@ -56,86 +62,94 @@ Python interface function for the SLSQP Optimization subroutine originally implemented by Dieter Kraft. - *Parameters*: - func : callable f(x,*args) - Objective function. - x0 : ndarray of float - Initial guess for the independent variable(s). - eqcons : list - A list of functions of length n such that - eqcons[j](x0,*args) == 0.0 in a successfully optimized - problem. - f_eqcons : callable f(x,*args) - Returns an array in which each element must equal 0.0 in a - successfully optimized problem. If f_eqcons is specified, - eqcons is ignored. - ieqcons : list - A list of functions of length n such that - ieqcons[j](x0,*args) >= 0.0 in a successfully optimized - problem. - f_ieqcons : callable f(x0,*args) - Returns an array in which each element must be greater or - equal to 0.0 in a successfully optimized problem. If - f_ieqcons is specified, ieqcons is ignored. - bounds : list - A list of tuples specifying the lower and upper bound - for each independent variable [(xl0, xu0),(xl1, xu1),...] - fprime : callable f(x,*args) - A function that evaluates the partial derivatives of func. - fprime_eqcons : callable f(x,*args) - A function of the form f(x, *args) that returns the m by n - array of equality constraint normals. If not provided, - the normals will be approximated. The array returned by - fprime_eqcons should be sized as ( len(eqcons), len(x0) ). - fprime_ieqcons : callable f(x,*args) - A function of the form f(x, *args) that returns the m by n - array of inequality constraint normals. If not provided, - the normals will be approximated. The array returned by - fprime_ieqcons should be sized as ( len(ieqcons), len(x0) ). - args : sequence - Additional arguments passed to func and fprime. - iter : int - The maximum number of iterations. - acc : float - Requested accuracy. - iprint : int - The verbosity of fmin_slsqp: - iprint <= 0 : Silent operation - iprint == 1 : Print summary upon completion (default) - iprint >= 2 : Print status of each iterate and summary - full_output : bool - If False, return only the minimizer of func (default). - Otherwise, output final objective function and summary - information. - epsilon : float - The step size for finite-difference derivative estimates. 
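A small sketch of the ``approx_jacobian`` helper whose docstring is converted above, checked against a function with a known Jacobian; the import path simply follows the ``scipy/optimize/slsqp.py`` file being patched, and the test function is illustrative::

    import numpy as np
    from scipy.optimize.slsqp import approx_jacobian

    def f(x):
        # R^2 -> R^3 mapping with an easy analytic Jacobian.
        return np.array([x[0]**2, x[0] * x[1], np.sin(x[1])])

    x = np.array([1.0, 2.0])
    jac = approx_jacobian(x, f, 1e-8)
    print(jac.shape)  # (3, 2): one row per output of f, one column per element of x
    print(jac)        # forward differences, close to [[2, 0], [2, 1], [0, cos(2)]]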
- - *Returns*: ( x, { fx, its, imode, smode }) - x : ndarray of float - The final minimizer of func. - fx : ndarray of float - The final value of the objective function. - its : int - The number of iterations. - imode : int - The exit mode from the optimizer (see below). - smode : string - Message describing the exit mode from the optimizer. - - *Notes* - - Exit modes are defined as follows: - -1 : Gradient evaluation required (g & a) - 0 : Optimization terminated successfully. - 1 : Function evaluation required (f & c) - 2 : More equality constraints than independent variables - 3 : More than 3*n iterations in LSQ subproblem - 4 : Inequality constraints incompatible - 5 : Singular matrix E in LSQ subproblem - 6 : Singular matrix C in LSQ subproblem - 7 : Rank-deficient equality constraint subproblem HFTI - 8 : Positive directional derivative for linesearch - 9 : Iteration limit exceeded + Parameters + ---------- + func : callable f(x,*args) + Objective function. + x0 : ndarray of float + Initial guess for the independent variable(s). + eqcons : list + A list of functions of length n such that + eqcons[j](x0,*args) == 0.0 in a successfully optimized + problem. + f_eqcons : callable f(x,*args) + Returns an array in which each element must equal 0.0 in a + successfully optimized problem. If f_eqcons is specified, + eqcons is ignored. + ieqcons : list + A list of functions of length n such that + ieqcons[j](x0,*args) >= 0.0 in a successfully optimized + problem. + f_ieqcons : callable f(x0,*args) + Returns an array in which each element must be greater or + equal to 0.0 in a successfully optimized problem. If + f_ieqcons is specified, ieqcons is ignored. + bounds : list + A list of tuples specifying the lower and upper bound + for each independent variable [(xl0, xu0),(xl1, xu1),...] + fprime : callable `f(x,*args)` + A function that evaluates the partial derivatives of func. + fprime_eqcons : callable `f(x,*args)` + A function of the form `f(x, *args)` that returns the m by n + array of equality constraint normals. If not provided, + the normals will be approximated. The array returned by + fprime_eqcons should be sized as ( len(eqcons), len(x0) ). + fprime_ieqcons : callable `f(x,*args)` + A function of the form `f(x, *args)` that returns the m by n + array of inequality constraint normals. If not provided, + the normals will be approximated. The array returned by + fprime_ieqcons should be sized as ( len(ieqcons), len(x0) ). + args : sequence + Additional arguments passed to func and fprime. + iter : int + The maximum number of iterations. + acc : float + Requested accuracy. + iprint : int + The verbosity of fmin_slsqp : + + * iprint <= 0 : Silent operation + * iprint == 1 : Print summary upon completion (default) + * iprint >= 2 : Print status of each iterate and summary + full_output : bool + If False, return only the minimizer of func (default). + Otherwise, output final objective function and summary + information. + epsilon : float + The step size for finite-difference derivative estimates. + + Returns + ------- + x : ndarray of float + The final minimizer of func. + fx : ndarray of float, if full_output is true + The final value of the objective function. + its : int, if full_output is true + The number of iterations. + imode : int, if full_output is true + The exit mode from the optimizer (see below). + smode : string, if full_output is true + Message describing the exit mode from the optimizer. 
+ + Notes + ----- + Exit modes are defined as follows :: + + -1 : Gradient evaluation required (g & a) + 0 : Optimization terminated successfully. + 1 : Function evaluation required (f & c) + 2 : More equality constraints than independent variables + 3 : More than 3*n iterations in LSQ subproblem + 4 : Inequality constraints incompatible + 5 : Singular matrix E in LSQ subproblem + 6 : Singular matrix C in LSQ subproblem + 7 : Rank-deficient equality constraint subproblem HFTI + 8 : Positive directional derivative for linesearch + 9 : Iteration limit exceeded + + Examples + -------- + Examples are given :ref:`in the tutorial `. """ diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/tests/test_minpack.py python-scipy-0.8.0+dfsg1/scipy/optimize/tests/test_minpack.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/tests/test_minpack.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/tests/test_minpack.py 2010-07-26 15:48:33.000000000 +0100 @@ -0,0 +1,152 @@ +""" +Unit tests for optimization routines from minpack.py. +""" + +from numpy.testing import * +import numpy as np +from numpy import array, float64 + +from scipy import optimize +from scipy.optimize.minpack import fsolve, leastsq, curve_fit + +class TestFSolve(TestCase): + def pressure_network(self, flow_rates, Qtot, k): + """Evaluate non-linear equation system representing + the pressures and flows in a system of n parallel pipes:: + + f_i = P_i - P_0, for i = 1..n + f_0 = sum(Q_i) - Qtot + + Where Q_i is the flow rate in pipe i and P_i the pressure in that pipe. + Pressure is modeled as a P=kQ**2 where k is a valve coefficient and + Q is the flow rate. + + Parameters + ---------- + flow_rates : float + A 1D array of n flow rates [kg/s]. + k : float + A 1D array of n valve coefficients [1/kg m]. + Qtot : float + A scalar, the total input flow rate [kg/s]. + + Returns + ------- + F : float + A 1D array, F[i] == f_i. + + """ + P = k * flow_rates**2 + F = np.hstack((P[1:] - P[0], flow_rates.sum() - Qtot)) + return F + + def pressure_network_jacobian(self, flow_rates, Qtot, k): + """Return the jacobian of the equation system F(flow_rates) + computed by `pressure_network` with respect to + *flow_rates*. See `pressure_network` for the detailed + description of parrameters. 
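Tying the converted ``fmin_slsqp`` docstring above to a concrete call, with one equality constraint, simple bounds and ``full_output`` so the exit mode from the table can be inspected; the objective and constraint are illustrative::

    import numpy as np
    from scipy.optimize import fmin_slsqp

    def objective(x):
        return x[0]**2 + x[1]**2

    x, fx, its, imode, smode = fmin_slsqp(
        objective, np.array([1.0, 1.0]),
        eqcons=[lambda x: x[0] + x[1] - 1.0],   # enforce x0 + x1 == 1
        bounds=[(0.0, 1.0), (0.0, 1.0)],
        iprint=0, full_output=True)
    print(x)      # approximately [0.5, 0.5]
    print(imode)  # 0: optimization terminated successfully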
+ + Returns + ------- + jac : float + *n* by *n* matrix ``df_i/dQ_i`` where ``n = len(flow_rates)`` + and *f_i* and *Q_i* are described in the doc for `pressure_network` + """ + n = len(flow_rates) + pdiff = np.diag(flow_rates[1:] * 2 * k[1:] - 2 * flow_rates[0] * k[0]) + + jac = np.empty((n, n)) + jac[:n-1, :n-1] = pdiff + jac[:n-1, n-1] = 0 + jac[n-1, :] = np.ones(n) + + return jac + + def test_pressure_network_no_gradient(self): + """fsolve without gradient, equal pipes -> equal flows""" + k = np.ones(4) * 0.5 + Qtot = 4 + initial_guess = array([2., 0., 2., 0.]) + final_flows = optimize.fsolve( + self.pressure_network, initial_guess, args=(Qtot, k)) + assert_array_almost_equal(final_flows, np.ones(4)) + + def test_pressure_network_with_gradient(self): + """fsolve with gradient, equal pipes -> equal flows""" + k = np.ones(4) * 0.5 + Qtot = 4 + initial_guess = array([2., 0., 2., 0.]) + final_flows = optimize.fsolve( + self.pressure_network, initial_guess, args=(Qtot, k), + fprime=self.pressure_network_jacobian) + assert_array_almost_equal(final_flows, np.ones(4)) + +class TestLeastSq(TestCase): + def setUp(self): + x = np.linspace(0, 10, 40) + a,b,c = 3.1, 42, -304.2 + self.x = x + self.abc = a,b,c + y_true = a*x**2 + b*x + c + np.random.seed(0) + self.y_meas = y_true + 0.01*np.random.standard_normal(y_true.shape) + + def residuals(self, p, y, x): + a,b,c = p + err = y-(a*x**2 + b*x + c) + return err + + def test_basic(self): + p0 = array([0,0,0]) + params_fit, ier = leastsq(self.residuals, p0, + args=(self.y_meas, self.x)) + assert_(ier in (1,2,3,4), 'solution not found (ier=%d)'%ier) + # low precision due to random + assert_array_almost_equal(params_fit, self.abc, decimal=2) + + def test_full_output(self): + p0 = array([0,0,0]) + full_output = leastsq(self.residuals, p0, + args=(self.y_meas, self.x), + full_output=True) + params_fit, cov_x, infodict, mesg, ier = full_output + assert_(ier in (1,2,3,4), 'solution not found: %s'%mesg) + + def test_input_untouched(self): + p0 = array([0,0,0],dtype=float64) + p0_copy = array(p0, copy=True) + full_output = leastsq(self.residuals, p0, + args=(self.y_meas, self.x), + full_output=True) + params_fit, cov_x, infodict, mesg, ier = full_output + assert_(ier in (1,2,3,4), 'solution not found: %s'%mesg) + assert_array_equal(p0, p0_copy) + +class TestCurveFit(TestCase): + def setUp(self): + self.y = array([1.0, 3.2, 9.5, 13.7]) + self.x = array([1.0, 2.0, 3.0, 4.0]) + + def test_one_argument(self): + def func(x,a): + return x**a + popt, pcov = curve_fit(func, self.x, self.y) + assert_(len(popt)==1) + assert_(pcov.shape==(1,1)) + assert_almost_equal(popt[0], 1.9149, decimal=4) + assert_almost_equal(pcov[0,0], 0.0016, decimal=4) + + def test_two_argument(self): + def func(x, a, b): + return b*x**a + popt, pcov = curve_fit(func, self.x, self.y) + assert_(len(popt)==2) + assert_(pcov.shape==(2,2)) + assert_array_almost_equal(popt, [1.7989, 1.1642], decimal=4) + assert_array_almost_equal(pcov, [[0.0852, -0.1260],[-0.1260, 0.1912]], decimal=4) + + + + +if __name__ == "__main__": + run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/tests/test_optimize.py python-scipy-0.8.0+dfsg1/scipy/optimize/tests/test_optimize.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/tests/test_optimize.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/tests/test_optimize.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,12 +1,13 @@ -""" Unit tests for optimization routines +""" +Unit tests for optimization routines from optimize.py and 
tnc.py + Authors: - Ed Schofield, Nov 2005 - Andrew Straw, April 2008 + Ed Schofield, Nov 2005 + Andrew Straw, April 2008 To run it in its simplest form:: nosetests test_optimize.py - """ from numpy.testing import * @@ -159,10 +160,17 @@ assert abs(x - 1.5) < 1e-6 assert_raises(ValueError, optimize.fminbound, lambda x: (x - 1.5)**2 - 0.8, 5, 1) + + def test_fminbound_scalar(self): assert_raises(ValueError, - optimize.fminbound, lambda x: (x - 1.5)**2 - 0.8, + optimize.fminbound, lambda x: (x - 1.5)**2 - 0.8, np.zeros(2), 1) + assert_almost_equal( + optimize.fminbound(lambda x: (x - 1.5)**2 - 0.8, 1, np.array(5)), + 1.5) + + class TestTnc(TestCase): """TNC non-linear optimization. @@ -262,44 +270,5 @@ if ef > 1e-8: raise err -class TestLeastSq(TestCase): - def setUp(self): - x = np.linspace(0, 10, 40) - a,b,c = 3.1, 42, -304.2 - self.x = x - self.abc = a,b,c - y_true = a*x**2 + b*x + c - self.y_meas = y_true + 0.01*np.random.standard_normal( y_true.shape ) - - def residuals(self, p, y, x): - a,b,c = p - err = y-(a*x**2 + b*x + c) - return err - - def test_basic(self): - p0 = array([0,0,0]) - params_fit, ier = leastsq(self.residuals, p0, - args=(self.y_meas, self.x)) - assert ier in (1,2,3,4), 'solution not found (ier=%d)'%ier - assert_array_almost_equal( params_fit, self.abc, decimal=2) # low precision due to random - - def test_full_output(self): - p0 = array([0,0,0]) - full_output = leastsq(self.residuals, p0, - args=(self.y_meas, self.x), - full_output=True) - params_fit, cov_x, infodict, mesg, ier = full_output - assert ier in (1,2,3,4), 'solution not found: %s'%mesg - - def test_input_untouched(self): - p0 = array([0,0,0],dtype=float64) - p0_copy = array(p0, copy=True) - full_output = leastsq(self.residuals, p0, - args=(self.y_meas, self.x), - full_output=True) - params_fit, cov_x, infodict, mesg, ier = full_output - assert ier in (1,2,3,4), 'solution not found: %s'%mesg - assert_array_equal(p0, p0_copy) - if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/tests/test_regression.py python-scipy-0.8.0+dfsg1/scipy/optimize/tests/test_regression.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/tests/test_regression.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/tests/test_regression.py 2010-07-26 15:48:33.000000000 +0100 @@ -0,0 +1,24 @@ +"""Regression tests for optimize. + +""" + +from numpy.testing import TestCase, run_module_suite, assert_almost_equal +import scipy.optimize + +class TestRegression(TestCase): + + def test_newton_x0_is_0(self): + """Ticket #1074""" + + tgt = 1 + res = scipy.optimize.newton(lambda x: x - 1, 0) + assert_almost_equal(res, tgt) + + def test_newton_integers(self): + """Ticket #1214""" + root = scipy.optimize.newton(lambda x: x**2 - 1, x0=2, + fprime=lambda x: 2*x) + assert_almost_equal(root, 1.0) + +if __name__ == "__main__": + run_module_suite() \ No newline at end of file diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/tnc.py python-scipy-0.8.0+dfsg1/scipy/optimize/tnc.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/tnc.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/tnc.py 2010-07-26 15:48:33.000000000 +0100 @@ -86,105 +86,86 @@ """Minimize a function with variables subject to bounds, using gradient information. - :Parameters: - func : callable func(x, *args) - Function to minimize. Should return f and g, where f is - the value of the function and g its gradient (a list of - floats). If the function returns None, the minimization - is aborted. 
- x0 : list of floats - Initial estimate of minimum. - fprime : callable fprime(x, *args) - Gradient of func. If None, then func must return the - function value and the gradient (f,g = func(x, *args)). - args : tuple - Arguments to pass to function. - approx_grad : bool - If true, approximate the gradient numerically. - bounds : list - (min, max) pairs for each element in x, defining the - bounds on that parameter. Use None or +/-inf for one of - min or max when there is no bound in that direction. - scale : list of floats - Scaling factors to apply to each variable. If None, the - factors are up-low for interval bounded variables and - 1+|x] fo the others. Defaults to None - offset : float - Value to substract from each variable. If None, the - offsets are (up+low)/2 for interval bounded variables - and x for the others. - messages : - Bit mask used to select messages display during - minimization values defined in the MSGS dict. Defaults to - MGS_ALL. - maxCGit : int - Maximum number of hessian*vector evaluations per main - iteration. If maxCGit == 0, the direction chosen is - -gradient if maxCGit < 0, maxCGit is set to - max(1,min(50,n/2)). Defaults to -1. - maxfun : int - Maximum number of function evaluation. if None, maxfun is - set to max(100, 10*len(x0)). Defaults to None. - eta : float - Severity of the line search. if < 0 or > 1, set to 0.25. - Defaults to -1. - stepmx : float - Maximum step for the line search. May be increased during - call. If too small, it will be set to 10.0. Defaults to 0. - accuracy : float - Relative precision for finite difference calculations. If - <= machine_precision, set to sqrt(machine_precision). - Defaults to 0. - fmin : float - Minimum function value estimate. Defaults to 0. - ftol : float - Precision goal for the value of f in the stoping criterion. - If ftol < 0.0, ftol is set to 0.0 defaults to -1. - xtol : float - Precision goal for the value of x in the stopping - criterion (after applying x scaling factors). If xtol < - 0.0, xtol is set to sqrt(machine_precision). Defaults to - -1. - pgtol : float - Precision goal for the value of the projected gradient in - the stopping criterion (after applying x scaling factors). - If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy). - Setting it to 0.0 is not recommended. Defaults to -1. - rescale : float - Scaling factor (in log10) used to trigger f value - rescaling. If 0, rescale at each iteration. If a large - value, never rescale. If < 0, rescale is set to 1.3. - - :Returns: - x : list of floats - The solution. - nfeval : int - The number of function evaluations. - rc : - Return code as defined in the RCSTRINGS dict. - - :SeeAlso: - - scikits.openopt, which offers a unified syntax to call this and other solvers - - - fmin, fmin_powell, fmin_cg, fmin_bfgs, fmin_ncg : - multivariate local optimizers - - - leastsq : nonlinear least squares minimizer - - - fmin_l_bfgs_b, fmin_tnc, fmin_cobyla : constrained - multivariate optimizers - - - anneal, brute : global optimizers - - - fminbound, brent, golden, bracket : local scalar minimizers + Parameters + ---------- + func : callable func(x, *args) + Function to minimize. Should return f and g, where f is + the value of the function and g its gradient (a list of + floats). If the function returns None, the minimization + is aborted. + x0 : list of floats + Initial estimate of minimum. + fprime : callable fprime(x, *args) + Gradient of func. If None, then func must return the + function value and the gradient (f,g = func(x, *args)). 
+ args : tuple + Arguments to pass to function. + approx_grad : bool + If true, approximate the gradient numerically. + bounds : list + (min, max) pairs for each element in x, defining the + bounds on that parameter. Use None or +/-inf for one of + min or max when there is no bound in that direction. + scale : list of floats + Scaling factors to apply to each variable. If None, the + factors are up-low for interval bounded variables and + 1+|x] fo the others. Defaults to None + offset : float + Value to substract from each variable. If None, the + offsets are (up+low)/2 for interval bounded variables + and x for the others. + messages : + Bit mask used to select messages display during + minimization values defined in the MSGS dict. Defaults to + MGS_ALL. + maxCGit : int + Maximum number of hessian*vector evaluations per main + iteration. If maxCGit == 0, the direction chosen is + -gradient if maxCGit < 0, maxCGit is set to + max(1,min(50,n/2)). Defaults to -1. + maxfun : int + Maximum number of function evaluation. if None, maxfun is + set to max(100, 10*len(x0)). Defaults to None. + eta : float + Severity of the line search. if < 0 or > 1, set to 0.25. + Defaults to -1. + stepmx : float + Maximum step for the line search. May be increased during + call. If too small, it will be set to 10.0. Defaults to 0. + accuracy : float + Relative precision for finite difference calculations. If + <= machine_precision, set to sqrt(machine_precision). + Defaults to 0. + fmin : float + Minimum function value estimate. Defaults to 0. + ftol : float + Precision goal for the value of f in the stoping criterion. + If ftol < 0.0, ftol is set to 0.0 defaults to -1. + xtol : float + Precision goal for the value of x in the stopping + criterion (after applying x scaling factors). If xtol < + 0.0, xtol is set to sqrt(machine_precision). Defaults to + -1. + pgtol : float + Precision goal for the value of the projected gradient in + the stopping criterion (after applying x scaling factors). + If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy). + Setting it to 0.0 is not recommended. Defaults to -1. + rescale : float + Scaling factor (in log10) used to trigger f value + rescaling. If 0, rescale at each iteration. If a large + value, never rescale. If < 0, rescale is set to 1.3. + + Returns + ------- + x : list of floats + The solution. + nfeval : int + The number of function evaluations. + rc : + Return code as defined in the RCSTRINGS dict. 
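A minimal usage sketch for the fmin_tnc interface documented in the hunk above -- not part of the patch itself. The quadratic objective, gradient, starting point and bound values are arbitrary illustrations; only the call signature and the (x, nfeval, rc) return tuple come from the docstring:

    >>> from scipy.optimize import fmin_tnc
    >>> def f_and_g(x):
    ...     f = (x[0] - 3.0)**2            # scalar objective value
    ...     g = [2.0 * (x[0] - 3.0)]       # gradient as a list of floats
    ...     return f, g
    >>> x, nfeval, rc = fmin_tnc(f_and_g, [0.0], bounds=[(0.0, 10.0)], messages=0)

Because func returns both the value and the gradient here, fprime is omitted; approx_grad=True could be passed instead when no gradient is available.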
- - fsolve : n-dimenstional root-finding - - - brentq, brenth, ridder, bisect, newton : one-dimensional root-finding - - - fixed_point : scalar fixed-point finder - -""" + """ x0 = asarray(x0, dtype=float).tolist() n = len(x0) diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/Zeros/brenth.c python-scipy-0.8.0+dfsg1/scipy/optimize/Zeros/brenth.c --- python-scipy-0.7.2+dfsg1/scipy/optimize/Zeros/brenth.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/Zeros/brenth.c 2010-07-26 15:48:32.000000000 +0100 @@ -48,14 +48,12 @@ if (fcur == 0) return xcur; params->iterations = 0; for(i = 0; i < iter; i++) { - params->iterations++; - - if (fpre*fcur < 0) { - xblk = xpre; - fblk = fpre; - spre = scur = xcur - xpre; - } + if (fpre*fcur < 0) { + xblk = xpre; + fblk = fpre; + spre = scur = xcur - xpre; + } if (fabs(fblk) < fabs(fcur)) { xpre = xcur; xcur = xblk; xblk = xpre; fpre = fcur; fcur = fblk; fblk = fpre; diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/Zeros/ridder.c python-scipy-0.8.0+dfsg1/scipy/optimize/Zeros/ridder.c --- python-scipy-0.7.2+dfsg1/scipy/optimize/Zeros/ridder.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/Zeros/ridder.c 2010-07-26 15:48:32.000000000 +0100 @@ -29,7 +29,7 @@ dn = SIGN(fb - fa)*dm*fm/sqrt(fm*fm - fa*fb); xn = xm - SIGN(dn)*DMIN(fabs(dn),fabs(dm) - .5*tol); fn = (*f)(xn,params); - params->funcalls++; + params->funcalls += 2; if (fn*fm < 0.0) { xa = xn; fa = fn; xb = xm; fb = fm; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/optimize/zeros.py python-scipy-0.8.0+dfsg1/scipy/optimize/zeros.py --- python-scipy-0.7.2+dfsg1/scipy/optimize/zeros.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/optimize/zeros.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 07, 2005 by convertcode.py - import _zeros from numpy import finfo @@ -8,13 +6,14 @@ # not actually used at the moment _rtol = finfo(float).eps * 2 -__all__ = ['bisect','ridder','brentq','brenth'] +__all__ = ['newton', 'bisect', 'ridder', 'brentq', 'brenth'] CONVERGED = 'converged' SIGNERR = 'sign error' CONVERR = 'convergence error' flag_map = {0 : CONVERGED, -1 : SIGNERR, -2 : CONVERR} + class RootResults(object): def __init__(self, root, iterations, function_calls, flag): self.root = root @@ -26,6 +25,7 @@ except KeyError: self.flag = 'unknown error %d' % (flag,) + def results_c(full_output, r): if full_output: x, funcalls, iterations, flag = r @@ -37,6 +37,103 @@ else: return r + +# Newton-Raphson method +def newton(func, x0, fprime=None, args=(), tol=1.48e-8, maxiter=50): + """Find a zero using the Newton-Raphson or secant method. + + Find a zero of the function `func` given a nearby starting point `x0`. + The Newton-Rapheson method is used if the derivative `fprime` of `func` + is provided, otherwise the secant method is used. + + Parameters + ---------- + func : function + The function whose zero is wanted. It must be a function of a + single variable of the form f(x,a,b,c...), where a,b,c... are extra + arguments that can be passed in the `args` parameter. + x0 : float + An initial estimate of the zero that should be somewhere near the + actual zero. + fprime : {None, function}, optional + The derivative of the function when available and convenient. If it + is None, then the secant method is used. The default is None. + args : tuple, optional + Extra arguments to be used in the function call. + tol : float, optional + The allowable error of the zero value. 
+ maxiter : int, optional + Maximum number of iterations. + + Returns + ------- + zero : float + Estimated location where function is zero. + + See Also + -------- + brentq, brenth, ridder, bisect -- find zeroes in one dimension. + fsolve -- find zeroes in n dimensions. + + Notes + ----- + The convergence rate of the Newton-Rapheson method is quadratic while + that of the secant method is somewhat less. This means that if the + function is well behaved the actual error in the estimated zero is + approximatly the square of the requested tolerance up to roundoff + error. However, the stopping criterion used here is the step size and + there is no quarantee that a zero has been found. Consequently the + result should be verified. Safer algorithms are brentq, brenth, ridder, + and bisect, but they all require that the root first be bracketed in an + interval where the function changes sign. The brentq algorithm is + recommended for general use in one dimemsional problems when such an + interval has been found. + + """ + if fprime is not None: + # Newton-Rapheson method + # Multiply by 1.0 to convert to floating point. We don't use float(x0) + # so it still works if x0 is complex. + p0 = 1.0 * x0 + for iter in range(maxiter): + myargs = (p0,) + args + fval = func(*myargs) + fder = fprime(*myargs) + if fder == 0: + msg = "derivative was zero." + warnings.warn(msg, RuntimeWarning) + return p0 + p = p0 - func(*myargs)/fprime(*myargs) + if abs(p - p0) < tol: + return p + p0 = p + else: + # Secant method + p0 = x0 + if x0 >= 0: + p1 = x0*(1 + 1e-4) + 1e-4 + else: + p1 = x0*(1 + 1e-4) - 1e-4 + q0 = func(*((p0,) + args)) + q1 = func(*((p1,) + args)) + for iter in range(maxiter): + if q1 == q0: + if p1 != p0: + msg = "Tolerance of %s reached" % (p1 - p0) + warnings.warn(msg, RuntimeWarning) + return (p1 + p0)/2.0 + else: + p = p1 - q1*(p1 - p0)/(q1 - q0) + if abs(p - p1) < tol: + return p + p0 = p1 + q0 = q1 + p1 = p + q1 = func(*((p1,) + args)) + msg = "Failed to converge after %d iterations, value is %s" % (maxiter, p) + raise RuntimeError(msg) + + def bisect(f, a, b, args=(), xtol=_xtol, rtol=_rtol, maxiter=_iter, full_output=False, disp=True): @@ -84,7 +181,7 @@ -------- brentq, brenth, bisect, newton : one-dimensional root-finding fixed_point : scalar fixed-point finder - fsolve -- n-dimenstional root-finding + fsolve -- n-dimensional root-finding """ if type(args) != type(()) : @@ -92,6 +189,7 @@ r = _zeros._bisect(f,a,b,xtol,maxiter,args,full_output,disp) return results_c(full_output, r) + def ridder(f, a, b, args=(), xtol=_xtol, rtol=_rtol, maxiter=_iter, full_output=False, disp=True): @@ -161,6 +259,7 @@ r = _zeros._ridder(f,a,b,xtol,maxiter,args,full_output,disp) return results_c(full_output, r) + def brentq(f, a, b, args=(), xtol=_xtol, rtol=_rtol, maxiter=_iter, full_output=False, disp=True): @@ -233,7 +332,7 @@ `anneal`, `brute` local scalar minimizers `fminbound`, `brent`, `golden`, `bracket` - n-dimenstional root-finding + n-dimensional root-finding `fsolve` one-dimensional root-finding `brentq`, `brenth`, `ridder`, `bisect`, `newton` @@ -263,6 +362,7 @@ r = _zeros._brentq(f,a,b,xtol,maxiter,args,full_output,disp) return results_c(full_output, r) + def brenth(f, a, b, args=(), xtol=_xtol, rtol=_rtol, maxiter=_iter, full_output=False, disp=True): @@ -323,7 +423,7 @@ fminbound, brent, golden, bracket -- local scalar minimizers - fsolve -- n-dimenstional root-finding + fsolve -- n-dimensional root-finding brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding diff -Nru 
python-scipy-0.7.2+dfsg1/scipy/setup.py python-scipy-0.8.0+dfsg1/scipy/setup.py --- python-scipy-0.7.2+dfsg1/scipy/setup.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/setup.py 2010-07-26 15:48:33.000000000 +0100 @@ -10,7 +10,6 @@ config.add_subpackage('io') config.add_subpackage('lib') config.add_subpackage('linalg') - config.add_subpackage('linsolve') config.add_subpackage('maxentropy') config.add_subpackage('misc') config.add_subpackage('odr') @@ -21,7 +20,6 @@ config.add_subpackage('special') config.add_subpackage('stats') config.add_subpackage('ndimage') - config.add_subpackage('stsci') config.add_subpackage('weave') config.make_svn_version_py() # installs __svn_version__.py config.make_config_py() diff -Nru python-scipy-0.7.2+dfsg1/scipy/setupscons.py python-scipy-0.8.0+dfsg1/scipy/setupscons.py --- python-scipy-0.7.2+dfsg1/scipy/setupscons.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/setupscons.py 2010-07-26 15:48:33.000000000 +0100 @@ -15,7 +15,6 @@ config.add_subpackage('io') config.add_subpackage('lib') config.add_subpackage('linalg') - config.add_subpackage('linsolve') config.add_subpackage('maxentropy') config.add_subpackage('misc') config.add_subpackage('odr') @@ -26,7 +25,6 @@ config.add_subpackage('special') config.add_subpackage('stats') config.add_subpackage('ndimage') - config.add_subpackage('stsci') config.add_subpackage('weave') config.make_svn_version_py() # installs __svn_version__.py diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/bsplines.py python-scipy-0.8.0+dfsg1/scipy/signal/bsplines.py --- python-scipy-0.7.2+dfsg1/scipy/signal/bsplines.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/signal/bsplines.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,4 +1,3 @@ -## Automatically adapted for scipy Oct 21, 2005 by convertcode.py import scipy.special from numpy import logical_and, asarray, pi, zeros_like, \ diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/correlate_nd.c.src python-scipy-0.8.0+dfsg1/scipy/signal/correlate_nd.c.src --- python-scipy-0.7.2+dfsg1/scipy/signal/correlate_nd.c.src 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/correlate_nd.c.src 2010-07-26 15:48:33.000000000 +0100 @@ -0,0 +1,328 @@ +/* + * vim:syntax=c + * vim:sw=4 + */ +#include +#define PY_ARRAY_UNIQUE_SYMBOL _scipy_signal_ARRAY_API +#define NO_IMPORT_ARRAY +#include + +#include "sigtools.h" + +enum { + CORR_MODE_VALID=0, + CORR_MODE_SAME, + CORR_MODE_FULL +}; + +static int _correlate_nd_imp(PyArrayIterObject* x, PyArrayIterObject *y, + PyArrayIterObject *z, int typenum, int mode); + +PyObject * +scipy_signal_sigtools_correlateND(PyObject *NPY_UNUSED(dummy), PyObject *args) +{ + PyObject *x, *y, *out; + PyArrayObject *ax, *ay, *aout; + PyArrayIterObject *itx, *ity, *itz; + int mode, typenum, st; + + if (!PyArg_ParseTuple(args, "OOOi", &x, &y, &out, &mode)) { + return NULL; + } + + typenum = PyArray_ObjectType(x, 0); + typenum = PyArray_ObjectType(y, typenum); + typenum = PyArray_ObjectType(out, typenum); + + ax = (PyArrayObject *)PyArray_FromObject(x, typenum, 0, 0); + if (ax == NULL) { + return NULL; + } + + ay = (PyArrayObject *)PyArray_FromObject(y, typenum, 0, 0); + if (ay == NULL) { + goto clean_ax; + } + + aout = (PyArrayObject *)PyArray_FromObject(out, typenum, 0, 0); + if (aout == NULL) { + goto clean_ay; + } + + if (ax->nd != ay->nd) { + PyErr_SetString(PyExc_ValueError, + "Arrays must have the same number of dimensions."); + goto clean_aout; + } + + if (ax->nd == 0) { + 
PyErr_SetString(PyExc_ValueError, "Cannot convolve zero-dimensional arrays."); + goto clean_aout; + } + + itx = (PyArrayIterObject*)PyArray_IterNew((PyObject*)ax); + if (itx == NULL) { + goto clean_aout; + } + ity = (PyArrayIterObject*)PyArray_IterNew((PyObject*)ay); + if (ity == NULL) { + goto clean_itx; + } + itz = (PyArrayIterObject*)PyArray_IterNew((PyObject*)aout); + if (itz == NULL) { + goto clean_ity; + } + + st = _correlate_nd_imp(itx, ity, itz, typenum, mode); + if (st) { + goto clean_itz; + } + + Py_DECREF(itz); + Py_DECREF(ity); + Py_DECREF(itx); + + Py_DECREF(ax); + Py_DECREF(ay); + + return PyArray_Return(aout); + +clean_itz: + Py_DECREF(itz); +clean_ity: + Py_DECREF(ity); +clean_itx: + Py_DECREF(itx); +clean_aout: + Py_DECREF(aout); +clean_ay: + Py_DECREF(ay); +clean_ax: + Py_DECREF(ax); + return NULL; +} + +/* + * Implementation of the type-specific correlation 'kernels' + */ + +/**begin repeat + * #fsuf = ubyte, byte, ushort, short, uint, int, ulong, long, ulonglong, + * longlong, float, double, longdouble# + * #type = ubyte, byte, ushort, short, uint, int, ulong, long, ulonglong, + * longlong, float, double, npy_longdouble# + */ + +static int _imp_correlate_nd_@fsuf@(PyArrayNeighborhoodIterObject *curx, + PyArrayNeighborhoodIterObject *curneighx, PyArrayIterObject *ity, + PyArrayIterObject *itz) +{ + npy_intp i, j; + @type@ acc; + + for(i = 0; i < curx->size; ++i) { + acc = 0; + PyArrayNeighborhoodIter_Reset(curneighx); + for(j = 0; j < curneighx->size; ++j) { + acc += *((@type@*)(curneighx->dataptr)) * *((@type@*)(ity->dataptr)); + + PyArrayNeighborhoodIter_Next(curneighx); + PyArray_ITER_NEXT(ity); + } + PyArrayNeighborhoodIter_Next(curx); + + *((@type@*)(itz->dataptr)) = acc; + PyArray_ITER_NEXT(itz); + + PyArray_ITER_RESET(ity); + } + + return 0; +} + +/**end repeat**/ + +/**begin repeat + * #fsuf = float, double, longdouble# + * #type = float, double, npy_longdouble# + */ + +static int _imp_correlate_nd_c@fsuf@(PyArrayNeighborhoodIterObject *curx, + PyArrayNeighborhoodIterObject *curneighx, PyArrayIterObject *ity, + PyArrayIterObject *itz) +{ + int i, j; + @type@ racc, iacc; + @type@ *ptr1, *ptr2; + + for(i = 0; i < curx->size; ++i) { + racc = 0; + iacc = 0; + PyArrayNeighborhoodIter_Reset(curneighx); + for(j = 0; j < curneighx->size; ++j) { + ptr1 = ((@type@*)(curneighx->dataptr)); + ptr2 = ((@type@*)(ity->dataptr)); + racc += ptr1[0] * ptr2[0] + ptr1[1] * ptr2[1]; + iacc += ptr1[1] * ptr2[0] - ptr1[0] * ptr2[1]; + + PyArrayNeighborhoodIter_Next(curneighx); + PyArray_ITER_NEXT(ity); + } + PyArrayNeighborhoodIter_Next(curx); + + ((@type@*)(itz->dataptr))[0] = racc; + ((@type@*)(itz->dataptr))[1] = iacc; + PyArray_ITER_NEXT(itz); + + PyArray_ITER_RESET(ity); + } + + return 0; +} + +/**end repeat**/ + +static int _imp_correlate_nd_object(PyArrayNeighborhoodIterObject *curx, + PyArrayNeighborhoodIterObject *curneighx, PyArrayIterObject *ity, + PyArrayIterObject *itz) +{ + int i, j; + PyObject *tmp, *tmp2; + char *zero; + PyArray_CopySwapFunc *copyswap = curx->ao->descr->f->copyswap; + + zero = PyArray_Zero(curx->ao); + + for(i = 0; i < curx->size; ++i) { + PyArrayNeighborhoodIter_Reset(curneighx); + copyswap(itz->dataptr, zero, 0, NULL); + + for(j = 0; j < curneighx->size; ++j) { + /* + * compute tmp2 = acc + x * y. Not all objects supporting the + * number protocol support inplace operations, so we do it the most + * straightfoward way. 
+ */ + tmp = PyNumber_Multiply(*((PyObject**)curneighx->dataptr), + *((PyObject**)ity->dataptr)); + tmp2 = PyNumber_Add(*((PyObject**)itz->dataptr), tmp); + Py_DECREF(tmp); + + /* Update current output item (acc) */ + Py_DECREF(*((PyObject**)itz->dataptr)); + *((PyObject**)itz->dataptr) = tmp2; + + PyArrayNeighborhoodIter_Next(curneighx); + PyArray_ITER_NEXT(ity); + } + + PyArrayNeighborhoodIter_Next(curx); + + PyArray_ITER_NEXT(itz); + + PyArray_ITER_RESET(ity); + } + + PyDataMem_FREE(zero); + + return 0; +} + +static int _correlate_nd_imp(PyArrayIterObject* itx, PyArrayIterObject *ity, + PyArrayIterObject *itz, int typenum, int mode) +{ + PyArrayNeighborhoodIterObject *curneighx, *curx; + npy_intp i, nz, nx; + npy_intp bounds[NPY_MAXDIMS*2]; + + /* Compute boundaries for the neighborhood iterator curx: curx is used to + * traverse x directly, such as each point of the output is the + * innerproduct of y with the neighborhood around curx */ + switch(mode) { + case CORR_MODE_VALID: + /* Only walk through the input points such as the correponding + * output will not depend on 0 padding */ + for(i = 0; i < itx->ao->nd; ++i) { + bounds[2*i] = ity->ao->dimensions[i] - 1; + bounds[2*i+1] = itx->ao->dimensions[i] - 1; + } + break; + case CORR_MODE_SAME: + /* Only walk through the input such as the output will be centered + relatively to the output as computed in the full mode */ + for(i = 0; i < itx->ao->nd; ++i) { + nz = itx->ao->dimensions[i]; + /* Recover 'original' nx, before it was zero-padded */ + nx = nz - ity->ao->dimensions[i] + 1; + if ((nz - nx) % 2 == 0) { + bounds[2*i] = (nz - nx) / 2; + } else { + bounds[2*i] = (nz - nx - 1) / 2; + } + bounds[2*i+1] = bounds[2*i] + nx - 1; + } + break; + case CORR_MODE_FULL: + for(i = 0; i < itx->ao->nd; ++i) { + bounds[2*i] = 0; + bounds[2*i+1] = itx->ao->dimensions[i] - 1; + } + break; + default: + PyErr_BadInternalCall(); + return -1; + } + + curx = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew(itx, + bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL); + if (curx == NULL) { + PyErr_SetString(PyExc_SystemError, "Could not create curx ?"); + return -1; + } + + /* Compute boundaries for the neighborhood iterator: the neighborhood for x + should have the same dimensions as y */ + for(i = 0; i < ity->ao->nd; ++i) { + bounds[2*i] = -ity->ao->dimensions[i] + 1; + bounds[2*i+1] = 0; + } + + curneighx = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew( + (PyArrayIterObject*)curx, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL); + if (curneighx == NULL) { + goto clean_curx; + } + + switch(typenum) { +/**begin repeat + * #TYPE = UBYTE, BYTE, USHORT, SHORT, UINT, INT, ULONG, LONG, ULONGLONG, + * LONGLONG, FLOAT, DOUBLE, LONGDOUBLE, CFLOAT, CDOUBLE, CLONGDOUBLE# + * #type = ubyte, byte, ushort, short, uint, int, ulong, long, ulonglong, + * longlong, float, double, longdouble, cfloat, cdouble, clongdouble# + */ + case PyArray_@TYPE@: + _imp_correlate_nd_@type@(curx, curneighx, ity, itz); + break; +/**end repeat**/ + + /* The object array case does not worth being optimized, since most of + the cost is numerical operations, not iterators moving in this case ? 
*/ + case PyArray_OBJECT: + _imp_correlate_nd_object(curx, curneighx, ity, itz); + break; + default: + PyErr_SetString(PyExc_ValueError, "Unsupported type"); + goto clean_curneighx; + } + + Py_DECREF((PyArrayIterObject*)curx); + Py_DECREF((PyArrayIterObject*)curneighx); + + return 0; + +clean_curneighx: + Py_DECREF((PyArrayIterObject*)curneighx); +clean_curx: + Py_DECREF((PyArrayIterObject*)curx); + return -1; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/filter_design.py python-scipy-0.8.0+dfsg1/scipy/signal/filter_design.py --- python-scipy-0.7.2+dfsg1/scipy/signal/filter_design.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/filter_design.py 2010-07-26 15:48:33.000000000 +0100 @@ -77,16 +77,17 @@ return w, h def freqz(b, a=1, worN=None, whole=0, plot=None): - """Compute frequency response of a digital filter. + """ + Compute the frequency response of a digital filter. - Given the numerator (b) and denominator (a) of a digital filter compute - its frequency response. + Given the numerator ``b`` and denominator ``a`` of a digital filter compute + its frequency response:: jw -jw -jmw jw B(e) b[0] + b[1]e + .... + b[m]e H(e) = ---- = ------------------------------------ jw -jw -jnw - A(e) a[0] + a[2]e + .... + a[n]e + A(e) a[0] + a[1]e + .... + a[n]e Parameters ---------- @@ -95,10 +96,9 @@ a : ndarray numerator of a linear filter worN : {None, int}, optional - If None, then compute at 200 frequencies around the interesting parts - of the response curve (determined by pole-zero locations). If a single - integer, the compute at that many frequencies. Otherwise, compute the - response at frequencies given in worN. + If None, then compute at 512 frequencies around the unit circle. + If a single integer, the compute at that many frequencies. + Otherwise, compute the response at frequencies given in worN whole : {0,1}, optional Normally, frequencies are computed from 0 to pi (upper-half of unit-circle. If whole is non-zero compute frequencies from 0 to 2*pi. @@ -109,6 +109,30 @@ The frequencies at which h was computed. h : ndarray The frequency response. 
+ + Examples + -------- + + >>> b = firwin(80, 0.5, window=('kaiser', 8)) + >>> h, w = freqz(b) + + >>> import matplotlib.pyplot as plt + >>> fig = plt.figure() + >>> plt.title('Digital filter frequency response') + >>> ax1 = fig.add_subplot(111) + + >>> plt.semilogy(h, np.abs(w), 'b') + >>> plt.ylabel('Amplitude (dB)', color='b') + >>> plt.xlabel('Frequency (rad/sample)') + >>> plt.grid() + >>> plt.legend() + + >>> ax2 = ax1.twinx() + >>> angles = np.unwrap(np.angle(w)) + >>> plt.plot(h, angles, 'g') + >>> plt.ylabel('Angle (radians)', color='g') + >>> plt.show() + """ b, a = map(atleast_1d, (b,a)) if whole: @@ -216,15 +240,15 @@ b = asarray([b],b.dtype.char) while a[0] == 0.0 and len(a) > 1: a = a[1:] - if allclose(b[:,0], 0, rtol=1e-14): - warnings.warn("Badly conditionned filter coefficients (numerator): the " - "results may be meaningless", BadCoefficients) - while allclose(b[:,0], 0, rtol=1e-14) and (b.shape[-1] > 1): - b = b[:,1:] - if b.shape[0] == 1: - b = b[0] outb = b * (1.0) / a[0] outa = a * (1.0) / a[0] + if allclose(outb[:,0], 0, rtol=1e-14): + warnings.warn("Badly conditioned filter coefficients (numerator): the " + "results may be meaningless", BadCoefficients) + while allclose(outb[:,0], 0, rtol=1e-14) and (outb.shape[-1] > 1): + outb = outb[:,1:] + if outb.shape[0] == 1: + outb = outb[0] return outb, outa @@ -1551,7 +1575,8 @@ return ceil(N), beta def firwin(N, cutoff, width=None, window='hamming'): - """FIR Filter Design using windowed ideal filter method. + """ + FIR Filter Design using windowed ideal filter method. Parameters ---------- @@ -1562,11 +1587,13 @@ width -- if width is not None, then assume it is the approximate width of the transition region (normalized so that 1 corresonds to pi) for use in kaiser FIR filter design. - window -- desired window to use. + window -- desired window to use. See get_window for a list + of windows and required parameters. Returns ------- h -- coefficients of length N fir filter. + """ from signaltools import get_window diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/info.py python-scipy-0.8.0+dfsg1/scipy/signal/info.py --- python-scipy-0.7.2+dfsg1/scipy/signal/info.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/info.py 2010-07-26 15:48:33.000000000 +0100 @@ -4,111 +4,202 @@ Convolution: - convolve -- N-dimensional convolution. - correlate -- N-dimensional correlation. - fftconvolve -- N-dimensional convolution using the FFT. - convolve2d -- 2-dimensional convolution (more options). - correlate2d -- 2-dimensional correlation (more options). - sepfir2d -- Convolve with a 2-D separable FIR filter. + convolve: + N-dimensional convolution. + + correlate: + N-dimensional correlation. + fftconvolve: + N-dimensional convolution using the FFT. + convolve2d: + 2-dimensional convolution (more options). + correlate2d: + 2-dimensional correlation (more options). + sepfir2d: + Convolve with a 2-D separable FIR filter. B-splines: - bspline -- B-spline basis function of order n. - gauss_spline -- Gaussian approximation to the B-spline basis function. - cspline1d -- Coefficients for 1-D cubic (3rd order) B-spline. - qspline1d -- Coefficients for 1-D quadratic (2nd order) B-spline. - cspline2d -- Coefficients for 2-D cubic (3rd order) B-spline. - qspline2d -- Coefficients for 2-D quadratic (2nd order) B-spline. - spline_filter -- Smoothing spline (cubic) filtering of a rank-2 array. + bspline: + B-spline basis function of order n. + gauss_spline: + Gaussian approximation to the B-spline basis function. 
+ cspline1d: + Coefficients for 1-D cubic (3rd order) B-spline. + qspline1d: + Coefficients for 1-D quadratic (2nd order) B-spline. + cspline2d: + Coefficients for 2-D cubic (3rd order) B-spline. + qspline2d: + Coefficients for 2-D quadratic (2nd order) B-spline. + spline_filter: + Smoothing spline (cubic) filtering of a rank-2 array. Filtering: - order_filter -- N-dimensional order filter. - medfilt -- N-dimensional median filter. - medfilt2 -- 2-dimensional median filter (faster). - wiener -- N-dimensional wiener filter. - - symiirorder1 -- 2nd-order IIR filter (cascade of first-order systems). - symiirorder2 -- 4th-order IIR filter (cascade of second-order systems). - lfilter -- 1-dimensional FIR and IIR digital linear filtering. - - deconvolve -- 1-d deconvolution using lfilter. - - hilbert -- Compute the analytic signal of a 1-d signal. - get_window -- Create FIR window. - - detrend -- Remove linear and/or constant trends from data. - resample -- Resample using Fourier method. + order_filter: + N-dimensional order filter. + medfilt: + N-dimensional median filter. + medfilt2: + 2-dimensional median filter (faster). + wiener: + N-dimensional wiener filter. + symiirorder1: + 2nd-order IIR filter (cascade of first-order systems). + symiirorder2: + 4th-order IIR filter (cascade of second-order systems). + lfilter: + 1-dimensional FIR and IIR digital linear filtering. + lfiltic: + Construct initial conditions for `lfilter`. + deconvolve: + 1-d deconvolution using lfilter. + hilbert: + Compute the analytic signal of a 1-d signal. + get_window: + Create FIR window. + decimate: + Downsample a signal. + detrend: + Remove linear and/or constant trends from data. + resample: + Resample using Fourier method. Filter design: - remez -- Optimal FIR filter design. - firwin -- Windowed FIR filter design. - iirdesign -- IIR filter design given bands and gains. - iirfilter -- IIR filter design given order and critical frequencies. - freqs -- Analog filter frequency response. - freqz -- Digital filter frequency response. - - unique_roots -- Unique roots and their multiplicities. - residue -- Partial fraction expansion of b(s) / a(s). - residuez -- Partial fraction expansion of b(z) / a(z). - invres -- Inverse partial fraction expansion. + bilinear: + Return a digital filter from an analog filter using the bilinear transform. + firwin: + Windowed FIR filter design. + freqs: + Analog filter frequency response. + freqz: + Digital filter frequency response. + iirdesign: + IIR filter design given bands and gains. + iirfilter: + IIR filter design given order and critical frequencies. + invres: + Inverse partial fraction expansion. + kaiserord: + Design a Kaiser window to limit ripple and width of transition region. + remez: + Optimal FIR filter design. + residue: + Partial fraction expansion of b(s) / a(s). + residuez: + Partial fraction expansion of b(z) / a(z). + unique_roots: + Unique roots and their multiplicities. Matlab-style IIR filter design: - butter (buttord) -- Butterworth - cheby1 (cheb1ord) -- Chebyshev Type I - cheby2 (cheb2ord) -- Chebyshev Type II - ellip (ellipord) -- Elliptic (Cauer) - bessel -- Bessel (no order selection available -- try butterod) + butter (buttord): + Butterworth + cheby1 (cheb1ord): + Chebyshev Type I + cheby2 (cheb2ord): + Chebyshev Type II + ellip (ellipord): + Elliptic (Cauer) + bessel: + Bessel (no order selection available -- try butterod) Linear Systems: - lti -- linear time invariant system object. - lsim -- continuous-time simulation of output to linear system. 
- impulse -- impulse response of linear, time-invariant (LTI) system. - step -- step response of continous-time LTI system. - - LTI Reresentations: - - tf2zpk -- transfer function to zero-pole-gain. - zpk2tf -- zero-pole-gain to transfer function. - tf2ss -- transfer function to state-space. - ss2tf -- state-pace to transfer function. - zpk2ss -- zero-pole-gain to state-space. - ss2zpk -- state-space to pole-zero-gain. + lti: + linear time invariant system object. + lsim: + continuous-time simulation of output to linear system. + lsim2: + like lsim, but `scipy.integrate.odeint` is used. + impulse: + impulse response of linear, time-invariant (LTI) system. + impulse2: + like impulse, but `scipy.integrate.odeint` is used. + step: + step response of continous-time LTI system. + step2: + like step, but `scipy.integrate.odeint` is used. + + LTI Representations: + + tf2zpk: + transfer function to zero-pole-gain. + zpk2tf: + zero-pole-gain to transfer function. + tf2ss: + transfer function to state-space. + ss2tf: + state-pace to transfer function. + zpk2ss: + zero-pole-gain to state-space. + ss2zpk: + state-space to pole-zero-gain. Waveforms: - sawtooth -- Periodic sawtooth - square -- Square wave - gausspulse -- Gaussian modulated sinusoid - chirp -- Frequency swept cosine signal + sawtooth: + Periodic sawtooth + square: + Square wave + gausspulse: + Gaussian modulated sinusoid + chirp: + Frequency swept cosine signal, with several frequency functions. + sweep_poly: + Frequency swept cosine signal; frequency is arbitrary polynomial. Window functions: - boxcar -- Boxcar window - triang -- Triangular window - parzen -- Parzen window - bohman -- Bohman window - blackman -- Blackman window - blackmanharris -- Minimum 4-term Blackman-Harris window - nuttall -- Nuttall's minimum 4-term Blackman-Harris window - flattop -- Flat top window - bartlett -- Bartlett window - hann -- Hann window - barthann -- Bartlett-Hann window - hamming -- Hamming window - kaiser -- Kaiser window - gaussian -- Gaussian window - general_gaussian -- Generalized Gaussian window - slepian -- Slepian window + get_window: + Return a window of a given length and type. + barthann: + Bartlett-Hann window + bartlett: + Bartlett window + blackman: + Blackman window + blackmanharris: + Minimum 4-term Blackman-Harris window + bohman: + Bohman window + boxcar: + Boxcar window + chebwin: + Dolph-Chebyshev window + flattop: + Flat top window + gaussian: + Gaussian window + general_gaussian: + Generalized Gaussian window + hamming: + Hamming window + hann: + Hann window + kaiser: + Kaiser window + nuttall: + Nuttall's minimum 4-term Blackman-Harris window + parzen: + Parzen window + slepian: + Slepian window + triang: + Triangular window Wavelets: - daub -- return low-pass filter for daubechies wavelets - qmf -- return quadrature mirror filter from low-pass - cascade -- compute scaling function and wavelet from coefficients + daub: + return low-pass + qmf: + return quadrature mirror filter from low-pass + cascade: + compute scaling function and wavelet from coefficients + morlet: + Complex Morlet wavelet. 
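To make the index above concrete, here is a small sketch (illustrative only, not text from the module docstring) combining a few of the listed routines -- IIR design with butter, filtering with lfilter, and window generation with get_window; the filter order, cutoff and window parameters are arbitrary example values:

    >>> import numpy as np
    >>> from scipy import signal
    >>> b, a = signal.butter(4, 0.2)                  # 4th-order digital lowpass, cutoff 0.2*Nyquist
    >>> x = np.random.randn(500)                      # noisy test signal
    >>> y = signal.lfilter(b, a, x)                   # apply the filter along the last axis
    >>> win = signal.get_window(('kaiser', 8.0), 64)  # 64-point Kaiser window, beta = 8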
""" postpone_import = 1 diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/__init__.py python-scipy-0.8.0+dfsg1/scipy/signal/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/signal/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/__init__.py 2010-07-26 15:48:33.000000000 +0100 @@ -9,6 +9,7 @@ from bsplines import * from filter_design import * from ltisys import * +from windows import * from signaltools import * from wavelets import * diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/lfilter.c python-scipy-0.8.0+dfsg1/scipy/signal/lfilter.c --- python-scipy-0.7.2+dfsg1/scipy/signal/lfilter.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/lfilter.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,330 +0,0 @@ -#include - -static int -RawFilter(const PyArrayObject *b, const PyArrayObject *a, - const PyArrayObject *x, const PyArrayObject *zi, - const PyArrayObject *zf, PyArrayObject *y, int axis, - BasicFilterFunction *filter_func); - -static char doc_linear_filter[] = "(y,Vf) = _linear_filter(b,a,X,Dim=-1,Vi=None) implemented using Direct Form II transposed flow diagram. If Vi is not given, Vf is not returned."; - -/* - * XXX: Error checking not done yet - */ -static PyObject * -sigtools_linear_filter(PyObject * dummy, PyObject * args) -{ - PyObject *b, *a, *X, *Vi; - PyArrayObject *arY, *arb, *ara, *arX, *arVi, *arVf; - int axis, typenum, theaxis, st; - char *ara_ptr, input_flag = 0, *azero; - intp na, nb, nal; - BasicFilterFunction *basic_filter; - - axis = -1; - Vi = NULL; - if (!PyArg_ParseTuple(args, "OOO|iO", &b, &a, &X, &axis, &Vi)) { - return NULL; - } - - typenum = PyArray_ObjectType(b, 0); - typenum = PyArray_ObjectType(a, typenum); - typenum = PyArray_ObjectType(X, typenum); - if (Vi != NULL) { - typenum = PyArray_ObjectType(Vi, typenum); - } - - arY = arVf = arVi = NULL; - ara = (PyArrayObject *) PyArray_ContiguousFromObject(a, typenum, 1, 1); - arb = (PyArrayObject *) PyArray_ContiguousFromObject(b, typenum, 1, 1); - arX = (PyArrayObject *) PyArray_FromObject(X, typenum, 0, 0); - /* XXX: fix failure handling here */ - if (ara == NULL || arb == NULL || arX == NULL) { - goto fail; - } - - if (axis < -arX->nd || axis > arX->nd - 1) { - PyErr_SetString(PyExc_ValueError, - "selected axis is out of range"); - goto fail; - } - if (axis < 0) { - theaxis = arX->nd + axis; - } else { - theaxis = axis; - } - - if (Vi != NULL) { - Py_ssize_t nvi; - arVi = (PyArrayObject *) PyArray_FromObject(Vi, typenum, - arX->nd, arX->nd); - if (arVi == NULL) - goto fail; - - nvi = PyArray_Size((PyObject *) arVi); - if (nvi > 0) { - input_flag = 1; - } else { - input_flag = 0; - Py_DECREF(arVi); - arVi = NULL; - } - } - - arY = (PyArrayObject *) PyArray_SimpleNew(arX->nd, - arX->dimensions, typenum); - if (arY == NULL) { - goto fail; - } - - if (input_flag) { - arVf = (PyArrayObject *) PyArray_SimpleNew(arVi->nd, - arVi->dimensions, - typenum); - } - - basic_filter = BasicFilterFunctions[(int) (arX->descr->type_num)]; - if (basic_filter == NULL) { - PyErr_SetString(PyExc_ValueError, - "linear_filter not available for this type"); - goto fail; - } - - /* Skip over leading zeros in vector representing denominator (a) */ - /* XXX: handle this correctly */ - azero = PyArray_Zero(ara); - ara_ptr = ara->data; - nal = PyArray_ITEMSIZE(ara); - if (memcmp(ara_ptr, azero, nal) == 0) { - PyErr_SetString(PyExc_ValueError, - "BUG: filter coefficient a[0] == 0 not supported yet"); - goto fail; - } - PyDataMem_FREE(azero); - - na = PyArray_SIZE(ara); - nb = 
PyArray_SIZE(arb); - if (input_flag) { - if (arVi->dimensions[theaxis] != (na > nb ? na : nb) - 1) { - PyErr_SetString(PyExc_ValueError, - "The number of initial conditions must be max([len(a),len(b)]) - 1"); - goto fail; - } - } - - st = RawFilter(arb, ara, arX, arVi, arVf, arY, theaxis, basic_filter); - if (st) { - goto fail; - } - - Py_XDECREF(ara); - Py_XDECREF(arb); - Py_XDECREF(arX); - Py_XDECREF(arVi); - - if (!input_flag) { - return PyArray_Return(arY); - } else { - return Py_BuildValue("(NN)", arY, arVf); - } - - -fail: - Py_XDECREF(ara); - Py_XDECREF(arb); - Py_XDECREF(arX); - Py_XDECREF(arVi); - Py_XDECREF(arVf); - Py_XDECREF(arY); - return NULL; -} - -static int -zfill(const PyArrayObject *x, intp nx, char* xzfilled, intp nxzfilled) -{ - char *xzero; - intp i, nxl; - - nxl = PyArray_ITEMSIZE(x); - - /* PyArray_Zero does not take const pointer, hence the cast */ - xzero = PyArray_Zero((PyArrayObject*)x); - - if (nx > 0) { - memcpy(xzfilled, x->data, nx * nxl); - } - for(i = nx; i < nxzfilled; ++i) { - memcpy(xzfilled + i * nxl, xzero, nxl); - } - - PyDataMem_FREE(xzero); - - return 0; -} - -/* - * a and b assumed to be contiguous - * - * XXX: this code is very conservative, and could be considerably sped up for - * the usual cases (like contiguity). - * - * XXX: the code should be refactored (at least with/without initial - * condition), some code is wasteful here - */ -static int -RawFilter(const PyArrayObject *b, const PyArrayObject *a, - const PyArrayObject *x, const PyArrayObject *zi, - const PyArrayObject *zf, PyArrayObject *y, int axis, - BasicFilterFunction *filter_func) -{ - PyArrayIterObject *itx, *ity, *itzi, *itzf; - intp nitx, i, nxl, nzfl, j; - intp na, nb, nal, nbl; - intp nfilt; - char *azfilled, *bzfilled, *zfzfilled, *yoyo; - PyArray_CopySwapFunc *copyswap = x->descr->f->copyswap; - - itx = (PyArrayIterObject *)PyArray_IterAllButAxis( - (PyObject *)x, &axis); - if (itx == NULL) { - PyErr_SetString(PyExc_MemoryError, - "Could not create itx"); - goto fail; - } - nitx = itx->size; - - ity = (PyArrayIterObject *)PyArray_IterAllButAxis( - (PyObject *)y, &axis); - if (ity == NULL) { - PyErr_SetString(PyExc_MemoryError, - "Could not create ity"); - goto clean_itx; - } - - if (zi != NULL) { - itzi = (PyArrayIterObject *)PyArray_IterAllButAxis( - (PyObject *)zi, &axis); - if (itzi == NULL) { - PyErr_SetString(PyExc_MemoryError, - "Could not create itzi"); - goto clean_ity; - } - - itzf = (PyArrayIterObject *)PyArray_IterAllButAxis( - (PyObject *)zf, &axis); - if (itzf == NULL) { - PyErr_SetString(PyExc_MemoryError, - "Could not create itzf"); - goto clean_itzi; - } - } - - na = PyArray_SIZE(a); - nal = PyArray_ITEMSIZE(a); - nb = PyArray_SIZE(b); - nbl = PyArray_ITEMSIZE(b); - - nfilt = na > nb ? na : nb; - - azfilled = malloc(nal * nfilt); - if (azfilled == NULL) { - PyErr_SetString(PyExc_MemoryError, - "Could not create azfilled"); - goto clean_itzf; - } - bzfilled = malloc(nbl * nfilt); - if (bzfilled == NULL) { - PyErr_SetString(PyExc_MemoryError, - "Could not create bzfilled"); - goto clean_azfilled; - } - - nxl = PyArray_ITEMSIZE(x); - zfzfilled = malloc(nxl * (nfilt-1) ); - if (zfzfilled == NULL) { - PyErr_SetString(PyExc_MemoryError, - "Could not create zfzfilled"); - goto clean_bzfilled; - } - /* Initialize zfzilled to 0, so that we can use Py_XINCREF/Py_XDECREF - * on it for object arrays (necessary for copyswap to work correctly). 
- * Stricly speaking, it is not needed for fundamental types (as values - * are copied instead of pointers, without refcounts), but oh well... - */ - memset(zfzfilled, 0, nxl * (nfilt-1)); - - zfill(a, na, azfilled, nfilt); - zfill(b, nb, bzfilled, nfilt); - - /* XXX: Check that zf and zi have same type ? */ - if (zf != NULL) { - nzfl = PyArray_ITEMSIZE(zf); - } else { - nzfl = 0; - } - - /* Iterate over the input array */ - for(i = 0; i < nitx; ++i) { - if (zi != NULL) { - yoyo = itzi->dataptr; - /* Copy initial conditions zi in zfzfilled buffer */ - for(j = 0; j < nfilt - 1; ++j) { - copyswap(zfzfilled + j * nzfl, yoyo, 0, NULL); - yoyo += itzi->strides[axis]; - } - PyArray_ITER_NEXT(itzi); - } else { - zfill(x, 0, zfzfilled, nfilt-1); - } - - filter_func(bzfilled, azfilled, - itx->dataptr, ity->dataptr, zfzfilled, - nfilt, PyArray_DIM(x, axis), itx->strides[axis], - ity->strides[axis]); - PyArray_ITER_NEXT(itx); - PyArray_ITER_NEXT(ity); - - /* Copy tmp buffer fo final values back into zf output array */ - if (zi != NULL) { - yoyo = itzf->dataptr; - for(j = 0; j < nfilt - 1; ++j) { - copyswap(yoyo, zfzfilled + j * nzfl, 0, NULL); - yoyo += itzf->strides[axis]; - } - PyArray_ITER_NEXT(itzf); - } - } - - /* Free up allocated memory */ - free(zfzfilled); - free(bzfilled); - free(azfilled); - - if (zi != NULL) { - Py_DECREF(itzf); - Py_DECREF(itzi); - } - Py_DECREF(ity); - Py_DECREF(itx); - - return 0; - -clean_bzfilled: - free(bzfilled); -clean_azfilled: - free(azfilled); -clean_itzf: - if (zf != NULL) { - Py_DECREF(itzf); - } -clean_itzi: - if (zi != NULL) { - Py_DECREF(itzi); - } -clean_ity: - Py_DECREF(ity); -clean_itx: - Py_DECREF(itx); -fail: - return -1; -} diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/lfilter.c.src python-scipy-0.8.0+dfsg1/scipy/signal/lfilter.c.src --- python-scipy-0.7.2+dfsg1/scipy/signal/lfilter.c.src 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/lfilter.c.src 2010-07-26 15:48:33.000000000 +0100 @@ -0,0 +1,586 @@ +/* + * vim:syntax=c + * vim:sw=4 + */ +#include +#define PY_ARRAY_UNIQUE_SYMBOL _scipy_signal_ARRAY_API +#define NO_IMPORT_ARRAY +#include + +#include "sigtools.h" + +static void FLOAT_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y); +static void DOUBLE_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y); +static void EXTENDED_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y); +static void CFLOAT_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y); +static void CDOUBLE_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y); +static void CEXTENDED_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y); +static void OBJECT_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y); + +typedef void (BasicFilterFunction) (char *, char *, char *, char *, char *, intp, uintp, intp, intp); + +static BasicFilterFunction *BasicFilterFunctions[256]; + +void +scipy_signal_sigtools_linear_filter_module_init() +{ + int k; + for (k = 0; k < 256; ++k) { + BasicFilterFunctions[k] = NULL; + } + BasicFilterFunctions[NPY_FLOAT] = FLOAT_filt; + BasicFilterFunctions[NPY_DOUBLE] = DOUBLE_filt; + BasicFilterFunctions[NPY_LONGDOUBLE] = EXTENDED_filt; + 
BasicFilterFunctions[NPY_CFLOAT] = CFLOAT_filt; + BasicFilterFunctions[NPY_CDOUBLE] = CDOUBLE_filt; + BasicFilterFunctions[NPY_CLONGDOUBLE] = CEXTENDED_filt; + BasicFilterFunctions[NPY_OBJECT] = OBJECT_filt; +} + +/* There is the start of an OBJECT_filt, but it may need work */ + +static int +RawFilter(const PyArrayObject * b, const PyArrayObject * a, + const PyArrayObject * x, const PyArrayObject * zi, + const PyArrayObject * zf, PyArrayObject * y, int axis, + BasicFilterFunction * filter_func); + +/* + * XXX: Error checking not done yet + */ +PyObject* +scipy_signal_sigtools_linear_filter(PyObject * NPY_UNUSED(dummy), PyObject * args) +{ + PyObject *b, *a, *X, *Vi; + PyArrayObject *arY, *arb, *ara, *arX, *arVi, *arVf; + int axis, typenum, theaxis, st; + char *ara_ptr, input_flag = 0, *azero; + intp na, nb, nal; + BasicFilterFunction *basic_filter; + + axis = -1; + Vi = NULL; + if (!PyArg_ParseTuple(args, "OOO|iO", &b, &a, &X, &axis, &Vi)) { + return NULL; + } + + typenum = PyArray_ObjectType(b, 0); + typenum = PyArray_ObjectType(a, typenum); + typenum = PyArray_ObjectType(X, typenum); + if (Vi != NULL) { + typenum = PyArray_ObjectType(Vi, typenum); + } + + arY = arVf = arVi = NULL; + ara = (PyArrayObject *) PyArray_ContiguousFromObject(a, typenum, 1, 1); + arb = (PyArrayObject *) PyArray_ContiguousFromObject(b, typenum, 1, 1); + arX = (PyArrayObject *) PyArray_FromObject(X, typenum, 0, 0); + /* XXX: fix failure handling here */ + if (ara == NULL || arb == NULL || arX == NULL) { + goto fail; + } + + if (axis < -arX->nd || axis > arX->nd - 1) { + PyErr_SetString(PyExc_ValueError, "selected axis is out of range"); + goto fail; + } + if (axis < 0) { + theaxis = arX->nd + axis; + } else { + theaxis = axis; + } + + if (Vi != NULL) { + arVi = (PyArrayObject *) PyArray_FromObject(Vi, typenum, + arX->nd, arX->nd); + if (arVi == NULL) + goto fail; + + input_flag = 1; + } + + arY = (PyArrayObject *) PyArray_SimpleNew(arX->nd, + arX->dimensions, typenum); + if (arY == NULL) { + goto fail; + } + + if (input_flag) { + arVf = (PyArrayObject *) PyArray_SimpleNew(arVi->nd, + arVi->dimensions, + typenum); + } + + if (arX->descr->type_num < 256) { + basic_filter = BasicFilterFunctions[(int) (arX->descr->type_num)]; + } + else { + basic_filter = NULL; + } + if (basic_filter == NULL) { + PyObject *msg, *str; + char *s; + + str = PyObject_Str((PyObject*)arX->descr); + if (str == NULL) { + goto fail; + } + s = PyString_AsString(str); + msg = PyString_FromFormat( + "input type '%s' not supported\n", s); + Py_DECREF(str); + if (msg == NULL) { + goto fail; + } + PyErr_SetObject(PyExc_NotImplementedError, msg); + Py_DECREF(msg); + goto fail; + } + + /* Skip over leading zeros in vector representing denominator (a) */ + /* XXX: handle this correctly */ + azero = PyArray_Zero(ara); + ara_ptr = ara->data; + nal = PyArray_ITEMSIZE(ara); + if (memcmp(ara_ptr, azero, nal) == 0) { + PyErr_SetString(PyExc_ValueError, + "BUG: filter coefficient a[0] == 0 not supported yet"); + goto fail; + } + PyDataMem_FREE(azero); + + na = PyArray_SIZE(ara); + nb = PyArray_SIZE(arb); + if (input_flag) { + if (arVi->dimensions[theaxis] != (na > nb ? 
na : nb) - 1) { + PyErr_SetString(PyExc_ValueError, + "The number of initial conditions must be max([len(a),len(b)]) - 1"); + goto fail; + } + } + + st = RawFilter(arb, ara, arX, arVi, arVf, arY, theaxis, basic_filter); + if (st) { + goto fail; + } + + Py_XDECREF(ara); + Py_XDECREF(arb); + Py_XDECREF(arX); + Py_XDECREF(arVi); + + if (!input_flag) { + return PyArray_Return(arY); + } else { + return Py_BuildValue("(NN)", arY, arVf); + } + + + fail: + Py_XDECREF(ara); + Py_XDECREF(arb); + Py_XDECREF(arX); + Py_XDECREF(arVi); + Py_XDECREF(arVf); + Py_XDECREF(arY); + return NULL; +} + +/* + * Copy the first nxzfilled items of x into xzfilled , and fill the rest with + * 0s + */ +static int +zfill(const PyArrayObject * x, intp nx, char *xzfilled, intp nxzfilled) +{ + char *xzero; + intp i, nxl; + PyArray_CopySwapFunc *copyswap = x->descr->f->copyswap; + + nxl = PyArray_ITEMSIZE(x); + + /* PyArray_Zero does not take const pointer, hence the cast */ + xzero = PyArray_Zero((PyArrayObject *) x); + + if (nx > 0) { + for (i = 0; i < nx; ++i) { + copyswap(xzfilled + i * nxl, x->data + i * nxl, 0, NULL); + } + } + for (i = nx; i < nxzfilled; ++i) { + copyswap(xzfilled + i * nxl, xzero, 0, NULL); + } + + PyDataMem_FREE(xzero); + + return 0; +} + +/* + * a and b assumed to be contiguous + * + * XXX: this code is very conservative, and could be considerably sped up for + * the usual cases (like contiguity). + * + * XXX: the code should be refactored (at least with/without initial + * condition), some code is wasteful here + */ +static int +RawFilter(const PyArrayObject * b, const PyArrayObject * a, + const PyArrayObject * x, const PyArrayObject * zi, + const PyArrayObject * zf, PyArrayObject * y, int axis, + BasicFilterFunction * filter_func) +{ + PyArrayIterObject *itx, *ity, *itzi, *itzf; + intp nitx, i, nxl, nzfl, j; + intp na, nb, nal, nbl; + intp nfilt; + char *azfilled, *bzfilled, *zfzfilled, *yoyo; + PyArray_CopySwapFunc *copyswap = x->descr->f->copyswap; + + itx = (PyArrayIterObject *) PyArray_IterAllButAxis((PyObject *) x, + &axis); + if (itx == NULL) { + PyErr_SetString(PyExc_MemoryError, "Could not create itx"); + goto fail; + } + nitx = itx->size; + + ity = (PyArrayIterObject *) PyArray_IterAllButAxis((PyObject *) y, + &axis); + if (ity == NULL) { + PyErr_SetString(PyExc_MemoryError, "Could not create ity"); + goto clean_itx; + } + + if (zi != NULL) { + itzi = (PyArrayIterObject *) PyArray_IterAllButAxis((PyObject *) + zi, &axis); + if (itzi == NULL) { + PyErr_SetString(PyExc_MemoryError, "Could not create itzi"); + goto clean_ity; + } + + itzf = (PyArrayIterObject *) PyArray_IterAllButAxis((PyObject *) + zf, &axis); + if (itzf == NULL) { + PyErr_SetString(PyExc_MemoryError, "Could not create itzf"); + goto clean_itzi; + } + } + + na = PyArray_SIZE(a); + nal = PyArray_ITEMSIZE(a); + nb = PyArray_SIZE(b); + nbl = PyArray_ITEMSIZE(b); + + nfilt = na > nb ? 
na : nb; + + azfilled = malloc(nal * nfilt); + if (azfilled == NULL) { + PyErr_SetString(PyExc_MemoryError, "Could not create azfilled"); + goto clean_itzf; + } + bzfilled = malloc(nbl * nfilt); + if (bzfilled == NULL) { + PyErr_SetString(PyExc_MemoryError, "Could not create bzfilled"); + goto clean_azfilled; + } + + nxl = PyArray_ITEMSIZE(x); + zfzfilled = malloc(nxl * (nfilt - 1)); + if (zfzfilled == NULL) { + PyErr_SetString(PyExc_MemoryError, "Could not create zfzfilled"); + goto clean_bzfilled; + } + /* Initialize zero filled buffers to 0, so that we can use + * Py_XINCREF/Py_XDECREF on it for object arrays (necessary for + * copyswap to work correctly). Stricly speaking, it is not needed for + * fundamental types (as values are copied instead of pointers, without + * refcounts), but oh well... + */ + memset(azfilled, 0, nal * nfilt); + memset(bzfilled, 0, nbl * nfilt); + memset(zfzfilled, 0, nxl * (nfilt - 1)); + + zfill(a, na, azfilled, nfilt); + zfill(b, nb, bzfilled, nfilt); + + /* XXX: Check that zf and zi have same type ? */ + if (zf != NULL) { + nzfl = PyArray_ITEMSIZE(zf); + } else { + nzfl = 0; + } + + /* Iterate over the input array */ + for (i = 0; i < nitx; ++i) { + if (zi != NULL) { + yoyo = itzi->dataptr; + /* Copy initial conditions zi in zfzfilled buffer */ + for (j = 0; j < nfilt - 1; ++j) { + copyswap(zfzfilled + j * nzfl, yoyo, 0, NULL); + yoyo += itzi->strides[axis]; + } + PyArray_ITER_NEXT(itzi); + } else { + zfill(x, 0, zfzfilled, nfilt - 1); + } + + filter_func(bzfilled, azfilled, + itx->dataptr, ity->dataptr, zfzfilled, + nfilt, PyArray_DIM(x, axis), itx->strides[axis], + ity->strides[axis]); + PyArray_ITER_NEXT(itx); + PyArray_ITER_NEXT(ity); + + /* Copy tmp buffer fo final values back into zf output array */ + if (zi != NULL) { + yoyo = itzf->dataptr; + for (j = 0; j < nfilt - 1; ++j) { + copyswap(yoyo, zfzfilled + j * nzfl, 0, NULL); + yoyo += itzf->strides[axis]; + } + PyArray_ITER_NEXT(itzf); + } + } + + /* Free up allocated memory */ + free(zfzfilled); + free(bzfilled); + free(azfilled); + + if (zi != NULL) { + Py_DECREF(itzf); + Py_DECREF(itzi); + } + Py_DECREF(ity); + Py_DECREF(itx); + + return 0; + +clean_bzfilled: + free(bzfilled); +clean_azfilled: + free(azfilled); +clean_itzf: + if (zf != NULL) { + Py_DECREF(itzf); + } +clean_itzi: + if (zi != NULL) { + Py_DECREF(itzi); + } +clean_ity: + Py_DECREF(ity); +clean_itx: + Py_DECREF(itx); +fail: + return -1; +} + +/***************************************************************** + * This is code for a 1-D linear-filter along an arbitrary * + * dimension of an N-D array. 
* + *****************************************************************/ + +/**begin repeat + * #type = float, double, npy_longdouble# + * #NAME = FLOAT, DOUBLE, EXTENDED# + */ +static void @NAME@_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y) +{ + char *ptr_x = x, *ptr_y = y; + @type@ *ptr_Z, *ptr_b; + @type@ *ptr_a; + @type@ *xn, *yn; + const @type@ a0 = *((@type@ *) a); + intp n; + uintp k; + + for (k = 0; k < len_x; k++) { + ptr_b = (@type@ *) b; /* Reset a and b pointers */ + ptr_a = (@type@ *) a; + xn = (@type@ *) ptr_x; + yn = (@type@ *) ptr_y; + if (len_b > 1) { + ptr_Z = ((@type@ *) Z); + *yn = *ptr_Z + *ptr_b / a0 * *xn; /* Calculate first delay (output) */ + ptr_b++; + ptr_a++; + /* Fill in middle delays */ + for (n = 0; n < len_b - 2; n++) { + *ptr_Z = + ptr_Z[1] + *xn * (*ptr_b / a0) - *yn * (*ptr_a / a0); + ptr_b++; + ptr_a++; + ptr_Z++; + } + /* Calculate last delay */ + *ptr_Z = *xn * (*ptr_b / a0) - *yn * (*ptr_a / a0); + } else { + *yn = *xn * (*ptr_b / a0); + } + + ptr_y += stride_Y; /* Move to next input/output point */ + ptr_x += stride_X; + } +} + +static void C@NAME@_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y) +{ + char *ptr_x = x, *ptr_y = y; + @type@ *ptr_Z, *ptr_b; + @type@ *ptr_a; + @type@ *xn, *yn; + @type@ a0r = ((@type@ *) a)[0]; + @type@ a0i = ((@type@ *) a)[1]; + @type@ a0_mag, tmpr, tmpi; + intp n; + uintp k; + + a0_mag = a0r * a0r + a0i * a0i; + for (k = 0; k < len_x; k++) { + ptr_b = (@type@ *) b; /* Reset a and b pointers */ + ptr_a = (@type@ *) a; + xn = (@type@ *) ptr_x; + yn = (@type@ *) ptr_y; + if (len_b > 1) { + ptr_Z = ((@type@ *) Z); + tmpr = ptr_b[0] * a0r + ptr_b[1] * a0i; + tmpi = ptr_b[1] * a0r - ptr_b[0] * a0i; + /* Calculate first delay (output) */ + yn[0] = ptr_Z[0] + (tmpr * xn[0] - tmpi * xn[1]) / a0_mag; + yn[1] = ptr_Z[1] + (tmpi * xn[0] + tmpr * xn[1]) / a0_mag; + ptr_b += 2; + ptr_a += 2; + /* Fill in middle delays */ + for (n = 0; n < len_b - 2; n++) { + tmpr = ptr_b[0] * a0r + ptr_b[1] * a0i; + tmpi = ptr_b[1] * a0r - ptr_b[0] * a0i; + ptr_Z[0] = + ptr_Z[2] + (tmpr * xn[0] - tmpi * xn[1]) / a0_mag; + ptr_Z[1] = + ptr_Z[3] + (tmpi * xn[0] + tmpr * xn[1]) / a0_mag; + tmpr = ptr_a[0] * a0r + ptr_a[1] * a0i; + tmpi = ptr_a[1] * a0r - ptr_a[0] * a0i; + ptr_Z[0] -= (tmpr * yn[0] - tmpi * yn[1]) / a0_mag; + ptr_Z[1] -= (tmpi * yn[0] + tmpr * yn[1]) / a0_mag; + ptr_b += 2; + ptr_a += 2; + ptr_Z += 2; + } + /* Calculate last delay */ + + tmpr = ptr_b[0] * a0r + ptr_b[1] * a0i; + tmpi = ptr_b[1] * a0r - ptr_b[0] * a0i; + ptr_Z[0] = (tmpr * xn[0] - tmpi * xn[1]) / a0_mag; + ptr_Z[1] = (tmpi * xn[0] + tmpr * xn[1]) / a0_mag; + tmpr = ptr_a[0] * a0r + ptr_a[1] * a0i; + tmpi = ptr_a[1] * a0r - ptr_a[0] * a0i; + ptr_Z[0] -= (tmpr * yn[0] - tmpi * yn[1]) / a0_mag; + ptr_Z[1] -= (tmpi * yn[0] + tmpr * yn[1]) / a0_mag; + } else { + tmpr = ptr_b[0] * a0r + ptr_b[1] * a0i; + tmpi = ptr_b[1] * a0r - ptr_b[0] * a0i; + yn[0] = (tmpr * xn[0] - tmpi * xn[1]) / a0_mag; + yn[1] = (tmpi * xn[0] + tmpr * xn[1]) / a0_mag; + } + + ptr_y += stride_Y; /* Move to next input/output point */ + ptr_x += stride_X; + + } +} +/**end repeat**/ + +static void OBJECT_filt(char *b, char *a, char *x, char *y, char *Z, + intp len_b, uintp len_x, intp stride_X, + intp stride_Y) +{ + char *ptr_x = x, *ptr_y = y; + PyObject **ptr_Z, **ptr_b; + PyObject **ptr_a; + PyObject **xn, **yn; + PyObject **a0 = (PyObject **) a; + PyObject *tmp1, *tmp2, *tmp3; + intp 
n; + uintp k; + + /* My reference counting might not be right */ + for (k = 0; k < len_x; k++) { + ptr_b = (PyObject **) b; /* Reset a and b pointers */ + ptr_a = (PyObject **) a; + xn = (PyObject **) ptr_x; + yn = (PyObject **) ptr_y; + if (len_b > 1) { + ptr_Z = ((PyObject **) Z); + /* Calculate first delay (output) */ + tmp1 = PyNumber_Multiply(*ptr_b, *xn); + tmp2 = PyNumber_Divide(tmp1, *a0); + tmp3 = PyNumber_Add(tmp2, *ptr_Z); + Py_XDECREF(*yn); + *yn = tmp3; + Py_DECREF(tmp1); + Py_DECREF(tmp2); + ptr_b++; + ptr_a++; + + /* Fill in middle delays */ + for (n = 0; n < len_b - 2; n++) { + tmp1 = PyNumber_Multiply(*xn, *ptr_b); + tmp2 = PyNumber_Divide(tmp1, *a0); + tmp3 = PyNumber_Add(tmp2, ptr_Z[1]); + Py_DECREF(tmp1); + Py_DECREF(tmp2); + tmp1 = PyNumber_Multiply(*yn, *ptr_a); + tmp2 = PyNumber_Divide(tmp1, *a0); + Py_DECREF(tmp1); + Py_XDECREF(*ptr_Z); + *ptr_Z = PyNumber_Subtract(tmp3, tmp2); + Py_DECREF(tmp2); + Py_DECREF(tmp3); + ptr_b++; + ptr_a++; + ptr_Z++; + } + /* Calculate last delay */ + tmp1 = PyNumber_Multiply(*xn, *ptr_b); + tmp3 = PyNumber_Divide(tmp1, *a0); + Py_DECREF(tmp1); + tmp1 = PyNumber_Multiply(*yn, *ptr_a); + tmp2 = PyNumber_Divide(tmp1, *a0); + Py_DECREF(tmp1); + Py_XDECREF(*ptr_Z); + *ptr_Z = PyNumber_Subtract(tmp3, tmp2); + Py_DECREF(tmp2); + Py_DECREF(tmp3); + } else { + tmp1 = PyNumber_Multiply(*xn, *ptr_b); + Py_XDECREF(*yn); + *yn = PyNumber_Divide(tmp1, *a0); + Py_DECREF(tmp1); + } + + ptr_y += stride_Y; /* Move to next input/output point */ + ptr_x += stride_X; + } +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/ltisys.py python-scipy-0.8.0+dfsg1/scipy/signal/ltisys.py --- python-scipy-0.7.2+dfsg1/scipy/signal/ltisys.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/ltisys.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,27 +1,40 @@ +""" +ltisys -- a collection of classes and functions for modeling linear +time invariant systems. +""" + # # Author: Travis Oliphant 2001 # +# Feb 2010: Warren Weckesser +# Rewrote lsim2 and added impulse2. +# from filter_design import tf2zpk, zpk2tf, normalize import numpy -from numpy import product, zeros, array, dot, transpose, arange, ones, \ - nan_to_num +from numpy import product, zeros, array, dot, transpose, ones, \ + nan_to_num, zeros_like, linspace import scipy.interpolate as interpolate import scipy.integrate as integrate import scipy.linalg as linalg from numpy import r_, eye, real, atleast_1d, atleast_2d, poly, \ squeeze, diag, asarray + def tf2ss(num, den): """Transfer function to state-space representation. - Inputs: - - num, den -- sequences representing the numerator and denominator polynomials. + Parameters + ---------- + num, den : array_like + Sequences representing the numerator and denominator + polynomials. + + Returns + ------- + A, B, C, D : ndarray + State space representation of the system. - Outputs: - - A, B, C, D -- state space representation of the system. """ # Controller canonical state-space representation. # if M+1 = len(num) and K+1 = len(den) then we must have M <= K @@ -59,7 +72,7 @@ C = num[:,1:] - num[:,0] * den[1:] return A, B, C, D -def none_to_empty(arg): +def _none_to_empty(arg): if arg is None: return [] else: @@ -67,8 +80,9 @@ def abcd_normalize(A=None, B=None, C=None, D=None): """Check state-space matrices and ensure they are rank-2. 
+ """ - A, B, C, D = map(none_to_empty, (A, B, C, D)) + A, B, C, D = map(_none_to_empty, (A, B, C, D)) A, B, C, D = map(atleast_2d, (A, B, C, D)) if ((len(A.shape) > 2) or (len(B.shape) > 2) or \ @@ -109,15 +123,19 @@ def ss2tf(A, B, C, D, input=0): """State-space to transfer function. - Inputs: - - A, B, C, D -- state-space representation of linear system. - input -- For multiple-input systems, the input to use. + Parameters + ---------- + A, B, C, D : ndarray + State-space representation of linear system. + input : int + For multiple-input systems, the input to use. + + Returns + ------- + num, den : 1D ndarray + Numerator and denominator polynomials (as sequences) + respectively. - Outputs: - - num, den -- Numerator and denominator polynomials (as sequences) - respectively. """ # transfer function is C (sI - A)**(-1) B + D A, B, C, D = map(asarray, (A, B, C, D)) @@ -136,13 +154,15 @@ if D.shape[-1] != 0: D = D[:,input] - den = poly(A) + try: + den = poly(A) + except ValueError: + den = 1 if (product(B.shape,axis=0) == 0) and (product(C.shape,axis=0) == 0): num = numpy.ravel(D) if (product(D.shape,axis=0) == 0) and (product(A.shape,axis=0) == 0): den = [] - end return num, den num_states = A.shape[0] @@ -157,13 +177,18 @@ def zpk2ss(z,p,k): """Zero-pole-gain representation to state-space representation - Inputs: - - z, p, k -- zeros, poles (sequences), and gain of system - - Outputs: + Parameters + ---------- + z, p : sequence + Zeros and poles. + k : float + System gain. + + Returns + ------- + A, B, C, D : ndarray + State-space matrices. - A, B, C, D -- state-space matrices. """ return tf2ss(*zpk2tf(z,p,k)) @@ -269,99 +294,145 @@ return lsim(self, U, T, X0=X0) -def lsim2(system, U, T, X0=None): - """Simulate output of a continuous-time linear system, using ODE solver. +def lsim2(system, U=None, T=None, X0=None, **kwargs): + """ + Simulate output of a continuous-time linear system, by using + the ODE solver `scipy.integrate.odeint`. - Inputs: + Parameters + ---------- + system : an instance of the LTI class or a tuple describing the system. + The following gives the number of elements in the tuple and + the interpretation: + + * 2: (num, den) + * 3: (zeros, poles, gain) + * 4: (A, B, C, D) + + U : ndarray or array-like (1D or 2D), optional + An input array describing the input at each time T. Linear + interpolation is used between given times. If there are + multiple inputs, then each column of the rank-2 array + represents an input. If U is not given, the input is assumed + to be zero. + T : ndarray or array-like (1D or 2D), optional + The time steps at which the input is defined and at which the + output is desired. The default is 101 evenly spaced points on + the interval [0,10.0]. + X0 : ndarray or array-like (1D), optional + The initial condition of the state vector. If `X0` is not + given, the initial conditions are assumed to be 0. + kwargs : dict + Additional keyword arguments are passed on to the function + odeint. See the notes below for more details. + + Returns + ------- + T : 1D ndarray + The time values for the output. + yout : ndarray + The response of the system. + xout : ndarray + The time-evolution of the state-vector. + + Notes + ----- + This function uses :func:`scipy.integrate.odeint` to solve the + system's differential equations. Additional keyword arguments + given to `lsim2` are passed on to `odeint`. See the documentation + for :func:`scipy.integrate.odeint` for the full list of arguments. 
- system -- an instance of the LTI class or a tuple describing the - system. The following gives the number of elements in - the tuple and the interpretation. - 2 (num, den) - 3 (zeros, poles, gain) - 4 (A, B, C, D) - U -- an input array describing the input at each time T - (linear interpolation is assumed between given times). - If there are multiple inputs, then each column of the - rank-2 array represents an input. - T -- the time steps at which the input is defined and at which - the output is desired. - X0 -- (optional, default=0) the initial conditions on the state vector. - - Outputs: (T, yout, xout) - - T -- the time values for the output. - yout -- the response of the system. - xout -- the time-evolution of the state-vector. """ - # system is an lti system or a sequence - # with 2 (num, den) - # 3 (zeros, poles, gain) - # 4 (A, B, C, D) - # describing the system - # U is an input vector at times T - # if system describes multiple outputs - # then U can be a rank-2 array with the number of columns - # being the number of inputs - - # rather than use lsim, use direct integration and matrix-exponential. if isinstance(system, lti): sys = system else: sys = lti(*system) - U = atleast_1d(U) - T = atleast_1d(T) - if len(U.shape) == 1: - U = U.reshape((U.shape[0],1)) - sU = U.shape - if len(T.shape) != 1: - raise ValueError, "T must be a rank-1 array." - if sU[0] != len(T): - raise ValueError, "U must have the same number of rows as elements in T." - if sU[1] != sys.inputs: - raise ValueError, "System does not define that many inputs." if X0 is None: X0 = zeros(sys.B.shape[0],sys.A.dtype) - # for each output point directly integrate assume zero-order hold - # or linear interpolation. + if T is None: + # XXX T should really be a required argument, but U was + # changed from a required positional argument to a keyword, + # and T is after U in the argument list. So we either: change + # the API and move T in front of U; check here for T being + # None and raise an exception; or assign a default value to T + # here. This code implements the latter. + T = linspace(0, 10.0, 101) - ufunc = interpolate.interp1d(T, U, kind='linear', axis=0, bounds_error=False) + T = atleast_1d(T) + if len(T.shape) != 1: + raise ValueError, "T must be a rank-1 array." - def fprime(x, t, sys, ufunc): - return dot(sys.A,x) + squeeze(dot(sys.B,nan_to_num(ufunc([t])))) + if U is not None: + U = atleast_1d(U) + if len(U.shape) == 1: + U = U.reshape(-1,1) + sU = U.shape + if sU[0] != len(T): + raise ValueError("U must have the same number of rows " + "as elements in T.") + + if sU[1] != sys.inputs: + raise ValueError("The number of inputs in U (%d) is not " + "compatible with the number of system " + "inputs (%d)" % (sU[1], sys.inputs)) + # Create a callable that uses linear interpolation to + # calculate the input at any time. 
+ ufunc = interpolate.interp1d(T, U, kind='linear', + axis=0, bounds_error=False) + + def fprime(x, t, sys, ufunc): + """The vector field of the linear system.""" + return dot(sys.A,x) + squeeze(dot(sys.B,nan_to_num(ufunc([t])))) + xout = integrate.odeint(fprime, X0, T, args=(sys, ufunc), **kwargs) + yout = dot(sys.C,transpose(xout)) + dot(sys.D,transpose(U)) + else: + def fprime(x, t, sys): + """The vector field of the linear system.""" + return dot(sys.A,x) + xout = integrate.odeint(fprime, X0, T, args=(sys,), **kwargs) + yout = dot(sys.C,transpose(xout)) - xout = integrate.odeint(fprime, X0, T, args=(sys, ufunc)) - yout = dot(sys.C,transpose(xout)) + dot(sys.D,transpose(U)) return T, squeeze(transpose(yout)), xout def lsim(system, U, T, X0=None, interp=1): - """Simulate output of a continuous-time linear system. + """ + Simulate output of a continuous-time linear system. - Inputs: + Parameters + ---------- + system : an instance of the LTI class or a tuple describing the system. + The following gives the number of elements in the tuple and + the interpretation: + + * 2: (num, den) + * 3: (zeros, poles, gain) + * 4: (A, B, C, D) + + U : array_like + An input array describing the input at each time `T` + (interpolation is assumed between given times). If there are + multiple inputs, then each column of the rank-2 array + represents an input. + T : array_like + The time steps at which the input is defined and at which the + output is desired. + X0 : + The initial conditions on the state vector (zero by default). + interp : {1, 0} + Whether to use linear (1) or zero-order hold (0) interpolation. + + Returns + ------- + T : 1D ndarray + Time values for the output. + yout : 1D ndarray + System response. + xout : ndarray + Time-evolution of the state-vector. - system -- an instance of the LTI class or a tuple describing the - system. The following gives the number of elements in - the tuple and the interpretation. - 2 (num, den) - 3 (zeros, poles, gain) - 4 (A, B, C, D) - U -- an input array describing the input at each time T - (interpolation is assumed between given times). - If there are multiple inputs, then each column of the - rank-2 array represents an input. - T -- the time steps at which the input is defined and at which - the output is desired. - X0 -- (optional, default=0) the initial conditions on the state vector. - interp -- linear (1) or zero-order hold (0) interpolation - - Outputs: (T, yout, xout) - - T -- the time values for the output. - yout -- the response of the system. - xout -- the time-evolution of the state-vector. """ # system is an lti system or a sequence # with 2 (num, den) @@ -384,7 +455,8 @@ if len(T.shape) != 1: raise ValueError, "T must be a rank-1 array." if sU[0] != len(T): - raise ValueError, "U must have the same number of rows as elements in T." + raise ValueError("U must have the same number of rows " + "as elements in T.") if sU[1] != sys.inputs: raise ValueError, "System does not define that many inputs." @@ -426,22 +498,59 @@ return T, squeeze(yout), squeeze(xout) -def impulse(system, X0=None, T=None, N=None): - """Impulse response of continuous-time system. +def _default_response_times(A, n): + """Compute a reasonable set of time samples for the response time. + + This function is used by impulse(), impulse2(), step() and step2() + to compute the response time when the `T` argument to the function + is None. + + Parameters + ---------- + A : square ndarray + The system matrix. + n : int + The number of time samples to generate. 
+ + Returns + ------- + t : ndarray, 1D + The 1D array of length `n` of time samples at which the response + is to be computed. + """ + # Create a reasonable time interval. This could use some more work. + # For example, what is expected when the system is unstable? + vals = linalg.eigvals(A) + r = min(abs(real(vals))) + if r == 0.0: + r = 1.0 + tc = 1.0 / r + t = linspace(0.0, 7*tc, n) + return t - Inputs: - system -- an instance of the LTI class or a tuple with 2, 3, or 4 - elements representing (num, den), (zero, pole, gain), or - (A, B, C, D) representation of the system. - X0 -- (optional, default = 0) inital state-vector. - T -- (optional) time points (autocomputed if not given). - N -- (optional) number of time points to autocompute (100 if not given). +def impulse(system, X0=None, T=None, N=None): + """Impulse response of continuous-time system. - Ouptuts: (T, yout) + Parameters + ---------- + system : LTI class or tuple + If specified as a tuple, the system is described as + ``(num, den)``, ``(zero, pole, gain)``, or ``(A, B, C, D)``. + X0 : array_like, optional + Initial state-vector. Defaults to zero. + T : array_like, optional + Time points. Computed if not given. + N : int, optional + The number of time points to compute (if `T` is not given). + + Returns + ------- + T : 1D ndarray + Time points. + yout : 1D ndarray + Impulse response of the system (except for singularities at zero). - T -- output time points, - yout -- impulse response of system (except possible singularities at 0). """ if isinstance(system, lti): sys = system @@ -454,9 +563,7 @@ if N is None: N = 100 if T is None: - vals = linalg.eigvals(sys.A) - tc = 1.0/min(abs(real(vals))) - T = arange(0,7*tc,7*tc / float(N)) + T = _default_response_times(sys.A, N) h = zeros(T.shape, sys.A.dtype) s,v = linalg.eig(sys.A) vi = linalg.inv(v) @@ -467,22 +574,103 @@ h[k] = squeeze(dot(dot(C,eA),B)) return T, h -def step(system, X0=None, T=None, N=None): - """Step response of continuous-time system. - Inputs: +def impulse2(system, X0=None, T=None, N=None, **kwargs): + """Impulse response of a single-input continuous-time linear system. + + The solution is generated by calling `scipy.signal.lsim2`, which uses + the differential equation solver `scipy.integrate.odeint`. + + Parameters + ---------- + system : an instance of the LTI class or a tuple describing the system. + The following gives the number of elements in the tuple and + the interpretation. + 2 (num, den) + 3 (zeros, poles, gain) + 4 (A, B, C, D) + T : 1D ndarray or array-like, optional + The time steps at which the input is defined and at which the + output is desired. If `T` is not given, the function will + generate a set of time samples automatically. + X0 : 1D ndarray or array-like, optional + The initial condition of the state vector. If X0 is None, the + initial conditions are assumed to be 0. + N : int, optional + Number of time points to compute. If `N` is not given, 100 + points are used. + **kwargs : + Additional keyword arguments are passed on the function + `scipy.signal.lsim2`, which in turn passes them on to + :func:`scipy.integrate.odeint`. See the documentation for + :func:`scipy.integrate.odeint` for information about these + arguments. + + Returns + ------- + T : 1D ndarray + The time values for the output. + yout : ndarray + The output response of the system. + + See Also + -------- + scipy.signal.impulse + + Notes + ----- + .. 
versionadded:: 0.8.0 + """ + if isinstance(system, lti): + sys = system + else: + sys = lti(*system) + B = sys.B + if B.shape[-1] != 1: + raise ValueError, "impulse2() requires a single-input system." + B = B.squeeze() + if X0 is None: + X0 = zeros_like(B) + if N is None: + N = 100 + if T is None: + T = _default_response_times(sys.A, N) + # Move the impulse in the input to the initial conditions, and then + # solve using lsim2(). + U = zeros_like(T) + ic = B + X0 + Tr, Yr, Xr = lsim2(sys, U, T, ic, **kwargs) + return Tr, Yr - system -- an instance of the LTI class or a tuple with 2, 3, or 4 - elements representing (num, den), (zero, pole, gain), or - (A, B, C, D) representation of the system. - X0 -- (optional, default = 0) inital state-vector. - T -- (optional) time points (autocomputed if not given). - N -- (optional) number of time points to autocompute (100 if not given). - Ouptuts: (T, yout) +def step(system, X0=None, T=None, N=None): + """Step response of continuous-time system. - T -- output time points, - yout -- step response of system. + Parameters + ---------- + system : an instance of the LTI class or a tuple describing the system. + The following gives the number of elements in the tuple and + the interpretation. + 2 (num, den) + 3 (zeros, poles, gain) + 4 (A, B, C, D) + X0 : array_like, optional + Initial state-vector (default is zero). + T : array_like, optional + Time points (computed if not given). + N : int + Number of time points to compute if `T` is not given. + + Returns + ------- + T : 1D ndarray + Output time points. + yout : 1D ndarray + Step response of system. + + See also + -------- + scipy.signal.step2 """ if isinstance(system, lti): sys = system @@ -491,9 +679,62 @@ if N is None: N = 100 if T is None: - vals = linalg.eigvals(sys.A) - tc = 1.0/min(abs(real(vals))) - T = arange(0,7*tc,7*tc / float(N)) + T = _default_response_times(sys.A, N) U = ones(T.shape, sys.A.dtype) vals = lsim(sys, U, T, X0=X0) return vals[0], vals[1] + +def step2(system, X0=None, T=None, N=None, **kwargs): + """Step response of continuous-time system. + + This function is functionally the same as `scipy.signal.step`, but + it uses the function `scipy.signal.lsim2` to compute the step + response. + + Parameters + ---------- + system : an instance of the LTI class or a tuple describing the system. + The following gives the number of elements in the tuple and + the interpretation. + 2 (num, den) + 3 (zeros, poles, gain) + 4 (A, B, C, D) + X0 : array_like, optional + Initial state-vector (default is zero). + T : array_like, optional + Time points (computed if not given). + N : int + Number of time points to compute if `T` is not given. + **kwargs : + Additional keyword arguments are passed on the function + `scipy.signal.lsim2`, which in turn passes them on to + :func:`scipy.integrate.odeint`. See the documentation for + :func:`scipy.integrate.odeint` for information about these + arguments. + + Returns + ------- + T : 1D ndarray + Output time points. + yout : 1D ndarray + Step response of system. + + See also + -------- + scipy.signal.step + + Notes + ----- + .. 
versionadded:: 0.8.0 + """ + if isinstance(system, lti): + sys = system + else: + sys = lti(*system) + if N is None: + N = 100 + if T is None: + T = _default_response_times(sys.A, N) + U = ones(T.shape, sys.A.dtype) + vals = lsim2(sys, U, T, X0=X0, **kwargs) + return vals[0], vals[1] diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/SConscript python-scipy-0.8.0+dfsg1/scipy/signal/SConscript --- python-scipy-0.7.2+dfsg1/scipy/signal/SConscript 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/SConscript 2010-07-26 15:48:33.000000000 +0100 @@ -1,4 +1,4 @@ -# Last Change: Wed Mar 05 05:00 PM 2008 J +# Last Change: Mon Apr 20 04:00 PM 2009 J # vim:syntax=python from os.path import join @@ -6,8 +6,10 @@ env = GetNumpyEnvironment(ARGUMENTS) +src = env.FromCTemplate("lfilter.c.src") +src += env.FromCTemplate("correlate_nd.c.src") env.NumpyPythonExtension('sigtools', - source = ['sigtoolsmodule.c',\ + source = src + ['sigtoolsmodule.c',\ 'firfilter.c', \ 'medianfilter.c']) diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/setup.py python-scipy-0.8.0+dfsg1/scipy/signal/setup.py --- python-scipy-0.7.2+dfsg1/scipy/signal/setup.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/setup.py 2010-07-26 15:48:33.000000000 +0100 @@ -9,8 +9,10 @@ config.add_extension('sigtools', sources=['sigtoolsmodule.c', - 'firfilter.c','medianfilter.c'], - depends = ['sigtools.h', 'lfilter.c'] + 'firfilter.c','medianfilter.c', 'lfilter.c.src', + 'correlate_nd.c.src'], + depends = ['sigtools.h'], + include_dirs=['.'] ) config.add_extension('spline', diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/signaltools.py python-scipy-0.8.0+dfsg1/scipy/signal/signaltools.py --- python-scipy-0.7.2+dfsg1/scipy/signal/signaltools.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/signaltools.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,26 +1,39 @@ # Author: Travis Oliphant # 1999 -- 2002 -import types +import warnings + import sigtools -from scipy import special, linalg -from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2, fftn, ifftn -from numpy import polyadd, polymul, polydiv, polysub, \ - roots, poly, polyval, polyder, cast, asarray, isscalar, atleast_1d, \ - ones, sin, linspace, real, extract, real_if_close, zeros, array, arange, \ - where, sqrt, rank, newaxis, argmax, product, cos, pi, exp, \ - ravel, size, less_equal, sum, r_, iscomplexobj, take, \ - argsort, allclose, expand_dims, unique, prod, sort, reshape, \ - transpose, dot, any, mean, cosh, arccosh, \ - arccos, concatenate, flipud +from scipy import linalg +from scipy.fftpack import fft, ifft, ifftshift, fft2, ifft2, fftn, \ + ifftn, fftfreq +from numpy import polyadd, polymul, polydiv, polysub, roots, \ + poly, polyval, polyder, cast, asarray, isscalar, atleast_1d, \ + ones, real, real_if_close, zeros, array, arange, where, rank, \ + newaxis, product, ravel, sum, r_, iscomplexobj, take, \ + argsort, allclose, expand_dims, unique, prod, sort, reshape, \ + transpose, dot, any, mean, flipud, ndarray import numpy as np from scipy.misc import factorial +from windows import get_window _modedict = {'valid':0, 'same':1, 'full':2} _boundarydict = {'fill':0, 'pad':0, 'wrap':2, 'circular':2, 'symm':1, 'symmetric':1, 'reflect':4} +_SWAP_INPUTS_DEPRECATION_MSG = """\ +Current default behavior of convolve and correlate functions is deprecated. 
+ +Convolve and correlate currently swap their arguments if the second argument +has dimensions larger than the first one, and the mode is relative to the input +with the largest dimension. The new behavior is to never swap the inputs, which +is what most people expect, and is how correlation is usually defined. + +You can control the behavior with the old_behavior flag - the flag will +disappear in scipy 0.9.0, and the functions will then implement the new +behavior only.""" + def _valfrommode(mode): try: val = _modedict[mode] @@ -42,45 +55,87 @@ return val -def correlate(in1, in2, mode='full'): - """Cross-correlate two N-dimensional arrays. - - Description: +def correlate(in1, in2, mode='full', old_behavior=True): + """ + Cross-correlate two N-dimensional arrays. - Cross-correlate in1 and in2 with the output size determined by mode. + Cross-correlate in1 and in2 with the output size determined by the mode + argument. - Inputs: + Parameters + ---------- + in1: array + first input. + in2: array + second input. Should have the same number of dimensions as in1. + mode: str {'valid', 'same', 'full'} + a string indicating the size of the output: + - 'valid': the output consists only of those elements that do not + rely on the zero-padding. + - 'same': the output is the same size as the largest input centered + with respect to the 'full' output. + - 'full': the output is the full discrete linear cross-correlation + of the inputs. (Default) + old_behavior: bool + If True (default), the old behavior of correlate is implemented: + - if in1.size < in2.size, in1 and in2 are swapped (correlate(in1, + in2) == correlate(in2, in1)) + - For complex inputs, the conjugate is not taken for in2 + If False, the new, conventional definition of correlate is implemented. - in1 -- an N-dimensional array. - in2 -- an array with the same number of dimensions as in1. - mode -- a flag indicating the size of the output - 'valid' (0): The output consists only of those elements that - do not rely on the zero-padding. - 'same' (1): The output is the same size as the largest input - centered with respect to the 'full' output. - 'full' (2): The output is the full discrete linear - cross-correlation of the inputs. (Default) + Returns + ------- + out: array + an N-dimensional array containing a subset of the discrete linear + cross-correlation of in1 with in2. - Outputs: (out,) + Notes + ----- + The correlation z of two arrays x and y of rank d is defined as - out -- an N-dimensional array containing a subset of the discrete linear - cross-correlation of in1 with in2. + z[...,k,...] = sum[..., i_l, ...] + x[..., i_l,...] * conj(y[..., i_l + k,...]) """ - # Code is faster if kernel is smallest array. 
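As a rough sketch of the behaviour change spelled out in the deprecation message above (assuming the scipy 0.8 signature with the old_behavior flag; the sample arrays are invented for illustration):

    import numpy as np
    from scipy.signal import correlate

    x = np.array([1.0 + 1.0j, 2.0, 3.0 - 2.0j])
    y = np.array([0.5j, 1.0])
    # Old behaviour (still the default in 0.8, emits a DeprecationWarning):
    # the smaller input may be swapped to the front and no conjugate is
    # taken for the second input.
    z_old = correlate(x, y, mode='full')
    # New convention: never swap, conjugate the second input, following
    # z[k] = sum_l x[l] * conj(y[l + k]) as in the Notes above.
    z_new = correlate(x, y, mode='full', old_behavior=False)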
- volume = asarray(in1) - kernel = asarray(in2) - if rank(volume) == rank(kernel) == 0: - return volume*kernel - if (product(kernel.shape,axis=0) > product(volume.shape,axis=0)): - temp = kernel - kernel = volume - volume = temp - del temp - val = _valfrommode(mode) - return sigtools._correlateND(volume, kernel, val) + if old_behavior: + warnings.warn(DeprecationWarning(_SWAP_INPUTS_DEPRECATION_MSG)) + if np.iscomplexobj(in2): + in2 = in2.conjugate() + if in1.size < in2.size: + swp = in2 + in2 = in1 + in1 = swp + + if mode == 'valid': + ps = [i - j + 1 for i, j in zip(in1.shape, in2.shape)] + out = np.empty(ps, in1.dtype) + for i in range(len(ps)): + if ps[i] <= 0: + raise ValueError("Dimension of x(%d) < y(%d) " \ + "not compatible with valid mode" % \ + (in1.shape[i], in2.shape[i])) + + z = sigtools._correlateND(in1, in2, out, val) + else: + ps = [i + j - 1 for i, j in zip(in1.shape, in2.shape)] + # zero pad input + in1zpadded = np.zeros(ps, in1.dtype) + sc = [slice(0, i) for i in in1.shape] + in1zpadded[sc] = in1.copy() + + if mode == 'full': + out = np.empty(ps, in1.dtype) + z = sigtools._correlateND(in1zpadded, in2, out, val) + elif mode == 'same': + out = np.empty(in1.shape, in1.dtype) + + z = sigtools._correlateND(in1zpadded, in2, out, val) + else: + raise ValueError("Uknown mode %s" % mode) + + return z def _centered(arr, newsize): # Return the center newsize portion of the array. @@ -100,9 +155,13 @@ complex_result = (np.issubdtype(in1.dtype, np.complex) or np.issubdtype(in2.dtype, np.complex)) size = s1+s2-1 - IN1 = fftn(in1,size) - IN1 *= fftn(in2,size) - ret = ifftn(IN1) + + # Always use 2**n-sized FFT + fsize = 2**np.ceil(np.log2(size)) + IN1 = fftn(in1,fsize) + IN1 *= fftn(in2,fsize) + fslice = tuple([slice(0, int(sz)) for sz in size]) + ret = ifftn(IN1)[fslice].copy() del IN1 if not complex_result: ret = ret.real @@ -118,71 +177,94 @@ return _centered(ret,abs(s2-s1)+1) -def convolve(in1, in2, mode='full'): - """Convolve two N-dimensional arrays. +def convolve(in1, in2, mode='full', old_behavior=True): + """ + Convolve two N-dimensional arrays. - Description: + Convolve in1 and in2 with output size determined by mode. - Convolve in1 and in2 with output size determined by mode. + Parameters + ---------- + in1: array + first input. + in2: array + second input. Should have the same number of dimensions as in1. + mode: str {'valid', 'same', 'full'} + a string indicating the size of the output: - Inputs: + ``valid`` : the output consists only of those elements that do not + rely on the zero-padding. - in1 -- an N-dimensional array. - in2 -- an array with the same number of dimensions as in1. - mode -- a flag indicating the size of the output - 'valid' (0): The output consists only of those elements that - are computed by scaling the larger array with all - the values of the smaller array. - 'same' (1): The output is the same size as the largest input - centered with respect to the 'full' output. - 'full' (2): The output is the full discrete linear convolution - of the inputs. (Default) + ``same`` : the output is the same size as the largest input centered + with respect to the 'full' output. - Outputs: (out,) + ``full`` : the output is the full discrete linear cross-correlation + of the inputs. (Default) - out -- an N-dimensional array containing a subset of the discrete linear - convolution of in1 with in2. + + Returns + ------- + out: array + an N-dimensional array containing a subset of the discrete linear + cross-correlation of in1 with in2. 
""" volume = asarray(in1) kernel = asarray(in2) + if rank(volume) == rank(kernel) == 0: return volume*kernel - if (product(kernel.shape,axis=0) > product(volume.shape,axis=0)): - temp = kernel - kernel = volume - volume = temp - del temp + elif not volume.ndim == kernel.ndim: + raise ValueError("in1 and in2 should have the same rank") slice_obj = [slice(None,None,-1)]*len(kernel.shape) - val = _valfrommode(mode) - - return sigtools._correlateND(volume,kernel[slice_obj],val) -def order_filter(a, domain, rank): - """Perform an order filter on an N-dimensional array. + if old_behavior: + warnings.warn(DeprecationWarning(_SWAP_INPUTS_DEPRECATION_MSG)) + if (product(kernel.shape,axis=0) > product(volume.shape,axis=0)): + temp = kernel + kernel = volume + volume = temp + del temp - Description: + return correlate(volume, kernel[slice_obj], mode, old_behavior=True) + else: + if mode == 'valid': + for d1, d2 in zip(volume.shape, kernel.shape): + if not d1 >= d2: + raise ValueError( + "in1 should have at least as many items as in2 in " \ + "every dimension for valid mode.") + if np.iscomplexobj(kernel): + return correlate(volume, kernel[slice_obj].conj(), mode, old_behavior=False) + else: + return correlate(volume, kernel[slice_obj], mode, old_behavior=False) - Perform an order filter on the array in. The domain argument acts as a - mask centered over each pixel. The non-zero elements of domain are - used to select elements surrounding each input pixel which are placed - in a list. The list is sorted, and the output for that pixel is the - element corresponding to rank in the sorted list. +def order_filter(a, domain, rank): + """ + Perform an order filter on an N-dimensional array. - Inputs: + Description: - in -- an N-dimensional input array. - domain -- a mask array with the same number of dimensions as in. Each - dimension should have an odd number of elements. - rank -- an non-negative integer which selects the element from the - sorted list (0 corresponds to the largest element, 1 is the - next largest element, etc.) + Perform an order filter on the array in. The domain argument acts as a + mask centered over each pixel. The non-zero elements of domain are + used to select elements surrounding each input pixel which are placed + in a list. The list is sorted, and the output for that pixel is the + element corresponding to rank in the sorted list. - Output: (out,) + Parameters + ---------- + in -- an N-dimensional input array. + domain -- a mask array with the same number of dimensions as in. Each + dimension should have an odd number of elements. + rank -- an non-negative integer which selects the element from the + sorted list (0 corresponds to the largest element, 1 is the + next largest element, etc.) - out -- the results of the order filter in an array with the same - shape as in. + Returns + ------- + out -- the results of the order filter in an array with the same + shape as in. """ domain = asarray(domain) @@ -216,7 +298,7 @@ result. """ - volume = asarray(volume) + volume = atleast_1d(volume) if kernel_size is None: kernel_size = [3] * len(volume.shape) kernel_size = asarray(kernel_size) @@ -236,25 +318,26 @@ def wiener(im,mysize=None,noise=None): - """Perform a Wiener filter on an N-dimensional array. + """ + Perform a Wiener filter on an N-dimensional array. - Description: + Description: - Apply a Wiener filter to the N-dimensional array in. + Apply a Wiener filter to the N-dimensional array in. - Inputs: + Inputs: - in -- an N-dimensional array. 
- kernel_size -- A scalar or an N-length list giving the size of the - median filter window in each dimension. Elements of - kernel_size should be odd. If kernel_size is a scalar, - then this scalar is used as the size in each dimension. - noise -- The noise-power to use. If None, then noise is estimated as - the average of the local variance of the input. + in -- an N-dimensional array. + kernel_size -- A scalar or an N-length list giving the size of the + Wiener filter window in each dimension. Elements of + kernel_size should be odd. If kernel_size is a scalar, + then this scalar is used as the size in each dimension. + noise -- The noise-power to use. If None, then noise is estimated as + the average of the local variance of the input. - Outputs: (out,) + Outputs: (out,) - out -- Wiener filtered result with the same shape as in. + out -- Wiener filtered result with the same shape as in. """ im = asarray(im) @@ -263,10 +346,10 @@ mysize = asarray(mysize); # Estimate the local mean - lMean = correlate(im,ones(mysize),1) / product(mysize,axis=0) + lMean = correlate(im,ones(mysize), 'same', old_behavior=False) / product(mysize,axis=0) # Estimate the local variance - lVar = correlate(im**2,ones(mysize),1) / product(mysize,axis=0) - lMean**2 + lVar = correlate(im**2,ones(mysize), 'same', old_behavior=False) / product(mysize,axis=0) - lMean**2 # Estimate the noise power if needed. if noise==None: @@ -280,7 +363,7 @@ return out -def convolve2d(in1, in2, mode='full', boundary='fill', fillvalue=0): +def convolve2d(in1, in2, mode='full', boundary='fill', fillvalue=0, old_behavior=True): """Convolve two 2-dimensional arrays. Description: @@ -311,12 +394,30 @@ convolution of in1 with in2. """ + if old_behavior: + warnings.warn(DeprecationWarning(_SWAP_INPUTS_DEPRECATION_MSG)) + + if old_behavior: + warnings.warn(DeprecationWarning(_SWAP_INPUTS_DEPRECATION_MSG)) + if (product(np.shape(in2),axis=0) > product(np.shape(in1),axis=0)): + temp = in1 + in1 = in2 + in2 = temp + del temp + else: + if mode == 'valid': + for d1, d2 in zip(np.shape(in1), np.shape(in2)): + if not d1 >= d2: + raise ValueError( + "in1 should have at least as many items as in2 in " \ + "every dimension for valid mode.") + val = _valfrommode(mode) bval = _bvalfromboundary(boundary) return sigtools._convolve2d(in1,in2,1,val,bval,fillvalue) -def correlate2d(in1, in2, mode='full', boundary='fill', fillvalue=0): +def correlate2d(in1, in2, mode='full', boundary='fill', fillvalue=0, old_behavior=True): """Cross-correlate two 2-dimensional arrays. Description: @@ -347,6 +448,8 @@ cross-correlation of in1 with in2. """ + if old_behavior: + warnings.warn(DeprecationWarning(_SWAP_INPUTS_DEPRECATION_MSG)) val = _valfrommode(mode) bval = _bvalfromboundary(boundary) @@ -433,61 +536,70 @@ maxiter, grid_density) def lfilter(b, a, x, axis=-1, zi=None): - """Filter data along one-dimension with an IIR or FIR filter. - - Description + """ + Filter data along one-dimension with an IIR or FIR filter. Filter a data sequence, x, using a digital filter. This works for many fundamental data types (including Object type). The filter is a direct form II transposed implementation of the standard difference equation - (see "Algorithm"). + (see Notes). - Inputs: - - b -- The numerator coefficient vector in a 1-D sequence. - a -- The denominator coefficient vector in a 1-D sequence. If a[0] - is not 1, then both a and b are normalized by a[0]. - x -- An N-dimensional input array. 
- axis -- The axis of the input data array along which to apply the - linear filter. The filter is applied to each subarray along - this axis (*Default* = -1) - zi -- Initial conditions for the filter delays. It is a vector - (or array of vectors for an N-dimensional input) of length - max(len(a),len(b)). If zi=None or is not given then initial - rest is assumed. SEE signal.lfiltic for more information. - - Outputs: (y, {zf}) - - y -- The output of the digital filter. - zf -- If zi is None, this is not returned, otherwise, zf holds the - final filter delay values. + Parameters + ---------- + b : array_like + The numerator coefficient vector in a 1-D sequence. + a : array_like + The denominator coefficient vector in a 1-D sequence. If a[0] + is not 1, then both a and b are normalized by a[0]. + x : array_like + An N-dimensional input array. + axis : int + The axis of the input data array along which to apply the + linear filter. The filter is applied to each subarray along + this axis (*Default* = -1) + zi : array_like (optional) + Initial conditions for the filter delays. It is a vector + (or array of vectors for an N-dimensional input) of length + max(len(a),len(b))-1. If zi=None or is not given then initial + rest is assumed. SEE signal.lfiltic for more information. - Algorithm: + Returns + ------- + y : array + The output of the digital filter. + zf : array (optional) + If zi is None, this is not returned, otherwise, zf holds the + final filter delay values. + Notes + ----- The filter function is implemented as a direct II transposed structure. This means that the filter implements - a[0]*y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[nb]*x[n-nb] - - a[1]*y[n-1] - ... - a[na]*y[n-na] + :: - using the following difference equations: + a[0]*y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[nb]*x[n-nb] + - a[1]*y[n-1] - ... - a[na]*y[n-na] - y[m] = b[0]*x[m] + z[0,m-1] - z[0,m] = b[1]*x[m] + z[1,m-1] - a[1]*y[m] - ... - z[n-3,m] = b[n-2]*x[m] + z[n-2,m-1] - a[n-2]*y[m] - z[n-2,m] = b[n-1]*x[m] - a[n-1]*y[m] + using the following difference equations:: + + y[m] = b[0]*x[m] + z[0,m-1] + z[0,m] = b[1]*x[m] + z[1,m-1] - a[1]*y[m] + ... + z[n-3,m] = b[n-2]*x[m] + z[n-2,m-1] - a[n-2]*y[m] + z[n-2,m] = b[n-1]*x[m] - a[n-1]*y[m] where m is the output sample number and n=max(len(a),len(b)) is the model order. The rational transfer function describing this filter in the - z-transform domain is - -1 -nb - b[0] + b[1]z + ... + b[nb] z - Y(z) = ---------------------------------- X(z) - -1 -na - a[0] + a[1]z + ... + a[na] z + z-transform domain is:: + + -1 -nb + b[0] + b[1]z + ... + b[nb] z + Y(z) = ---------------------------------- X(z) + -1 -na + a[0] + a[1]z + ... + a[na] z """ if isscalar(a): @@ -498,27 +610,30 @@ return sigtools._linear_filter(b, a, x, axis, zi) def lfiltic(b,a,y,x=None): - """Given a linear filter (b,a) and initial conditions on the output y + """ + Construct initial conditions for lfilter + + Given a linear filter (b,a) and initial conditions on the output y and the input x, return the inital conditions on the state vector zi which is used by lfilter to generate the output given the input. If M=len(b)-1 and N=len(a)-1. Then, the initial conditions are given - in the vectors x and y as + in the vectors x and y as:: - x = {x[-1],x[-2],...,x[-M]} - y = {y[-1],y[-2],...,y[-N]} + x = {x[-1],x[-2],...,x[-M]} + y = {y[-1],y[-2],...,y[-N]} If x is not given, its inital conditions are assumed zero. If either vector is too short, then zeros are added - to achieve the proper length. 
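For reference, a short sketch of driving the direct form II transposed filter described above block by block, carrying the final delay values between calls (assuming the API in this hunk; the coefficients and the 50/50 split are illustrative):

    import numpy as np
    from scipy.signal import lfilter, lfiltic

    b = np.array([0.2])                      # numerator coefficients
    a = np.array([1.0, -0.8])                # denominator, a[0] == 1
    x = np.random.randn(100)

    y_full = lfilter(b, a, x)                # one pass, initial rest assumed
    zi = lfiltic(b, a, y=[0.0], x=[0.0])     # zero state: length max(len(a), len(b)) - 1
    y1, zf = lfilter(b, a, x[:50], zi=zi)    # filter the first block...
    y2, _ = lfilter(b, a, x[50:], zi=zf)     # ...and resume from the saved state
    assert np.allclose(np.concatenate([y1, y2]), y_full)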
+ to achieve the proper length. - The output vector zi contains + The output vector zi contains:: - zi = {z_0[-1], z_1[-1], ..., z_K-1[-1]} where K=max(M,N). + zi = {z_0[-1], z_1[-1], ..., z_K-1[-1]} where K=max(M,N). """ - N = size(a)-1 - M = size(b)-1 + N = np.size(a)-1 + M = np.size(b)-1 K = max(M,N) y = asarray(y) zi = zeros(K,y.dtype.char) @@ -526,10 +641,10 @@ x = zeros(M,y.dtype.char) else: x = asarray(x) - L = size(x) + L = np.size(x) if L < M: x = r_[x,zeros(M-L)] - L = size(y) + L = np.size(y) if L < N: y = r_[y,zeros(N-L)] @@ -560,382 +675,25 @@ return quot, rem -def boxcar(M,sym=1): - """The M-point boxcar window. - - """ - return ones(M, float) - -def triang(M,sym=1): - """The M-point triangular window. - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M + 1 - n = arange(1,int((M+1)/2)+1) - if M % 2 == 0: - w = (2*n-1.0)/M - w = r_[w, w[::-1]] - else: - w = 2*n/(M+1.0) - w = r_[w, w[-2::-1]] - - if not sym and not odd: - w = w[:-1] - return w - -def parzen(M,sym=1): - """The M-point Parzen window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(-(M-1)/2.0,(M-1)/2.0+0.5,1.0) - na = extract(n < -(M-1)/4.0, n) - nb = extract(abs(n) <= (M-1)/4.0, n) - wa = 2*(1-abs(na)/(M/2.0))**3.0 - wb = 1-6*(abs(nb)/(M/2.0))**2.0 + 6*(abs(nb)/(M/2.0))**3.0 - w = r_[wa,wb,wa[::-1]] - if not sym and not odd: - w = w[:-1] - return w - -def bohman(M,sym=1): - """The M-point Bohman window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - fac = abs(linspace(-1,1,M)[1:-1]) - w = (1 - fac)* cos(pi*fac) + 1.0/pi*sin(pi*fac) - w = r_[0,w,0] - if not sym and not odd: - w = w[:-1] - return w - -def blackman(M,sym=1): - """The M-point Blackman window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(0,M) - w = 0.42-0.5*cos(2.0*pi*n/(M-1)) + 0.08*cos(4.0*pi*n/(M-1)) - if not sym and not odd: - w = w[:-1] - return w - -def nuttall(M,sym=1): - """A minimum 4-term Blackman-Harris window according to Nuttall. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - a = [0.3635819, 0.4891775, 0.1365995, 0.0106411] - n = arange(0,M) - fac = n*2*pi/(M-1.0) - w = a[0] - a[1]*cos(fac) + a[2]*cos(2*fac) - a[3]*cos(3*fac) - if not sym and not odd: - w = w[:-1] - return w - -def blackmanharris(M,sym=1): - """The M-point minimum 4-term Blackman-Harris window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - a = [0.35875, 0.48829, 0.14128, 0.01168]; - n = arange(0,M) - fac = n*2*pi/(M-1.0) - w = a[0] - a[1]*cos(fac) + a[2]*cos(2*fac) - a[3]*cos(3*fac) - if not sym and not odd: - w = w[:-1] - return w - -def flattop(M,sym=1): - """The M-point Flat top window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - a = [0.2156, 0.4160, 0.2781, 0.0836, 0.0069] - n = arange(0,M) - fac = n*2*pi/(M-1.0) - w = a[0] - a[1]*cos(fac) + a[2]*cos(2*fac) - a[3]*cos(3*fac) + \ - a[4]*cos(4*fac) - if not sym and not odd: - w = w[:-1] - return w - - -def bartlett(M,sym=1): - """The M-point Bartlett window. 
- - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(0,M) - w = where(less_equal(n,(M-1)/2.0),2.0*n/(M-1),2.0-2.0*n/(M-1)) - if not sym and not odd: - w = w[:-1] - return w - -def hanning(M,sym=1): - """The M-point Hanning window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(0,M) - w = 0.5-0.5*cos(2.0*pi*n/(M-1)) - if not sym and not odd: - w = w[:-1] - return w - -hann = hanning - -def barthann(M,sym=1): - """Return the M-point modified Bartlett-Hann window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(0,M) - fac = abs(n/(M-1.0)-0.5) - w = 0.62 - 0.48*fac + 0.38*cos(2*pi*fac) - if not sym and not odd: - w = w[:-1] - return w - -def hamming(M,sym=1): - """The M-point Hamming window. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(0,M) - w = 0.54-0.46*cos(2.0*pi*n/(M-1)) - if not sym and not odd: - w = w[:-1] - return w - - - -def kaiser(M,beta,sym=1): - """Return a Kaiser window of length M with shape parameter beta. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(0,M) - alpha = (M-1)/2.0 - w = special.i0(beta * sqrt(1-((n-alpha)/alpha)**2.0))/special.i0(beta) - if not sym and not odd: - w = w[:-1] - return w - -def gaussian(M,std,sym=1): - """Return a Gaussian window of length M with standard-deviation std. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M + 1 - n = arange(0,M)-(M-1.0)/2.0 - sig2 = 2*std*std - w = exp(-n**2 / sig2) - if not sym and not odd: - w = w[:-1] - return w - -def general_gaussian(M,p,sig,sym=1): - """Return a window with a generalized Gaussian shape. - - exp(-0.5*(x/sig)**(2*p)) - - half power point is at (2*log(2)))**(1/(2*p))*sig - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - n = arange(0,M)-(M-1.0)/2.0 - w = exp(-0.5*(n/sig)**(2*p)) - if not sym and not odd: - w = w[:-1] - return w - - -# contributed by Kumar Appaiah. -def chebwin(M, at, sym=1): - """Dolph-Chebyshev window. - - INPUTS: - - M : int - Window size - at : float - Attenuation (in dB) - sym : bool - Generates symmetric window if True. - - """ - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - - odd = M % 2 - if not sym and not odd: - M = M+1 - - # compute the parameter beta - order = M - 1.0 - beta = cosh(1.0/order * arccosh(10**(abs(at)/20.))) - k = r_[0:M]*1.0 - x = beta*cos(pi*k/M) - #find the window's DFT coefficients - # Use analytic definition of Chebyshev polynomial instead of expansion - # from scipy.special. Using the expansion in scipy.special leads to errors. 
- p = zeros(x.shape) - p[x > 1] = cosh(order * arccosh(x[x > 1])) - p[x < -1] = (1 - 2*(order%2)) * cosh(order * arccosh(-x[x < -1])) - p[np.abs(x) <=1 ] = cos(order * arccos(x[np.abs(x) <= 1])) - - # Appropriate IDFT and filling up - # depending on even/odd M - if M % 2: - w = real(fft(p)) - n = (M + 1) / 2 - w = w[:n] / w[0] - w = concatenate((w[n - 1:0:-1], w)) - else: - p = p * exp(1.j*pi / M * r_[0:M]) - w = real(fft(p)) - n = M / 2 + 1 - w = w / w[1] - w = concatenate((w[n - 1:0:-1], w[1:n])) - if not sym and not odd: - w = w[:-1] - return w - - -def slepian(M,width,sym=1): - """Return the M-point slepian window. - - """ - if (M*width > 27.38): - raise ValueError, "Cannot reliably obtain slepian sequences for"\ - " M*width > 27.38." - if M < 1: - return array([]) - if M == 1: - return ones(1,'d') - odd = M % 2 - if not sym and not odd: - M = M+1 - - twoF = width/2.0 - alpha = (M-1)/2.0 - m = arange(0,M)-alpha - n = m[:,newaxis] - k = m[newaxis,:] - AF = twoF*special.sinc(twoF*(n-k)) - [lam,vec] = linalg.eig(AF) - ind = argmax(abs(lam),axis=-1) - w = abs(vec[:,ind]) - w = w / max(w) - - if not sym and not odd: - w = w[:-1] - return w - - -def hilbert(x, N=None): +def hilbert(x, N=None, axis=-1): """Compute the analytic signal. - The transformation is done along the first axis. + The transformation is done along the last axis by default. Parameters ---------- x : array-like Signal data N : int, optional - Number of Fourier components. Default: ``x.shape[0]`` + Number of Fourier components. Default: ``x.shape[axis]`` + axis : int, optional + Returns ------- - xa : ndarray, shape (N,) + x.shape[1:] - Analytic signal of `x` + xa : ndarray + Analytic signal of `x`, of each 1d array along axis Notes ----- @@ -946,6 +704,8 @@ where ``F`` is the Fourier transform, ``U`` the unit step function, and ``y`` the Hilbert transform of ``x``. [1] + changes in scipy 0.8.0: new axis argument, new default axis=-1 + References ---------- .. [1] Wikipedia, "Analytic signal". @@ -954,13 +714,13 @@ """ x = asarray(x) if N is None: - N = len(x) + N = x.shape[axis] if N <=0: raise ValueError, "N must be positive." if iscomplexobj(x): print "Warning: imaginary part of x ignored." x = real(x) - Xf = fft(x,N,axis=0) + Xf = fft(x, N, axis=axis) h = zeros(N) if N % 2 == 0: h[0] = h[N/2] = 1 @@ -970,16 +730,33 @@ h[1:(N+1)/2] = 2 if len(x.shape) > 1: - h = h[:, newaxis] - x = ifft(Xf*h) + ind = [newaxis]*x.ndim + ind[axis] = slice(None) + h = h[ind] + x = ifft(Xf*h, axis=axis) return x def hilbert2(x,N=None): - """Compute the '2-D' analytic signal of `x` of length `N`. + """ + Compute the '2-D' analytic signal of `x` - See also - -------- - hilbert + + Parameters + ---------- + x : array_like + 2-D signal data. + N : int, optional + Number of Fourier components. Default is ``x.shape`` + + Returns + ------- + xa : ndarray + Analytic signal of `x` taken along axes (0,1). + + References + ---------- + .. [1] Wikipedia, "Analytic signal", + http://en.wikipedia.org/wiki/Analytic_signal """ x = asarray(x) @@ -993,7 +770,6 @@ if iscomplexobj(x): print "Warning: imaginary part of x ignored." x = real(x) - print N Xf = fft2(x,N,axes=(0,1)) h1 = zeros(N[0],'d') h2 = zeros(N[1],'d') @@ -1310,87 +1086,6 @@ return b, a -def get_window(window,Nx,fftbins=1): - """Return a window of length Nx and type window. - - If fftbins is 1, create a "periodic" window ready to use with ifftshift - and be multiplied by the result of an fft (SEE ALSO fftfreq). 
- - Window types: boxcar, triang, blackman, hamming, hanning, bartlett, - parzen, bohman, blackmanharris, nuttall, barthann, - kaiser (needs beta), gaussian (needs std), - general_gaussian (needs power, width), - slepian (needs width) - - If the window requires no parameters, then it can be a string. - If the window requires parameters, the window argument should be a tuple - with the first argument the string name of the window, and the next - arguments the needed parameters. - If window is a floating point number, it is interpreted as the beta - parameter of the kaiser window. - """ - - sym = not fftbins - try: - beta = float(window) - except (TypeError, ValueError): - args = () - if isinstance(window, types.TupleType): - winstr = window[0] - if len(window) > 1: - args = window[1:] - elif isinstance(window, types.StringType): - if window in ['kaiser', 'ksr', 'gaussian', 'gauss', 'gss', - 'general gaussian', 'general_gaussian', - 'general gauss', 'general_gauss', 'ggs']: - raise ValueError, "That window needs a parameter -- pass a tuple" - else: - winstr = window - - if winstr in ['blackman', 'black', 'blk']: - winfunc = blackman - elif winstr in ['triangle', 'triang', 'tri']: - winfunc = triang - elif winstr in ['hamming', 'hamm', 'ham']: - winfunc = hamming - elif winstr in ['bartlett', 'bart', 'brt']: - winfunc = bartlett - elif winstr in ['hanning', 'hann', 'han']: - winfunc = hanning - elif winstr in ['blackmanharris', 'blackharr','bkh']: - winfunc = blackmanharris - elif winstr in ['parzen', 'parz', 'par']: - winfunc = parzen - elif winstr in ['bohman', 'bman', 'bmn']: - winfunc = bohman - elif winstr in ['nuttall', 'nutl', 'nut']: - winfunc = nuttall - elif winstr in ['barthann', 'brthan', 'bth']: - winfunc = barthann - elif winstr in ['flattop', 'flat', 'flt']: - winfunc = flattop - elif winstr in ['kaiser', 'ksr']: - winfunc = kaiser - elif winstr in ['gaussian', 'gauss', 'gss']: - winfunc = gaussian - elif winstr in ['general gaussian', 'general_gaussian', - 'general gauss', 'general_gauss', 'ggs']: - winfunc = general_gaussian - elif winstr in ['boxcar', 'box', 'ones']: - winfunc = boxcar - elif winstr in ['slepian', 'slep', 'optimal', 'dss']: - winfunc = slepian - else: - raise ValueError, "Unknown window type." - - params = (Nx,)+args + (sym,) - else: - winfunc = kaiser - params = (Nx,beta,sym) - - return winfunc(*params) - - def resample(x,num,t=None,axis=0,window=None): """Resample to num samples using Fourier method along the given axis. @@ -1399,25 +1094,40 @@ Fourier method is used, the signal is assumed periodic. Window controls a Fourier-domain window that tapers the Fourier - spectrum before zero-padding to aleviate ringing in the resampled + spectrum before zero-padding to alleviate ringing in the resampled values for sampled signals you didn't intend to be interpreted as band-limited. + If window is a function, then it is called with a vector of inputs + indicating the frequency bins (i.e. fftfreq(x.shape[axis]) ) + + If window is an array of the same length as x.shape[axis] it is + assumed to be the window to be applied directly in the Fourier + domain (with dc and low-frequency first). + If window is a string then use the named window. If window is a float, then it represents a value of beta for a kaiser window. If window is a tuple, then the first component is a string representing the window, and the next arguments are parameters for - that window. - + that window. 
+ Possible windows are: - 'blackman' ('black', 'blk') - 'hamming' ('hamm', 'ham') - 'bartlett' ('bart', 'brt') - 'hanning' ('hann', 'han') - 'kaiser' ('ksr') # requires parameter (beta) - 'gaussian' ('gauss', 'gss') # requires parameter (std.) - 'general gauss' ('general', 'ggs') # requires two parameters - (power, width) + 'flattop' -- 'flat', 'flt' + 'boxcar' -- 'ones', 'box' + 'triang' -- 'traing', 'tri' + 'parzen' -- 'parz', 'par' + 'bohman' -- 'bman', 'bmn' + 'blackmanharris' -- 'blackharr', 'bkh' + 'nuttall', -- 'nutl', 'nut' + 'barthann' -- 'brthan', 'bth' + 'blackman' -- 'black', 'blk' + 'hamming' -- 'hamm', 'ham' + 'bartlett' -- 'bart', 'brt' + 'hanning' -- 'hann', 'han' + ('kaiser', beta) -- 'ksr' + ('gaussian', std) -- 'gauss', 'gss' + ('general gauss', power, width) -- 'general', 'ggs' + ('slepian', width) -- 'slep', 'optimal', 'dss' The first sample of the returned vector is the same as the first sample of the input vector, the spacing between samples is changed @@ -1432,10 +1142,15 @@ X = fft(x,axis=axis) Nx = x.shape[axis] if window is not None: - W = ifftshift(get_window(window,Nx)) + if callable(window): + W = window(fftfreq(Nx)) + elif isinstance(window, ndarray) and window.shape == (Nx,): + W = window + else: + W = ifftshift(get_window(window,Nx)) newshape = ones(len(x.shape)) newshape[axis] = len(W) - W=W.reshape(newshape) + W.shape = newshape X = X*W sl = [slice(None)]*len(x.shape) newshape = list(x.shape) @@ -1534,15 +1249,14 @@ return array(zi_return) - - def filtfilt(b,a,x): + b, a, x = map(asarray, [b, a, x]) # FIXME: For now only accepting 1d arrays ntaps=max(len(a),len(b)) edge=ntaps*3 if x.ndim != 1: - raise ValueError, "Filiflit is only accepting 1 dimension arrays." + raise ValueError, "filtfilt only accepts 1-d arrays." #x must be bigger than edge if x.size < edge: @@ -1555,7 +1269,7 @@ if len(b) < ntaps: b=r_[b,zeros(len(a)-len(b))] - zi=lfilter_zi(b,a) + zi = lfilter_zi(b,a) #Grow the signal to have edges for stabilizing #the filter with inverted replicas of the signal @@ -1568,3 +1282,54 @@ (y,zf)=lfilter(b,a,flipud(y),-1,zi*y[-1]) return flipud(y[edge-1:-edge+1]) + + +from scipy.signal.filter_design import cheby1, firwin + +def decimate(x, q, n=None, ftype='iir', axis=-1): + """downsample the signal x by an integer factor q, using an order n filter + + By default an order 8 Chebyshev type I filter is used or a 30 point FIR + filter with hamming window if ftype is 'fir'. + + Parameters + ---------- + x : N-d array + the signal to be downsampled + q : int + the downsampling factor + n : int or None + the order of the filter (1 less than the length for 'fir') + ftype : {'iir' or 'fir'} + the type of the lowpass filter + axis : int + the axis along which to decimate + + Returns + ------- + y : N-d array + the down-sampled signal + + See also: resample + """ + + if not isinstance(q, int): + raise TypeError, "q must be an integer" + + if n is None: + if ftype == 'fir': + n = 30 + else: + n = 8 + + if ftype == 'fir': + b = firwin(n+1, 1./q, window='hamming') + a = 1. 
+ else: + b, a = cheby1(n, 0.05, 0.8/q) + + y = lfilter(b, a, x, axis=axis) + + sl = [None]*y.ndim + sl[axis] = slice(None, None, q) + return y[sl] diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/sigtools.h python-scipy-0.8.0+dfsg1/scipy/signal/sigtools.h --- python-scipy-0.7.2+dfsg1/scipy/signal/sigtools.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/sigtools.h 2010-07-26 15:48:33.000000000 +0100 @@ -1,3 +1,6 @@ +#ifndef _SCIPY_PRIVATE_SIGNAL_SIGTOOLS_H_ +#define _SCIPY_PRIVATE_SIGNAL_SIGTOOLS_H_ + #include "Python.h" #include "numpy/noprefix.h" @@ -46,7 +49,14 @@ typedef void (MultAddFunction) (char *, intp, char *, intp, char *, intp *, intp *, int, intp, int, intp *, intp *, uintp *); -typedef void (BasicFilterFunction) (char *, char *, char *, char *, char *, intp, uintp, intp, intp); +PyObject* +scipy_signal_sigtools_linear_filter(PyObject * NPY_UNUSED(dummy), PyObject * args); + +PyObject* +scipy_signal_sigtools_correlateND(PyObject *NPY_UNUSED(dummy), PyObject *args); + +void +scipy_signal_sigtools_linear_filter_module_init(); /* static int index_out_of_bounds(int *, int *, int ); @@ -55,3 +65,5 @@ static void convolveND(Generic_Array *, Generic_Array *, Generic_Array *, MultAddFunction *, int); static void RawFilter(Generic_Vector, Generic_Vector, Generic_Array, Generic_Array, Generic_Array *, Generic_Array *, BasicFilterFunction *, int); */ + +#endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/sigtoolsmodule.c python-scipy-0.8.0+dfsg1/scipy/signal/sigtoolsmodule.c --- python-scipy-0.7.2+dfsg1/scipy/signal/sigtoolsmodule.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/sigtoolsmodule.c 2010-07-26 15:48:33.000000000 +0100 @@ -4,6 +4,9 @@ Permission to use, copy, modify, and distribute this software without fee is granted under the SciPy License. */ +#include +#define PY_ARRAY_UNIQUE_SYMBOL _scipy_signal_ARRAY_API +#include #include "sigtools.h" #include @@ -115,382 +118,6 @@ return incr; } -/* - All of these MultAdd functions loop over all the elements of the smallest - array, incrementing an array of indices into the large N-D array at - the same time. The - bounds for the other array are checked and if valid the product is - added to the running sum. If invalid bounds are found nothing is - done (zero is added). This has the effect of zero padding the array - to handle edges. - */ - -#define MAKE_MultAdd(NTYPE, ctype) \ -static void NTYPE ## _MultAdd(char *ip1, intp is1, char *ip2, intp is2, char *op, \ - intp *dims1, intp *dims2, int ndims, intp nels2, int check, \ - intp *loop_ind, intp *temp_ind, uintp *offset) \ -{ \ - ctype tmp=(ctype)0.0; intp i; \ - int k, incr = 1; \ - ctype *ptr1 = (ctype *)ip1, *ptr2 = (ctype *)ip2; \ -\ - i = nels2; \ -\ - temp_ind[ndims-1]--; \ - while (i--) { \ - /* Adjust index array and move ptr1 to right place */ \ - k = ndims - 1; \ - while(--incr) { \ - temp_ind[k] -= dims2[k] - 1; /* Return to start for these dimensions */ \ - k--; \ - } \ - ptr1 += offset[k]; /* Precomputed offset array */ \ - temp_ind[k]++; \ -\ - if (!(check && index_out_of_bounds(temp_ind,dims1,ndims))) { \ - tmp += (*ptr1) * (*ptr2); \ - } \ - incr = increment(loop_ind, ndims, dims2); /* Returns number of N-D indices incremented. 
*/ \ - ptr2++; \ - \ - } \ -*((ctype *)op) = tmp; \ -} - -MAKE_MultAdd(UBYTE, ubyte) -MAKE_MultAdd(BYTE, byte) -MAKE_MultAdd(USHORT, ushort) -MAKE_MultAdd(SHORT, short) -MAKE_MultAdd(UINT, uint) -MAKE_MultAdd(INT, int) -MAKE_MultAdd(ULONG, ulong) -MAKE_MultAdd(LONG, long) -MAKE_MultAdd(ULONGLONG, ulonglong) -MAKE_MultAdd(LONGLONG, longlong) -MAKE_MultAdd(FLOAT, float) -MAKE_MultAdd(DOUBLE, double) -MAKE_MultAdd(LONGDOUBLE, longdouble) - -#define MAKE_CMultAdd(NTYPE, ctype) \ -static void NTYPE ## _MultAdd(char *ip1, intp is1, char *ip2, intp is2, char *op, \ - intp *dims1, intp *dims2, int ndims, intp nels2, int check, \ - intp *loop_ind, intp *temp_ind, uintp *offset) \ -{ \ - ctype tmpr= 0.0, tmpi = 0.0; intp i; \ - int k, incr = 1; \ - ctype *ptr1 = (ctype *)ip1, *ptr2 = (ctype *)ip2; \ - \ - i = nels2; \ -\ - temp_ind[ndims-1]--; \ - while (i--) { \ - /* Adjust index array and move ptr1 to right place */ \ - k = ndims - 1; \ - while(--incr) { \ - temp_ind[k] -= dims2[k] - 1; /* Return to start for these dimensions */ \ - k--; \ - } \ - ptr1 += 2*offset[k]; /* Precomputed offset array */ \ - temp_ind[k]++; \ -\ - if (!(check && index_out_of_bounds(temp_ind,dims1,ndims))) { \ - tmpr += ptr1[0] * ptr2[0] - ptr1[1] * ptr2[1]; \ - tmpi += ptr1[1] * ptr2[0] + ptr1[0] * ptr2[1]; \ - } \ - incr = increment(loop_ind, ndims, dims2); \ - /* Returns number of N-D indices incremented. */ \ - ptr2 += 2; \ -\ - } \ - ((ctype *)op)[0] = tmpr; ((ctype *)op)[1] = tmpi; \ -} - -MAKE_CMultAdd(CFLOAT, float) -MAKE_CMultAdd(CDOUBLE, double) -MAKE_CMultAdd(CLONGDOUBLE, longdouble) - -static void correlateND(Generic_Array *ap1, Generic_Array *ap2, Generic_Array *ret, MultAddFunction *multiply_and_add_ND, int mode) { - intp *a_ind, *b_ind, *temp_ind, *check_ind, *mode_dep; - uintp *offsets, offset1; - intp *offsets2; - int i, k, check, incr = 1; - int bytes_in_array, num_els_ret, num_els_ap2; - intp is1, is2, os; - char *ip1, *ip2, *op, *ap1_ptr; - intp *ret_ind; - - num_els_ret = 1; - for (i = 0; i < ret->nd; i++) num_els_ret *= ret->dimensions[i]; - num_els_ap2 = 1; - for (i = 0; i < ret->nd; i++) num_els_ap2 *= ap2->dimensions[i]; - bytes_in_array = ap1->nd * sizeof(intp); - mode_dep = (intp *)malloc(bytes_in_array); - switch(mode) { - case 0: - for (i = 0; i < ap1->nd; i++) mode_dep[i] = 0; - break; - case 1: - for (i = 0; i < ap1->nd; i++) mode_dep[i] = -((ap2->dimensions[i]) >> 1); - break; - case 2: - for (i = 0; i < ap1->nd; i++) mode_dep[i] = 1 - ap2->dimensions[i]; - } - - is1 = ap1->elsize; is2 = ap2->elsize; - op = ret->data; os = ret->elsize; - ip1 = ap1->data; ip2 = ap2->data; - op = ret->data; - - b_ind = (intp *)malloc(bytes_in_array); /* loop variables */ - memset(b_ind,0,bytes_in_array); - a_ind = (intp *)malloc(bytes_in_array); - ret_ind = (intp *)malloc(bytes_in_array); - memset(ret_ind,0,bytes_in_array); - temp_ind = (intp *)malloc(bytes_in_array); - check_ind = (intp *)malloc(bytes_in_array); - offsets = (uintp *)malloc(ap1->nd*sizeof(uintp)); - offsets2 = (intp *)malloc(ap1->nd*sizeof(intp)); - offset1 = compute_offsets(offsets,offsets2,ap1->dimensions,ap2->dimensions,ret->dimensions,mode_dep,ap1->nd); - /* The convolution proceeds by looping through the output array - and for each value summing all contributions from the summed - element-by-element product of the two input arrays. Index - counters are used for book-keeping in the area so that we - can tell where we are in all of the arrays and be sure that - we are not trying to access areas outside the arrays definition. 
- - The inner loop is implemented separately but equivalently for each - datatype. The outer loop is similar in structure and form to - to the inner loop. - */ - /* Need to keep track of a ptr to place in big (first) input - array where we start the multiplication (we pass over it in the - inner loop (and not dereferenced) - if it is pointing outside dataspace) - */ - /* Calculate it once and the just move it around appropriately */ - ap1_ptr = ip1 + offset1*is1; - for (k=0; k < ap1->nd; k++) {a_ind[k] = mode_dep[k]; check_ind[k] = ap1->dimensions[k] - ap2->dimensions[k] - mode_dep[k] - 1;} - a_ind[ap1->nd-1]--; - i = num_els_ret; - while (i--) { - k = ap1->nd - 1; - while(--incr) { - a_ind[k] -= ret->dimensions[k] - 1; /* Return to start */ - k--; - } - ap1_ptr += offsets2[k]*is1; - a_ind[k]++; - memcpy(temp_ind, a_ind, bytes_in_array); - - check = 0; k = -1; - while(!check && (++k < ap1->nd)) - check = check || (ret_ind[k] < -mode_dep[k]) || (ret_ind[k] > check_ind[k]); - - multiply_and_add_ND(ap1_ptr,is1,ip2,is2,op,ap1->dimensions,ap2->dimensions,ap1->nd,num_els_ap2,check,b_ind,temp_ind,offsets); - - incr = increment(ret_ind,ret->nd,ret->dimensions); /* increment index counter */ - op += os; /* increment to next output index */ - - } - free(b_ind); free(a_ind); free(ret_ind); - free(offsets); free(offsets2); free(temp_ind); - free(check_ind); free(mode_dep); -} - -/***************************************************************** - * This is code for a 1-D linear-filter along an arbitrary * - * dimension of an N-D array. * - *****************************************************************/ - -static void FLOAT_filt(char *b, char *a, char *x, char *y, char *Z, intp len_b, uintp len_x, intp stride_X, intp stride_Y ) { - char *ptr_x = x, *ptr_y = y; - float *ptr_Z, *ptr_b; - float *ptr_a; - float *xn, *yn; - const float a0 = *((float *)a); - int k, n; - - for (k = 0; k < len_x; k++) { - ptr_b = (float *)b; /* Reset a and b pointers */ - ptr_a = (float *)a; - xn = (float *)ptr_x; - yn = (float *)ptr_y; - if (len_b > 1) { - ptr_Z = ((float *)Z); - *yn = *ptr_Z + *ptr_b / a0 * *xn; /* Calculate first delay (output) */ - ptr_b++; ptr_a++; - /* Fill in middle delays */ - for (n = 0; n < len_b - 2; n++) { - *ptr_Z = ptr_Z[1] + *xn * (*ptr_b / a0) - *yn * (*ptr_a / a0); - ptr_b++; ptr_a++; ptr_Z++; - } - /* Calculate last delay */ - *ptr_Z = *xn * (*ptr_b / a0) - *yn * (*ptr_a / a0); - } - else { - *yn = *xn * (*ptr_b / a0); - } - - ptr_y += stride_Y; /* Move to next input/output point */ - ptr_x += stride_X; - } -} - - -static void DOUBLE_filt(char *b, char *a, char *x, char *y, char *Z, intp len_b, uintp len_x, intp stride_X, intp stride_Y ) { - char *ptr_x = x, *ptr_y = y; - double *ptr_Z, *ptr_b; - double *ptr_a; - double *xn, *yn; - double a0; - int k, n; - - a0 = *((double *)a); - for (k = 0; k < len_x; k++) { - ptr_b = (double *)b; /* Reset a and b pointers */ - ptr_a = (double *)a; - xn = (double *)ptr_x; - yn = (double *)ptr_y; - if (len_b > 1) { - ptr_Z = ((double *)Z); - *yn = *ptr_Z + *ptr_b / a0 * *xn; /* Calculate first delay (output) */ - ptr_b++; ptr_a++; - /* Fill in middle delays */ - for (n = 0; n < len_b - 2; n++) { - *ptr_Z = ptr_Z[1] + *xn * (*ptr_b / a0) - *yn * (*ptr_a / a0); - ptr_b++; ptr_a++; ptr_Z++; - } - /* Calculate last delay */ - *ptr_Z = *xn * (*ptr_b / a0) - *yn * (*ptr_a / a0); - } - else { - *yn = *xn * (*ptr_b / a0); - } - - ptr_y += stride_Y; /* Move to next input/output point */ - ptr_x += stride_X; - } -} - - -static void CFLOAT_filt(char *b, char *a, 
char *x, char *y, char *Z, intp len_b, uintp len_x, intp stride_X, intp stride_Y ) { - char *ptr_x = x, *ptr_y = y; - float *ptr_Z, *ptr_b; - float *ptr_a; - float *xn, *yn; - float a0r = ((float *)a)[0]; - float a0i = ((float *)a)[1]; - float a0_mag, tmpr, tmpi; - int k, n; - - a0_mag = a0r*a0r + a0i*a0i; - for (k = 0; k < len_x; k++) { - ptr_b = (float *)b; /* Reset a and b pointers */ - ptr_a = (float *)a; - xn = (float *)ptr_x; - yn = (float *)ptr_y; - if (len_b > 1) { - ptr_Z = ((float *)Z); - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - /* Calculate first delay (output) */ - yn[0] = ptr_Z[0] + (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - yn[1] = ptr_Z[1] + (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - ptr_b += 2; ptr_a += 2; - /* Fill in middle delays */ - for (n = 0; n < len_b - 2; n++) { - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - ptr_Z[0] = ptr_Z[2] + (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - ptr_Z[1] = ptr_Z[3] + (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - tmpr = ptr_a[0]*a0r + ptr_a[1]*a0i; - tmpi = ptr_a[1]*a0r - ptr_a[0]*a0i; - ptr_Z[0] -= (tmpr * yn[0] - tmpi * yn[1])/a0_mag; - ptr_Z[1] -= (tmpi * yn[0] + tmpr * yn[1])/a0_mag; - ptr_b += 2; ptr_a += 2; ptr_Z += 2; - } - /* Calculate last delay */ - - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - ptr_Z[0] = (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - ptr_Z[1] = (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - tmpr = ptr_a[0]*a0r + ptr_a[1]*a0i; - tmpi = ptr_a[1]*a0r - ptr_a[0]*a0i; - ptr_Z[0] -= (tmpr * yn[0] - tmpi * yn[1])/a0_mag; - ptr_Z[1] -= (tmpi * yn[0] + tmpr * yn[1])/a0_mag; - } - else { - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - yn[0] = (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - yn[1] = (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - } - - ptr_y += stride_Y; /* Move to next input/output point */ - ptr_x += stride_X; - - } -} - - -static void CDOUBLE_filt(char *b, char *a, char *x, char *y, char *Z, intp len_b, uintp len_x, intp stride_X, intp stride_Y ) { - char *ptr_x = x, *ptr_y = y; - double *ptr_Z, *ptr_b; - double *ptr_a; - double *xn, *yn; - double a0r = ((double *)a)[0]; - double a0i = ((double *)a)[1]; - double a0_mag, tmpr, tmpi; - int k, n; - - a0_mag = a0r*a0r + a0i*a0i; - for (k = 0; k < len_x; k++) { - ptr_b = (double *)b; /* Reset a and b pointers */ - ptr_a = (double *)a; - xn = (double *)ptr_x; - yn = (double *)ptr_y; - if (len_b > 1) { - ptr_Z = ((double *)Z); - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - /* Calculate first delay (output) */ - yn[0] = ptr_Z[0] + (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - yn[1] = ptr_Z[1] + (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - ptr_b += 2; ptr_a += 2; - /* Fill in middle delays */ - for (n = 0; n < len_b - 2; n++) { - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - ptr_Z[0] = ptr_Z[2] + (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - ptr_Z[1] = ptr_Z[3] + (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - tmpr = ptr_a[0]*a0r + ptr_a[1]*a0i; - tmpi = ptr_a[1]*a0r - ptr_a[0]*a0i; - ptr_Z[0] -= (tmpr * yn[0] - tmpi * yn[1])/a0_mag; - ptr_Z[1] -= (tmpi * yn[0] + tmpr * yn[1])/a0_mag; - ptr_b += 2; ptr_a += 2; ptr_Z += 2; - } - /* Calculate last delay */ - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - ptr_Z[0] = (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - ptr_Z[1] = (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - tmpr = ptr_a[0]*a0r + ptr_a[1]*a0i; - tmpi = ptr_a[1]*a0r - ptr_a[0]*a0i; - ptr_Z[0] -= (tmpr * yn[0] - tmpi * 
yn[1])/a0_mag; - ptr_Z[1] -= (tmpi * yn[0] + tmpr * yn[1])/a0_mag; - } - else { - tmpr = ptr_b[0]*a0r + ptr_b[1]*a0i; - tmpi = ptr_b[1]*a0r - ptr_b[0]*a0i; - yn[0] = (tmpr * xn[0] - tmpi * xn[1])/a0_mag; - yn[1] = (tmpi * xn[0] + tmpr * xn[1])/a0_mag; - } - ptr_y += stride_Y; /* Move to next input/output point */ - ptr_x += stride_X; - } -} - /******************************************************** * * Code taken from remez.c by Erik Kvaleberg which was @@ -1135,108 +762,6 @@ /* End of python-independent routines */ /****************************************************/ -static void OBJECT_MultAdd(char *ip1, intp is1, char *ip2, intp is2, char *op, intp *dims1, intp *dims2, int ndims, intp nels2, int check, intp *loop_ind, intp *temp_ind, uintp *offset) { - int i, k, first_time = 1, incr = 1; - PyObject *tmp1=NULL, *tmp2=NULL, *tmp=NULL; - - i = nels2; - - temp_ind[ndims-1]--; - while (i--) { - /* Adjust index array and move ptr1 to right place */ - k = ndims - 1; - while(--incr) { - temp_ind[k] -= dims2[k] - 1; /* Return to start for these dimensions */ - k--; - } - ip1 += offset[k]*is1; /* Precomputed offset array */ - temp_ind[k]++; - - if (!(check && index_out_of_bounds(temp_ind,dims1,ndims))) { - tmp1 = PyNumber_Multiply(*((PyObject **)ip1),*((PyObject **)ip2)); - if (first_time) { - tmp = tmp1; - first_time = 0; - } else { - tmp2 = PyNumber_Add(tmp, tmp1); - Py_XDECREF(tmp); - tmp = tmp2; - Py_XDECREF(tmp1); - } - } - incr = increment(loop_ind, ndims, dims2); - ip2 += is2; - } - Py_XDECREF(*((PyObject **)op)); - *((PyObject **)op) = tmp; -} - -static MultAddFunction *MultiplyAddFunctions[] = - {NULL, BYTE_MultAdd, UBYTE_MultAdd, SHORT_MultAdd, - USHORT_MultAdd,INT_MultAdd, UINT_MultAdd, LONG_MultAdd, - ULONG_MultAdd, LONGLONG_MultAdd, ULONGLONG_MultAdd, - FLOAT_MultAdd, DOUBLE_MultAdd, LONGDOUBLE_MultAdd, - CFLOAT_MultAdd, CDOUBLE_MultAdd, CLONGDOUBLE_MultAdd, - OBJECT_MultAdd, NULL, NULL, NULL}; - -static void OBJECT_filt(char *b, char *a, char *x, char *y, char *Z, intp len_b, uintp len_x, intp stride_X, intp stride_Y ) { - char *ptr_x = x, *ptr_y = y; - PyObject **ptr_Z, **ptr_b; - PyObject **ptr_a; - PyObject **xn, **yn; - PyObject **a0 = (PyObject **)a; - PyObject *tmp1, *tmp2, *tmp3; - int k, n; - - /* My reference counting might not be right */ - for (k = 0; k < len_x; k++) { - ptr_b = (PyObject **)b; /* Reset a and b pointers */ - ptr_a = (PyObject **)a; - xn = (PyObject **)ptr_x; - yn = (PyObject **)ptr_y; - if (len_b > 1) { - ptr_Z = ((PyObject **)Z); - /* Calculate first delay (output) */ - tmp1 = PyNumber_Multiply(*ptr_b,*xn); - tmp2 = PyNumber_Divide(tmp1,*a0); - tmp3 = PyNumber_Add(tmp2,*ptr_Z); - Py_XDECREF(*yn); - *yn = tmp3; Py_DECREF(tmp1); Py_DECREF(tmp2); - ptr_b++; ptr_a++; - - /* Fill in middle delays */ - for (n = 0; n < len_b - 2; n++) { - tmp1 = PyNumber_Multiply(*xn, *ptr_b); - tmp2 = PyNumber_Divide(tmp1,*a0); - tmp3 = PyNumber_Add(tmp2,ptr_Z[1]); - Py_DECREF(tmp1); Py_DECREF(tmp2); - tmp1 = PyNumber_Multiply(*yn, *ptr_a); - tmp2 = PyNumber_Divide(tmp1, *a0); Py_DECREF(tmp1); - Py_XDECREF(*ptr_Z); - *ptr_Z = PyNumber_Subtract(tmp3, tmp2); Py_DECREF(tmp2); - Py_DECREF(tmp3); - ptr_b++; ptr_a++; ptr_Z++; - } - /* Calculate last delay */ - tmp1 = PyNumber_Multiply(*xn,*ptr_b); - tmp3 = PyNumber_Divide(tmp1,*a0); Py_DECREF(tmp1); - tmp1 = PyNumber_Multiply(*yn, *ptr_a); - tmp2 = PyNumber_Divide(tmp1, *a0); Py_DECREF(tmp1); - Py_XDECREF(*ptr_Z); - *ptr_Z = PyNumber_Subtract(tmp3,tmp2); Py_DECREF(tmp2); - Py_DECREF(tmp3); - } - else { - tmp1 = 
PyNumber_Multiply(*xn,*ptr_b); - Py_XDECREF(*yn); - *yn = PyNumber_Divide(tmp1,*a0); Py_DECREF(tmp1); - } - - ptr_y += stride_Y; /* Move to next input/output point */ - ptr_x += stride_X; - } -} - /************************/ /* N-D Order Filtering. */ @@ -1466,135 +991,17 @@ } -static BasicFilterFunction *BasicFilterFunctions[] = \ - {NULL, NULL,NULL,NULL,NULL,NULL,NULL, NULL, NULL, NULL, NULL, \ - FLOAT_filt, DOUBLE_filt, NULL, \ - CFLOAT_filt, CDOUBLE_filt, NULL, \ - OBJECT_filt, NULL, NULL, NULL}; -/* There is the start of an OBJECT_filt, but it may need work */ - - -/* Copy data from PyArray to Generic header for use in C routines */ -static void Py_copy_info(Generic_Array *gen, PyArrayObject *py_arr) { - gen->data = py_arr->data; - gen->nd = py_arr->nd; - gen->dimensions = py_arr->dimensions; - gen->elsize = py_arr->descr->elsize; - gen->strides = py_arr->strides; - gen->zero = PyArray_Zero(py_arr); - return; -} - -static void Py_copy_info_vec(Generic_Vector *gen, PyArrayObject *py_arr) { - gen->data = py_arr->data; - gen->elsize = py_arr->descr->elsize; - gen->numels = PyArray_Size((PyObject *)py_arr); - gen->zero = PyArray_Zero(py_arr); - return; -} - /******************************************/ static char doc_correlateND[] = "out = _correlateND(a,kernel,mode) \n\n mode = 0 - 'valid', 1 - 'same', \n 2 - 'full' (default)"; -static PyObject *sigtools_correlateND(PyObject *dummy, PyObject *args) { - PyObject *kernel, *a0; - PyArrayObject *ap1, *ap2, *ret; - Generic_Array in1, in2, out; - intp *ret_dimens; - int mode=2, n1, n2, i, typenum; - MultAddFunction *multiply_and_add_ND; - - if (!PyArg_ParseTuple(args, "OO|i", &a0, &kernel, &mode)) return NULL; - - typenum = PyArray_ObjectType(a0, 0); - typenum = PyArray_ObjectType(kernel, typenum); - - ret = NULL; - ap1 = (PyArrayObject *)PyArray_ContiguousFromObject(a0, typenum, 0, 0); - if (ap1 == NULL) return NULL; - ap2 = (PyArrayObject *)PyArray_ContiguousFromObject(kernel, typenum, 0, 0); - if (ap2 == NULL) goto fail; - - if (ap1->nd != ap2->nd) { - PyErr_SetString(PyExc_ValueError, "Arrays must have the same number of dimensions."); - goto fail; - } - - if (ap1->nd == 0) { /* Zero-dimensional arrays */ - PyErr_SetString(PyExc_ValueError, "Cannot convolve zero-dimensional arrays."); - goto fail; - } - - n1 = PyArray_Size((PyObject *)ap1); - n2 = PyArray_Size((PyObject *)ap2); - - /* Swap if first argument is not the largest */ - if (n1 < n2) { ret = ap1; ap1 = ap2; ap2 = ret; ret = NULL; } - ret_dimens = malloc(ap1->nd*sizeof(intp)); - switch(mode) { - case 0: - for (i = 0; i < ap1->nd; i++) { - ret_dimens[i] = ap1->dimensions[i] - ap2->dimensions[i] + 1; - if (ret_dimens[i] < 0) { - PyErr_SetString(PyExc_ValueError, "no part of the output is valid, use option 1 (same) or 2 (full) for third argument"); - goto fail; - } - } - break; - case 1: - for (i = 0; i < ap1->nd; i++) { ret_dimens[i] = ap1->dimensions[i];} - break; - case 2: - for (i = 0; i < ap1->nd; i++) { ret_dimens[i] = ap1->dimensions[i] + ap2->dimensions[i] - 1;} - break; - default: - PyErr_SetString(PyExc_ValueError, - "mode must be 0 (valid), 1 (same), or 2 (full)"); - goto fail; - } - - ret = (PyArrayObject *)PyArray_SimpleNew(ap1->nd, ret_dimens, typenum); - free(ret_dimens); - if (ret == NULL) goto fail; - - multiply_and_add_ND = MultiplyAddFunctions[(int)(ret->descr->type_num)]; - if (multiply_and_add_ND == NULL) { - PyErr_SetString(PyExc_ValueError, - "correlateND not available for this type"); - goto fail; - } - - /* copy header information to generic structures */ - 
Py_copy_info(&in1, ap1); - Py_copy_info(&in2, ap2); - Py_copy_info(&out, ret); - - correlateND(&in1, &in2, &out, multiply_and_add_ND, mode); - - PyDataMem_FREE(in1.zero); - PyDataMem_FREE(in2.zero); - PyDataMem_FREE(out.zero); - - Py_DECREF(ap1); - Py_DECREF(ap2); - return PyArray_Return(ret); - -fail: - Py_XDECREF(ap1); - Py_XDECREF(ap2); - Py_XDECREF(ret); - return NULL; -} - - /*******************************************************************/ static char doc_convolve2d[] = "out = _convolve2d(in1, in2, flip, mode, boundary, fillvalue)"; extern int pylab_convolve_2d(char*,intp*,char*,intp*,char*,intp*,intp*,intp*,int,char*); -static PyObject *sigtools_convolve2d(PyObject *dummy, PyObject *args) { +static PyObject *sigtools_convolve2d(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *in1=NULL, *in2=NULL, *fill_value=NULL; int mode=2, boundary=0, typenum, flag, flip=1, ret; @@ -1719,7 +1126,7 @@ static char doc_order_filterND[] = "out = _order_filterND(a,domain,order)"; -static PyObject *sigtools_order_filterND(PyObject *dummy, PyObject *args) { +static PyObject *sigtools_order_filterND(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *domain, *a0; int order=0; @@ -1732,7 +1139,7 @@ static char doc_remez[] = "h = _remez(numtaps, bands, des, weight, type, Hz, maxiter, grid_density) \n returns the optimal (in the Chebyshev/minimax sense) FIR filter impulse \n response given a set of band edges, the desired response on those bands,\n and the weight given to the error in those bands. Bands is a monotonic\n vector with band edges given in frequency domain where Hz is the sampling\n frequency."; -static PyObject *sigtools_remez(PyObject *dummy, PyObject *args) { +static PyObject *sigtools_remez(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *bands, *des, *weight; int k, numtaps, numbands, type = BANDPASS, err; PyArrayObject *a_bands=NULL, *a_des=NULL, *a_weight=NULL; @@ -1832,7 +1239,7 @@ extern void d_medfilt2(double*,double*,intp*,intp*); extern void b_medfilt2(unsigned char*,unsigned char*,intp*,intp*); -static PyObject *sigtools_median2d(PyObject *dummy, PyObject *args) +static PyObject *sigtools_median2d(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *image=NULL, *size=NULL; int typenum; @@ -1847,7 +1254,7 @@ if (a_image == NULL) goto fail; if (size != NULL) { - a_size = (PyArrayObject *)PyArray_ContiguousFromObject(size, PyArray_LONG, 1, 1); + a_size = (PyArrayObject *)PyArray_ContiguousFromObject(size, NPY_INTP, 1, 1); if (a_size == NULL) goto fail; if ((RANK(a_size) != 1) || (DIMS(a_size)[0] < 2)) PYERR("Size must be a length two sequence"); @@ -1890,16 +1297,19 @@ } -#include "lfilter.c" +static char doc_linear_filter[] = + "(y,Vf) = _linear_filter(b,a,X,Dim=-1,Vi=None) " \ + "implemented using Direct Form II transposed flow " \ + "diagram. 
If Vi is not given, Vf is not returned."; static struct PyMethodDef toolbox_module_methods[] = { - {"_correlateND", sigtools_correlateND, METH_VARARGS, doc_correlateND}, + {"_correlateND", scipy_signal_sigtools_correlateND, METH_VARARGS, doc_correlateND}, {"_convolve2d", sigtools_convolve2d, METH_VARARGS, doc_convolve2d}, {"_order_filterND", sigtools_order_filterND, METH_VARARGS, doc_order_filterND}, - {"_linear_filter",sigtools_linear_filter, METH_VARARGS, doc_linear_filter}, + {"_linear_filter", scipy_signal_sigtools_linear_filter, METH_VARARGS, doc_linear_filter}, {"_remez",sigtools_remez, METH_VARARGS, doc_remez}, {"_medfilt2d", sigtools_median2d, METH_VARARGS, doc_median2d}, - {NULL, NULL, 0} /* sentinel */ + {NULL, NULL, 0, NULL} /* sentinel */ }; /* Initialization function for the module (*must* be called initsigtools) */ @@ -1928,6 +1338,7 @@ PyDict_SetItemString(d,"HILBERT", PyInt_FromLong((long) HILBERT)); */ + scipy_signal_sigtools_linear_filter_module_init(); /* Check for errors */ if (PyErr_Occurred()) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/splinemodule.c python-scipy-0.8.0+dfsg1/scipy/signal/splinemodule.c --- python-scipy-0.7.2+dfsg1/scipy/signal/splinemodule.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/splinemodule.c 2010-07-26 15:48:33.000000000 +0100 @@ -62,7 +62,7 @@ " symmetric boundary conditions.\n"; -static PyObject *cspline2d(PyObject *dummy, PyObject *args) +static PyObject *cspline2d(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *image=NULL; PyArrayObject *a_image=NULL, *ck=NULL; @@ -119,7 +119,7 @@ " the precision used when computing the infinite sum needed to apply mirror-\n" " symmetric boundary conditions.\n"; -static PyObject *qspline2d(PyObject *dummy, PyObject *args) +static PyObject *qspline2d(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *image=NULL; PyArrayObject *a_image=NULL, *ck=NULL; @@ -177,7 +177,7 @@ " assumed. 
This function can be used to find an image given its B-spline\n" " representation."; -static PyObject *FIRsepsym2d(PyObject *dummy, PyObject *args) +static PyObject *FIRsepsym2d(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *image=NULL, *hrow=NULL, *hcol=NULL; PyArrayObject *a_image=NULL, *a_hrow=NULL, *a_hcol=NULL, *out=NULL; @@ -284,7 +284,7 @@ "\n" " output -- filtered signal."; -static PyObject *IIRsymorder1(PyObject *dummy, PyObject *args) +static PyObject *IIRsymorder1(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *sig=NULL; PyArrayObject *a_sig=NULL, *out=NULL; @@ -404,7 +404,7 @@ "\n" " output -- filtered signal.\n"; -static PyObject *IIRsymorder2(PyObject *dummy, PyObject *args) +static PyObject *IIRsymorder2(PyObject *NPY_UNUSED(dummy), PyObject *args) { PyObject *sig=NULL; PyArrayObject *a_sig=NULL, *out=NULL; @@ -465,7 +465,7 @@ {"sepfir2d", FIRsepsym2d, METH_VARARGS, doc_FIRsepsym2d}, {"symiirorder1", IIRsymorder1, METH_VARARGS, doc_IIRsymorder1}, {"symiirorder2", IIRsymorder2, METH_VARARGS, doc_IIRsymorder2}, - {NULL, NULL, 0} /* sentinel */ + {NULL, NULL, 0, NULL} /* sentinel */ }; /* Initialization function for the module (*must* be called initXXXXX) */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_filter_design.py python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_filter_design.py --- python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_filter_design.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_filter_design.py 2010-07-26 15:48:33.000000000 +0100 @@ -3,7 +3,8 @@ import numpy as np from numpy.testing import TestCase, assert_array_almost_equal -from scipy.signal import tf2zpk, bessel, BadCoefficients +from scipy.signal import tf2zpk, bessel, BadCoefficients, kaiserord, firwin, freqz + class TestTf2zpk(TestCase): def test_simple(self): @@ -23,12 +24,12 @@ assert_array_almost_equal(p, p_r) def test_bad_filter(self): - """Regression test for #651: better handling of badly conditionned + """Regression test for #651: better handling of badly conditioned filter coefficients.""" - b, a = bessel(20, 0.1) warnings.simplefilter("error", BadCoefficients) try: try: + b, a = bessel(20, 0.1) z, p, k = tf2zpk(b, a) raise AssertionError("tf2zpk did not warn about bad "\ "coefficients") @@ -36,3 +37,15 @@ pass finally: warnings.simplefilter("always", BadCoefficients) + + +class TestFirWin(TestCase): + + def test_lowpass(self): + width = 0.04 + ntaps, beta = kaiserord(120, width) + taps = firwin(ntaps, cutoff=0.5, window=('kaiser', beta)) + freq_samples = np.array([0.0, 0.25, 0.5-width/2, 0.5+width/2, 0.75, 1.0]) + freqs, response = freqz(taps, worN=np.pi*freq_samples) + assert_array_almost_equal(np.abs(response), + [1.0, 1.0, 1.0, 0.0, 0.0, 0.0], decimal=5) diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_ltisys.py python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_ltisys.py --- python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_ltisys.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_ltisys.py 2010-07-26 15:48:33.000000000 +0100 @@ -0,0 +1,218 @@ +import warnings + +import numpy as np +from numpy.testing import assert_almost_equal, assert_equal, run_module_suite + +from scipy.signal.ltisys import ss2tf, lsim2, impulse2, step2, lti +# import BadCoefficients so we can filter the warning for lsim2.test_05 +from scipy.signal import BadCoefficients + + +class TestSS2TF: + def tst_matrix_shapes(self, p, q, r): + ss2tf(np.zeros((p, p)), + np.zeros((p, q)), + np.zeros((r, p)), + 
np.zeros((r, q)), 0) + + def test_basic(self): + for p, q, r in [ + (3, 3, 3), + (1, 3, 3), + (1, 1, 1)]: + yield self.tst_matrix_shapes, p, q, r + + +class Test_lsim2(object): + + def test_01(self): + t = np.linspace(0,10,1001) + u = np.zeros_like(t) + # First order system: x'(t) + x(t) = u(t), x(0) = 1. + # Exact solution is x(t) = exp(-t). + system = ([1.0],[1.0,1.0]) + tout, y, x = lsim2(system, u, t, X0=[1.0]) + expected_x = np.exp(-tout) + assert_almost_equal(x[:,0], expected_x) + + def test_02(self): + t = np.array([0.0, 1.0, 1.0, 3.0]) + u = np.array([0.0, 0.0, 1.0, 1.0]) + # Simple integrator: x'(t) = u(t) + system = ([1.0],[1.0,0.0]) + tout, y, x = lsim2(system, u, t, X0=[1.0]) + expected_x = np.maximum(1.0, tout) + assert_almost_equal(x[:,0], expected_x) + + def test_03(self): + t = np.array([0.0, 1.0, 1.0, 1.1, 1.1, 2.0]) + u = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0]) + # Simple integrator: x'(t) = u(t) + system = ([1.0],[1.0, 0.0]) + tout, y, x = lsim2(system, u, t, hmax=0.01) + expected_x = np.array([0.0, 0.0, 0.0, 0.1, 0.1, 0.1]) + assert_almost_equal(x[:,0], expected_x) + + def test_04(self): + t = np.linspace(0, 10, 1001) + u = np.zeros_like(t) + # Second order system with a repeated root: x''(t) + 2*x(t) + x(t) = 0. + # With initial conditions x(0)=1.0 and x'(t)=0.0, the exact solution + # is (1-t)*exp(-t). + system = ([1.0], [1.0, 2.0, 1.0]) + tout, y, x = lsim2(system, u, t, X0=[1.0, 0.0]) + expected_x = (1.0 - tout) * np.exp(-tout) + assert_almost_equal(x[:,0], expected_x) + + def test_05(self): + # This test triggers a "BadCoefficients" warning from scipy.signal.filter_design, + # but the test passes. I think the warning is related to the incomplete handling + # of multi-input systems in scipy.signal. + warnings.simplefilter("ignore", BadCoefficients) + + # A system with two state variables, two inputs, and one output. + A = np.array([[-1.0, 0.0], [0.0, -2.0]]) + B = np.array([[1.0, 0.0], [0.0, 1.0]]) + C = np.array([1.0, 0.0]) + D = np.zeros((1,2)) + + t = np.linspace(0, 10.0, 101) + tout, y, x = lsim2((A,B,C,D), T=t, X0=[1.0, 1.0]) + expected_y = np.exp(-tout) + expected_x0 = np.exp(-tout) + expected_x1 = np.exp(-2.0*tout) + assert_almost_equal(y, expected_y) + assert_almost_equal(x[:,0], expected_x0) + assert_almost_equal(x[:,1], expected_x1) + + def test_06(self): + """Test use of the default values of the arguments `T` and `U`.""" + # Second order system with a repeated root: x''(t) + 2*x(t) + x(t) = 0. + # With initial conditions x(0)=1.0 and x'(t)=0.0, the exact solution + # is (1-t)*exp(-t). + system = ([1.0], [1.0, 2.0, 1.0]) + tout, y, x = lsim2(system, X0=[1.0, 0.0]) + expected_x = (1.0 - tout) * np.exp(-tout) + assert_almost_equal(x[:,0], expected_x) + +class Test_impulse2(object): + + def test_01(self): + # First order system: x'(t) + x(t) = u(t) + # Exact impulse response is x(t) = exp(-t). + system = ([1.0],[1.0,1.0]) + tout, y = impulse2(system) + expected_y = np.exp(-tout) + assert_almost_equal(y, expected_y) + + def test_02(self): + """Specify the desired time values for the output.""" + + # First order system: x'(t) + x(t) = u(t) + # Exact impulse response is x(t) = exp(-t). 
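+        # (The system is H(s) = 1/(s+1), so the impulse response is its
+        # inverse Laplace transform, exp(-t).)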
+ system = ([1.0],[1.0,1.0]) + n = 21 + t = np.linspace(0, 2.0, n) + tout, y = impulse2(system, T=t) + assert_equal(tout.shape, (n,)) + assert_almost_equal(tout, t) + expected_y = np.exp(-t) + assert_almost_equal(y, expected_y) + + def test_03(self): + """Specify an initial condition as a scalar.""" + + # First order system: x'(t) + x(t) = u(t), x(0)=3.0 + # Exact impulse response is x(t) = 4*exp(-t). + system = ([1.0],[1.0,1.0]) + tout, y = impulse2(system, X0=3.0) + expected_y = 4.0*np.exp(-tout) + assert_almost_equal(y, expected_y) + + def test_04(self): + """Specify an initial condition as a list.""" + + # First order system: x'(t) + x(t) = u(t), x(0)=3.0 + # Exact impulse response is x(t) = 4*exp(-t). + system = ([1.0],[1.0,1.0]) + tout, y = impulse2(system, X0=[3.0]) + expected_y = 4.0*np.exp(-tout) + assert_almost_equal(y, expected_y) + + def test_05(self): + # Simple integrator: x'(t) = u(t) + system = ([1.0],[1.0,0.0]) + tout, y = impulse2(system) + expected_y = np.ones_like(tout) + assert_almost_equal(y, expected_y) + + def test_06(self): + # Second order system with a repeated root: x''(t) + 2*x(t) + x(t) = u(t) + # The exact impulse response is t*exp(-t). + system = ([1.0], [1.0, 2.0, 1.0]) + tout, y = impulse2(system) + expected_y = tout * np.exp(-tout) + assert_almost_equal(y, expected_y) + +class Test_step2(object): + + def test_01(self): + # First order system: x'(t) + x(t) = u(t) + # Exact step response is x(t) = 1 - exp(-t). + system = ([1.0],[1.0,1.0]) + tout, y = step2(system) + expected_y = 1.0 - np.exp(-tout) + assert_almost_equal(y, expected_y) + + def test_02(self): + """Specify the desired time values for the output.""" + + # First order system: x'(t) + x(t) = u(t) + # Exact step response is x(t) = 1 - exp(-t). + system = ([1.0],[1.0,1.0]) + n = 21 + t = np.linspace(0, 2.0, n) + tout, y = step2(system, T=t) + assert_equal(tout.shape, (n,)) + assert_almost_equal(tout, t) + expected_y = 1 - np.exp(-t) + assert_almost_equal(y, expected_y) + + def test_03(self): + """Specify an initial condition as a scalar.""" + + # First order system: x'(t) + x(t) = u(t), x(0)=3.0 + # Exact step response is x(t) = 1 + 2*exp(-t). + system = ([1.0],[1.0,1.0]) + tout, y = step2(system, X0=3.0) + expected_y = 1 + 2.0*np.exp(-tout) + assert_almost_equal(y, expected_y) + + def test_04(self): + """Specify an initial condition as a list.""" + + # First order system: x'(t) + x(t) = u(t), x(0)=3.0 + # Exact step response is x(t) = 1 + 2*exp(-t). + system = ([1.0],[1.0,1.0]) + tout, y = step2(system, X0=[3.0]) + expected_y = 1 + 2.0*np.exp(-tout) + assert_almost_equal(y, expected_y) + + def test_05(self): + # Simple integrator: x'(t) = u(t) + # Exact step response is x(t) = t. + system = ([1.0],[1.0,0.0]) + tout, y = step2(system, atol=1e-10, rtol=1e-8) + expected_y = tout + assert_almost_equal(y, expected_y) + + def test_06(self): + # Second order system with a repeated root: x''(t) + 2*x(t) + x(t) = u(t) + # The exact step response is 1 - (1 + t)*exp(-t). 
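+        # (Step response = L^-1{1/(s*(s+1)**2)}; partial fractions give
+        # 1/s - 1/(s+1) - 1/(s+1)**2, i.e. 1 - exp(-t) - t*exp(-t).)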
+ system = ([1.0], [1.0, 2.0, 1.0]) + tout, y = step2(system, atol=1e-10, rtol=1e-8) + expected_y = 1 - (1 + tout) * np.exp(-tout) + assert_almost_equal(y, expected_y) + +if __name__ == "__main__": + run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_signaltools.py python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_signaltools.py --- python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_signaltools.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_signaltools.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,36 +1,206 @@ #this program corresponds to special.py from decimal import Decimal +import types from numpy.testing import * import scipy.signal as signal -from scipy.signal import lfilter +from scipy.signal import lfilter, correlate, convolve, convolve2d, hilbert from numpy import array, arange import numpy as np -# Use this to test for object arrays filtering - numpy 1.2 -# assert_array_almost_equal does not handle object arrays -def assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True): - from numpy.core import around, number, float_ - from numpy.lib import issubdtype - from numpy.testing.utils import assert_array_compare - def compare(x, y): - z = abs(x-y) - if not issubdtype(z.dtype, number): - z = z.astype(float_) # handle object arrays - return around(z, decimal) <= 10.0**(-decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, - header='Arrays are not almost equal') - -class TestConvolve(TestCase): +class _TestConvolve(TestCase): def test_basic(self): a = [3,4,5,6,5,4] b = [1,2,3] - c = signal.convolve(a,b) + c = convolve(a,b, old_behavior=self.old_behavior) assert_array_equal(c,array([3,10,22,28,32,32,23,12])) + def test_complex(self): + x = array([1+1j, 2+1j, 3+1j]) + y = array([1+1j, 2+1j]) + z = convolve(x, y,old_behavior=self.old_behavior) + assert_array_equal(z, array([2j, 2+6j, 5+8j, 5+5j])) + + def test_zero_order(self): + a = 1289 + b = 4567 + c = convolve(a,b,old_behavior=self.old_behavior) + assert_array_equal(c,a*b) + + def test_2d_arrays(self): + a = [[1,2,3],[3,4,5]] + b = [[2,3,4],[4,5,6]] + c = convolve(a,b,old_behavior=self.old_behavior) + d = array( [[2 ,7 ,16,17,12],\ + [10,30,62,58,38],\ + [12,31,58,49,30]]) + assert_array_equal(c,d) + + def test_valid_mode(self): + a = [1,2,3,6,5,3] + b = [2,3,4,5,3,4,2,2,1] + c = convolve(a,b,'valid',old_behavior=self.old_behavior) + assert_array_equal(c,array([70,78,73,65])) + +class OldTestConvolve(_TestConvolve): + old_behavior = True + @dec.deprecated() + def test_basic(self): + _TestConvolve.test_basic(self) + + @dec.deprecated() + def test_complex(self): + _TestConvolve.test_complex(self) + + @dec.deprecated() + def test_2d_arrays(self): + _TestConvolve.test_2d_arrays(self) + + @dec.deprecated() + def test_same_mode(self): + _TestConvolve.test_same_mode(self) + + @dec.deprecated() + def test_valid_mode(self): + a = [1,2,3,6,5,3] + b = [2,3,4,5,3,4,2,2,1] + c = convolve(a,b,'valid',old_behavior=self.old_behavior) + assert_array_equal(c,array([70,78,73,65])) + + @dec.deprecated() + def test_same_mode(self): + a = [1,2,3,3,1,2] + b = [1,4,3,4,5,6,7,4,3,2,1,1,3] + c = convolve(a,b,'same',old_behavior=self.old_behavior) + d = array([14,25,35,43,57,61,63,57,45,36,25,20,17]) + assert_array_equal(c,d) + +class TestConvolve(_TestConvolve): + old_behavior = False + def test_valid_mode(self): + # 'valid' mode if b.size > a.size does not make sense with the new + # behavior + a = [1,2,3,6,5,3] + b = [2,3,4,5,3,4,2,2,1] + def _test(): 
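+            # With old_behavior=False the inputs are no longer swapped, so a
+            # 'valid' convolution with the longer array second has no valid
+            # output points and is expected to raise ValueError.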
+ convolve(a,b,'valid',old_behavior=self.old_behavior) + self.failUnlessRaises(ValueError, _test) + + def test_same_mode(self): + a = [1,2,3,3,1,2] + b = [1,4,3,4,5,6,7,4,3,2,1,1,3] + c = convolve(a,b,'same',old_behavior=self.old_behavior) + d = array([57,61,63,57,45,36]) + assert_array_equal(c,d) + +class _TestConvolve2d(TestCase): + def test_2d_arrays(self): + a = [[1,2,3],[3,4,5]] + b = [[2,3,4],[4,5,6]] + d = array( [[2 ,7 ,16,17,12],\ + [10,30,62,58,38],\ + [12,31,58,49,30]]) + e = convolve2d(a,b,old_behavior=self.old_behavior) + assert_array_equal(e,d) + + def test_valid_mode(self): + e = [[2,3,4,5,6,7,8],[4,5,6,7,8,9,10]] + f = [[1,2,3],[3,4,5]] + g = convolve2d(e,f,'valid',old_behavior=self.old_behavior) + h = array([[62,80,98,116,134]]) + assert_array_equal(g,h) + + def test_fillvalue(self): + a = [[1,2,3],[3,4,5]] + b = [[2,3,4],[4,5,6]] + fillval = 1 + c = convolve2d(a,b,'full','fill',fillval,old_behavior=self.old_behavior) + d = array([[24,26,31,34,32],\ + [28,40,62,64,52],\ + [32,46,67,62,48]]) + assert_array_equal(c,d) + + def test_wrap_boundary(self): + a = [[1,2,3],[3,4,5]] + b = [[2,3,4],[4,5,6]] + c = convolve2d(a,b,'full','wrap',old_behavior=self.old_behavior) + d = array([[80,80,74,80,80],\ + [68,68,62,68,68],\ + [80,80,74,80,80]]) + assert_array_equal(c,d) + + def test_sym_boundary(self): + a = [[1,2,3],[3,4,5]] + b = [[2,3,4],[4,5,6]] + c = convolve2d(a,b,'full','symm',old_behavior=self.old_behavior) + d = array([[34,30,44, 62, 66],\ + [52,48,62, 80, 84],\ + [82,78,92,110,114]]) + assert_array_equal(c,d) + + +class OldTestConvolve2d(_TestConvolve2d): + old_behavior = True + @dec.deprecated() + def test_2d_arrays(self): + _TestConvolve2d.test_2d_arrays(self) + + @dec.deprecated() + def test_same_mode(self): + e = [[1,2,3],[3,4,5]] + f = [[2,3,4,5,6,7,8],[4,5,6,7,8,9,10]] + g = convolve2d(e,f,'same',old_behavior=self.old_behavior) + h = array([[ 7,16,22,28, 34, 40, 37],\ + [30,62,80,98,116,134,114]]) + assert_array_equal(g,h) + + @dec.deprecated() + def test_valid_mode(self): + _TestConvolve2d.test_valid_mode(self) + + @dec.deprecated() + def test_fillvalue(self): + _TestConvolve2d.test_fillvalue(self) + + @dec.deprecated() + def test_wrap_boundary(self): + _TestConvolve2d.test_wrap_boundary(self) + + @dec.deprecated() + def test_sym_boundary(self): + _TestConvolve2d.test_sym_boundary(self) + + @dec.deprecated() + def test_valid_mode2(self): + # Test when in2.size > in1.size: old behavior is to do so that + # convolve2d(in2, in1) == convolve2d(in1, in2) + e = [[1,2,3],[3,4,5]] + f = [[2,3,4,5,6,7,8],[4,5,6,7,8,9,10]] + g = convolve2d(e,f,'valid',old_behavior=self.old_behavior) + h = array([[62,80,98,116,134]]) + assert_array_equal(g,h) + +#class TestConvolve2d(_TestConvolve2d): +# old_behavior = False +# def test_same_mode(self): +# e = [[1,2,3],[3,4,5]] +# f = [[2,3,4,5,6,7,8],[4,5,6,7,8,9,10]] +# g = convolve2d(e,f,'same',old_behavior=self.old_behavior) +# h = array([[80,98,116],\ +# [70,82,94]]) +# assert_array_equal(g,h) +# +# def test_valid_mode2(self): +# # Test when in2.size > in1.size +# e = [[1,2,3],[3,4,5]] +# f = [[2,3,4,5,6,7,8],[4,5,6,7,8,9,10]] +# def _test(): +# convolve2d(e,f,'valid',old_behavior=self.old_behavior) +# self.failUnlessRaises(ValueError, _test) + class TestFFTConvolve(TestCase): def test_real(self): x = array([1,2,3]) @@ -41,6 +211,50 @@ assert_array_almost_equal(signal.fftconvolve(x,x), [0+2.0j, 0+8j, 0+20j, 0+24j, 0+18j]) + def test_2d_real_same(self): + a = array([[1,2,3],[4,5,6]]) + assert_array_almost_equal(signal.fftconvolve(a,a),\ + 
array([[1,4,10,12,9],\ + [8,26,56,54,36],\ + [16,40,73,60,36]])) + + def test_2d_complex_same(self): + a = array([[1+2j,3+4j,5+6j],[2+1j,4+3j,6+5j]]) + c = signal.fftconvolve(a,a) + d = array([[-3+4j,-10+20j,-21+56j,-18+76j,-11+60j],\ + [10j,44j,118j,156j,122j],\ + [3+4j,10+20j,21+56j,18+76j,11+60j]]) + assert_array_almost_equal(c,d) + + def test_real_same_mode(self): + a = array([1,2,3]) + b = array([3,3,5,6,8,7,9,0,1]) + c = signal.fftconvolve(a,b,'same') + d = array([9.,20.,25.,35.,41.,47.,39.,28.,2.]) + assert_array_almost_equal(c,d) + + def test_real_valid_mode(self): + a = array([3,2,1]) + b = array([3,3,5,6,8,7,9,0,1]) + c = signal.fftconvolve(a,b,'valid') + d = array([24.,31.,41.,43.,49.,25.,12.]) + assert_array_almost_equal(c,d) + + def test_zero_order(self): + a = array([4967]) + b = array([3920]) + c = signal.fftconvolve(a,b) + d = a*b + assert_equal(c,d) + + def test_random_data(self): + np.random.seed(1234) + a = np.random.rand(1233) + 1j*np.random.rand(1233) + b = np.random.rand(1321) + 1j*np.random.rand(1321) + c = signal.fftconvolve(a, b, 'full') + d = np.convolve(a, b, 'full') + assert np.allclose(c, d, rtol=1e-10) + class TestMedFilt(TestCase): def test_basic(self): f = [[50, 50, 50, 50, 50, 92, 18, 27, 65, 46], @@ -68,6 +282,13 @@ [ 0, 7, 11, 7, 4, 4, 19, 19, 24, 0]]) assert_array_equal(d, e) + def test_none(self): + """Ticket #1124. Ensure this does not segfault.""" + try: + signal.medfilt(None) + except: + pass + class TestWiener(TestCase): def test_basic(self): g = array([[5,6,4,3],[3,5,6,2],[2,3,5,6],[1,6,9,7]],'d') @@ -93,48 +314,6 @@ assert_array_equal(signal.order_filter([1,2,3],[1,0,1],1), [2,3,2]) -class TestChebWin: - def test_cheb_odd(self): - cheb_odd_true = array([0.200938, 0.107729, 0.134941, 0.165348, - 0.198891, 0.235450, 0.274846, 0.316836, - 0.361119, 0.407338, 0.455079, 0.503883, - 0.553248, 0.602637, 0.651489, 0.699227, - 0.745266, 0.789028, 0.829947, 0.867485, - 0.901138, 0.930448, 0.955010, 0.974482, - 0.988591, 0.997138, 1.000000, 0.997138, - 0.988591, 0.974482, 0.955010, 0.930448, - 0.901138, 0.867485, 0.829947, 0.789028, - 0.745266, 0.699227, 0.651489, 0.602637, - 0.553248, 0.503883, 0.455079, 0.407338, - 0.361119, 0.316836, 0.274846, 0.235450, - 0.198891, 0.165348, 0.134941, 0.107729, - 0.200938]) - - cheb_odd = signal.chebwin(53, at=-40) - assert_array_almost_equal(cheb_odd, cheb_odd_true, decimal=4) - - def test_cheb_even(self): - cheb_even_true = array([0.203894, 0.107279, 0.133904, - 0.163608, 0.196338, 0.231986, - 0.270385, 0.311313, 0.354493, - 0.399594, 0.446233, 0.493983, - 0.542378, 0.590916, 0.639071, - 0.686302, 0.732055, 0.775783, - 0.816944, 0.855021, 0.889525, - 0.920006, 0.946060, 0.967339, - 0.983557, 0.994494, 1.000000, - 1.000000, 0.994494, 0.983557, - 0.967339, 0.946060, 0.920006, - 0.889525, 0.855021, 0.816944, - 0.775783, 0.732055, 0.686302, - 0.639071, 0.590916, 0.542378, - 0.493983, 0.446233, 0.399594, - 0.354493, 0.311313, 0.270385, - 0.231986, 0.196338, 0.163608, - 0.133904, 0.107279, 0.203894]) - - cheb_even = signal.chebwin(54, at=-40) - assert_array_almost_equal(cheb_even, cheb_even_true, decimal=4) class _TestLinearFilter(TestCase): dt = None @@ -208,8 +387,6 @@ assert_array_almost_equal(y_r2_a0_1, y) assert_array_almost_equal(zf, zf_r) - #@dec.skipif(True, "Skipping lfilter test with initial condition along "\ - # "axis 0: it segfaults ATM") def test_rank2_init_cond_a0(self): # Test initial condition handling along axis 0 shape = (4, 3) @@ -219,14 +396,13 @@ b = np.array([1, -1]).astype(self.dt) a = np.array([0.5, 
0.5]).astype(self.dt) - y_r2_a0_0 = np.array([[1, 3, 5], [5, 3, 1], [1, 3, 5], [5 ,3 ,1]], + y_r2_a0_0 = np.array([[1, 3, 5], [5, 3, 1], [1, 3, 5], [5 ,3 ,1]], dtype=self.dt) zf_r = np.array([[-23, -23, -23]], dtype=self.dt) y, zf = lfilter(b, a, x, axis = 0, zi = np.ones((1, 3))) assert_array_almost_equal(y_r2_a0_0, y) assert_array_almost_equal(zf, zf_r) - #@dec.skipif(True, "Skipping rank > 2 test for lfilter because its segfaults ATM") def test_rank3(self): shape = (4, 3, 2) x = np.linspace(0, np.prod(shape) - 1, np.prod(shape)).reshape(shape) @@ -246,7 +422,10 @@ b = np.ones(1).astype(self.dt) x = np.arange(5).astype(self.dt) zi = np.ones(0).astype(self.dt) - lfilter(b, a, x, zi=zi) + y, zf = lfilter(b, a, x, zi=zi) + assert_array_almost_equal(y, x) + self.failUnless(zf.dtype == self.dt) + self.failUnless(zf.size == 0) class TestLinearFilterFloat32(_TestLinearFilter): dt = np.float32 @@ -254,14 +433,321 @@ class TestLinearFilterFloat64(_TestLinearFilter): dt = np.float64 +class TestLinearFilterFloatExtended(_TestLinearFilter): + dt = np.longdouble + class TestLinearFilterComplex64(_TestLinearFilter): dt = np.complex64 class TestLinearFilterComplex128(_TestLinearFilter): dt = np.complex128 +class TestLinearFilterComplexxxiExtended28(_TestLinearFilter): + dt = np.longcomplex + class TestLinearFilterDecimal(_TestLinearFilter): dt = np.dtype(Decimal) +class _TestCorrelateReal(TestCase): + dt = None + def _setup_rank1(self): + # a.size should be greated than b.size for the tests + a = np.linspace(0, 3, 4).astype(self.dt) + b = np.linspace(1, 2, 2).astype(self.dt) + + y_r = np.array([0, 2, 5, 8, 3]).astype(self.dt) + return a, b, y_r + + def test_rank1_valid(self): + a, b, y_r = self._setup_rank1() + y = correlate(a, b, 'valid', old_behavior=False) + assert_array_almost_equal(y, y_r[1:4]) + self.failUnless(y.dtype == self.dt) + + def test_rank1_same(self): + a, b, y_r = self._setup_rank1() + y = correlate(a, b, 'same', old_behavior=False) + assert_array_almost_equal(y, y_r[:-1]) + self.failUnless(y.dtype == self.dt) + + def test_rank1_full(self): + a, b, y_r = self._setup_rank1() + y = correlate(a, b, 'full', old_behavior=False) + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank1_valid_old(self): + # This test assume a.size > b.size + a, b, y_r = self._setup_rank1() + y = correlate(b, a, 'valid') + assert_array_almost_equal(y, y_r[1:4]) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank1_same_old(self): + # This test assume a.size > b.size + a, b, y_r = self._setup_rank1() + y = correlate(b, a, 'same') + assert_array_almost_equal(y, y_r[:-1]) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank1_full_old(self): + # This test assume a.size > b.size + a, b, y_r = self._setup_rank1() + y = correlate(b, a, 'full') + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + def _setup_rank3(self): + a = np.linspace(0, 39, 40).reshape((2, 4, 5), order='F').astype(self.dt) + b = np.linspace(0, 23, 24).reshape((2, 3, 4), order='F').astype(self.dt) + + y_r = array([[[ 0., 184., 504., 912., 1360., 888., 472., 160.,], + [ 46., 432., 1062., 1840., 2672., 1698., 864., 266.,], + [ 134., 736., 1662., 2768., 3920., 2418., 1168., 314.,], + [ 260., 952., 1932., 3056., 4208., 2580., 1240., 332.,] , + [ 202., 664., 1290., 1984., 2688., 1590., 712., 150.,] , + [ 114., 344., 642., 960., 1280., 726., 296., 38.,]], + + [[ 23., 400., 1035., 1832., 2696., 1737., 904., 293.,], + [ 134., 920., 
2166., 3680., 5280., 3306., 1640., 474.,], + [ 325., 1544., 3369., 5512., 7720., 4683., 2192., 535.,], + [ 571., 1964., 3891., 6064., 8272., 4989., 2324., 565.,], + [ 434., 1360., 2586., 3920., 5264., 3054., 1312., 230.,], + [ 241., 700., 1281., 1888., 2496., 1383., 532., 39.,]], + + [[ 22., 214., 528., 916., 1332., 846., 430., 132.,], + [ 86., 484., 1098., 1832., 2600., 1602., 772., 206.,], + [ 188., 802., 1698., 2732., 3788., 2256., 1018., 218.,], + [ 308., 1006., 1950., 2996., 4052., 2400., 1078., 230.,], + [ 230., 692., 1290., 1928., 2568., 1458., 596., 78.,], + [ 126., 354., 636., 924., 1212., 654., 234., 0.,]]], + dtype=self.dt) + + return a, b, y_r + + def test_rank3_valid(self): + a, b, y_r = self._setup_rank3() + y = correlate(a, b, "valid", old_behavior=False) + assert_array_almost_equal(y, y_r[1:2,2:4,3:5]) + self.failUnless(y.dtype == self.dt) + + def test_rank3_same(self): + a, b, y_r = self._setup_rank3() + y = correlate(a, b, "same", old_behavior=False) + assert_array_almost_equal(y, y_r[0:-1,1:-1,1:-2]) + self.failUnless(y.dtype == self.dt) + + def test_rank3_all(self): + a, b, y_r = self._setup_rank3() + y = correlate(a, b, old_behavior=False) + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank3_valid_old(self): + a, b, y_r = self._setup_rank3() + y = correlate(b, a, "valid") + assert_array_almost_equal(y, y_r[1:2,2:4,3:5]) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank3_same_old(self): + a, b, y_r = self._setup_rank3() + y = correlate(b, a, "same") + assert_array_almost_equal(y, y_r[0:-1,1:-1,1:-2]) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank3_all_old(self): + a, b, y_r = self._setup_rank3() + y = correlate(b, a) + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + +for i in [np.ubyte, np.byte, np.ushort, np.short, np.uint, np.int, + np.ulonglong, np.ulonglong, np.float32, np.float64, np.longdouble, + Decimal]: + name = "TestCorrelate%s" % i.__name__.title() + globals()[name] = types.ClassType(name, (_TestCorrelateReal,), {"dt": i}) + +class _TestCorrelateComplex(TestCase): + dt = None + def _setup_rank1(self, mode): + a = np.random.randn(10).astype(self.dt) + a += 1j * np.random.randn(10).astype(self.dt) + b = np.random.randn(8).astype(self.dt) + b += 1j * np.random.randn(8).astype(self.dt) + + y_r = (correlate(a.real, b.real, mode=mode, old_behavior=False) + + correlate(a.imag, b.imag, mode=mode, old_behavior=False)).astype(self.dt) + y_r += 1j * (-correlate(a.real, b.imag, mode=mode, old_behavior=False) + + correlate(a.imag, b.real, mode=mode, old_behavior=False)) + return a, b, y_r + + def test_rank1_valid(self): + a, b, y_r = self._setup_rank1('valid') + y = correlate(a, b, 'valid', old_behavior=False) + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + def test_rank1_same(self): + a, b, y_r = self._setup_rank1('same') + y = correlate(a, b, 'same', old_behavior=False) + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + def test_rank1_full(self): + a, b, y_r = self._setup_rank1('full') + y = correlate(a, b, 'full', old_behavior=False) + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + def test_rank3(self): + a = np.random.randn(10, 8, 6).astype(self.dt) + a += 1j * np.random.randn(10, 8, 6).astype(self.dt) + b = np.random.randn(8, 6, 4).astype(self.dt) + b += 1j * np.random.randn(8, 6, 4).astype(self.dt) + + y_r = (correlate(a.real, b.real, 
old_behavior=False) + + correlate(a.imag, b.imag, old_behavior=False)).astype(self.dt) + y_r += 1j * (-correlate(a.real, b.imag, old_behavior=False) + + correlate(a.imag, b.real, old_behavior=False)) + + y = correlate(a, b, 'full', old_behavior=False) + assert_array_almost_equal(y, y_r, decimal=4) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank1_valid_old(self): + a, b, y_r = self._setup_rank1('valid') + y = correlate(b, a.conj(), 'valid') + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank1_same_old(self): + a, b, y_r = self._setup_rank1('same') + y = correlate(b, a.conj(), 'same') + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank1_full_old(self): + a, b, y_r = self._setup_rank1('full') + y = correlate(b, a.conj(), 'full') + assert_array_almost_equal(y, y_r) + self.failUnless(y.dtype == self.dt) + + @dec.deprecated() + def test_rank3_old(self): + a = np.random.randn(10, 8, 6).astype(self.dt) + a += 1j * np.random.randn(10, 8, 6).astype(self.dt) + b = np.random.randn(8, 6, 4).astype(self.dt) + b += 1j * np.random.randn(8, 6, 4).astype(self.dt) + + y_r = (correlate(a.real, b.real, old_behavior=False) + + correlate(a.imag, b.imag, old_behavior=False)).astype(self.dt) + y_r += 1j * (-correlate(a.real, b.imag, old_behavior=False) + + correlate(a.imag, b.real, old_behavior=False)) + + y = correlate(b, a.conj(), 'full') + assert_array_almost_equal(y, y_r, decimal=4) + self.failUnless(y.dtype == self.dt) + +for i in [np.csingle, np.cdouble, np.clongdouble]: + name = "TestCorrelate%s" % i.__name__.title() + globals()[name] = types.ClassType(name, (_TestCorrelateComplex,), {"dt": i}) + +class TestFiltFilt: + def test_basic(self): + out = signal.filtfilt([1,2,3], [1,2,3], np.arange(12)) + assert_equal(out, arange(12)) + +class TestDecimate: + def test_basic(self): + x = np.arange(6) + assert_array_equal(signal.decimate(x, 2, n=1).round(), x[::2]) + + +class TestHilbert: + def test_hilbert_theoretical(self): + #test cases by Ariel Rokem + decimal = 14 + + pi = np.pi + t = np.arange(0, 2*pi, pi/256) + a0 = np.sin(t) + a1 = np.cos(t) + a2 = np.sin(2*t) + a3 = np.cos(2*t) + a = np.vstack([a0,a1,a2,a3]) + + h = hilbert(a) + h_abs = np.abs(h) + h_angle = np.angle(h) + h_real = np.real(h) + + #The real part should be equal to the original signals: + assert_almost_equal(h_real, a, decimal) + #The absolute value should be one everywhere, for this input: + assert_almost_equal(h_abs, np.ones(a.shape), decimal) + #For the 'slow' sine - the phase should go from -pi/2 to pi/2 in + #the first 256 bins: + assert_almost_equal(h_angle[0,:256], np.arange(-pi/2,pi/2,pi/256), + decimal) + #For the 'slow' cosine - the phase should go from 0 to pi in the + #same interval: + assert_almost_equal(h_angle[1,:256], np.arange(0,pi,pi/256), decimal) + #The 'fast' sine should make this phase transition in half the time: + assert_almost_equal(h_angle[2,:128], np.arange(-pi/2,pi/2,pi/128), + decimal) + #Ditto for the 'fast' cosine: + assert_almost_equal(h_angle[3,:128], np.arange(0,pi,pi/128), decimal) + + #The imaginary part of hilbert(cos(t)) = sin(t) Wikipedia + assert_almost_equal(h[1].imag, a0, decimal) + + def test_hilbert_axisN(self): + # tests for axis and N arguments + a = np.arange(18).reshape(3,6) + # test axis + aa = hilbert(a, axis=-1) + yield assert_equal, hilbert(a.T, axis=0), aa.T + # test 1d + yield assert_equal, hilbert(a[0]), aa[0] + + # test N + aan = hilbert(a, 
N=20, axis=-1) + yield assert_equal, aan.shape, [3,20] + yield assert_equal, hilbert(a.T, N=20, axis=0).shape, [20,3] + #the next test is just a regression test, + #no idea whether numbers make sense + a0hilb = np.array( + [ 0.000000000000000e+00-1.72015830311905j , + 1.000000000000000e+00-2.047794505137069j, + 1.999999999999999e+00-2.244055555687583j, + 3.000000000000000e+00-1.262750302935009j, + 4.000000000000000e+00-1.066489252384493j, + 5.000000000000000e+00+2.918022706971047j, + 8.881784197001253e-17+3.845658908989067j, + -9.444121133484362e-17+0.985044202202061j, + -1.776356839400251e-16+1.332257797702019j, + -3.996802888650564e-16+0.501905089898885j, + 1.332267629550188e-16+0.668696078880782j, + -1.192678053963799e-16+0.235487067862679j, + -1.776356839400251e-16+0.286439612812121j, + 3.108624468950438e-16+0.031676888064907j, + 1.332267629550188e-16-0.019275656884536j, + -2.360035624836702e-16-0.1652588660287j , + 0.000000000000000e+00-0.332049855010597j, + 3.552713678800501e-16-0.403810179797771j, + 8.881784197001253e-17-0.751023775297729j, + 9.444121133484362e-17-0.79252210110103j ]) + yield assert_almost_equal, aan[0], a0hilb, 14, 'N regression' + + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_waveforms.py python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_waveforms.py --- python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_waveforms.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_waveforms.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,14 +1,316 @@ -#this program corresponds to special.py import numpy as np -from numpy.testing import * +from numpy.testing import TestCase, assert_almost_equal, assert_equal, assert_, \ + assert_raises, run_module_suite + +import scipy.signal.waveforms as waveforms + + +# These chirp_* functions are the instantaneous frequencies of the signals +# returned by chirp(). + +def chirp_linear(t, f0, f1, t1): + f = f0 + (f1 - f0) * t / t1 + return f + +def chirp_quadratic(t, f0, f1, t1, vertex_zero=True): + if vertex_zero: + f = f0 + (f1 - f0) * t**2 / t1**2 + else: + f = f1 - (f1 - f0) * (t1 - t)**2 / t1**2 + return f + +def chirp_geometric(t, f0, f1, t1): + f = f0 * (f1/f0)**(t/t1) + return f + +def chirp_hyperbolic(t, f0, f1, t1): + f = f0*f1*t1 / ((f0 - f1)*t + f1*t1) + return f + + +def compute_frequency(t, theta): + """Compute theta'(t)/(2*pi), where theta'(t) is the derivative of theta(t).""" + # Assume theta and t are 1D numpy arrays. + # Assume that t is uniformly spaced. 
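+    # Forward differences estimate theta'(t) at the interval midpoints
+    # tf = 0.5*(t[1:] + t[:-1]); dividing by 2*pi converts the phase rate
+    # from radians per unit time to cycles per unit time.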
+ dt = t[1] - t[0] + f = np.diff(theta)/(2*np.pi) / dt + tf = 0.5*(t[1:] + t[:-1]) + return tf, f -import scipy.signal as signal class TestChirp(TestCase): - def test_log_chirp_at_zero(self): - assert_almost_equal(signal.waveforms.chirp(t=0, method='log'), - 1.0) + + def test_linear_at_zero(self): + w = waveforms.chirp(t=0, f0=1.0, f1=2.0, t1=1.0, method='linear') + assert_almost_equal(w, 1.0) + + def test_linear_freq_01(self): + method = 'linear' + f0 = 1.0 + f1 = 2.0 + t1 = 1.0 + t = np.linspace(0, t1, 100) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_linear(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_linear_freq_02(self): + method = 'linear' + f0 = 200.0 + f1 = 100.0 + t1 = 10.0 + t = np.linspace(0, t1, 100) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_linear(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_quadratic_at_zero(self): + w = waveforms.chirp(t=0, f0=1.0, f1=2.0, t1=1.0, method='quadratic') + assert_almost_equal(w, 1.0) + + def test_quadratic_at_zero2(self): + w = waveforms.chirp(t=0, f0=1.0, f1=2.0, t1=1.0, method='quadratic', + vertex_zero=False) + assert_almost_equal(w, 1.0) + + def test_quadratic_freq_01(self): + method = 'quadratic' + f0 = 1.0 + f1 = 2.0 + t1 = 1.0 + t = np.linspace(0, t1, 2000) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_quadratic(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_quadratic_freq_02(self): + method = 'quadratic' + f0 = 20.0 + f1 = 10.0 + t1 = 10.0 + t = np.linspace(0, t1, 2000) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_quadratic(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_logarithmic_at_zero(self): + w = waveforms.chirp(t=0, f0=1.0, f1=2.0, t1=1.0, method='logarithmic') + assert_almost_equal(w, 1.0) + + def test_logarithmic_freq_01(self): + method = 'logarithmic' + f0 = 1.0 + f1 = 2.0 + t1 = 1.0 + t = np.linspace(0, t1, 10000) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_geometric(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_logarithmic_freq_02(self): + method = 'logarithmic' + f0 = 200.0 + f1 = 100.0 + t1 = 10.0 + t = np.linspace(0, t1, 10000) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_geometric(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_logarithmic_freq_03(self): + method = 'logarithmic' + f0 = 100.0 + f1 = 100.0 + t1 = 10.0 + t = np.linspace(0, t1, 10000) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_geometric(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_hyperbolic_at_zero(self): + w = waveforms.chirp(t=0, f0=10.0, f1=1.0, t1=1.0, method='hyperbolic') + assert_almost_equal(w, 1.0) + + def test_hyperbolic_freq_01(self): + method = 'hyperbolic' + f0 = 10.0 + f1 = 1.0 + t1 = 1.0 + t = np.linspace(0, t1, 10000) + phase = waveforms._chirp_phase(t, f0, t1, f1, method) + tf, f = compute_frequency(t, phase) + abserr = np.max(np.abs(f - chirp_hyperbolic(tf, f0, f1, t1))) + assert_(abserr < 1e-6) + + def test_hyperbolic_freq_02(self): + method = 'hyperbolic' + f0 = 10.0 + f1 = 100.0 + t1 = 1.0 
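+        # (Here f1 > f0; the assert below expects the hyperbolic method to
+        # reject this input with a ValueError.)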
+ t = np.linspace(0, t1, 10) + assert_raises(ValueError, waveforms.chirp, t, f0, t1, f1, method) + + def test_hyperbolic_freq_03(self): + method = 'hyperbolic' + f0 = -10.0 + f1 = 0.0 + t1 = 1.0 + t = np.linspace(0, t1, 10) + assert_raises(ValueError, waveforms.chirp, t, f0, t1, f1, method) + + def test_unknown_method(self): + method = "foo" + f0 = 10.0 + f1 = 20.0 + t1 = 1.0 + t = np.linspace(0, t1, 10) + assert_raises(ValueError, waveforms.chirp, t, f0, t1, f1, method) + + def test_integer_t1(self): + f0 = 10.0 + f1 = 20.0 + t = np.linspace(-1, 1, 11) + t1 = 3.0 + float_result = waveforms.chirp(t, f0, t1, f1) + t1 = 3 + int_result = waveforms.chirp(t, f0, t1, f1) + err_msg = "Integer input 't1=3' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + + def test_integer_f0(self): + f1 = 20.0 + t1 = 3.0 + t = np.linspace(-1, 1, 11) + f0 = 10.0 + float_result = waveforms.chirp(t, f0, t1, f1) + f0 = 10 + int_result = waveforms.chirp(t, f0, t1, f1) + err_msg = "Integer input 'f0=10' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + + def test_integer_f1(self): + f0 = 10.0 + t1 = 3.0 + t = np.linspace(-1, 1, 11) + f1 = 20.0 + float_result = waveforms.chirp(t, f0, t1, f1) + f1 = 20 + int_result = waveforms.chirp(t, f0, t1, f1) + err_msg = "Integer input 'f1=20' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + + def test_integer_all(self): + f0 = 10 + t1 = 3 + f1 = 20 + t = np.linspace(-1, 1, 11) + float_result = waveforms.chirp(t, float(f0), float(t1), float(f1)) + int_result = waveforms.chirp(t, f0, t1, f1) + err_msg = "Integer input 'f0=10, t1=3, f1=20' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + +class TestSweepPoly(TestCase): + + def test_sweep_poly_quad1(self): + p = np.poly1d([1.0, 0.0, 1.0]) + t = np.linspace(0, 3.0, 10000) + phase = waveforms._sweep_poly_phase(t, p) + tf, f = compute_frequency(t, phase) + expected = p(tf) + abserr = np.max(np.abs(f - expected)) + assert_(abserr < 1e-6) + + def test_sweep_poly_const(self): + p = np.poly1d(2.0) + t = np.linspace(0, 3.0, 10000) + phase = waveforms._sweep_poly_phase(t, p) + tf, f = compute_frequency(t, phase) + expected = p(tf) + abserr = np.max(np.abs(f - expected)) + assert_(abserr < 1e-6) + + def test_sweep_poly_linear(self): + p = np.poly1d([-1.0, 10.0]) + t = np.linspace(0, 3.0, 10000) + phase = waveforms._sweep_poly_phase(t, p) + tf, f = compute_frequency(t, phase) + expected = p(tf) + abserr = np.max(np.abs(f - expected)) + assert_(abserr < 1e-6) + + def test_sweep_poly_quad2(self): + p = np.poly1d([1.0, 0.0, -2.0]) + t = np.linspace(0, 3.0, 10000) + phase = waveforms._sweep_poly_phase(t, p) + tf, f = compute_frequency(t, phase) + expected = p(tf) + abserr = np.max(np.abs(f - expected)) + assert_(abserr < 1e-6) + + def test_sweep_poly_cubic(self): + p = np.poly1d([2.0, 1.0, 0.0, -2.0]) + t = np.linspace(0, 2.0, 10000) + phase = waveforms._sweep_poly_phase(t, p) + tf, f = compute_frequency(t, phase) + expected = p(tf) + abserr = np.max(np.abs(f - expected)) + assert_(abserr < 1e-6) + + def test_sweep_poly_cubic2(self): + """Use an array of coefficients instead of a poly1d.""" + p = np.array([2.0, 1.0, 0.0, -2.0]) + t = np.linspace(0, 2.0, 10000) + phase = waveforms._sweep_poly_phase(t, p) + tf, f = compute_frequency(t, phase) + expected = np.poly1d(p)(tf) + abserr = np.max(np.abs(f - expected)) + assert_(abserr < 1e-6) + + def test_sweep_poly_cubic3(self): + """Use a list of coefficients instead of a poly1d.""" + p 
= [2.0, 1.0, 0.0, -2.0] + t = np.linspace(0, 2.0, 10000) + phase = waveforms._sweep_poly_phase(t, p) + tf, f = compute_frequency(t, phase) + expected = np.poly1d(p)(tf) + abserr = np.max(np.abs(f - expected)) + assert_(abserr < 1e-6) + + +class TestGaussPulse(TestCase): + + def test_integer_fc(self): + float_result = waveforms.gausspulse('cutoff', fc=1000.0) + int_result = waveforms.gausspulse('cutoff', fc=1000) + err_msg = "Integer input 'fc=1000' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + + def test_integer_bw(self): + float_result = waveforms.gausspulse('cutoff', bw=1.0) + int_result = waveforms.gausspulse('cutoff', bw=1) + err_msg = "Integer input 'bw=1' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + + def test_integer_bwr(self): + float_result = waveforms.gausspulse('cutoff', bwr=-6.0) + int_result = waveforms.gausspulse('cutoff', bwr=-6) + err_msg = "Integer input 'bwr=-6' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + + def test_integer_tpr(self): + float_result = waveforms.gausspulse('cutoff', tpr=-60.0) + int_result = waveforms.gausspulse('cutoff', tpr=-60) + err_msg = "Integer input 'tpr=-60' gives wrong result" + assert_equal(int_result, float_result, err_msg=err_msg) + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_windows.py python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_windows.py --- python-scipy-0.7.2+dfsg1/scipy/signal/tests/test_windows.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/tests/test_windows.py 2010-07-26 15:48:33.000000000 +0100 @@ -0,0 +1,65 @@ + +from numpy import array, ones_like +from numpy.testing import assert_array_almost_equal, assert_array_equal +from scipy import signal + + +cheb_odd_true = array([0.200938, 0.107729, 0.134941, 0.165348, + 0.198891, 0.235450, 0.274846, 0.316836, + 0.361119, 0.407338, 0.455079, 0.503883, + 0.553248, 0.602637, 0.651489, 0.699227, + 0.745266, 0.789028, 0.829947, 0.867485, + 0.901138, 0.930448, 0.955010, 0.974482, + 0.988591, 0.997138, 1.000000, 0.997138, + 0.988591, 0.974482, 0.955010, 0.930448, + 0.901138, 0.867485, 0.829947, 0.789028, + 0.745266, 0.699227, 0.651489, 0.602637, + 0.553248, 0.503883, 0.455079, 0.407338, + 0.361119, 0.316836, 0.274846, 0.235450, + 0.198891, 0.165348, 0.134941, 0.107729, + 0.200938]) + +cheb_even_true = array([0.203894, 0.107279, 0.133904, + 0.163608, 0.196338, 0.231986, + 0.270385, 0.311313, 0.354493, + 0.399594, 0.446233, 0.493983, + 0.542378, 0.590916, 0.639071, + 0.686302, 0.732055, 0.775783, + 0.816944, 0.855021, 0.889525, + 0.920006, 0.946060, 0.967339, + 0.983557, 0.994494, 1.000000, + 1.000000, 0.994494, 0.983557, + 0.967339, 0.946060, 0.920006, + 0.889525, 0.855021, 0.816944, + 0.775783, 0.732055, 0.686302, + 0.639071, 0.590916, 0.542378, + 0.493983, 0.446233, 0.399594, + 0.354493, 0.311313, 0.270385, + 0.231986, 0.196338, 0.163608, + 0.133904, 0.107279, 0.203894]) + + +class TestChebWin(object): + + def test_cheb_odd(self): + cheb_odd = signal.chebwin(53, at=-40) + assert_array_almost_equal(cheb_odd, cheb_odd_true, decimal=4) + + def test_cheb_even(self): + cheb_even = signal.chebwin(54, at=-40) + assert_array_almost_equal(cheb_even, cheb_even_true, decimal=4) + + +class TestGetWindow(object): + + def test_boxcar(self): + w = signal.get_window('boxcar', 12) + assert_array_equal(w, ones_like(w)) + + def test_cheb_odd(self): + w = signal.get_window(('chebwin', -40), 53, fftbins=False) + 
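The waveform tests above all use the same verification idea: build the phase, differentiate it numerically with the compute_frequency helper, and compare the result against the analytical frequency law for that method. A minimal self-contained sketch of that check for a linear sweep, written directly from the formulas in the hunk (no scipy import needed; sample counts chosen arbitrarily):

import numpy as np

def estimate_frequency(t, theta):
    # Same idea as the compute_frequency helper above: f = (dtheta/dt) / (2*pi),
    # evaluated at the midpoints of the uniform time grid.
    dt = t[1] - t[0]
    f = np.diff(theta) / (2 * np.pi) / dt
    tf = 0.5 * (t[1:] + t[:-1])
    return tf, f

f0, f1, t1 = 1.0, 2.0, 1.0
t = np.linspace(0, t1, 1000)
beta = (f1 - f0) / t1
phase = 2 * np.pi * (f0 * t + 0.5 * beta * t * t)   # linear-chirp phase, as in _chirp_phase
tf, f = estimate_frequency(t, phase)
assert np.max(np.abs(f - (f0 + beta * tf))) < 1e-6  # matches chirp_linear above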
assert_array_almost_equal(w, cheb_odd_true, decimal=4) + + def test_cheb_even(self): + w = signal.get_window(('chebwin', -40), 54, fftbins=False) + assert_array_almost_equal(w, cheb_even_true, decimal=4) diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/waveforms.py python-scipy-0.8.0+dfsg1/scipy/signal/waveforms.py --- python-scipy-0.7.2+dfsg1/scipy/signal/waveforms.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/waveforms.py 2010-07-26 15:48:33.000000000 +0100 @@ -1,14 +1,40 @@ # Author: Travis Oliphant # 2003 +# +# Feb. 2010: Updated by Warren Weckesser: +# Rewrote much of chirp() +# Added sweep_poly() +import warnings from numpy import asarray, zeros, place, nan, mod, pi, extract, log, sqrt, \ - exp, cos, sin, size, polyval, polyint, log10 + exp, cos, sin, polyval, polyint, size, log10 def sawtooth(t,width=1): - """Returns a periodic sawtooth waveform with period 2*pi - which rises from -1 to 1 on the interval 0 to width*2*pi - and drops from 1 to -1 on the interval width*2*pi to 2*pi - width must be in the interval [0,1] + """ + Return a periodic sawtooth waveform. + + The sawtooth waveform has a period 2*pi, rises from -1 to 1 on the + interval 0 to width*2*pi and drops from 1 to -1 on the interval + width*2*pi to 2*pi. `width` must be in the interval [0,1]. + + Parameters + ---------- + t : array_like + Time. + width : float, optional + Width of the waveform. Default is 1. + + Returns + ------- + y : ndarray + Output array containing the sawtooth waveform. + + Examples + -------- + >>> import matplotlib.pyplot as plt + >>> x = np.linspace(0, 20*np.pi, 500) + >>> plt.plot(x, sp.signal.sawtooth(x)) + """ t,w = asarray(t), asarray(width) w = asarray(w + (t-t)) @@ -44,9 +70,24 @@ def square(t,duty=0.5): - """Returns a periodic square-wave waveform with period 2*pi - which is +1 from 0 to 2*pi*duty and -1 from 2*pi*duty to 2*pi - duty must be in the interval [0,1] + """ + Return a periodic square-wave waveform. + + The square wave has a period 2*pi, has value +1 from 0 to 2*pi*duty + and -1 from 2*pi*duty to 2*pi. `duty` must be in the interval [0,1]. + + Parameters + ---------- + t : array_like + The input time array. + duty : float, optional + Duty cycle. + + Returns + ------- + y : array_like + The output square wave. + """ t,w = asarray(t), asarray(duty) w = asarray(w + (t-t)) @@ -81,23 +122,33 @@ return y def gausspulse(t,fc=1000,bw=0.5,bwr=-6,tpr=-60,retquad=0,retenv=0): - """Return a gaussian modulated sinusoid: exp(-a t^2) exp(1j*2*pi*fc) + """ + Return a gaussian modulated sinusoid: exp(-a t^2) exp(1j*2*pi*fc). - If retquad is non-zero, then return the real and imaginary parts - (inphase and quadrature) - If retenv is non-zero, then return the envelope (unmodulated signal). + If `retquad` is non-zero, then return the real and imaginary parts + (in-phase and quadrature) + If `retenv` is non-zero, then return the envelope (unmodulated signal). Otherwise, return the real part of the modulated sinusoid. - Inputs: + Parameters + ---------- + t : ndarray + Input array. + fc : int, optional + Center frequency (Hz). + bw : float, optional + Fractional bandwidth in frequency domain of pulse (Hz). + bwr: float, optional + Reference level at which fractional bandwidth is calculated (dB). + tpr : float, optional + If `t` is 'cutoff', then the function returns the cutoff + time for when the pulse amplitude falls below `tpr` (in dB). + retquad : int, optional + Return the quadrature (imaginary) as well as the real part + of the signal. 
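The window tests above exercise two equivalent ways of requesting a Dolph-Chebyshev window in this release: calling chebwin directly with the attenuation in dB, or going through get_window with a (name, parameter) tuple. A short sketch of that equivalence, with size and attenuation copied from the test data (assumes scipy 0.8.0 as patched above):

import numpy as np
from scipy import signal

direct = signal.chebwin(53, at=-40)
# get_window takes a tuple when the window needs a parameter; fftbins=False asks
# for the symmetric variant, which is what chebwin returns by default.
via_get_window = signal.get_window(('chebwin', -40), 53, fftbins=False)
assert np.allclose(direct, via_get_window)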
+ retenv : int, optional + Return the envelope of the signal. - t -- Input array. - fc -- Center frequency (Hz). - bw -- Fractional bandwidth in frequency domain of pulse (Hz). - bwr -- Reference level at which fractional bandwidth is calculated (dB). - tpr -- If t is 'cutoff', then the function returns the cutoff time for when the - pulse amplitude falls below tpr (in dB). - retquad -- Return the quadrature (imaginary) as well as the real part of the signal - retenv -- Return the envelope of th signal. """ if fc < 0: raise ValueError, "Center frequency (fc=%.2f) must be >=0." % fc @@ -109,18 +160,18 @@ # exp(-a t^2) <-> sqrt(pi/a) exp(-pi^2/a * f^2) = g(f) - ref = pow(10, bwr/ 20) + ref = pow(10.0, bwr / 20.0) # fdel = fc*bw/2: g(fdel) = ref --- solve this for a # # pi^2/a * fc^2 * bw^2 /4=-log(ref) - a = -(pi*fc*bw)**2 / (4*log(ref)) + a = -(pi*fc*bw)**2 / (4.0*log(ref)) if t == 'cutoff': # compute cut_off point # Solve exp(-a tc**2) = tref for tc # tc = sqrt(-log(tref) / a) where tref = 10^(tpr/20) if tpr >= 0: raise ValueError, "Reference level for time cutoff must be < 0 dB" - tref = pow(10, tpr / 20) + tref = pow(10.0, tpr / 20.0) return sqrt(-log(tref)/a) yenv = exp(-a*t*t) @@ -135,7 +186,10 @@ if retquad and retenv: return yI, yQ, yenv -def chirp(t, f0=0, t1=1, f1=100, method='linear', phi=0, qshape=None): + +# This is chirp from scipy 0.7: + +def old_chirp(t, f0=0, t1=1, f1=100, method='linear', phi=0, qshape=None): """Frequency-swept cosine generator. Parameters @@ -164,7 +218,12 @@ frequency change in time. In this case, the values of `f1`, `t1`, `method`, and `qshape` are ignored. + This function is deprecated. It will be removed in SciPy version 0.9.0. + It exists so that during in version 0.8.0, the new chirp function can + call this function to preserve the old behavior of the quadratic chirp. """ + warnings.warn("The function old_chirp is deprecated, and will be removed in " + "SciPy 0.9", DeprecationWarning) # Convert to radians. phi *= pi / 180 if size(f0) > 1: @@ -199,3 +258,227 @@ "'logarithmic' but a value of %r was given." % method) return cos(phase_angle + phi) + + +def chirp(t, f0, t1, f1, method='linear', phi=0, vertex_zero=True, + qshape=None): + """Frequency-swept cosine generator. + + In the following, 'Hz' should be interpreted as 'cycles per time unit'; + there is no assumption here that the time unit is one second. The + important distinction is that the units of rotation are cycles, not + radians. + + Parameters + ---------- + t : ndarray + Times at which to evaluate the waveform. + f0 : float + Frequency (in Hz) at time t=0. + t1 : float + Time at which `f1` is specified. + f1 : float + Frequency (in Hz) of the waveform at time `t1`. + method : {'linear', 'quadratic', 'logarithmic', 'hyperbolic'}, optional + Kind of frequency sweep. If not given, `linear` is assumed. See + Notes below for more details. + phi : float, optional + Phase offset, in degrees. Default is 0. + vertex_zero : bool, optional + This parameter is only used when `method` is 'quadratic'. + It determines whether the vertex of the parabola that is the graph + of the frequency is at t=0 or t=t1. + qshape : str (deprecated) + If `method` is `quadratic` and `qshape` is not None, chirp() will + use scipy.signal.waveforms.old_chirp to compute the wave form. + This parameter is deprecated, and will be removed in SciPy 0.9. + + Returns + ------- + A numpy array containing the signal evaluated at 't' with the requested + time-varying frequency. 
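A brief aside on the gausspulse hunk above: the t == 'cutoff' branch solves exp(-a*tc**2) = tref for tc, where tref comes from the tpr level in dB and a is derived from the fractional bandwidth. A sketch reproducing that calculation with plain numpy, checked against the function itself (parameter values are the defaults from the signature above):

import numpy as np
from scipy import signal

fc, bw, bwr, tpr = 1000.0, 0.5, -6.0, -60.0
ref = 10.0 ** (bwr / 20.0)
a = -(np.pi * fc * bw) ** 2 / (4.0 * np.log(ref))
tref = 10.0 ** (tpr / 20.0)
tc = np.sqrt(-np.log(tref) / a)   # time at which the envelope has fallen to tpr dB
assert np.allclose(tc, signal.gausspulse('cutoff', fc=fc, bw=bw, bwr=bwr, tpr=tpr))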
More precisely, the function returns: + + ``cos(phase + (pi/180)*phi)`` + + where `phase` is the integral (from 0 to t) of ``2*pi*f(t)``. + ``f(t)`` is defined below. + + See Also + -------- + scipy.signal.waveforms.sweep_poly + + Notes + ----- + There are four options for the `method`. The following formulas give + the instantaneous frequency (in Hz) of the signal generated by + `chirp()`. For convenience, the shorter names shown below may also be + used. + + linear, lin, li: + + ``f(t) = f0 + (f1 - f0) * t / t1`` + + quadratic, quad, q: + + The graph of the frequency f(t) is a parabola through (0, f0) and + (t1, f1). By default, the vertex of the parabola is at (0, f0). + If `vertex_zero` is False, then the vertex is at (t1, f1). The + formula is: + + if vertex_zero is True: + + ``f(t) = f0 + (f1 - f0) * t**2 / t1**2`` + + else: + + ``f(t) = f1 - (f1 - f0) * (t1 - t)**2 / t1**2`` + + To use a more general quadratic function, or an arbitrary + polynomial, use the function `scipy.signal.waveforms.sweep_poly`. + + logarithmic, log, lo: + + ``f(t) = f0 * (f1/f0)**(t/t1)`` + + f0 and f1 must be nonzero and have the same sign. + + This signal is also known as a geometric or exponential chirp. + + hyperbolic, hyp: + + ``f(t) = f0*f1*t1 / ((f0 - f1)*t + f1*t1)`` + + f1 must be positive, and f0 must be greater than f1. + + """ + if size(f0) > 1: + # Preserve old behavior for one release cycle; this can be + # removed in scipy 0.9. + warnings.warn("Passing a list of polynomial coefficients in f0 to the " + "function chirp is deprecated. Use scipy.signal.sweep_poly.", + DeprecationWarning) + return old_chirp(t, f0, t1, f1, method, phi, qshape) + + if method in ['quadratic', 'quad', 'q'] and qshape is not None: + # We must use the old version of the quadratic chirp. Fortunately, + # the old API *required* that qshape be either 'convex' or 'concave' + # if the quadratic method was selected--`None` would raise an error. + # So if the code reaches this point, we should use the old version. + warnings.warn("The qshape keyword argument is deprecated. " + "Use vertex_zero.", DeprecationWarning) + waveform = old_chirp(t, f0, t1, f1, method, phi, qshape) + return waveform + + # 'phase' is computed in _chirp_phase, to make testing easier. + phase = _chirp_phase(t, f0, t1, f1, method, vertex_zero) + # Convert phi to radians. + phi *= pi / 180 + return cos(phase + phi) + + +def _chirp_phase(t, f0, t1, f1, method='linear', vertex_zero=True): + """ + Calculate the phase used by chirp_phase to generate its output. See + chirp_phase for a description of the arguments. 
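The Notes above give one frequency law per method; the vertex_zero keyword is the new switch and moves the vertex of the quadratic law from t=0 to t=t1. A brief usage sketch (frequencies and sample counts are arbitrary):

import numpy as np
from scipy.signal import chirp

t = np.linspace(0, 10, 5001)
# Default: vertex at t=0, so f(t) = f0 + (f1 - f0)*t**2/t1**2 (slow start, fast finish).
w_convex = chirp(t, f0=1.5, t1=10.0, f1=6.0, method='quadratic')
# vertex_zero=False puts the vertex at t=t1: f(t) = f1 - (f1 - f0)*(t1 - t)**2/t1**2.
w_concave = chirp(t, f0=1.5, t1=10.0, f1=6.0, method='quadratic', vertex_zero=False)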
+ + """ + f0 = float(f0) + t1 = float(t1) + f1 = float(f1) + if method in ['linear', 'lin', 'li']: + beta = (f1 - f0) / t1 + phase = 2*pi * (f0*t + 0.5*beta*t*t) + + elif method in ['quadratic','quad','q']: + beta = (f1 - f0)/(t1**2) + if vertex_zero: + phase = 2*pi * (f0*t + beta * t**3/3) + else: + phase = 2*pi * (f1*t + beta * ((t1 - t)**3 - t1**3)/3) + + elif method in ['logarithmic', 'log', 'lo']: + if f0*f1 <= 0.0: + raise ValueError("For a geometric chirp, f0 and f1 must be nonzero " \ + "and have the same sign.") + if f0 == f1: + phase = 2*pi * f0 * t + else: + beta = t1 / log(f1/f0) + phase = 2*pi * beta * f0 * (pow(f1/f0, t/t1) - 1.0) + + elif method in ['hyperbolic', 'hyp']: + if f1 <= 0.0 or f0 <= f1: + raise ValueError("hyperbolic chirp requires f0 > f1 > 0.0.") + c = f1*t1 + df = f0 - f1 + phase = 2*pi * (f0 * c / df) * log((df*t + c)/c) + + else: + raise ValueError("method must be 'linear', 'quadratic', 'logarithmic', " + "or 'hyperbolic', but a value of %r was given." % method) + + return phase + + +def sweep_poly(t, poly, phi=0): + """Frequency-swept cosine generator, with a time-dependent frequency + specified as a polynomial. + + This function generates a sinusoidal function whose instantaneous + frequency varies with time. The frequency at time `t` is given by + the polynomial `poly`. + + Parameters + ---------- + t : ndarray + Times at which to evaluate the waveform. + poly : 1D ndarray (or array-like), or instance of numpy.poly1d + The desired frequency expressed as a polynomial. If `poly` is + a list or ndarray of length n, then the elements of `poly` are + the coefficients of the polynomial, and the instantaneous + frequency is + + ``f(t) = poly[0]*t**(n-1) + poly[1]*t**(n-2) + ... + poly[n-1]`` + + If `poly` is an instance of numpy.poly1d, then the + instantaneous frequency is + + ``f(t) = poly(t)`` + + phi : float, optional + Phase offset, in degrees. Default is 0. + + Returns + ------- + A numpy array containing the signal evaluated at 't' with the requested + time-varying frequency. More precisely, the function returns + + ``cos(phase + (pi/180)*phi)`` + + where `phase` is the integral (from 0 to t) of ``2 * pi * f(t)``; + ``f(t)`` is defined above. + + See Also + -------- + scipy.signal.waveforms.chirp + + Notes + ----- + .. versionadded:: 0.8.0 + """ + # 'phase' is computed in _sweep_poly_phase, to make testing easier. + phase = _sweep_poly_phase(t, poly) + # Convert to radians. + phi *= pi / 180 + return cos(phase + phi) + +def _sweep_poly_phase(t, poly): + """ + Calculate the phase used by sweep_poly to generate its output. See + sweep_poly for a description of the arguments. + + """ + # polyint handles lists, ndarrays and instances of poly1d automatically. + intpoly = polyint(poly) + phase = 2*pi * polyval(intpoly, t) + return phase diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/wavelets.py python-scipy-0.8.0+dfsg1/scipy/signal/wavelets.py --- python-scipy-0.7.2+dfsg1/scipy/signal/wavelets.py 2010-03-03 14:34:11.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/signal/wavelets.py 2010-07-26 15:48:33.000000000 +0100 @@ -6,9 +6,17 @@ from scipy import linspace, pi, exp def daub(p): - """The coefficients for the FIR low-pass filter producing Daubechies wavelets. + """ + The coefficients for the FIR low-pass filter producing Daubechies wavelets. + + p>=1 gives the order of the zero at f=1/2. + There are 2p filter coefficients. + + Parameters + ---------- + p : int + Order of the zero at f=1/2, can have values from 1 to 34. 
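The sweep_poly/_sweep_poly_phase pair added above boils down to two numpy calls: integrate the frequency polynomial and evaluate the result. A sketch of that relationship, checked against the public function (coefficients chosen arbitrarily; the waveforms module is used directly, as the new tests do):

import numpy as np
from scipy.signal import waveforms

poly = np.poly1d([0.05, -0.75, 2.5, 5.0])   # instantaneous frequency f(t) = poly(t)
t = np.linspace(0, 10, 1001)
phase = 2 * np.pi * np.polyval(np.polyint(poly), t)   # what _sweep_poly_phase computes
assert np.allclose(waveforms.sweep_poly(t, poly), np.cos(phase))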
- p>=1 gives the order of the zero at f=1/2. There are 2p filter coefficients. """ sqrt = np.sqrt assert(p>=1) @@ -170,7 +178,8 @@ return x, phi, psi def morlet(M, w=5.0, s=1.0, complete=True): - """Complex Morlet wavelet. + """ + Complex Morlet wavelet. Parameters ---------- @@ -183,8 +192,8 @@ complete : bool Whether to use the complete or the standard version. - Notes: - ------ + Notes + ----- The standard version: pi**-0.25 * exp(1j*w*x) * exp(-0.5*(x**2)) diff -Nru python-scipy-0.7.2+dfsg1/scipy/signal/windows.py python-scipy-0.8.0+dfsg1/scipy/signal/windows.py --- python-scipy-0.7.2+dfsg1/scipy/signal/windows.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/signal/windows.py 2010-07-26 15:48:33.000000000 +0100 @@ -0,0 +1,453 @@ +"""The suite of window functions.""" + +import types + +import numpy as np +from scipy import special, linalg +from scipy.fftpack import fft + + +def boxcar(M, sym=True): + """The M-point boxcar window. + + """ + return np.ones(M, float) + +def triang(M, sym=True): + """The M-point triangular window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M + 1 + n = np.arange(1,int((M+1)/2)+1) + if M % 2 == 0: + w = (2*n-1.0)/M + w = np.r_[w, w[::-1]] + else: + w = 2*n/(M+1.0) + w = np.r_[w, w[-2::-1]] + + if not sym and not odd: + w = w[:-1] + return w + +def parzen(M, sym=True): + """The M-point Parzen window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + n = np.arange(-(M-1)/2.0,(M-1)/2.0+0.5,1.0) + na = np.extract(n < -(M-1)/4.0, n) + nb = np.extract(abs(n) <= (M-1)/4.0, n) + wa = 2*(1-np.abs(na)/(M/2.0))**3.0 + wb = 1-6*(np.abs(nb)/(M/2.0))**2.0 + 6*(np.abs(nb)/(M/2.0))**3.0 + w = np.r_[wa,wb,wa[::-1]] + if not sym and not odd: + w = w[:-1] + return w + +def bohman(M, sym=True): + """The M-point Bohman window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + fac = np.abs(np.linspace(-1,1,M)[1:-1]) + w = (1 - fac) * np.cos(np.pi*fac) + 1.0/np.pi*np.sin(np.pi*fac) + w = np.r_[0,w,0] + if not sym and not odd: + w = w[:-1] + return w + +def blackman(M, sym=True): + """The M-point Blackman window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + n = np.arange(0,M) + w = 0.42-0.5*np.cos(2.0*np.pi*n/(M-1)) + 0.08*np.cos(4.0*np.pi*n/(M-1)) + if not sym and not odd: + w = w[:-1] + return w + +def nuttall(M, sym=True): + """A minimum 4-term Blackman-Harris window according to Nuttall. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + a = [0.3635819, 0.4891775, 0.1365995, 0.0106411] + n = np.arange(0,M) + fac = n*2*np.pi/(M-1.0) + w = a[0] - a[1]*np.cos(fac) + a[2]*np.cos(2*fac) - a[3]*np.cos(3*fac) + if not sym and not odd: + w = w[:-1] + return w + +def blackmanharris(M, sym=True): + """The M-point minimum 4-term Blackman-Harris window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + a = [0.35875, 0.48829, 0.14128, 0.01168]; + n = np.arange(0,M) + fac = n*2*np.pi/(M-1.0) + w = a[0] - a[1]*np.cos(fac) + a[2]*np.cos(2*fac) - a[3]*np.cos(3*fac) + if not sym and not odd: + w = w[:-1] + return w + +def flattop(M, sym=True): + """The M-point Flat top window. 
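blackman, nuttall, blackmanharris and flattop above all follow one cosine-sum pattern and differ only in their coefficient lists. A hedged sketch of a generic builder (the helper name is invented for illustration), checked against nuttall:

import numpy as np
from scipy import signal

def cosine_sum_window(M, a):
    # Generic form of the window bodies above: a[0] - a[1]*cos(fac) + a[2]*cos(2*fac) - ...
    n = np.arange(M)
    fac = 2 * np.pi * n / (M - 1.0)
    return sum(((-1.0) ** k) * c * np.cos(k * fac) for k, c in enumerate(a))

nuttall_coeffs = [0.3635819, 0.4891775, 0.1365995, 0.0106411]
assert np.allclose(cosine_sum_window(53, nuttall_coeffs), signal.nuttall(53))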
+ + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + a = [0.2156, 0.4160, 0.2781, 0.0836, 0.0069] + n = np.arange(0,M) + fac = n*2*np.pi/(M-1.0) + w = a[0] - a[1]*np.cos(fac) + a[2]*np.cos(2*fac) - a[3]*np.cos(3*fac) + \ + a[4]*np.cos(4*fac) + if not sym and not odd: + w = w[:-1] + return w + + +def bartlett(M, sym=True): + """The M-point Bartlett window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + n = np.arange(0,M) + w = np.where(np.less_equal(n,(M-1)/2.0),2.0*n/(M-1),2.0-2.0*n/(M-1)) + if not sym and not odd: + w = w[:-1] + return w + +def hanning(M, sym=True): + """The M-point Hanning window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + n = np.arange(0,M) + w = 0.5-0.5*np.cos(2.0*np.pi*n/(M-1)) + if not sym and not odd: + w = w[:-1] + return w + +hann = hanning + +def barthann(M, sym=True): + """Return the M-point modified Bartlett-Hann window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + n = np.arange(0,M) + fac = np.abs(n/(M-1.0)-0.5) + w = 0.62 - 0.48*fac + 0.38*np.cos(2*np.pi*fac) + if not sym and not odd: + w = w[:-1] + return w + +def hamming(M, sym=True): + """The M-point Hamming window. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + n = np.arange(0,M) + w = 0.54-0.46*np.cos(2.0*np.pi*n/(M-1)) + if not sym and not odd: + w = w[:-1] + return w + + +def kaiser(M, beta, sym=True): + """Return a Kaiser window of length M with shape parameter beta. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M + 1 + n = np.arange(0,M) + alpha = (M-1)/2.0 + w = special.i0(beta * np.sqrt(1-((n-alpha)/alpha)**2.0))/special.i0(beta) + if not sym and not odd: + w = w[:-1] + return w + +def gaussian(M, std, sym=True): + """Return a Gaussian window of length M with standard-deviation std. + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M + 1 + n = np.arange(0,M) - (M-1.0)/2.0 + sig2 = 2*std*std + w = np.exp(-n**2 / sig2) + if not sym and not odd: + w = w[:-1] + return w + +def general_gaussian(M, p, sig, sym=True): + """Return a window with a generalized Gaussian shape. + + exp(-0.5*(x/sig)**(2*p)) + + half power point is at (2*log(2)))**(1/(2*p))*sig + + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M + 1 + n = np.arange(0,M) - (M-1.0)/2.0 + w = np.exp(-0.5*(n/sig)**(2*p)) + if not sym and not odd: + w = w[:-1] + return w + + +# `chebwin` contributed by Kumar Appaiah. + +def chebwin(M, at, sym=True): + """Dolph-Chebyshev window. + + INPUTS: + + M : int + Window size + at : float + Attenuation (in dB) + sym : bool + Generates symmetric window if True. 
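Every window above shares the same sym handling: when a periodic (sym=False) window of even length is requested, the code builds the symmetric window of length M+1 and drops the last sample. A short sketch of that invariant using hanning (an even length is chosen on purpose, since the adjustment only applies then):

import numpy as np
from scipy import signal

M = 16
periodic = signal.hanning(M, sym=False)      # "periodic" variant for spectral analysis
symmetric = signal.hanning(M + 1, sym=True)  # symmetric window, one sample longer
assert np.allclose(periodic, symmetric[:-1])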
+ + """ + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + + odd = M % 2 + if not sym and not odd: + M = M + 1 + + # compute the parameter beta + order = M - 1.0 + beta = np.cosh(1.0/order * np.arccosh(10**(np.abs(at)/20.))) + k = np.r_[0:M]*1.0 + x = beta * np.cos(np.pi*k/M) + #find the window's DFT coefficients + # Use analytic definition of Chebyshev polynomial instead of expansion + # from scipy.special. Using the expansion in scipy.special leads to errors. + p = np.zeros(x.shape) + p[x > 1] = np.cosh(order * np.arccosh(x[x > 1])) + p[x < -1] = (1 - 2*(order%2)) * np.cosh(order * np.arccosh(-x[x < -1])) + p[np.abs(x) <=1 ] = np.cos(order * np.arccos(x[np.abs(x) <= 1])) + + # Appropriate IDFT and filling up + # depending on even/odd M + if M % 2: + w = np.real(fft(p)) + n = (M + 1) / 2 + w = w[:n] / w[0] + w = np.concatenate((w[n - 1:0:-1], w)) + else: + p = p * np.exp(1.j*np.pi / M * np.r_[0:M]) + w = np.real(fft(p)) + n = M / 2 + 1 + w = w / w[1] + w = np.concatenate((w[n - 1:0:-1], w[1:n])) + if not sym and not odd: + w = w[:-1] + return w + + +def slepian(M, width, sym=True): + """Return the M-point slepian window. + + """ + if (M*width > 27.38): + raise ValueError, "Cannot reliably obtain slepian sequences for"\ + " M*width > 27.38." + if M < 1: + return np.array([]) + if M == 1: + return np.ones(1,'d') + odd = M % 2 + if not sym and not odd: + M = M+1 + + twoF = width/2.0 + alpha = (M-1)/2.0 + m = np.arange(0,M) - alpha + n = m[:,np.newaxis] + k = m[np.newaxis,:] + AF = twoF*special.sinc(twoF*(n-k)) + [lam,vec] = linalg.eig(AF) + ind = np.argmax(abs(lam),axis=-1) + w = np.abs(vec[:,ind]) + w = w / max(w) + + if not sym and not odd: + w = w[:-1] + return w + + +def get_window(window, Nx, fftbins=True): + """Return a window of length Nx and type window. + + If fftbins is True, create a "periodic" window ready to use with ifftshift + and be multiplied by the result of an fft (SEE ALSO fftfreq). + + Window types: boxcar, triang, blackman, hamming, hanning, bartlett, + parzen, bohman, blackmanharris, nuttall, barthann, + kaiser (needs beta), gaussian (needs std), + general_gaussian (needs power, width), + slepian (needs width), chebwin (needs attenuation) + + If the window requires no parameters, then it can be a string. + If the window requires parameters, the window argument should be a tuple + with the first argument the string name of the window, and the next + arguments the needed parameters. + If window is a floating point number, it is interpreted as the beta + parameter of the kaiser window. 
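As the get_window docstring above says, a bare float is interpreted as the beta parameter of a Kaiser window; per the function body that follows, the default fftbins=True translates into sym=False on the underlying call. A small sketch of that shortcut (beta value arbitrary):

import numpy as np
from scipy import signal

w1 = signal.get_window(4.0, 9)          # float => kaiser window with beta=4.0
w2 = signal.kaiser(9, 4.0, sym=False)   # the call get_window resolves to internally
assert np.allclose(w1, w2)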
+ """ + + sym = not fftbins + try: + beta = float(window) + except (TypeError, ValueError): + args = () + if isinstance(window, types.TupleType): + winstr = window[0] + if len(window) > 1: + args = window[1:] + elif isinstance(window, types.StringType): + if window in ['kaiser', 'ksr', 'gaussian', 'gauss', 'gss', + 'general gaussian', 'general_gaussian', + 'general gauss', 'general_gauss', 'ggs', + 'slepian', 'optimal', 'slep', 'dss', + 'chebwin', 'cheb']: + raise ValueError("The '" + window + "' window needs one or " + "more parameters -- pass a tuple.") + else: + winstr = window + + if winstr in ['blackman', 'black', 'blk']: + winfunc = blackman + elif winstr in ['triangle', 'triang', 'tri']: + winfunc = triang + elif winstr in ['hamming', 'hamm', 'ham']: + winfunc = hamming + elif winstr in ['bartlett', 'bart', 'brt']: + winfunc = bartlett + elif winstr in ['hanning', 'hann', 'han']: + winfunc = hanning + elif winstr in ['blackmanharris', 'blackharr','bkh']: + winfunc = blackmanharris + elif winstr in ['parzen', 'parz', 'par']: + winfunc = parzen + elif winstr in ['bohman', 'bman', 'bmn']: + winfunc = bohman + elif winstr in ['nuttall', 'nutl', 'nut']: + winfunc = nuttall + elif winstr in ['barthann', 'brthan', 'bth']: + winfunc = barthann + elif winstr in ['flattop', 'flat', 'flt']: + winfunc = flattop + elif winstr in ['kaiser', 'ksr']: + winfunc = kaiser + elif winstr in ['gaussian', 'gauss', 'gss']: + winfunc = gaussian + elif winstr in ['general gaussian', 'general_gaussian', + 'general gauss', 'general_gauss', 'ggs']: + winfunc = general_gaussian + elif winstr in ['boxcar', 'box', 'ones']: + winfunc = boxcar + elif winstr in ['slepian', 'slep', 'optimal', 'dss']: + winfunc = slepian + elif winstr in ['chebwin', 'cheb']: + winfunc = chebwin + else: + raise ValueError, "Unknown window type." + + params = (Nx,) + args + (sym,) + else: + winfunc = kaiser + params = (Nx, beta, sym) + + return winfunc(*params) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/bsr.py python-scipy-0.8.0+dfsg1/scipy/sparse/bsr.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/bsr.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/bsr.py 2010-07-26 15:48:34.000000000 +0100 @@ -490,15 +490,10 @@ # utility functions def _binopt(self, other, op, in_shape=None, out_shape=None): """apply the binary operation fn to two sparse matrices""" - other = self.__class__(other,blocksize=self.blocksize) - if in_shape is None: - in_shape = self.shape - if out_shape is None: - out_shape = self.shape - - self.sort_indices() - other.sort_indices() + # ideally we'd take the GCDs of the blocksize dimensions + # and explode self and other to match + other = self.__class__(other, blocksize=self.blocksize) # e.g. bsr_plus_bsr, etc. 
fn = getattr(sparsetools, self.format + op + self.format) @@ -510,7 +505,7 @@ indices = np.empty(max_bnnz, dtype=np.intc) data = np.empty(R*C*max_bnnz, dtype=upcast(self.dtype,other.dtype)) - fn(in_shape[0]/R, in_shape[1]/C, R, C, \ + fn(self.shape[0]/R, self.shape[1]/C, R, C, self.indptr, self.indices, np.ravel(self.data), other.indptr, other.indices, np.ravel(other.data), indptr, indices, data) @@ -525,7 +520,7 @@ data = data.reshape(-1,R,C) - return self.__class__((data, indices, indptr), shape=out_shape) + return self.__class__((data, indices, indptr), shape=self.shape) # needed by _data_matrix def _with_data(self,data,copy=True): diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/compressed.py python-scipy-0.8.0+dfsg1/scipy/sparse/compressed.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/compressed.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/compressed.py 2010-07-26 15:48:34.000000000 +0100 @@ -678,27 +678,19 @@ return self.__class__((data,self.indices,self.indptr), \ shape=self.shape,dtype=data.dtype) - def _binopt(self, other, op, in_shape=None, out_shape=None): + def _binopt(self, other, op): """apply the binary operation fn to two sparse matrices""" other = self.__class__(other) - if in_shape is None: - in_shape = self.shape - if out_shape is None: - out_shape = self.shape - - self.sort_indices() - other.sort_indices() - - # e.g. csr_plus_csr, csr_mat_mat, etc. + # e.g. csr_plus_csr, csr_minus_csr, etc. fn = getattr(sparsetools, self.format + op + self.format) - maxnnz = self.nnz + other.nnz + maxnnz = self.nnz + other.nnz indptr = np.empty_like(self.indptr) indices = np.empty(maxnnz, dtype=np.intc) data = np.empty(maxnnz, dtype=upcast(self.dtype,other.dtype)) - fn(in_shape[0], in_shape[1], \ + fn(self.shape[0], self.shape[1], \ self.indptr, self.indices, self.data, other.indptr, other.indices, other.data, indptr, indices, data) @@ -711,6 +703,6 @@ indices = indices.copy() data = data.copy() - A = self.__class__((data, indices, indptr), shape=out_shape) - A.has_sorted_indices = True + A = self.__class__((data, indices, indptr), shape=self.shape) + return A diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/construct.py python-scipy-0.8.0+dfsg1/scipy/sparse/construct.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/construct.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/construct.py 2010-07-26 15:48:34.000000000 +0100 @@ -4,7 +4,7 @@ __docformat__ = "restructuredtext en" __all__ = [ 'spdiags', 'eye', 'identity', 'kron', 'kronsum', - 'hstack', 'vstack', 'bmat' ] + 'hstack', 'vstack', 'bmat', 'rand'] from warnings import warn @@ -21,7 +21,8 @@ from dia import dia_matrix def spdiags(data, diags, m, n, format=None): - """Return a sparse matrix from diagonals. + """ + Return a sparse matrix from diagonals. Parameters ---------- @@ -39,10 +40,10 @@ See Also -------- - The dia_matrix class which implements the DIAgonal format. + dia_matrix : the sparse DIAgonal format. - Example - ------- + Examples + -------- >>> data = array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]) >>> diags = array([0,-1,2]) >>> spdiags(data, diags, 4, 4).todense() @@ -233,7 +234,8 @@ def hstack(blocks, format=None, dtype=None): - """Stack sparse matrices horizontally (column wise) + """ + Stack sparse matrices horizontally (column wise) Parameters ---------- @@ -244,8 +246,12 @@ by default an appropriate sparse matrix format is returned. This choice is subject to change. 
- Example - ------- + See Also + -------- + vstack : stack sparse matrices vertically (row wise) + + Examples + -------- >>> from scipy.sparse import coo_matrix, vstack >>> A = coo_matrix([[1,2],[3,4]]) >>> B = coo_matrix([[5],[6]]) @@ -253,12 +259,12 @@ matrix([[1, 2, 5], [3, 4, 6]]) - """ return bmat([blocks], format=format, dtype=dtype) def vstack(blocks, format=None, dtype=None): - """Stack sparse matrices vertically (row wise) + """ + Stack sparse matrices vertically (row wise) Parameters ---------- @@ -269,8 +275,12 @@ by default an appropriate sparse matrix format is returned. This choice is subject to change. - Example - ------- + See Also + -------- + hstack : stack sparse matrices horizontally (column wise) + + Examples + -------- >>> from scipy.sparse import coo_matrix, vstack >>> A = coo_matrix([[1,2],[3,4]]) >>> B = coo_matrix([[5,6]]) @@ -279,12 +289,12 @@ [3, 4], [5, 6]]) - """ return bmat([ [b] for b in blocks ], format=format, dtype=dtype) def bmat(blocks, format=None, dtype=None): - """Build a sparse matrix from sparse sub-blocks + """ + Build a sparse matrix from sparse sub-blocks Parameters ---------- @@ -295,8 +305,8 @@ by default an appropriate sparse matrix format is returned. This choice is subject to change. - Example - ------- + Examples + -------- >>> from scipy.sparse import coo_matrix, bmat >>> A = coo_matrix([[1,2],[3,4]]) >>> B = coo_matrix([[5],[6]]) @@ -311,7 +321,6 @@ [3, 4, 0], [0, 0, 7]]) - """ blocks = np.asarray(blocks, dtype='object') @@ -380,7 +389,65 @@ shape = (np.sum(brow_lengths), np.sum(bcol_lengths)) return coo_matrix((data, (row, col)), shape=shape).asformat(format) +def rand(m, n, density=0.01, format="coo", dtype=None): + """Generate a sparse matrix of the given shape and density with uniformely + distributed values. + Parameters + ---------- + m, n: int + shape of the matrix + density: real + density of the generated matrix: density equal to one means a full + matrix, density of 0 means a matrix with no non-zero items. + format: str + sparse matrix format. + dtype: dtype + type of the returned matrix values. + + Notes + ----- + Only float types are supported for now. + """ + if density < 0 or density > 1: + raise ValueError("density expected to be 0 <= density <= 1") + if dtype and not dtype in [np.float32, np.float64, np.longdouble]: + raise NotImplementedError("type %s not supported" % dtype) + + mn = m * n + + # XXX: sparse uses intc instead of intp... + tp = np.intp + if mn > np.iinfo(tp).max: + msg = """\ +Trying to generate a random sparse matrix such as the product of dimensions is +greater than %d - this is not supported on this machine +""" + raise ValueError(msg % np.iinfo(tp).max) + + # Number of non zero values + k = long(density * m * n) + + # Generate a few more values than k so that we can get unique values + # afterwards. + # XXX: one could be smarter here + mlow = 5 + fac = 1.02 + gk = min(k + mlow, fac * k) + + def _gen_unique_rand(_gk): + id = np.random.rand(_gk) + return np.unique(np.floor(id * mn))[:k] + + id = _gen_unique_rand(gk) + while id.size < k: + gk *= 1.05 + id = _gen_unique_rand(gk) + + j = np.floor(id * 1. 
/ m).astype(tp) + i = (id - j * m).astype(tp) + vals = np.random.rand(k).astype(dtype) + return coo_matrix((vals, (i, j)), shape=(m, n)).asformat(format) ################################# # Deprecated functions @@ -388,9 +455,10 @@ __all__ += [ 'speye','spidentity', 'spkron', 'lil_eye', 'lil_diags' ] -spkron = np.deprecate(kron, oldname='spkron', newname='scipy.sparse.kron') -speye = np.deprecate(eye, oldname='speye', newname='scipy.sparse.eye') -spidentity = np.deprecate(identity, oldname='spidentity', newname='scipy.sparse.identity') +spkron = np.deprecate(kron, old_name='spkron', new_name='scipy.sparse.kron') +speye = np.deprecate(eye, old_name='speye', new_name='scipy.sparse.eye') +spidentity = np.deprecate(identity, old_name='spidentity', + new_name='scipy.sparse.identity') def lil_eye((r,c), k=0, dtype='d'): @@ -417,7 +485,8 @@ #TODO remove this function def lil_diags(diags, offsets, (m,n), dtype='d'): - """Generate a lil_matrix with the given diagonals. + """ + Generate a lil_matrix with the given diagonals. Parameters ---------- @@ -431,8 +500,8 @@ dtype : dtype output data-type. - Example - ------- + Examples + -------- >>> lil_diags([[1,2,3],[4,5],[6]],[0,1,2],(3,3)).todense() matrix([[ 1., 4., 6.], diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/coo.py python-scipy-0.8.0+dfsg1/scipy/sparse/coo.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/coo.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/coo.py 2010-07-26 15:48:34.000000000 +0100 @@ -14,7 +14,8 @@ from sputils import upcast, to_native, isshape, getdtype class coo_matrix(_data_matrix): - """A sparse matrix in COOrdinate format. + """ + A sparse matrix in COOrdinate format. Also known as the 'ijv' or 'triplet' format. @@ -52,10 +53,7 @@ + arithmetic operations + slicing - Intended Usage - -------------- - - COO is a fast format for constructing sparse matrices - Once a matrix has been constructed, convert to CSR or CSC format for fast arithmetic and matrix vector operations diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/csc.py python-scipy-0.8.0+dfsg1/scipy/sparse/csc.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/csc.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/csc.py 2010-07-26 15:48:34.000000000 +0100 @@ -15,7 +15,8 @@ class csc_matrix(_cs_matrix): - """Compressed Sparse Column matrix + """ + Compressed Sparse Column matrix This can be instantiated in several ways: csc_matrix(D) @@ -29,14 +30,16 @@ dtype is optional, defaulting to dtype='d'. csc_matrix((data, ij), [shape=(M, N)]) - where ``data`` and ``ij`` satisfy ``a[ij[0, k], ij[1, k]] = data[k]`` + where ``data`` and ``ij`` satisfy the relationship + ``a[ij[0, k], ij[1, k]] = data[k]`` csc_matrix((data, indices, indptr), [shape=(M, N)]) is the standard CSC representation where the row indices for - column i are stored in ``indices[indptr[i]:indices[i+1]]`` and their - corresponding values are stored in ``data[indptr[i]:indptr[i+1]]``. - If the shape parameter is not supplied, the matrix dimensions - are inferred from the index arrays. + column i are stored in ``indices[indptr[i]:indices[i+1]]`` + and their corresponding values are stored in + ``data[indptr[i]:indptr[i+1]]``. If the shape parameter is + not supplied, the matrix dimensions are inferred from + the index arrays. 
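scipy.sparse.rand above is new in 0.8.0 and is added to construct.__all__, so it should be reachable from the scipy.sparse namespace. A quick usage sketch (shape and density arbitrary; only float dtypes are supported, as the docstring notes):

import scipy.sparse as sp

A = sp.rand(1000, 800, density=0.01, format='csr')
print(A.nnz)   # about density*m*n = 8000 stored values; A is returned in CSR format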
Notes ----- @@ -46,7 +49,6 @@ - fast matrix vector products (CSR, BSR may be faster) Disadvantages of the CSC format - ------------------------------- - slow row slicing operations (consider CSR) - changes to the sparsity structure are expensive (consider LIL or DOK) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/csr.py python-scipy-0.8.0+dfsg1/scipy/sparse/csr.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/csr.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/csr.py 2010-07-26 15:48:34.000000000 +0100 @@ -10,14 +10,15 @@ import numpy as np from sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \ - get_csr_submatrix + get_csr_submatrix, csr_sample_values from sputils import upcast, isintlike from compressed import _cs_matrix class csr_matrix(_cs_matrix): - """Compressed Sparse Row matrix + """ + Compressed Sparse Row matrix This can be instantiated in several ways: csr_matrix(D) @@ -31,7 +32,8 @@ dtype is optional, defaulting to dtype='d'. csr_matrix((data, ij), [shape=(M, N)]) - where ``data`` and ``ij`` satisfy ``a[ij[0, k], ij[1, k]] = data[k]`` + where ``data`` and ``ij`` satisfy the relationship + ``a[ij[0, k], ij[1, k]] = data[k]`` csr_matrix((data, indices, indptr), [shape=(M, N)]) is the standard CSR representation where the column indices for @@ -182,15 +184,8 @@ raise IndexError('invalid index') else: return x - - def extractor(indices,N): - """Return a sparse matrix P so that P*self implements - slicing of the form self[[1,2,3],:] - """ - indices = asindices(indices) - + def check_bounds(indices,N): max_indx = indices.max() - if max_indx >= N: raise IndexError('index (%d) out of range' % max_indx) @@ -198,6 +193,16 @@ if min_indx < -N: raise IndexError('index (%d) out of range' % (N + min_indx)) + return (min_indx,max_indx) + + def extractor(indices,N): + """Return a sparse matrix P so that P*self implements + slicing of the form self[[1,2,3],:] + """ + indices = asindices(indices) + + (min_indx,max_indx) = check_bounds(indices,N) + if min_indx < 0: indices = indices.copy() indices[indices < 0] += N @@ -243,9 +248,18 @@ if len(row.shape) == 1: if len(row) != len(col): #[[1,2],[1,2]] raise IndexError('number of row and column indices differ') - val = [] - for i,j in zip(row,col): - val.append(self._get_single_element(i,j)) + + check_bounds(row, self.shape[0]) + check_bounds(col, self.shape[1]) + + num_samples = len(row) + val = np.empty(num_samples, dtype=self.dtype) + csr_sample_values(self.shape[0], self.shape[1], + self.indptr, self.indices, self.data, + num_samples, row, col, val) + #val = [] + #for i,j in zip(row,col): + # val.append(self._get_single_element(i,j)) return np.asmatrix(val) elif len(row.shape) == 2: diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/dok.py python-scipy-0.8.0+dfsg1/scipy/sparse/dok.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/dok.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/dok.py 2010-07-26 15:48:34.000000000 +0100 @@ -13,12 +13,13 @@ from sputils import isdense, getdtype, isshape, isintlike, isscalarlike, upcast class dok_matrix(spmatrix, dict): - """Dictionary Of Keys based sparse matrix. + """ + Dictionary Of Keys based sparse matrix. This is an efficient structure for constructing sparse matrices incrementally. 
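The csr.py hunk above replaces the per-element Python loop for A[rows, cols] indexing with a single csr_sample_values call. The observable behaviour is element-wise sampling into a matrix, roughly as in this sketch:

import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.arange(12).reshape(3, 4))
rows = [0, 1, 2]
cols = [1, 3, 2]
vals = A[rows, cols]       # vals[0, k] == A[rows[k], cols[k]]
print(vals)                # something like matrix([[ 1,  7, 10]])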
- This can be instatiated in several ways: + This can be instantiated in several ways: dok_matrix(D) with a dense matrix, D @@ -218,7 +219,7 @@ raise IndexError, "index out of bounds" if np.isscalar(value): - if value==0: + if value==0 and self.has_key((i,j)): del self[(i,j)] else: dict.__setitem__(self, (i,j), self.dtype.type(value)) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/lil.py python-scipy-0.8.0+dfsg1/scipy/sparse/lil.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/lil.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/lil.py 2010-07-26 15:48:34.000000000 +0100 @@ -223,14 +223,6 @@ else: raise IndexError - - def _insertat(self, i, j, x): - """ helper for __setitem__: insert a value at (i,j) where i, j and x - are all scalars """ - row = self.rows[i] - data = self.data[i] - self._insertat2(row, data, j, x) - def _insertat2(self, row, data, j, x): """ helper for __setitem__: insert a value in the given row/data at column j. """ @@ -241,7 +233,6 @@ if j < 0 or j >= self.shape[1]: raise IndexError('column index out of bounds') - if not np.isscalar(x): raise ValueError('setting an array element with a sequence') @@ -265,70 +256,83 @@ del row[pos] del data[pos] - - def _insertat3(self, row, data, j, x): - """ helper for __setitem__ """ + def _setitem_setrow(self, row, data, j, xrow, xdata, xcols): if isinstance(j, slice): j = self._slicetoseq(j, self.shape[1]) if issequence(j): - if isinstance(x, spmatrix): - x = x.todense() - x = np.asarray(x).squeeze() - if np.isscalar(x) or x.size == 1: + if xcols == len(j): + for jj, xi in zip(j, xrange(xcols)): + pos = bisect_left(xrow, xi) + if pos != len(xdata) and xrow[pos] == xi: + self._insertat2(row, data, jj, xdata[pos]) + else: + self._insertat2(row, data, jj, 0) + elif xcols == 1: # OK, broadcast across row + if len(xdata) > 0 and xrow[0] == 0: + val = xdata[0] + else: + val = 0 for jj in j: - self._insertat2(row, data, jj, x) + self._insertat2(row, data, jj,val) else: - # x must be one D. 
maybe check these things out - for jj, xx in zip(j, x): - self._insertat2(row, data, jj, xx) + raise IndexError('invalid index') elif np.isscalar(j): - self._insertat2(row, data, j, x) + if not xcols == 1: + raise ValueError('array dimensions are not compatible for copy') + if len(xdata) > 0 and xrow[0] == 0: + self._insertat2(row, data, j, xdata[0]) + else: + self._insertat2(row, data, j, 0) else: raise ValueError('invalid column value: %s' % str(j)) - def __setitem__(self, index, x): - if np.isscalar(x): - x = self.dtype.type(x) - elif not isinstance(x, spmatrix): - x = lil_matrix(x) - try: i, j = index except (ValueError, TypeError): raise IndexError('invalid index') + # shortcut for common case of single entry assign: + if np.isscalar(x) and np.isscalar(i) and np.isscalar(j): + self._insertat2(self.rows[i], self.data[i], j, x) + return + + # shortcut for common case of full matrix assign: if isspmatrix(x): - if (isinstance(i, slice) and (i == slice(None))) and \ - (isinstance(j, slice) and (j == slice(None))): - # self[:,:] = other_sparse - x = lil_matrix(x) - self.rows = x.rows - self.data = x.data - return + if isinstance(i, slice) and i == slice(None) and \ + isinstance(j, slice) and j == slice(None): + x = lil_matrix(x) + self.rows = x.rows + self.data = x.data + return + + if isinstance(i, tuple): # can't index lists with tuple + i = list(i) if np.isscalar(i): - row = self.rows[i] - data = self.data[i] - self._insertat3(row, data, j, x) - elif issequence(i) and issequence(j): - if np.isscalar(x): - for ii, jj in zip(i, j): - self._insertat(ii, jj, x) - else: - for ii, jj, xx in zip(i, j, x): - self._insertat(ii, jj, xx) - elif isinstance(i, slice) or issequence(i): + rows = [self.rows[i]] + datas = [self.data[i]] + else: rows = self.rows[i] datas = self.data[i] - if np.isscalar(x): - for row, data in zip(rows, datas): - self._insertat3(row, data, j, x) - else: - for row, data, xx in zip(rows, datas, x): - self._insertat3(row, data, j, xx) + + x = lil_matrix(x, copy=False) + xrows, xcols = x.shape + if xrows == len(rows): # normal rectangular copy + for row, data, xrow, xdata in zip(rows, datas, x.rows, x.data): + self._setitem_setrow(row, data, j, xrow, xdata, xcols) + elif xrows == 1: # OK, broadcast down column + for row, data in zip(rows, datas): + self._setitem_setrow(row, data, j, x.rows[0], x.data[0], xcols) + + # needed to pass 'test_lil_sequence_assignement' unit test: + # -- set row from column of entries -- + elif xcols == len(rows): + x = x.T + for row, data, xrow, xdata in zip(rows, datas, x.rows, x.data): + self._setitem_setrow(row, data, j, xrow, xdata, xrows) else: - raise ValueError('invalid index value: %s' % str((i, j))) + raise IndexError('invalid index') def _mul_scalar(self, other): if other == 0: diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_csuperlumodule.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_csuperlumodule.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_csuperlumodule.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_csuperlumodule.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,203 +0,0 @@ - -/* Copyright 1999 Travis Oliphant - Permision to copy and modified this file is granted under the revised BSD license. - No warranty is expressed or IMPLIED -*/ - -/* - This file implements glue between the SuperLU library for - sparse matrix inversion and Python. 
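The rewritten lil_matrix.__setitem__ above normalises the right-hand side to a lil_matrix and then handles a scalar fast path, rectangular copies, and row/column broadcasting. A sketch of the assignment forms it is meant to accept (shapes chosen arbitrarily):

import numpy as np
from scipy.sparse import lil_matrix

A = lil_matrix((4, 4))
A[1, 2] = 5.0                           # scalar fast path
A[0, :] = np.arange(4.0).reshape(1, 4)  # assign a whole row
A[:, 3] = np.ones((4, 1))               # broadcast a column (xcols == 1)
A[2:4, 0:2] = lil_matrix(np.eye(2))     # rectangular block from another sparse matrix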
-*/ - - -/* We want a low-level interface to: - xGSSV - - These will be done in separate files due to the include structure of - SuperLU. - - Define a user abort and a user malloc and free (to keep pointers - that will be released on errors) -*/ - -#include "Python.h" -#include "SuperLU/SRC/csp_defs.h" -#include "_superluobject.h" -#include - - -extern jmp_buf _superlu_py_jmpbuf; - - -static char doc_cgssv[] = "Direct inversion of sparse matrix.\n\nX = cgssv(A,B) solves A*X = B for X."; - -static PyObject *Py_cgssv (PyObject *self, PyObject *args, PyObject *kwdict) -{ - PyObject *Py_B=NULL, *Py_X=NULL; - PyArrayObject *nzvals=NULL; - PyArrayObject *colind=NULL, *rowptr=NULL; - int N, nnz; - int info; - int csc=0, permc_spec=2; - int *perm_r=NULL, *perm_c=NULL; - SuperMatrix A, B, L, U; - superlu_options_t options; - SuperLUStat_t stat; - - static char *kwlist[] = {"N","nnz","nzvals","colind","rowptr","B", "csc", "permc_spec",NULL}; - - /* Get input arguments */ - if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iiO!O!O!O|ii", kwlist, &N, &nnz, &PyArray_Type, &nzvals, &PyArray_Type, &colind, &PyArray_Type, &rowptr, &Py_B, &csc, &permc_spec)) - return NULL; - - if (!_CHECK_INTEGER(colind) || !_CHECK_INTEGER(rowptr)) { - PyErr_SetString(PyExc_TypeError, "colind and rowptr must be of type cint"); - return NULL; - } - - - /* Create Space for output */ - Py_X = PyArray_CopyFromObject(Py_B,PyArray_CFLOAT,1,2); - if (Py_X == NULL) return NULL; - if (csc) { - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_CFLOAT)) { - Py_DECREF(Py_X); - return NULL; - } - } - else { - if (NRFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_CFLOAT)) { - Py_DECREF(Py_X); - return NULL; - } - } - - if (DenseSuper_from_Numeric(&B, Py_X)) { - Destroy_SuperMatrix_Store(&A); - Py_DECREF(Py_X); - return NULL; - } - - /* Setup options */ - - if (setjmp(_superlu_py_jmpbuf)) goto fail; - else { - perm_c = intMalloc(N); - perm_r = intMalloc(N); - set_default_options(&options); - options.ColPerm=superlu_module_getpermc(permc_spec); - StatInit(&stat); - - /* Compute direct inverse of sparse Matrix */ - cgssv(&options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info); - } - - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - - - return Py_BuildValue("Ni", Py_X, info); - - fail: - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - - Py_XDECREF(Py_X); - return NULL; -} - -/*******************************Begin Code Adapted from PySparse *****************/ - - -static char doc_cgstrf[] = "cgstrf(A, ...)\n\ -\n\ -performs a factorization of the sparse matrix A=*(N,nnz,nzvals,rowind,colptr) and \n\ -returns a factored_lu object.\n\ -\n\ -see dgstrf for more information."; - -static PyObject * -Py_cgstrf(PyObject *self, PyObject *args, PyObject *keywds) { - - /* default value for SuperLU parameters*/ - double diag_pivot_thresh = 1.0; - double drop_tol = 0.0; - int relax = 1; - int panel_size = 10; - int permc_spec = 2; - int N, nnz; - PyArrayObject *rowind, *colptr, *nzvals; - SuperMatrix A; - PyObject *result; - - static char *kwlist[] = {"N","nnz","nzvals","rowind","colptr","permc_spec","diag_pivot_thresh", "drop_tol", "relax", "panel_size", NULL}; - - int res = PyArg_ParseTupleAndKeywords(args, keywds, 
"iiO!O!O!|iddii", kwlist, - &N, &nnz, - &PyArray_Type, &nzvals, - &PyArray_Type, &rowind, - &PyArray_Type, &colptr, - &permc_spec, - &diag_pivot_thresh, - &drop_tol, - &relax, - &panel_size); - if (!res) - return NULL; - - if (!_CHECK_INTEGER(colptr) || !_CHECK_INTEGER(rowind)) { - PyErr_SetString(PyExc_TypeError, "colptr and rowind must be of type cint"); - return NULL; - } - - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, rowind, colptr, PyArray_CFLOAT)) goto fail; - - result = newSciPyLUObject(&A, diag_pivot_thresh, drop_tol, relax, panel_size,\ - permc_spec, PyArray_CFLOAT); - - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - - return result; - - fail: - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - return NULL; -} - - -/*******************************End Code Adapted from PySparse *****************/ - - -static PyMethodDef cSuperLU_Methods[] = { - {"cgssv", (PyCFunction) Py_cgssv, METH_VARARGS|METH_KEYWORDS, doc_cgssv}, - {"cgstrf", (PyCFunction) Py_cgstrf, METH_VARARGS|METH_KEYWORDS, doc_cgstrf}, - /* {"_cgstrs", Py_cgstrs, METH_VARARGS, doc_cgstrs}, - {"_cgscon", Py_cgscon, METH_VARARGS, doc_cgscon}, - {"_cgsequ", Py_cgsequ, METH_VARARGS, doc_cgsequ}, - {"_claqgs", Py_claqgs, METH_VARARGS, doc_claqgs}, - {"_cgsrfs", Py_cgsrfs, METH_VARARGS, doc_cgsrfs}, */ - {NULL, NULL} -}; - - -PyMODINIT_FUNC -init_csuperlu(void) -{ - Py_InitModule("_csuperlu", cSuperLU_Methods); - import_array(); - -} - - - - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_dsuperlumodule.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_dsuperlumodule.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_dsuperlumodule.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_dsuperlumodule.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,251 +0,0 @@ - -/* Copyright 1999 Travis Oliphant - Permision to copy and modified this file is granted under the revised BSD license. - No warranty is expressed or IMPLIED -*/ - -/* - This file implements glue between the SuperLU library for - sparse matrix inversion and Python. -*/ - - -/* We want a low-level interface to: - xGSSV - - These will be done in separate files due to the include structure of - SuperLU. 
- - Define a user abort and a user malloc and free (to keep pointers - that will be released on errors) -*/ - -#include "Python.h" -#include "SuperLU/SRC/dsp_defs.h" -#include "_superluobject.h" -#include - -extern jmp_buf _superlu_py_jmpbuf; - - -static char doc_dgssv[] = "Direct inversion of sparse matrix.\n\nX = dgssv(A,B) solves A*X = B for X."; - -static PyObject *Py_dgssv (PyObject *self, PyObject *args, PyObject *kwdict) -{ - PyObject *Py_B=NULL, *Py_X=NULL; - PyArrayObject *nzvals=NULL; - PyArrayObject *colind=NULL, *rowptr=NULL; - int N, nnz; - int info; - int csc=0, permc_spec=2; - int *perm_r=NULL, *perm_c=NULL; - SuperMatrix A, B, L, U; - superlu_options_t options; - SuperLUStat_t stat; - - - static char *kwlist[] = {"N","nnz","nzvals","colind","rowptr","B", "csc", "permc_spec",NULL}; - - /* Get input arguments */ - if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iiO!O!O!O|ii", kwlist, &N, &nnz, &PyArray_Type, &nzvals, &PyArray_Type, &colind, &PyArray_Type, &rowptr, &Py_B, &csc, &permc_spec)) - return NULL; - - if (!_CHECK_INTEGER(colind) || !_CHECK_INTEGER(rowptr)) { - PyErr_SetString(PyExc_TypeError, "colind and rowptr must be of type cint"); - return NULL; - } - - /* Create Space for output */ - Py_X = PyArray_CopyFromObject(Py_B,PyArray_DOUBLE,1,2); - if (Py_X == NULL) return NULL; - - if (csc) { - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_DOUBLE)) { - Py_DECREF(Py_X); - return NULL; - } - } - else { - if (NRFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_DOUBLE)) { - Py_DECREF(Py_X); - return NULL; - } - } - - if (DenseSuper_from_Numeric(&B, Py_X)) { - Destroy_SuperMatrix_Store(&A); - Py_DECREF(Py_X); - return NULL; - } - - /* B and Py_X share same data now but Py_X "owns" it */ - - /* Setup options */ - - if (setjmp(_superlu_py_jmpbuf)) goto fail; - else { - perm_c = intMalloc(N); - perm_r = intMalloc(N); - set_default_options(&options); - options.ColPerm=superlu_module_getpermc(permc_spec); - StatInit(&stat); - - /* Compute direct inverse of sparse Matrix */ - dgssv(&options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info); - } - - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); /* holds just a pointer to the data */ - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - - return Py_BuildValue("Ni", Py_X, info); - - fail: - - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); /* holds just a pointer to the data */ - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - Py_XDECREF(Py_X); - return NULL; -} - - -/*******************************Begin Code Adapted from PySparse *****************/ - - -static char doc_dgstrf[] = "dgstrf(A, ...)\n\ -\n\ -performs a factorization of the sparse matrix A=*(N,nnz,nzvals,rowind,colptr) and \n\ -returns a factored_lu object.\n\ -\n\ -arguments\n\ ----------\n\ -\n\ -Matrix to be factorized is represented as N,nnz,nzvals,rowind,colptr\n\ - as separate arguments. This is compressed sparse column representation.\n\ -\n\ -N number of rows and columns \n\ -nnz number of non-zero elements\n\ -nzvals non-zero values \n\ -rowind row-index for this column (same size as nzvals)\n\ -colptr index into rowind for first non-zero value in this column\n\ - size is (N+1). Last value should be nnz. 
\n\ -\n\ -additional keyword arguments:\n\ ------------------------------\n\ -permc_spec specifies the matrix ordering used for the factorization\n\ - 0: natural ordering\n\ - 1: MMD applied to the structure of A^T * A\n\ - 2: MMD applied to the structure of A^T + A\n\ - 3: COLAMD, approximate minimum degree column ordering\n\ - (default: 2)\n\ -\n\ -diag_pivot_thresh threshhold for partial pivoting.\n\ - 0.0 <= diag_pivot_thresh <= 1.0\n\ - 0.0 corresponds to no pivoting\n\ - 1.0 corresponds to partial pivoting\n\ - (default: 1.0)\n\ -\n\ -drop_tol drop tolerance parameter\n\ - 0.0 <= drop_tol <= 1.0\n\ - 0.0 corresponds to exact factorization\n\ - CAUTION: the drop_tol is not implemented in SuperLU 2.0\n\ - (default: 0.0)\n\ -\n\ -relax to control degree of relaxing supernodes\n\ - (default: 1)\n\ -\n\ -panel_size a panel consist of at most panel_size consecutive columns.\n\ - (default: 10)\n\ -"; - -static PyObject * -Py_dgstrf(PyObject *self, PyObject *args, PyObject *keywds) { - - /* default value for SuperLU parameters*/ - double diag_pivot_thresh = 1.0; - double drop_tol = 0.0; - int relax = 1; - int panel_size = 10; - int permc_spec = 2; - int N, nnz; - PyArrayObject *rowind, *colptr, *nzvals; - SuperMatrix A; - PyObject *result; - - static char *kwlist[] = {"N","nnz","nzvals","rowind","colptr","permc_spec","diag_pivot_thresh", "drop_tol", "relax", "panel_size", NULL}; - - int res = PyArg_ParseTupleAndKeywords(args, keywds, "iiO!O!O!|iddii", kwlist, - &N, &nnz, - &PyArray_Type, &nzvals, - &PyArray_Type, &rowind, - &PyArray_Type, &colptr, - &permc_spec, - &diag_pivot_thresh, - &drop_tol, - &relax, - &panel_size); - if (!res) - return NULL; - - if (!_CHECK_INTEGER(colptr) || !_CHECK_INTEGER(rowind)) { - PyErr_SetString(PyExc_TypeError, "rowind and colptr must be of type cint"); - return NULL; - } - - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, rowind, colptr, PyArray_DOUBLE)) goto fail; - - result = newSciPyLUObject(&A, diag_pivot_thresh, drop_tol, relax, panel_size,\ - permc_spec, PyArray_DOUBLE); - if (result == NULL) goto fail; - - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - return result; - - fail: - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - return NULL; -} - - -/*******************************End Code Adapted from PySparse *****************/ - -static PyMethodDef dSuperLU_Methods[] = { - {"dgssv", (PyCFunction) Py_dgssv, METH_VARARGS|METH_KEYWORDS, doc_dgssv}, - {"dgstrf", (PyCFunction) Py_dgstrf, METH_VARARGS|METH_KEYWORDS, doc_dgstrf}, - /* - {"_dgstrs", Py_dgstrs, METH_VARARGS, doc_dgstrs}, - {"_dgscon", Py_dgscon, METH_VARARGS, doc_dgscon}, - {"_dgsequ", Py_dgsequ, METH_VARARGS, doc_dgsequ}, - {"_dlaqgs", Py_dlaqgs, METH_VARARGS, doc_dlaqgs}, - {"_dgsrfs", Py_dgsrfs, METH_VARARGS, doc_dgsrfs}, */ - {NULL, NULL} -}; - - -PyMODINIT_FUNC -init_dsuperlu(void) -{ - PyObject *m, *d; - - SciPySuperLUType.ob_type = &PyType_Type; - - m = Py_InitModule("_dsuperlu", dSuperLU_Methods); - d = PyModule_GetDict(m); - - PyDict_SetItemString(d, "SciPyLUType", (PyObject *)&SciPySuperLUType); - - import_array(); -} - - - - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/linsolve.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/linsolve.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/linsolve.py 2010-03-03 14:34:12.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/linsolve.py 2010-07-26 15:48:34.000000000 +0100 @@ -18,11 +18,7 @@ useUmfpack = True -__all__ = [ 
'use_solver', 'spsolve', 'splu', 'factorized' ] - -#convert numpy char to superLU char -superLU_transtabl = {'f':'s', 'd':'d', 'F':'c', 'D':'z'} - +__all__ = [ 'use_solver', 'spsolve', 'splu', 'spilu', 'factorized' ] def use_solver( **kwargs ): """ @@ -45,7 +41,7 @@ umfpack.configure( **kwargs ) -def spsolve(A, b, permc_spec=2): +def spsolve(A, b, permc_spec=None, use_umfpack=True): """Solve the sparse linear system Ax=b """ if isspmatrix( b ): @@ -71,8 +67,9 @@ raise ValueError, "matrix - rhs size mismatch (%s - %s)"\ % (A.shape, b.size) + use_umfpack = use_umfpack and useUmfpack - if isUmfpack and useUmfpack: + if isUmfpack and use_umfpack: if noScikit: warn( 'scipy.sparse.linalg.dsolve.umfpack will be removed,'\ ' install scikits.umfpack instead', DeprecationWarning ) @@ -90,25 +87,71 @@ else: if isspmatrix_csc(A): flag = 1 # CSC format - else: + elif isspmatrix_csr(A): flag = 0 # CSR format + else: + A = csc_matrix(A) + flag = 1 - ftype = superLU_transtabl[A.dtype.char] - - gssv = eval('_superlu.' + ftype + 'gssv') b = asarray(b, dtype=A.dtype) + options = dict(ColPerm=permc_spec) + return _superlu.gssv(N, A.nnz, A.data, A.indices, A.indptr, b, flag, + options=options)[0] - return gssv(N, A.nnz, A.data, A.indices, A.indptr, b, flag, permc_spec)[0] - -def splu(A, permc_spec=2, diag_pivot_thresh=1.0, - drop_tol=0.0, relax=1, panel_size=10): +def splu(A, permc_spec=None, diag_pivot_thresh=None, + drop_tol=None, relax=None, panel_size=None, options=dict()): """ - A linear solver, for a sparse, square matrix A, using LU decomposition where - L is a lower triangular matrix and U is an upper triagular matrix. + Compute the LU decomposition of a sparse, square matrix. - Returns a factored_lu object. (scipy.sparse.linalg.dsolve._superlu.SciPyLUType) + Parameters + ---------- + A + Sparse matrix to factorize. Should be in CSR or CSC format. + + permc_spec : str, optional + How to permute the columns of the matrix for sparsity preservation. + (default: 'COLAMD') + + - ``NATURAL``: natural ordering. + - ``MMD_ATA``: minimum degree ordering on the structure of A^T A. + - ``MMD_AT_PLUS_A``: minimum degree ordering on the structure of A^T+A. + - ``COLAMD``: approximate minimum degree column ordering + + diag_pivot_thresh : float, optional + Threshold used for a diagonal entry to be an acceptable pivot. + See SuperLU user's guide for details [SLU]_ + drop_tol : float, optional + (deprecated) No effect. + relax : int, optional + Expert option for customizing the degree of relaxing supernodes. + See SuperLU user's guide for details [SLU]_ + panel_size : int, optional + Expert option for customizing the panel size. + See SuperLU user's guide for details [SLU]_ + options : dict, optional + Dictionary containing additional expert options to SuperLU. + See SuperLU user guide [SLU]_ (section 2.4 on the 'Options' argument) + for more details. For example, you can specify + ``options=dict(Equil=False, IterRefine='SINGLE'))`` + to turn equilibration off and perform a single iterative refinement. + + Returns + ------- + invA : scipy.sparse.linalg.dsolve._superlu.SciPyLUType + Object, which has a ``solve`` method. + + See also + -------- + spilu : incomplete LU decomposition + + Notes + ----- + This function uses the SuperLU library. + + References + ---------- + .. [SLU] SuperLU http://crd.lbl.gov/~xiaoye/SuperLU/ - See scipy.sparse.linalg.dsolve._superlu.dgstrf for more info. """ if not isspmatrix_csc(A): @@ -122,11 +165,84 @@ if (M != N): raise ValueError, "can only factor square matrices" #is this true? 
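For orientation, a minimal usage sketch (not part of the patch itself) of how the reworked spsolve/splu interface documented in the hunk above is meant to be called from Python; the 3x3 matrix, the right-hand side and the permc_spec value are illustrative assumptions only:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu, spsolve

A = sparse.csc_matrix(np.array([[4.0, 1.0, 0.0],
                                [1.0, 3.0, 1.0],
                                [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

x1 = spsolve(A, b)                   # one-shot solve (UMFPACK if available, otherwise the gssv path above)
lu = splu(A, permc_spec='COLAMD')    # factor once, keep the factorization object
x2 = lu.solve(b)
print(np.allclose(A.dot(x1), b), np.allclose(A.dot(x2), b))

Factoring once with splu and reusing the returned object's solve method avoids repeating the SuperLU factorization for every right-hand side.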
- ftype = superLU_transtabl[A.dtype.char] + _options = dict(DiagPivotThresh=diag_pivot_thresh, ColPerm=permc_spec, + PanelSize=panel_size, Relax=relax) + if options is not None: + _options.update(options) + return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr, + ilu=False, options=_options) + +def spilu(A, drop_tol=None, fill_factor=None, drop_rule=None, permc_spec=None, + diag_pivot_thresh=None, relax=None, panel_size=None, options=None): + """ + Compute an incomplete LU decomposition for a sparse, square matrix A. + + The resulting object is an approximation to the inverse of A. + + Parameters + ---------- + A + Sparse matrix to factorize + + drop_tol : float, optional + Drop tolerance (0 <= tol <= 1) for an incomplete LU decomposition. + (default: 1e-4) + fill_factor : float, optional + Specifies the fill ratio upper bound (>= 1.0) for ILU. (default: 10) + drop_rule : str, optional + Comma-separated string of drop rules to use. + Available rules: ``basic``, ``prows``, ``column``, ``area``, + ``secondary``, ``dynamic``, ``interp``. (Default: ``basic,area``) + + See SuperLU documentation for details. + milu : str, optional + Which version of modified ILU to use. (Choices: ``silu``, + ``smilu_1``, ``smilu_2`` (default), ``smilu_3``.) + + Remaining other options + Same as for `splu` + + Returns + ------- + invA_approx : scipy.sparse.linalg.dsolve._superlu.SciPyLUType + Object, which has a ``solve`` method. + + See also + -------- + splu : complete LU decomposition + + Notes + ----- + To improve the better approximation to the inverse, you may need to + increase ``fill_factor`` AND decrease ``drop_tol``. + + This function uses the SuperLU library. + + References + ---------- + .. [SLU] SuperLU http://crd.lbl.gov/~xiaoye/SuperLU/ + + """ + + if not isspmatrix_csc(A): + A = csc_matrix(A) + warn('splu requires CSC matrix format', SparseEfficiencyWarning) + + A.sort_indices() + A = A.asfptype() #upcast to a floating point format + + M, N = A.shape + if (M != N): + raise ValueError, "can only factor square matrices" #is this true? - gstrf = eval('_superlu.' 
+ ftype + 'gstrf') - return gstrf(N, A.nnz, A.data, A.indices, A.indptr, permc_spec, - diag_pivot_thresh, drop_tol, relax, panel_size) + _options = dict(ILU_DropRule=drop_rule, ILU_DropTol=drop_tol, + ILU_FillFactor=fill_factor, + DiagPivotThresh=diag_pivot_thresh, ColPerm=permc_spec, + PanelSize=panel_size, Relax=relax) + if options is not None: + _options.update(options) + return _superlu.gstrf(N, A.nnz, A.data, A.indices, A.indptr, + ilu=True, options=_options) def factorized( A ): """ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SConscript python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SConscript --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SConscript 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SConscript 2010-07-26 15:48:34.000000000 +0100 @@ -1,4 +1,5 @@ -from os.path import join as pjoin +import os +import glob import sys from numscons import GetNumpyEnvironment @@ -29,46 +30,22 @@ if sys.platform == 'win32': superlu_def.append((('NO_TIMER'), 1)) superlu_def.append((('USE_VENDOR_BLAS'), 2)) -superlu_env.Append(CPPDEFINES = superlu_def) +superlu_env.Append(CPPDEFINES=superlu_def) +superlu_env.Append(CPPPATH=[os.path.join('SuperLU', 'SRC')]) -superlu_src = [pjoin('SuperLU', 'SRC', s) for s in [ "ccolumn_bmod.c", -"ccolumn_dfs.c", "ccopy_to_ucol.c", "cgscon.c", "cgsequ.c", "cgsrfs.c", -"cgssv.c", "cgssvx.c", "cgstrf.c", "cgstrs.c", "clacon.c", "clangs.c", -"claqgs.c", "cmemory.c", "colamd.c", "cpanel_bmod.c", "cpanel_dfs.c", -"cpivotL.c", "cpivotgrowth.c", "cpruneL.c", "creadhb.c", "csnode_bmod.c", -"csnode_dfs.c", "csp_blas2.c", "csp_blas3.c", "cutil.c", "dGetDiagU.c", -"dcolumn_bmod.c", "dcolumn_dfs.c", "dcomplex.c", "dcopy_to_ucol.c", "dgscon.c", -"dgsequ.c", "dgsrfs.c", "dgssv.c", "dgssvx.c", "dgstrf.c", "dgstrs.c", -"dgstrsL.c", "dlacon.c", "dlamch.c", "dlangs.c", "dlaqgs.c", "dmemory.c", -"dpanel_bmod.c", "dpanel_dfs.c", "dpivotL.c", "dpivotgrowth.c", "dpruneL.c", -"dreadhb.c", "dsnode_bmod.c", "dsnode_dfs.c", "dsp_blas2.c", "dsp_blas3.c", -"dutil.c", "dzsum1.c", "get_perm_c.c", "heap_relax_snode.c", "icmax1.c", -"izmax1.c", "memory.c", "mmd.c", "relax_snode.c", "scolumn_bmod.c", -"scolumn_dfs.c", "scomplex.c", "scopy_to_ucol.c", "scsum1.c", "sgscon.c", -"sgsequ.c", "sgsrfs.c", "sgssv.c", "sgssvx.c", "sgstrf.c", "sgstrs.c", -"slacon.c", "slamch.c", "slangs.c", "slaqgs.c", "smemory.c", "sp_coletree.c", -"sp_ienv.c", "sp_preorder.c", "spanel_bmod.c", "spanel_dfs.c", "spivotL.c", -"spivotgrowth.c", "spruneL.c", "sreadhb.c", "ssnode_bmod.c", "ssnode_dfs.c", -"ssp_blas2.c", "ssp_blas3.c", "superlu_timer.c", "sutil.c", "util.c", -"xerbla.c", "zcolumn_bmod.c", "zcolumn_dfs.c", "zcopy_to_ucol.c", "zgscon.c", -"zgsequ.c", "zgsrfs.c", "zgssv.c", "zgssvx.c", "zgstrf.c", "zgstrs.c", -"zlacon.c", "zlangs.c", "zlaqgs.c", "zmemory.c", "zpanel_bmod.c", -"zpanel_dfs.c", "zpivotL.c", "zpivotgrowth.c", "zpruneL.c", "zreadhb.c", -"zsnode_bmod.c", "zsnode_dfs.c", "zsp_blas2.c", "zsp_blas3.c", "zutil.c"]] +superlu_src = env.Glob(os.path.join('SuperLU', 'SRC', "*.c")) # XXX: we should detect whether lsame is already defined in BLAS/LAPACK. 
Here, # when using MSVC + MKL, lsame is already in MKL if not (built_with_mstools(env) and (not built_with_gnu_f77(env))): - superlu_src.append(pjoin("SuperLU", "SRC", "lsame.c")) -superlu = superlu_env.DistutilsStaticExtLibrary('superlu_src', source = superlu_src) + superlu_src.append(os.path.join("SuperLU", "SRC", "lsame.c")) +superlu = superlu_env.DistutilsStaticExtLibrary('superlu_src', source=superlu_src) # Build python extensions pyenv = env.Clone() -pyenv.Append(CPPPATH = [pjoin('SuperLU', 'SRC')]) -pyenv.Prepend(LIBS = superlu) +pyenv.Append(CPPPATH=[os.path.join('SuperLU', 'SRC')]) +pyenv.Prepend(LIBPATH=["."]) +pyenv.Prepend(LIBS=["superlu_src"]) common_src = ['_superlu_utils.c', '_superluobject.c'] -for prec in ['z', 'd', 'c', 's']: - pyenv.NumpyPythonExtension('_%ssuperlu' % prec, - source = common_src + \ - ['_%ssuperlumodule.c' % prec]) +pyenv.NumpyPythonExtension('_superlu', source=common_src + ['_superlumodule.c']) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/setup.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/setup.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/setup.py 2010-03-03 14:34:12.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/setup.py 2010-07-26 15:48:34.000000000 +0100 @@ -1,6 +1,7 @@ #!/usr/bin/env python -from os.path import join +from os.path import join, dirname import sys +import os def configuration(parent_package='',top_path=None): from numpy.distutils.misc_util import Configuration @@ -16,43 +17,21 @@ superlu_defs = [] superlu_defs.append(('USE_VENDOR_BLAS',1)) + superlu_src = os.path.join(dirname(__file__), 'SuperLU', 'SRC') + config.add_library('superlu_src', - sources = [join('SuperLU','SRC','*.c')], - macros = superlu_defs + sources = [join(superlu_src,'*.c')], + macros = superlu_defs, + include_dirs=[superlu_src], ) - #SuperLU/SRC/util.h has been modifed to use these by default - #macs = [('USER_ABORT','superlu_python_module_abort'), - # ('USER_MALLOC','superlu_python_module_malloc'), - # ('USER_FREE','superlu_python_module_free')] - # Extension - config.add_extension('_zsuperlu', - sources = ['_zsuperlumodule.c','_superlu_utils.c', - '_superluobject.c'], - libraries = ['superlu_src'], - extra_info = lapack_opt - ) - - config.add_extension('_dsuperlu', - sources = ['_dsuperlumodule.c','_superlu_utils.c', - '_superluobject.c'], - libraries = ['superlu_src'], - extra_info = lapack_opt - ) - - config.add_extension('_csuperlu', - sources = ['_csuperlumodule.c','_superlu_utils.c', - '_superluobject.c'], - libraries = ['superlu_src'], - extra_info = lapack_opt - ) - - config.add_extension('_ssuperlu', - sources = ['_ssuperlumodule.c','_superlu_utils.c', + config.add_extension('_superlu', + sources = ['_superlumodule.c', + '_superlu_utils.c', '_superluobject.c'], libraries = ['superlu_src'], - extra_info = lapack_opt + extra_info = lapack_opt, ) config.add_subpackage('umfpack') diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_ssuperlumodule.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_ssuperlumodule.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_ssuperlumodule.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_ssuperlumodule.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,202 +0,0 @@ - -/* Copyright 1999 Travis Oliphant - Permision to copy and modified this file is granted under the revised BSD license. 
- No warranty is expressed or IMPLIED -*/ - -/* - This file implements glue between the SuperLU library for - sparse matrix inversion and Python. -*/ - - -/* We want a low-level interface to: - xGSSV - - These will be done in separate files due to the include structure of - SuperLU. - - Define a user abort and a user malloc and free (to keep pointers - that will be released on errors) -*/ - -#include "Python.h" -#include "SuperLU/SRC/ssp_defs.h" -#include "_superluobject.h" -#include - -extern jmp_buf _superlu_py_jmpbuf; - - -static char doc_sgssv[] = "Direct inversion of sparse matrix.\n\nX = sgssv(A,B) solves A*X = B for X."; - -static PyObject *Py_sgssv (PyObject *self, PyObject *args, PyObject *kwdict) -{ - PyObject *Py_B=NULL, *Py_X=NULL; - PyArrayObject *nzvals=NULL; - PyArrayObject *colind=NULL, *rowptr=NULL; - int N, nnz; - int info; - int csc=0, permc_spec=2; - int *perm_r=NULL, *perm_c=NULL; - SuperMatrix A, B, L, U; - superlu_options_t options; - SuperLUStat_t stat; - - static char *kwlist[] = {"N","nnz","nzvals","colind","rowptr","B", "csc", "permc_spec",NULL}; - - /* Get input arguments */ - if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iiO!O!O!O|ii", kwlist, &N, &nnz, &PyArray_Type, &nzvals, &PyArray_Type, &colind, &PyArray_Type, &rowptr, &Py_B, &csc, &permc_spec)) - return NULL; - - if (!_CHECK_INTEGER(colind) || !_CHECK_INTEGER(rowptr)) { - PyErr_SetString(PyExc_TypeError, "colind and rowptr must be of type cint"); - return NULL; - } - - /* Create Space for output */ - Py_X = PyArray_CopyFromObject(Py_B,PyArray_FLOAT,1,2); - - if (Py_X == NULL) return NULL; - - if (csc) { - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_FLOAT)) { - Py_DECREF(Py_X); - return NULL; - } - } - else { - if (NRFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_FLOAT)) { - Py_DECREF(Py_X); - return NULL; - } - } - - if (DenseSuper_from_Numeric(&B, Py_X)) { - Destroy_SuperMatrix_Store(&A); - Py_DECREF(Py_X); - return NULL; - } - /* B and Py_X share same data now but Py_X "owns" it */ - - /* Setup options */ - - if (setjmp(_superlu_py_jmpbuf)) goto fail; - else { - perm_c = intMalloc(N); - perm_r = intMalloc(N); - set_default_options(&options); - options.ColPerm=superlu_module_getpermc(permc_spec); - StatInit(&stat); - - /* Compute direct inverse of sparse Matrix */ - sgssv(&options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info); - } - - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - - return Py_BuildValue("Ni", Py_X, info); - - fail: - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - - Py_XDECREF(Py_X); - return NULL; -} - -/*******************************Begin Code Adapted from PySparse *****************/ - - -static char doc_sgstrf[] = "sgstrf(A, ...)\n\ -\n\ -performs a factorization of the sparse matrix A=*(N,nnz,nzvals,rowind,colptr) and \n\ -returns a factored_lu object.\n\ -\n\ -see dgstrf for more information."; - -static PyObject * -Py_sgstrf(PyObject *self, PyObject *args, PyObject *keywds) { - - /* default value for SuperLU parameters*/ - double diag_pivot_thresh = 1.0; - double drop_tol = 0.0; - int relax = 1; - int panel_size = 10; - int permc_spec = 2; - int N, nnz; - PyArrayObject *rowind, *colptr, *nzvals; - SuperMatrix A; - PyObject *result; - - 
static char *kwlist[] = {"N","nnz","nzvals","rowind","colptr","permc_spec","diag_pivot_thresh", "drop_tol", "relax", "panel_size", NULL}; - - int res = PyArg_ParseTupleAndKeywords(args, keywds, "iiO!O!O!|iddii", kwlist, - &N, &nnz, - &PyArray_Type, &nzvals, - &PyArray_Type, &rowind, - &PyArray_Type, &colptr, - &permc_spec, - &diag_pivot_thresh, - &drop_tol, - &relax, - &panel_size); - if (!res) - return NULL; - - if (!_CHECK_INTEGER(colptr) || !_CHECK_INTEGER(rowind)) { - PyErr_SetString(PyExc_TypeError, "colptr and rowind must be of type cint"); - return NULL; - } - - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, rowind, colptr, PyArray_FLOAT)) goto fail; - - result = newSciPyLUObject(&A, diag_pivot_thresh, drop_tol, relax, panel_size,\ - permc_spec, PyArray_FLOAT); - if (result == NULL) goto fail; - - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - return result; - - fail: - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - return NULL; -} - - -/*******************************End Code Adapted from PySparse *****************/ - - -static PyMethodDef sSuperLU_Methods[] = { - {"sgssv", (PyCFunction) Py_sgssv, METH_VARARGS|METH_KEYWORDS, doc_sgssv}, - {"sgstrf", (PyCFunction) Py_sgstrf, METH_VARARGS|METH_KEYWORDS, doc_sgstrf}, - /* {"_sgstrs", Py_sgstrs, METH_VARARGS, doc_sgstrs}, - {"_sgscon", Py_sgscon, METH_VARARGS, doc_sgscon}, - {"_sgsequ", Py_sgsequ, METH_VARARGS, doc_sgsequ}, - {"_slaqgs", Py_slaqgs, METH_VARARGS, doc_slaqgs}, - {"_sgsrfs", Py_sgsrfs, METH_VARARGS, doc_sgsrfs}, */ - {NULL, NULL} -}; - -PyMODINIT_FUNC -init_ssuperlu(void) -{ - Py_InitModule("_ssuperlu", sSuperLU_Methods); - import_array(); - -} - - - - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,29 @@ -/* +/*! @file ccolumn_bmod.c + * \brief performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ *  Permission is hereby granted to use or copy this program for any
+ *  purpose, provided the above notices are retained on all copies.
+ *  Permission to modify the code and to distribute modified code is
+ *  granted, provided the above notices are retained, and a notice that
+ *  the code was modified is included with the above copyright notice.
+ * 
*/ #include #include -#include "csp_defs.h" +#include "slu_cdefs.h" /* * Function prototypes @@ -32,8 +34,17 @@ -/* Return value: 0 - successful return +/*! \brief + * + *
+ * Purpose:
+ * ========
+ * Performs numeric block updates (sup-col) in topological order.
+ * It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ * Special processing on the supernodal portion of L\U[*,j]
+ * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
*/ int ccolumn_bmod ( @@ -48,14 +59,7 @@ SuperLUStat_t *stat /* output */ ) { -/* - * Purpose: - * ======== - * Performs numeric block updates (sup-col) in topological order. - * It features: col-col, 2cols-col, 3cols-col, and sup-col updates. - * Special processing on the supernodal portion of L\U[*,j] - * - */ + #ifdef _CRAY _fcd ftcs1 = _cptofcd("L", strlen("L")), ftcs2 = _cptofcd("N", strlen("N")), diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccolumn_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,50 +1,38 @@ - -/* +/*! @file ccolumn_dfs.c + * \brief Performs a symbolic factorization + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -#include "csp_defs.h" +#include "slu_cdefs.h" -/* What type of supernodes we want */ +/*! \brief What type of supernodes we want */ #define T2_SUPER -int -ccolumn_dfs( - const int m, /* in - number of rows in the matrix */ - const int jcol, /* in */ - int *perm_r, /* in */ - int *nseg, /* modified - with new segments appended */ - int *lsub_col, /* in - defines the RHS vector to start the dfs */ - int *segrep, /* modified - with new segments appended */ - int *repfnz, /* modified */ - int *xprune, /* modified */ - int *marker, /* modified */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* + +/*! \brief + * + *
  * Purpose
  * =======
- *   "column_dfs" performs a symbolic factorization on column jcol, and
+ *   CCOLUMN_DFS performs a symbolic factorization on column jcol, and
  *   decide the supernode boundary.
  *
  *   This routine does not use numeric values, but only use the RHS 
@@ -72,8 +60,25 @@
  * ============
  *     0  success;
  *   > 0  number of bytes allocated when run out of space.
- *
+ * 
*/ +int +ccolumn_dfs( + const int m, /* in - number of rows in the matrix */ + const int jcol, /* in */ + int *perm_r, /* in */ + int *nseg, /* modified - with new segments appended */ + int *lsub_col, /* in - defines the RHS vector to start the dfs */ + int *segrep, /* modified - with new segments appended */ + int *repfnz, /* modified */ + int *xprune, /* modified */ + int *marker, /* modified */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + int jcolp1, jcolm1, jsuper, nsuper, nextl; int k, krep, krow, kmark, kperm; int *marker2; /* Used for small panel LU */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccopy_to_ucol.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccopy_to_ucol.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ccopy_to_ucol.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,26 @@ - -/* +/*! @file ccopy_to_ucol.c + * \brief Copy a computed column of U to the compressed data structure + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "csp_defs.h" -#include "util.h" +#include "slu_cdefs.h" int ccopy_to_ucol( @@ -47,7 +46,6 @@ complex *ucol; int *usub, *xusub; int nzumax; - complex zero = {0.0, 0.0}; xsup = Glu->xsup; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cdiagonal.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cdiagonal.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cdiagonal.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cdiagonal.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,133 @@ + +/*! @file cdiagonal.c + * \brief Auxiliary routines to work with diagonal elements + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * 
+ */ + +#include "slu_cdefs.h" + +int cfill_diag(int n, NCformat *Astore) +/* fill explicit zeros on the diagonal entries, so that the matrix is not + structurally singular. */ +{ + complex *nzval = (complex *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + complex *nzval_new; + complex zero = {1.0, 0.0}; + int *rowind_new; + int i, j, diag; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = complexMalloc(nnz + fill); + rowind_new = intMalloc(nnz + fill); + fill = 0; + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + nzval_new[j + fill] = nzval[j]; + } + if (diag < 0) + { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill] = zero; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + Astore->nnz += fill; + return fill; +} + +int cdominate(int n, NCformat *Astore) +/* make the matrix diagonally dominant */ +{ + complex *nzval = (complex *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + complex *nzval_new; + int *rowind_new; + int i, j, diag; + double s; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = complexMalloc(nnz + fill); + rowind_new = intMalloc(nnz+ fill); + fill = 0; + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + nzval_new[j + fill] = nzval[j]; + s += slu_c_abs1(&nzval_new[j + fill]); + } + if (diag >= 0) { + nzval_new[diag+fill].r = s * 3.0; + nzval_new[diag+fill].i = 0.0; + } else { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill].r = s * 3.0; + nzval_new[colptr[i + 1] + fill].i = 0.0; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + else + { + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + { + if (rowind[j] == i) diag = j; + s += slu_c_abs1(&nzval[j]); + } + nzval[diag].r = s * 3.0; + nzval[diag].i = 0.0; + } + } + Astore->nnz += fill; + return fill; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgscon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgscon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgscon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgscon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,69 +1,80 @@ -/* +/*! @file cgscon.c + * \brief Estimates reciprocal of the condition number of a general matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routines CGECON.
+ * 
*/ + /* * File name: cgscon.c * History: Modified from lapack routines CGECON. */ #include -#include "csp_defs.h" +#include "slu_cdefs.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   CGSCON estimates the reciprocal of the condition number of a general 
+ *   real matrix A, in either the 1-norm or the infinity-norm, using   
+ *   the LU factorization computed by CGETRF.
+ *
+ *   An estimate is obtained for norm(inv(A)), and the reciprocal of the   
+ *   condition number is computed as   
+ *      RCOND = 1 / ( norm(A) * norm(inv(A)) ).   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ * 
+ *   Arguments   
+ *   =========   
+ *
+ *    NORM    (input) char*
+ *            Specifies whether the 1-norm condition number or the   
+ *            infinity-norm condition number is required:   
+ *            = '1' or 'O':  1-norm;   
+ *            = 'I':         Infinity-norm.
+ *	    
+ *    L       (input) SuperMatrix*
+ *            The factor L from the factorization Pr*A*Pc=L*U as computed by
+ *            cgstrf(). Use compressed row subscripts storage for supernodes,
+ *            i.e., L has types: Stype = SLU_SC, Dtype = SLU_C, Mtype = SLU_TRLU.
+ * 
+ *    U       (input) SuperMatrix*
+ *            The factor U from the factorization Pr*A*Pc=L*U as computed by
+ *            cgstrf(). Use column-wise storage scheme, i.e., U has types:
+ *            Stype = SLU_NC, Dtype = SLU_C, Mtype = SLU_TRU.
+ *	    
+ *    ANORM   (input) float
+ *            If NORM = '1' or 'O', the 1-norm of the original matrix A.   
+ *            If NORM = 'I', the infinity-norm of the original matrix A.
+ *	    
+ *    RCOND   (output) float*
+ *           The reciprocal of the condition number of the matrix A,   
+ *           computed as RCOND = 1/(norm(A) * norm(inv(A))).
+ *	    
+ *    INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *
+ *    ===================================================================== 
+ * 
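cgscon itself is not reached from the Python wrappers touched by this patch, but the quantity it estimates can be written out directly for a small example. A dense NumPy sketch of RCOND = 1 / (norm(A) * norm(inv(A))), with an arbitrary 3x3 matrix standing in for the factored matrix:

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
norm_A = np.linalg.norm(A, 1)                    # 1-norm (largest column sum) of A
norm_Ainv = np.linalg.norm(np.linalg.inv(A), 1)  # 1-norm of inv(A)
rcond = 1.0 / (norm_A * norm_Ainv)               # the reciprocal condition number cgscon estimates
print(rcond)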
+ */ void cgscon(char *norm, SuperMatrix *L, SuperMatrix *U, float anorm, float *rcond, SuperLUStat_t *stat, int *info) { -/* - Purpose - ======= - - CGSCON estimates the reciprocal of the condition number of a general - real matrix A, in either the 1-norm or the infinity-norm, using - the LU factorization computed by CGETRF. - - An estimate is obtained for norm(inv(A)), and the reciprocal of the - condition number is computed as - RCOND = 1 / ( norm(A) * norm(inv(A)) ). - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - NORM (input) char* - Specifies whether the 1-norm condition number or the - infinity-norm condition number is required: - = '1' or 'O': 1-norm; - = 'I': Infinity-norm. - - L (input) SuperMatrix* - The factor L from the factorization Pr*A*Pc=L*U as computed by - cgstrf(). Use compressed row subscripts storage for supernodes, - i.e., L has types: Stype = SLU_SC, Dtype = SLU_C, Mtype = SLU_TRLU. - - U (input) SuperMatrix* - The factor U from the factorization Pr*A*Pc=L*U as computed by - cgstrf(). Use column-wise storage scheme, i.e., U has types: - Stype = SLU_NC, Dtype = SLU_C, Mtype = TRU. - - ANORM (input) float - If NORM = '1' or 'O', the 1-norm of the original matrix A. - If NORM = 'I', the infinity-norm of the original matrix A. - - RCOND (output) float* - The reciprocal of the condition number of the matrix A, - computed as RCOND = 1/(norm(A) * norm(inv(A))). - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - ===================================================================== -*/ /* Local variables */ int kase, kase1, onenrm, i; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsequ.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsequ.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsequ.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsequ.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,81 +1,90 @@ - -/* +/*! @file cgsequ.c + * \brief Computes row and column scalings + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from LAPACK routine CGEEQU
+ * 
*/ /* * File name: cgsequ.c * History: Modified from LAPACK routine CGEEQU */ #include -#include "csp_defs.h" -#include "util.h" +#include "slu_cdefs.h" + + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ *
+ *   CGSEQU computes row and column scalings intended to equilibrate an   
+ *   M-by-N sparse matrix A and reduce its condition number. R returns the row
+ *   scale factors and C the column scale factors, chosen to try to make   
+ *   the largest element in each row and column of the matrix B with   
+ *   elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.   
+ *
+ *   R(i) and C(j) are restricted to be between SMLNUM = smallest safe   
+ *   number and BIGNUM = largest safe number.  Use of these scaling   
+ *   factors is not guaranteed to reduce the condition number of A but   
+ *   works well in practice.   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input) SuperMatrix*
+ *           The matrix of dimension (A->nrow, A->ncol) whose equilibration
+ *           factors are to be computed. The type of A can be:
+ *           Stype = SLU_NC; Dtype = SLU_C; Mtype = SLU_GE.
+ *	    
+ *   R       (output) float*, size A->nrow
+ *           If INFO = 0 or INFO > M, R contains the row scale factors   
+ *           for A.
+ *	    
+ *   C       (output) float*, size A->ncol
+ *           If INFO = 0,  C contains the column scale factors for A.
+ *	    
+ *   ROWCND  (output) float*
+ *           If INFO = 0 or INFO > M, ROWCND contains the ratio of the   
+ *           smallest R(i) to the largest R(i).  If ROWCND >= 0.1 and   
+ *           AMAX is neither too large nor too small, it is not worth   
+ *           scaling by R.
+ *	    
+ *   COLCND  (output) float*
+ *           If INFO = 0, COLCND contains the ratio of the smallest   
+ *           C(i) to the largest C(i).  If COLCND >= 0.1, it is not   
+ *           worth scaling by C.
+ *	    
+ *   AMAX    (output) float*
+ *           Absolute value of largest matrix element.  If AMAX is very   
+ *           close to overflow or very close to underflow, the matrix   
+ *           should be scaled.
+ *	    
+ *   INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *           > 0:  if INFO = i,  and i is   
+ *                 <= A->nrow:  the i-th row of A is exactly zero   
+ *                 >  A->ncol:  the (i-M)-th column of A is exactly zero   
+ *
+ *   ===================================================================== 
+ * 
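As an illustration of the scaling described above, here is a NumPy sketch of the CGEEQU-style rule (not the SuperLU implementation, and ignoring the SMLNUM/BIGNUM safeguards and empty rows or columns): R(i) is the reciprocal of the largest magnitude in row i, and C(j) is the reciprocal of the largest magnitude in column j of the row-scaled matrix.

import numpy as np

A = np.array([[1.0e+4, 2.0,     0.0],
              [3.0,    5.0e-3,  1.0],
              [0.0,    7.0,     2.0e+2]])
absA = np.abs(A)
r = 1.0 / absA.max(axis=1)                  # row scale factors R(i)
c = 1.0 / (r[:, None] * absA).max(axis=0)   # column scale factors C(j)
B = r[:, None] * A * c[None, :]             # B(i,j) = R(i) * A(i,j) * C(j)
print(np.abs(B).max(axis=0))                # every column maximum of |B| is now 1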
+ */ void cgsequ(SuperMatrix *A, float *r, float *c, float *rowcnd, float *colcnd, float *amax, int *info) { -/* - Purpose - ======= - - CGSEQU computes row and column scalings intended to equilibrate an - M-by-N sparse matrix A and reduce its condition number. R returns the row - scale factors and C the column scale factors, chosen to try to make - the largest element in each row and column of the matrix B with - elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1. - - R(i) and C(j) are restricted to be between SMLNUM = smallest safe - number and BIGNUM = largest safe number. Use of these scaling - factors is not guaranteed to reduce the condition number of A but - works well in practice. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input) SuperMatrix* - The matrix of dimension (A->nrow, A->ncol) whose equilibration - factors are to be computed. The type of A can be: - Stype = SLU_NC; Dtype = SLU_C; Mtype = SLU_GE. - - R (output) float*, size A->nrow - If INFO = 0 or INFO > M, R contains the row scale factors - for A. - - C (output) float*, size A->ncol - If INFO = 0, C contains the column scale factors for A. - - ROWCND (output) float* - If INFO = 0 or INFO > M, ROWCND contains the ratio of the - smallest R(i) to the largest R(i). If ROWCND >= 0.1 and - AMAX is neither too large nor too small, it is not worth - scaling by R. - - COLCND (output) float* - If INFO = 0, COLCND contains the ratio of the smallest - C(i) to the largest C(i). If COLCND >= 0.1, it is not - worth scaling by C. - - AMAX (output) float* - Absolute value of largest matrix element. If AMAX is very - close to overflow or very close to underflow, the matrix - should be scaled. - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, and i is - <= A->nrow: the i-th row of A is exactly zero - > A->ncol: the (i-M)-th column of A is exactly zero - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsisx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsisx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsisx.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsisx.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,693 @@ + +/*! @file cgsisx.c + * \brief Gives the approximate solutions of linear equations A*X=B or A'*X=B + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ +#include "slu_cdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * CGSISX gives the approximate solutions of linear equations A*X=B or A'*X=B,
+ * using the ILU factorization from cgsitrf(). An estimation of
+ * the condition number is provided. It performs the following steps:
+ *
+ *   1. If A is stored column-wise (A->Stype = SLU_NC):
+ *  
+ *	1.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A is
+ *	     overwritten by diag(R)*A*diag(C) and B by diag(R)*B
+ *	     (if options->Trans=NOTRANS) or diag(C)*B (if options->Trans
+ *	     = TRANS or CONJ).
+ *
+ *	1.2. Permute columns of A, forming A*Pc, where Pc is a permutation
+ *	     matrix that usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	1.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the matrix A (after equilibration if options->Equil = YES)
+ *	     as Pr*A*Pc = L*U, with Pr determined by partial pivoting.
+ *
+ *	1.4. Compute the reciprocal pivot growth factor.
+ *
+ *	1.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n),
+ *	     and info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	1.6. The system of equations is solved for X using the factored form
+ *	     of A.
+ *
+ *	1.7. options->IterRefine is not used
+ *
+ *	1.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *	1.9. options for ILU only
+ *	     1) If options->RowPerm = LargeDiag, MC64 is used to scale and
+ *		permute the matrix to an I-matrix, that is Pr*Dr*A*Dc has
+ *		entries of modulus 1 on the diagonal and off-diagonal entries
+ *		of modulus at most 1. If MC64 fails, dgsequ() is used to
+ *		equilibrate the system.
+ *	     2) options->ILU_DropTol = tau is the threshold for dropping.
+ *		For L, it is used directly (for the whole row in a supernode);
+ *		For U, ||A(:,i)||_oo * tau is used as the threshold
+ *	        for the	i-th column.
+ *		If a secondary dropping rule is required, tau will
+ *	        also be used to compute the second threshold.
+ *	     3) options->ILU_FillFactor = gamma, used as the initial guess
+ *		of memory growth.
+ *		If a secondary dropping rule is required, it will also
+ *              be used as an upper bound of the memory.
+ *	     4) options->ILU_DropRule specifies the dropping rule.
+ *		Option		Explanation
+ *		======		===========
+ *		DROP_BASIC:	Basic dropping rule, supernodal based ILU.
+ *		DROP_PROWS:	Supernodal based ILUTP, p = gamma * nnz(A) / n.
+ *		DROP_COLUMN:	Variation of ILUTP, for j-th column,
+ *				p = gamma * nnz(A(:,j)).
+ *		DROP_AREA;	Variation of ILUTP, for j-th column, use
+ *				nnz(F(:,1:j)) / nnz(A(:,1:j)) to control the
+ *				memory.
+ *		DROP_DYNAMIC:	Modify the threshold tau during the
+ *				factorization.
+ *				If nnz(L(:,1:j)) / nnz(A(:,1:j)) < gamma
+ *				    tau_L(j) := MIN(1, tau_L(j-1) * 2);
+ *				Otherwise
+ *				    tau_L(j) := MIN(1, tau_L(j-1) * 2);
+ *				tau_U(j) uses the similar rule.
+ *				NOTE: the thresholds used by L and U are
+ *				independent.
+ *		DROP_INTERP:	Compute the second dropping threshold by
+ *				interpolation instead of sorting (default).
+ *				In this case, the actual fill ratio is not
+ *				guaranteed smaller than gamma.
+ *		DROP_PROWS, DROP_COLUMN and DROP_AREA are mutually exclusive.
+ *		( The default option is DROP_BASIC | DROP_AREA. )
+ *	     5) options->ILU_Norm is the criterion of computing the average
+ *		value of a row in L.
+ *		options->ILU_Norm	average(x[1:n])
+ *		=================	===============
+ *		ONE_NORM		||x||_1 / n
+ *		TWO_NORM		||x||_2 / sqrt(n)
+ *		INF_NORM		max{|x[i]|}
+ *	     6) options->ILU_MILU specifies the type of MILU's variation.
+ *		= SILU (default): do not perform MILU;
+ *		= SMILU_1 (not recommended):
+ *		    U(i,i) := U(i,i) + sum(dropped entries);
+ *		= SMILU_2:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(dropped entries);
+ *		= SMILU_3:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(|dropped entries|);
+ *		NOTE: Even SMILU_1 does not preserve the column sum because of
+ *		late dropping.
+ *	     7) options->ILU_FillTol is used as the perturbation when
+ *		encountering zero pivots. If some U(i,i) = 0, so that U is
+ *		exactly singular, then
+ *		   U(i,i) := ||A(:,i)|| * options->ILU_FillTol ** (1 - i / n).
+ *
+ *   2. If A is stored row-wise (A->Stype = SLU_NR), apply the above algorithm
+ *	to the transpose of A:
+ *
+ *	2.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A' is
+ *	     overwritten by diag(R)*A'*diag(C) and B by diag(R)*B
+ *	     (if trans='N') or diag(C)*B (if trans = 'T' or 'C').
+ *
+ *	2.2. Permute columns of transpose(A) (rows of A),
+ *	     forming transpose(A)*Pc, where Pc is a permutation matrix that
+ *	     usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	2.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the transpose(A) (after equilibration if
+ *	     options->Fact = YES) as Pr*transpose(A)*Pc = L*U with the
+ *	     permutation Pr determined by partial pivoting.
+ *
+ *	2.4. Compute the reciprocal pivot growth factor.
+ *
+ *	2.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		 U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n).
+ *	     And info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	2.6. The system of equations is solved for X using the factored form
+ *	     of transpose(A).
+ *
+ *	2.7. options->IterRefine is not used.
+ *
+ *	2.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the LU decomposition will be performed and how the
+ *	   system will be solved.
+ *
+ * A	   (input/output) SuperMatrix*
+ *	   Matrix A in A*X=B, of dimension (A->nrow, A->ncol). The number
+ *	   of the linear equations is A->nrow. Currently, the type of A can be:
+ *	   Stype = SLU_NC or SLU_NR, Dtype = SLU_C, Mtype = SLU_GE.
+ *	   In the future, more general A may be handled.
+ *
+ *	   On entry, If options->Fact = FACTORED and equed is not 'N',
+ *	   then A must have been equilibrated by the scaling factors in
+ *	   R and/or C.
+ *	   On exit, A is not modified if options->Equil = NO, or if
+ *	   options->Equil = YES but equed = 'N' on exit.
+ *	   Otherwise, if options->Equil = YES and equed is not 'N',
+ *	   A is scaled as follows:
+ *	   If A->Stype = SLU_NC:
+ *	     equed = 'R':  A := diag(R) * A
+ *	     equed = 'C':  A := A * diag(C)
+ *	     equed = 'B':  A := diag(R) * A * diag(C).
+ *	   If A->Stype = SLU_NR:
+ *	     equed = 'R':  transpose(A) := diag(R) * transpose(A)
+ *	     equed = 'C':  transpose(A) := transpose(A) * diag(C)
+ *	     equed = 'B':  transpose(A) := diag(R) * transpose(A) * diag(C).
+ *
+ * perm_c  (input/output) int*
+ *	   If A->Stype = SLU_NC, Column permutation vector of size A->ncol,
+ *	   which defines the permutation matrix Pc; perm_c[i] = j means
+ *	   column i of A is in position j in A*Pc.
+ *	   On exit, perm_c may be overwritten by the product of the input
+ *	   perm_c and a permutation that postorders the elimination tree
+ *	   of Pc'*A'*A*Pc; perm_c is not changed if the elimination tree
+ *	   is already in postorder.
+ *
+ *	   If A->Stype = SLU_NR, column permutation vector of size A->nrow,
+ *	   which describes permutation of columns of transpose(A) 
+ *	   (rows of A) as described above.
+ *
+ * perm_r  (input/output) int*
+ *	   If A->Stype = SLU_NC, row permutation vector of size A->nrow, 
+ *	   which defines the permutation matrix Pr, and is determined
+ *	   by partial pivoting.  perm_r[i] = j means row i of A is in 
+ *	   position j in Pr*A.
+ *
+ *	   If A->Stype = SLU_NR, permutation vector of size A->ncol, which
+ *	   determines permutation of rows of transpose(A)
+ *	   (columns of A) as described above.
+ *
+ *	   If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	   will try to use the input perm_r, unless a certain threshold
+ *	   criterion is violated. In that case, perm_r is overwritten by a
+ *	   new permutation determined by partial pivoting or diagonal
+ *	   threshold pivoting.
+ *	   Otherwise, perm_r is output argument.
+ *
+ * etree   (input/output) int*,  dimension (A->ncol)
+ *	   Elimination tree of Pc'*A'*A*Pc.
+ *	   If options->Fact != FACTORED and options->Fact != DOFACT,
+ *	   etree is an input argument, otherwise it is an output argument.
+ *	   Note: etree is a vector of parent pointers for a forest whose
+ *	   vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *
+ * equed   (input/output) char*
+ *	   Specifies the form of equilibration that was done.
+ *	   = 'N': No equilibration.
+ *	   = 'R': Row equilibration, i.e., A was premultiplied by diag(R).
+ *	   = 'C': Column equilibration, i.e., A was postmultiplied by diag(C).
+ *	   = 'B': Both row and column equilibration, i.e., A was replaced 
+ *		  by diag(R)*A*diag(C).
+ *	   If options->Fact = FACTORED, equed is an input argument,
+ *	   otherwise it is an output argument.
+ *
+ * R	   (input/output) float*, dimension (A->nrow)
+ *	   The row scale factors for A or transpose(A).
+ *	   If equed = 'R' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the left by diag(R).
+ *	   If equed = 'N' or 'C', R is not accessed.
+ *	   If options->Fact = FACTORED, R is an input argument,
+ *	       otherwise, R is output.
+ *	   If options->Fact = FACTORED and equed = 'R' or 'B', each element
+ *	       of R must be positive.
+ *
+ * C	   (input/output) float*, dimension (A->ncol)
+ *	   The column scale factors for A or transpose(A).
+ *	   If equed = 'C' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the right by diag(C).
+ *	   If equed = 'N' or 'R', C is not accessed.
+ *	   If options->Fact = FACTORED, C is an input argument,
+ *	       otherwise, C is output.
+ *	   If options->Fact = FACTORED and equed = 'C' or 'B', each element
+ *	       of C must be positive.
+ *
+ * L	   (output) SuperMatrix*
+ *	   The factor L from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses compressed row subscripts storage for supernodes, i.e.,
+ *	   L has types: Stype = SLU_SC, Dtype = SLU_C, Mtype = SLU_TRLU.
+ *
+ * U	   (output) SuperMatrix*
+ *	   The factor U from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses column-wise storage scheme, i.e., U has types:
+ *	   Stype = SLU_NC, Dtype = SLU_C, Mtype = SLU_TRU.
+ *
+ * work    (workspace/output) void*, size (lwork) (in bytes)
+ *	   User supplied workspace, should be large enough
+ *	   to hold data structures for factors L and U.
+ *	   On exit, if fact is not 'F', L and U point to this array.
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 mem_usage->total_needed; no other side effects.
+ *
+ *	   See argument 'mem_usage' for memory usage statistics.
+ *
+ * B	   (input/output) SuperMatrix*
+ *	   B has types: Stype = SLU_DN, Dtype = SLU_C, Mtype = SLU_GE.
+ *	   On entry, the right hand side matrix.
+ *	   If B->ncol = 0, only LU decomposition is performed, the triangular
+ *			   solve is skipped.
+ *	   On exit,
+ *	      if equed = 'N', B is not modified; otherwise
+ *	      if A->Stype = SLU_NC:
+ *		 if options->Trans = NOTRANS and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *	      if A->Stype = SLU_NR:
+ *		 if options->Trans = NOTRANS and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B.
+ *
+ * X	   (output) SuperMatrix*
+ *	   X has types: Stype = SLU_DN, Dtype = SLU_C, Mtype = SLU_GE.
+ *	   If info = 0 or info = A->ncol+1, X contains the solution matrix
+ *	   to the original system of equations. Note that A and B are modified
+ *	   on exit if equed is not 'N', and the solution to the equilibrated
+ *	   system is inv(diag(C))*X if options->Trans = NOTRANS and
+ *	   equed = 'C' or 'B', or inv(diag(R))*X if options->Trans = 'T' or 'C'
+ *	   and equed = 'R' or 'B'.
+ *
+ * recip_pivot_growth (output) float*
+ *	   The reciprocal pivot growth factor max_j( norm(A_j)/norm(U_j) ).
+ *	   The infinity norm is used. If recip_pivot_growth is much less
+ *	   than 1, the stability of the LU factorization could be poor.
+ *
+ * rcond   (output) float*
+ *	   The estimate of the reciprocal condition number of the matrix A
+ *	   after equilibration (if done). If rcond is less than the machine
+ *	   precision (in particular, if rcond = 0), the matrix is singular
+ *	   to working precision. This condition is indicated by a return
+ *	   code of info > 0.
+ *
+ * mem_usage (output) mem_usage_t*
+ *	   Record the memory usage statistics, consisting of following fields:
+ *	   - for_lu (float)
+ *	     The amount of space used in bytes for L\U data structures.
+ *	   - total_needed (float)
+ *	     The amount of space needed in bytes to perform factorization.
+ *	   - expansions (int)
+ *	     The number of memory expansions during the LU factorization.
+ *
+ * stat   (output) SuperLUStat_t*
+ *	  Record the statistics on runtime and floating-point operation count.
+ *	  See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info    (output) int*
+ *	   = 0: successful exit
+ *	   < 0: if info = -i, the i-th argument had an illegal value
+ *	   > 0: if info = i, and i is
+ *		<= A->ncol: number of zero pivots. They are replaced by small
+ *		      entries due to options->ILU_FillTol.
+ *		= A->ncol+1: U is nonsingular, but RCOND is less than machine
+ *		      precision, meaning that the matrix is singular to
+ *		      working precision. Nevertheless, the solution and
+ *		      error bounds are computed because there are a number
+ *		      of situations where the computed solution can be more
+ *		      accurate than the value of RCOND would suggest.
+ *		> A->ncol+1: number of bytes allocated when memory allocation
+ *		      failure occurred, plus A->ncol.
+ * 
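For illustration, a minimal sketch of how the driver documented above is typically invoked. It assumes the caller has already built A (SLU_NC), B and X (SLU_DN) and allocated perm_c, perm_r, etree, R and C, and it relies on the usual SuperLU helpers ilu_set_default_options(), StatInit() and StatFree() from slu_util.h; the wrapper name solve_with_ilu is hypothetical and not part of the patch.

#include <stdio.h>
#include "slu_cdefs.h"

/* Sketch only: ILU-preconditioned solve with the expert driver above. */
static int solve_with_ilu(SuperMatrix *A, SuperMatrix *B, SuperMatrix *X,
                          int *perm_c, int *perm_r, int *etree,
                          float *R, float *C)
{
    superlu_options_t options;
    SuperLUStat_t stat;
    SuperMatrix L, U;
    mem_usage_t mem_usage;
    char equed[1];
    float rpg, rcond;
    int info;

    ilu_set_default_options(&options);   /* DOFACT, Equil = YES, ILU defaults */
    StatInit(&stat);

    cgsisx(&options, A, perm_c, perm_r, etree, equed, R, C,
           &L, &U, NULL, 0,              /* work = NULL, lwork = 0: internal malloc */
           B, X, &rpg, &rcond, &mem_usage, &stat, &info);

    if (info > 0 && info <= A->ncol)
        printf("%d zero pivots were perturbed (see ILU_FillTol)\n", info);
    else if (info == A->ncol + 1)
        printf("factorization finished, but rcond = %e is below machine precision\n",
               rcond);

    StatFree(&stat);
    return info;
}

A workspace query is also possible: call with lwork = -1 and read the estimate back from mem_usage.total_needed, as described for the lwork argument above.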
+ */ + +void +cgsisx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, float *R, float *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, + float *recip_pivot_growth, float *rcond, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info) +{ + + DNformat *Bstore, *Xstore; + complex *Bmat, *Xmat; + int ldb, ldx, nrhs; + SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ + SuperMatrix AC; /* Matrix postmultiplied by Pc */ + int colequ, equil, nofact, notran, rowequ, permc_spec, mc64; + trans_t trant; + char norm[1]; + int i, j, info1; + float amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; + int relax, panel_size; + float diag_pivot_thresh; + double t0; /* temporary time */ + double *utime; + + int *perm = NULL; + + /* External functions */ + extern float clangs(char *, SuperMatrix *); + + Bstore = B->Store; + Xstore = X->Store; + Bmat = Bstore->nzval; + Xmat = Xstore->nzval; + ldb = Bstore->lda; + ldx = Xstore->lda; + nrhs = B->ncol; + + *info = 0; + nofact = (options->Fact != FACTORED); + equil = (options->Equil == YES); + notran = (options->Trans == NOTRANS); + mc64 = (options->RowPerm == LargeDiag); + if ( nofact ) { + *(unsigned char *)equed = 'N'; + rowequ = FALSE; + colequ = FALSE; + } else { + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + smlnum = slamch_("Safe minimum"); + bignum = 1. / smlnum; + } + + /* Test the input parameters */ + if (!nofact && options->Fact != DOFACT && options->Fact != SamePattern && + options->Fact != SamePattern_SameRowPerm && + !notran && options->Trans != TRANS && options->Trans != CONJ && + !equil && options->Equil != NO) + *info = -1; + else if ( A->nrow != A->ncol || A->nrow < 0 || + (A->Stype != SLU_NC && A->Stype != SLU_NR) || + A->Dtype != SLU_C || A->Mtype != SLU_GE ) + *info = -2; + else if (options->Fact == FACTORED && + !(rowequ || colequ || lsame_(equed, "N"))) + *info = -6; + else { + if (rowequ) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, R[j]); + rcmax = SUPERLU_MAX(rcmax, R[j]); + } + if (rcmin <= 0.) *info = -7; + else if ( A->nrow > 0) + rowcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else rowcnd = 1.; + } + if (colequ && *info == 0) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, C[j]); + rcmax = SUPERLU_MAX(rcmax, C[j]); + } + if (rcmin <= 0.) *info = -8; + else if (A->nrow > 0) + colcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else colcnd = 1.; + } + if (*info == 0) { + if ( lwork < -1 ) *info = -12; + else if ( B->ncol < 0 || Bstore->lda < SUPERLU_MAX(0, A->nrow) || + B->Stype != SLU_DN || B->Dtype != SLU_C || + B->Mtype != SLU_GE ) + *info = -13; + else if ( X->ncol < 0 || Xstore->lda < SUPERLU_MAX(0, A->nrow) || + (B->ncol != 0 && B->ncol != X->ncol) || + X->Stype != SLU_DN || + X->Dtype != SLU_C || X->Mtype != SLU_GE ) + *info = -14; + } + } + if (*info != 0) { + i = -(*info); + xerbla_("cgsisx", &i); + return; + } + + /* Initialization for factor parameters */ + panel_size = sp_ienv(1); + relax = sp_ienv(2); + diag_pivot_thresh = options->DiagPivotThresh; + + utime = stat->utime; + + /* Convert A to SLU_NC format when necessary. 
*/ + if ( A->Stype == SLU_NR ) { + NRformat *Astore = A->Store; + AA = (SuperMatrix *) SUPERLU_MALLOC( sizeof(SuperMatrix) ); + cCreate_CompCol_Matrix(AA, A->ncol, A->nrow, Astore->nnz, + Astore->nzval, Astore->colind, Astore->rowptr, + SLU_NC, A->Dtype, A->Mtype); + if ( notran ) { /* Reverse the transpose argument. */ + trant = TRANS; + notran = 0; + } else { + trant = NOTRANS; + notran = 1; + } + } else { /* A->Stype == SLU_NC */ + trant = options->Trans; + AA = A; + } + + if ( nofact ) { + register int i, j; + NCformat *Astore = AA->Store; + int nnz = Astore->nnz; + int *colptr = Astore->colptr; + int *rowind = Astore->rowind; + complex *nzval = (complex *)Astore->nzval; + int n = AA->nrow; + + if ( mc64 ) { + *equed = 'B'; + rowequ = colequ = 1; + t0 = SuperLU_timer_(); + if ((perm = intMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for perm[]"); + + info1 = cldperm(5, n, nnz, colptr, rowind, nzval, perm, R, C); + + if (info1 > 0) { /* MC64 fails, call cgsequ() later */ + mc64 = 0; + SUPERLU_FREE(perm); + perm = NULL; + } else { + for (i = 0; i < n; i++) { + R[i] = exp(R[i]); + C[i] = exp(C[i]); + } + /* permute and scale the matrix */ + for (j = 0; j < n; j++) { + for (i = colptr[j]; i < colptr[j + 1]; i++) { + cs_mult(&nzval[i], &nzval[i], R[rowind[i]] * C[j]); + rowind[i] = perm[rowind[i]]; + } + } + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + if ( !mc64 & equil ) { + t0 = SuperLU_timer_(); + /* Compute row and column scalings to equilibrate the matrix A. */ + cgsequ(AA, R, C, &rowcnd, &colcnd, &amax, &info1); + + if ( info1 == 0 ) { + /* Equilibrate matrix A. */ + claqgs(AA, R, C, rowcnd, colcnd, amax, equed); + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + } + + if ( nrhs > 0 ) { + /* Scale the right hand side if equilibration was performed. */ + if ( notran ) { + if ( rowequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + cs_mult(&Bmat[i+j*ldb], &Bmat[i+j*ldb], R[i]); + } + } + } else if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + cs_mult(&Bmat[i+j*ldb], &Bmat[i+j*ldb], C[i]); + } + } + } + + if ( nofact ) { + + t0 = SuperLU_timer_(); + /* + * Gnet column permutation vector perm_c[], according to permc_spec: + * permc_spec = NATURAL: natural ordering + * permc_spec = MMD_AT_PLUS_A: minimum degree on structure of A'+A + * permc_spec = MMD_ATA: minimum degree on structure of A'*A + * permc_spec = COLAMD: approximate minimum degree column ordering + * permc_spec = MY_PERMC: the ordering already supplied in perm_c[] + */ + permc_spec = options->ColPerm; + if ( permc_spec != MY_PERMC && options->Fact == DOFACT ) + get_perm_c(permc_spec, AA, perm_c); + utime[COLPERM] = SuperLU_timer_() - t0; + + t0 = SuperLU_timer_(); + sp_preorder(options, AA, perm_c, etree, &AC); + utime[ETREE] = SuperLU_timer_() - t0; + + /* Compute the LU factorization of A*Pc. */ + t0 = SuperLU_timer_(); + cgsitrf(options, &AC, relax, panel_size, etree, work, lwork, + perm_c, perm_r, L, U, stat, info); + utime[FACT] = SuperLU_timer_() - t0; + + if ( lwork == -1 ) { + mem_usage->total_needed = *info - A->ncol; + return; + } + } + + if ( options->PivotGrowth ) { + if ( *info > 0 ) return; + + /* Compute the reciprocal pivot growth factor *recip_pivot_growth. */ + *recip_pivot_growth = cPivotGrowth(A->ncol, AA, perm_c, L, U); + } + + if ( options->ConditionNumber ) { + /* Estimate the reciprocal of the condition number of A. 
*/ + t0 = SuperLU_timer_(); + if ( notran ) { + *(unsigned char *)norm = '1'; + } else { + *(unsigned char *)norm = 'I'; + } + anorm = clangs(norm, AA); + cgscon(norm, L, U, anorm, rcond, stat, &info1); + utime[RCOND] = SuperLU_timer_() - t0; + } + + if ( nrhs > 0 ) { + /* Compute the solution matrix X. */ + for (j = 0; j < nrhs; j++) /* Save a copy of the right hand sides */ + for (i = 0; i < B->nrow; i++) + Xmat[i + j*ldx] = Bmat[i + j*ldb]; + + t0 = SuperLU_timer_(); + cgstrs (trant, L, U, perm_c, perm_r, X, stat, &info1); + utime[SOLVE] = SuperLU_timer_() - t0; + + /* Transform the solution matrix X to a solution of the original + system. */ + if ( notran ) { + if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + cs_mult(&Xmat[i+j*ldx], &Xmat[i+j*ldx], C[i]); + } + } + } else { + if ( rowequ ) { + if (perm) { + complex *tmp; + int n = A->nrow; + + if ((tmp = complexMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for tmp[]"); + for (j = 0; j < nrhs; j++) { + for (i = 0; i < n; i++) + tmp[i] = Xmat[i + j * ldx]; /*dcopy*/ + for (i = 0; i < n; i++) + cs_mult(&Xmat[i+j*ldx], &tmp[perm[i]], R[i]); + } + SUPERLU_FREE(tmp); + } else { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + cs_mult(&Xmat[i+j*ldx], &Xmat[i+j*ldx], R[i]); + } + } + } + } + } /* end if nrhs > 0 */ + + if ( options->ConditionNumber ) { + /* Set INFO = A->ncol+1 if the matrix is singular to working precision. */ + if ( *rcond < slamch_("E") && *info == 0) *info = A->ncol + 1; + } + + if (perm) SUPERLU_FREE(perm); + + if ( nofact ) { + ilu_cQuerySpace(L, U, mem_usage); + Destroy_CompCol_Permuted(&AC); + } + if ( A->Stype == SLU_NR ) { + Destroy_SuperMatrix_Store(AA); + SUPERLU_FREE(AA); + } + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsitrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsitrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsitrf.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsitrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,628 @@ + +/*! @file cgsitf.c + * \brief Computes an ILU factorization of a general sparse matrix + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */
+
+#include "slu_cdefs.h"
+
+#ifdef DEBUG
+int num_drop_L;
+#endif
+
+/*! \brief
+ *
+ * 
+ * Purpose
+ * =======
+ *
+ * CGSITRF computes an ILU factorization of a general sparse m-by-n
+ * matrix A using partial pivoting with row interchanges.
+ * The factorization has the form
+ *     Pr * A = L * U
+ * where Pr is a row permutation matrix, L is lower triangular with unit
+ * diagonal elements (lower trapezoidal if A->nrow > A->ncol), and U is upper
+ * triangular (upper trapezoidal if A->nrow < A->ncol).
+ *
+ * See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the ILU decomposition will be performed.
+ *
+ * A	    (input) SuperMatrix*
+ *	    Original matrix A, permuted by columns, of dimension
+ *	    (A->nrow, A->ncol). The type of A can be:
+ *	    Stype = SLU_NCP; Dtype = SLU_C; Mtype = SLU_GE.
+ *
+ * relax    (input) int
+ *	    To control degree of relaxing supernodes. If the number
+ *	    of nodes (columns) in a subtree of the elimination tree is less
+ *	    than relax, this subtree is considered as one supernode,
+ *	    regardless of the row structures of those columns.
+ *
+ * panel_size (input) int
+ *	    A panel consists of at most panel_size consecutive columns.
+ *
+ * etree    (input) int*, dimension (A->ncol)
+ *	    Elimination tree of A'*A.
+ *	    Note: etree is a vector of parent pointers for a forest whose
+ *	    vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *	    On input, the columns of A should be permuted so that the
+ *	    etree is in a certain postorder.
+ *
+ * work     (input/output) void*, size (lwork) (in bytes)
+ *	    User-supplied work space and space for the output data structures.
+ *	    Not referenced if lwork = 0;
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 *info; no other side effects.
+ *
+ * perm_c   (input) int*, dimension (A->ncol)
+ *	    Column permutation vector, which defines the
+ *	    permutation matrix Pc; perm_c[i] = j means column i of A is
+ *	    in position j in A*Pc.
+ *	    When searching for diagonal, perm_c[*] is applied to the
+ *	    row subscripts of A, so that diagonal threshold pivoting
+ *	    can find the diagonal of A, rather than that of A*Pc.
+ *
+ * perm_r   (input/output) int*, dimension (A->nrow)
+ *	    Row permutation vector which defines the permutation matrix Pr,
+ *	    perm_r[i] = j means row i of A is in position j in Pr*A.
+ *	    If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	       will try to use the input perm_r, unless a certain threshold
+ *	       criterion is violated. In that case, perm_r is overwritten by
+ *	       a new permutation determined by partial pivoting or diagonal
+ *	       threshold pivoting.
+ *	    Otherwise, perm_r is output argument;
+ *
+ * L	    (output) SuperMatrix*
+ *	    The factor L from the factorization Pr*A=L*U; use compressed row
+ *	    subscripts storage for supernodes, i.e., L has type:
+ *	    Stype = SLU_SC, Dtype = SLU_C, Mtype = SLU_TRLU.
+ *
+ * U	    (output) SuperMatrix*
+ *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
+ *	    storage scheme, i.e., U has types: Stype = SLU_NC,
+ *	    Dtype = SLU_C, Mtype = SLU_TRU.
+ *
+ * stat     (output) SuperLUStat_t*
+ *	    Record the statistics on runtime and floating-point operation count.
+ *	    See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info     (output) int*
+ *	    = 0: successful exit
+ *	    < 0: if info = -i, the i-th argument had an illegal value
+ *	    > 0: if info = i, and i is
+ *	       <= A->ncol: number of zero pivots. They are replaced by small
+ *		  entries according to options->ILU_FillTol.
+ *	       > A->ncol: number of bytes allocated when memory allocation
+ *		  failure occurred, plus A->ncol. If lwork = -1, it is
+ *		  the estimated amount of space needed, plus A->ncol.
+ *
+ * ======================================================================
+ *
+ * Local Working Arrays:
+ * ======================
+ *   m = number of rows in the matrix
+ *   n = number of columns in the matrix
+ *
+ *   marker[0:3*m-1]: marker[i] = j means that node i has been
+ *	reached when working on column j.
+ *	Storage: relative to original row subscripts
+ *	NOTE: There are 4 of them:
+ *	      marker/marker1 are used for panel dfs, see (ilu_)dpanel_dfs.c;
+ *	      marker2 is used for inner-factorization, see (ilu_)dcolumn_dfs.c;
+ *	      marker_relax (has its own space) is used for relaxed supernodes.
+ *
+ *   parent[0:m-1]: parent vector used during dfs
+ *	Storage: relative to new row subscripts
+ *
+ *   xplore[0:m-1]: xplore[i] gives the location of the next (dfs)
+ *	unexplored neighbor of i in lsub[*]
+ *
+ *   segrep[0:nseg-1]: contains the list of supernodal representatives
+ *	in topological order of the dfs. A supernode representative is the
+ *	last column of a supernode.
+ *	The maximum size of segrep[] is n.
+ *
+ *   repfnz[0:W*m-1]: for a nonzero segment U[*,j] that ends at a
+ *	supernodal representative r, repfnz[r] is the location of the first
+ *	nonzero in this segment.  It is also used during the dfs: repfnz[r]>0
+ *	indicates the supernode r has been explored.
+ *	NOTE: There are W of them, each used for one column of a panel.
+ *
+ *   panel_lsub[0:W*m-1]: temporary for the nonzero row indices below
+ *	the panel diagonal. These are filled in during dpanel_dfs(), and are
+ *	used later in the inner LU factorization within the panel.
+ *	panel_lsub[]/dense[] pair forms the SPA data structure.
+ *	NOTE: There are W of them.
+ *
+ *   dense[0:W*m-1]: sparse accumulating (SPA) vector for intermediate values;
+ *		   NOTE: there are W of them.
+ *
+ *   tempv[0:*]: real temporary used for dense numeric kernels;
+ *	The size of this array is defined by NUM_TEMPV() in slu_util.h.
+ *	It is also used by the dropping routine ilu_ddrop_row().
+ * 
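As a rough sketch of how this routine is reached from the driver, the steps below mirror the factorization-only part of cgsisx (column ordering, preordering, then cgsitrf); the equilibration and MC64 row-permutation steps are left out. A must already be in SLU_NC format, and the helper name ilu_factor_only is hypothetical.

#include "slu_cdefs.h"

/* Sketch only: ILU factorization without a solve. */
static int ilu_factor_only(superlu_options_t *options, SuperMatrix *A,
                           int *perm_c, int *perm_r, int *etree,
                           SuperMatrix *L, SuperMatrix *U, SuperLUStat_t *stat)
{
    SuperMatrix AC;                    /* A postmultiplied by Pc */
    int panel_size = sp_ienv(1);
    int relax      = sp_ienv(2);
    int info;

    get_perm_c(options->ColPerm, A, perm_c);      /* build the column ordering */
    sp_preorder(options, A, perm_c, etree, &AC);  /* permute A, postorder etree */

    cgsitrf(options, &AC, relax, panel_size, etree,
            NULL, 0,                              /* work, lwork: internal malloc */
            perm_c, perm_r, L, U, stat, &info);

    Destroy_CompCol_Permuted(&AC);
    return info;   /* 0, a zero-pivot count, or a memory-failure code as above */
}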
+ */ + +void +cgsitrf(superlu_options_t *options, SuperMatrix *A, int relax, int panel_size, + int *etree, void *work, int lwork, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperLUStat_t *stat, int *info) +{ + /* Local working arrays */ + NCPformat *Astore; + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ + int *iperm_c; /* inverse of perm_c */ + int *swap, *iswap; /* swap is used to store the row permutation + during the factorization. Initially, it is set + to iperm_c (row indeces of Pc*A*Pc'). + iswap is the inverse of swap. After the + factorization, it is equal to perm_r. */ + int *iwork; + complex *cwork; + int *segrep, *repfnz, *parent, *xplore; + int *panel_lsub; /* dense[]/panel_lsub[] pair forms a w-wide SPA */ + int *marker, *marker_relax; + complex *dense, *tempv; + float *stempv; + int *relax_end, *relax_fsupc; + complex *a; + int *asub; + int *xa_begin, *xa_end; + int *xsup, *supno; + int *xlsub, *xlusup, *xusub; + int nzlumax; + float *amax; + complex drop_sum; + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + int *iwork2; /* used by the second dropping rule */ + + /* Local scalars */ + fact_t fact = options->Fact; + double diag_pivot_thresh = options->DiagPivotThresh; + double drop_tol = options->ILU_DropTol; /* tau */ + double fill_ini = options->ILU_FillTol; /* tau^hat */ + double gamma = options->ILU_FillFactor; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + double fill_tol; + int pivrow; /* pivotal row number in the original matrix A */ + int nseg1; /* no of segments in U-column above panel row jcol */ + int nseg; /* no of segments in each U-column */ + register int jcol; + register int kcol; /* end column of a relaxed snode */ + register int icol; + register int i, k, jj, new_next, iinfo; + int m, n, min_mn, jsupno, fsupc, nextlu, nextu; + int w_def; /* upper bound on panel width */ + int usepr, iperm_r_allocated = 0; + int nnzL, nnzU; + int *panel_histo = stat->panel_histo; + flops_t *ops = stat->ops; + + int last_drop;/* the last column which the dropping rules applied */ + int quota; + int nnzAj; /* number of nonzeros in A(:,1:j) */ + int nnzLj, nnzUj; + double tol_L = drop_tol, tol_U = drop_tol; + complex zero = {0.0, 0.0}; + + /* Executable */ + iinfo = 0; + m = A->nrow; + n = A->ncol; + min_mn = SUPERLU_MIN(m, n); + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + + /* Allocate storage common to the factor routines */ + *info = cLUMemInit(fact, work, lwork, m, n, Astore->nnz, panel_size, + gamma, L, U, &Glu, &iwork, &cwork); + if ( *info ) return; + + xsup = Glu.xsup; + supno = Glu.supno; + xlsub = Glu.xlsub; + xlusup = Glu.xlusup; + xusub = Glu.xusub; + + SetIWork(m, n, panel_size, iwork, &segrep, &parent, &xplore, + &repfnz, &panel_lsub, &marker_relax, &marker); + cSetRWork(m, panel_size, cwork, &dense, &tempv); + + usepr = (fact == SamePattern_SameRowPerm); + if ( usepr ) { + /* Compute the inverse of perm_r */ + iperm_r = (int *) intMalloc(m); + for (k = 0; k < m; ++k) iperm_r[perm_r[k]] = k; + iperm_r_allocated = 1; + } + + iperm_c = (int *) intMalloc(n); + for (k = 0; k < n; ++k) iperm_c[perm_c[k]] = k; + swap = (int *)intMalloc(n); + for (k = 0; k < n; k++) swap[k] = iperm_c[k]; + iswap = (int *)intMalloc(n); + for (k = 0; k < n; k++) iswap[k] = perm_c[k]; + amax = (float *) floatMalloc(panel_size); + if (drop_rule & DROP_SECONDARY) + iwork2 = (int 
*)intMalloc(n); + else + iwork2 = NULL; + + nnzAj = 0; + nnzLj = 0; + nnzUj = 0; + last_drop = SUPERLU_MAX(min_mn - 2 * sp_ienv(3), (int)(min_mn * 0.95)); + + /* Identify relaxed snodes */ + relax_end = (int *) intMalloc(n); + relax_fsupc = (int *) intMalloc(n); + if ( options->SymmetricMode == YES ) + ilu_heap_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + else + ilu_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + + ifill (perm_r, m, EMPTY); + ifill (marker, m * NO_MARKER, EMPTY); + supno[0] = -1; + xsup[0] = xlsub[0] = xusub[0] = xlusup[0] = 0; + w_def = panel_size; + + /* Mark the rows used by relaxed supernodes */ + ifill (marker_relax, m, EMPTY); + i = mark_relax(m, relax_end, relax_fsupc, xa_begin, xa_end, + asub, marker_relax); +#if ( PRNTlevel >= 1) + printf("%d relaxed supernodes.\n", i); +#endif + + /* + * Work on one "panel" at a time. A panel is one of the following: + * (a) a relaxed supernode at the bottom of the etree, or + * (b) panel_size contiguous columns, defined by the user + */ + for (jcol = 0; jcol < min_mn; ) { + + if ( relax_end[jcol] != EMPTY ) { /* start of a relaxed snode */ + kcol = relax_end[jcol]; /* end of the relaxed snode */ + panel_histo[kcol-jcol+1]++; + + /* Drop small rows in the previous supernode. */ + if (jcol > 0 && jcol < last_drop) { + int first = xsup[supno[jcol - 1]]; + int last = jcol - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / min_mn); + + /* Drop small rows */ + stempv = (float *) tempv; + i = ilu_cdrop_row(options, first, last, tol_L, quota, &nnzLj, + &fill_tol, &Glu, stempv, iwork2, 0); + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } + + /* -------------------------------------- + * Factorize the relaxed supernode(jcol:kcol) + * -------------------------------------- */ + /* Determine the union of the row structure of the snode */ + if ( (*info = ilu_csnode_dfs(jcol, kcol, asub, xa_begin, xa_end, + marker, &Glu)) != 0 ) + return; + + nextu = xusub[jcol]; + nextlu = xlusup[jcol]; + jsupno = supno[jcol]; + fsupc = xsup[jsupno]; + new_next = nextlu + (xlsub[fsupc+1]-xlsub[fsupc])*(kcol-jcol+1); + nzlumax = Glu.nzlumax; + while ( new_next > nzlumax ) { + if ((*info = cLUMemXpand(jcol, nextlu, LUSUP, &nzlumax, &Glu))) + return; + } + + for (icol = jcol; icol <= kcol; icol++) { + xusub[icol+1] = nextu; + + amax[0] = 0.0; + /* Scatter into SPA dense[*] */ + for (k = xa_begin[icol]; k < xa_end[icol]; k++) { + register float tmp = slu_c_abs1 (&a[k]); + if (tmp > amax[0]) amax[0] = tmp; + dense[asub[k]] = a[k]; + } + nnzAj += xa_end[icol] - xa_begin[icol]; + if (amax[0] == 0.0) { + amax[0] = fill_ini; +#if ( PRNTlevel >= 1) + printf("Column %d is entirely zero!\n", icol); + fflush(stdout); +#endif + } + + /* Numeric update within the snode */ + csnode_bmod(icol, jsupno, fsupc, 
dense, tempv, &Glu, stat); + + if (usepr) pivrow = iperm_r[icol]; + fill_tol = pow(fill_ini, 1.0 - (double)icol / (double)min_mn); + if ( (*info = ilu_cpivotL(icol, diag_pivot_thresh, &usepr, + perm_r, iperm_c[icol], swap, iswap, + marker_relax, &pivrow, + amax[0] * fill_tol, milu, zero, + &Glu, stat)) ) { + iinfo++; + marker[pivrow] = kcol; + } + + } + + jcol = kcol + 1; + + } else { /* Work on one panel of panel_size columns */ + + /* Adjust panel_size so that a panel won't overlap with the next + * relaxed snode. + */ + panel_size = w_def; + for (k = jcol + 1; k < SUPERLU_MIN(jcol+panel_size, min_mn); k++) + if ( relax_end[k] != EMPTY ) { + panel_size = k - jcol; + break; + } + if ( k == min_mn ) panel_size = min_mn - jcol; + panel_histo[panel_size]++; + + /* symbolic factor on a panel of columns */ + ilu_cpanel_dfs(m, panel_size, jcol, A, perm_r, &nseg1, + dense, amax, panel_lsub, segrep, repfnz, + marker, parent, xplore, &Glu); + + /* numeric sup-panel updates in topological order */ + cpanel_bmod(m, panel_size, jcol, nseg1, dense, + tempv, segrep, repfnz, &Glu, stat); + + /* Sparse LU within the panel, and below panel diagonal */ + for (jj = jcol; jj < jcol + panel_size; jj++) { + + k = (jj - jcol) * m; /* column index for w-wide arrays */ + + nseg = nseg1; /* Begin after all the panel segments */ + + nnzAj += xa_end[jj] - xa_begin[jj]; + + if ((*info = ilu_ccolumn_dfs(m, jj, perm_r, &nseg, + &panel_lsub[k], segrep, &repfnz[k], + marker, parent, xplore, &Glu))) + return; + + /* Numeric updates */ + if ((*info = ccolumn_bmod(jj, (nseg - nseg1), &dense[k], + tempv, &segrep[nseg1], &repfnz[k], + jcol, &Glu, stat)) != 0) return; + + /* Make a fill-in position if the column is entirely zero */ + if (xlsub[jj + 1] == xlsub[jj]) { + register int i, row; + int nextl; + int nzlmax = Glu.nzlmax; + int *lsub = Glu.lsub; + int *marker2 = marker + 2 * m; + + /* Allocate memory */ + nextl = xlsub[jj] + 1; + if (nextl >= nzlmax) { + int error = cLUMemXpand(jj, nextl, LSUB, &nzlmax, &Glu); + if (error) { *info = error; return; } + lsub = Glu.lsub; + } + xlsub[jj + 1]++; + assert(xlusup[jj]==xlusup[jj+1]); + xlusup[jj + 1]++; + Glu.lusup[xlusup[jj]] = zero; + + /* Choose a row index (pivrow) for fill-in */ + for (i = jj; i < n; i++) + if (marker_relax[swap[i]] <= jj) break; + row = swap[i]; + marker2[row] = jj; + lsub[xlsub[jj]] = row; +#ifdef DEBUG + printf("Fill col %d.\n", jj); + fflush(stdout); +#endif + } + + /* Computer the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * jj / m; + else if (drop_rule & DROP_COLUMN) + quota = gamma * (xa_end[jj] - xa_begin[jj]) * + (jj + 1) / m; + else if (drop_rule & DROP_AREA) + quota = gamma * 0.9 * nnzAj * 0.5 - nnzUj; + else + quota = m; + + /* Copy the U-segments to ucol[*] and drop small entries */ + if ((*info = ilu_ccopy_to_ucol(jj, nseg, segrep, &repfnz[k], + perm_r, &dense[k], drop_rule, + milu, amax[jj - jcol] * tol_U, + quota, &drop_sum, &nnzUj, &Glu, + iwork2)) != 0) + return; + + /* Reset the dropping threshold if required */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * 0.9 * nnzAj * 0.5 < nnzLj) + tol_U = SUPERLU_MIN(1.0, tol_U * 2.0); + else + tol_U = SUPERLU_MAX(drop_tol, tol_U * 0.5); + } + + cs_mult(&drop_sum, &drop_sum, MILU_ALPHA); + if (usepr) pivrow = iperm_r[jj]; + fill_tol = pow(fill_ini, 1.0 - (double)jj / (double)min_mn); + if ( (*info = ilu_cpivotL(jj, diag_pivot_thresh, &usepr, perm_r, + iperm_c[jj], swap, iswap, + marker_relax, &pivrow, + amax[jj - jcol] * fill_tol, milu, + drop_sum, &Glu, stat)) ) { + 
iinfo++; + marker[m + pivrow] = jj; + marker[2 * m + pivrow] = jj; + } + + /* Reset repfnz[] for this column */ + resetrep_col (nseg, segrep, &repfnz[k]); + + /* Start a new supernode, drop the previous one */ + if (jj > 0 && supno[jj] > supno[jj - 1] && jj < last_drop) { + int first = xsup[supno[jj - 1]]; + int last = jj - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) + / m) - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / + (double)min_mn); + + /* Drop small rows */ + stempv = (float *) tempv; + i = ilu_cdrop_row(options, first, last, tol_L, quota, + &nnzLj, &fill_tol, &Glu, stempv, iwork2, + 1); + + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } /* if start a new supernode */ + + } /* for */ + + jcol += panel_size; /* Move to the next panel */ + + } /* else */ + + } /* for */ + + *info = iinfo; + + if ( m > n ) { + k = 0; + for (i = 0; i < m; ++i) + if ( perm_r[i] == EMPTY ) { + perm_r[i] = n + k; + ++k; + } + } + + ilu_countnz(min_mn, &nnzL, &nnzU, &Glu); + fixupL(min_mn, perm_r, &Glu); + + cLUWorkFree(iwork, cwork, &Glu); /* Free work space and compress storage */ + + if ( fact == SamePattern_SameRowPerm ) { + /* L and U structures may have changed due to possibly different + pivoting, even though the storage is available. + There could also be memory expansions, so the array locations + may have changed, */ + ((SCformat *)L->Store)->nnz = nnzL; + ((SCformat *)L->Store)->nsuper = Glu.supno[n]; + ((SCformat *)L->Store)->nzval = Glu.lusup; + ((SCformat *)L->Store)->nzval_colptr = Glu.xlusup; + ((SCformat *)L->Store)->rowind = Glu.lsub; + ((SCformat *)L->Store)->rowind_colptr = Glu.xlsub; + ((NCformat *)U->Store)->nnz = nnzU; + ((NCformat *)U->Store)->nzval = Glu.ucol; + ((NCformat *)U->Store)->rowind = Glu.usub; + ((NCformat *)U->Store)->colptr = Glu.xusub; + } else { + cCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, + Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, + Glu.xsup, SLU_SC, SLU_C, SLU_TRLU); + cCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, + Glu.usub, Glu.xusub, SLU_NC, SLU_C, SLU_TRU); + } + + ops[FACT] += ops[TRSV] + ops[GEMV]; + + if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); + SUPERLU_FREE (iperm_c); + SUPERLU_FREE (relax_end); + SUPERLU_FREE (swap); + SUPERLU_FREE (iswap); + SUPERLU_FREE (relax_fsupc); + SUPERLU_FREE (amax); + if ( iwork2 ) SUPERLU_FREE (iwork2); + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsrfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsrfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsrfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgsrfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,26 @@ -/* +/*! @file cgsrfs.c + * \brief Improves computed solution to a system of inear equations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routine CGERFS
+ * 
*/ /* * File name: cgsrfs.c * History: Modified from lapack routine CGERFS */ #include -#include "csp_defs.h" +#include "slu_cdefs.h" -void -cgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, char *equed, float *R, float *C, - SuperMatrix *B, SuperMatrix *X, float *ferr, float *berr, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  *   Purpose   
  *   =======   
  *
@@ -123,7 +124,15 @@
  *
  *    ITMAX is the maximum number of steps of iterative refinement.   
  *
- */  
+ * 
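A short, hedged fragment of a typical call: it assumes a factorization and initial solve have already been done (for example with cgssvx below), so that L, U, perm_c, perm_r, equed, R, C and X are all populated, and that ferr/berr have one entry per right-hand side. The wrapper name is illustrative only.

#include "slu_cdefs.h"

/* Sketch only: refine the columns of X in place; ferr[j]/berr[j] receive the
 * forward and componentwise backward error bounds for column j.  At most
 * ITMAX (= 5) refinement sweeps are performed. */
static void refine_solution(SuperMatrix *A, SuperMatrix *L, SuperMatrix *U,
                            int *perm_c, int *perm_r, char *equed,
                            float *R, float *C, SuperMatrix *B, SuperMatrix *X,
                            float *ferr, float *berr, SuperLUStat_t *stat)
{
    int info;

    cgsrfs(NOTRANS, A, L, U, perm_c, perm_r, equed, R, C,
           B, X, ferr, berr, stat, &info);
}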
+ */ +void +cgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, char *equed, float *R, float *C, + SuperMatrix *B, SuperMatrix *X, float *ferr, float *berr, + SuperLUStat_t *stat, int *info) +{ + #define ITMAX 5 @@ -224,6 +233,8 @@ nz = A->ncol + 1; eps = slamch_("Epsilon"); safmin = slamch_("Safe minimum"); + /* Set SAFE1 essentially to be the underflow threshold times the + number of additions in each row. */ safe1 = nz * safmin; safe2 = safe1 / eps; @@ -274,7 +285,7 @@ where abs(Z) is the componentwise absolute value of the matrix or vector Z. If the i-th component of the denominator is less than SAFE2, then SAFE1 is added to the i-th component of the - numerator and denominator before dividing. */ + numerator before dividing. */ for (i = 0; i < A->nrow; ++i) rwork[i] = slu_c_abs1( &Bptr[i] ); @@ -297,11 +308,13 @@ } s = 0.; for (i = 0; i < A->nrow; ++i) { - if (rwork[i] > safe2) + if (rwork[i] > safe2) { s = SUPERLU_MAX( s, slu_c_abs1(&work[i]) / rwork[i] ); - else - s = SUPERLU_MAX( s, (slu_c_abs1(&work[i]) + safe1) / - (rwork[i] + safe1) ); + } else if ( rwork[i] != 0.0 ) { + s = SUPERLU_MAX( s, (slu_c_abs1(&work[i]) + safe1) / rwork[i] ); + } + /* If rwork[i] is exactly 0.0, then we know the true + residual also must be exactly 0.0. */ } berr[j] = s; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssv.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssv.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssv.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssv.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,20 +1,19 @@ - -/* +/*! @file cgssv.c + * \brief Solves the system of linear equations A*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "csp_defs.h" +#include "slu_cdefs.h" -void -cgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, - SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -127,15 +126,21 @@
  *                so the solution could not be computed.
  *             > A->ncol: number of bytes allocated when memory allocation
  *                failure occurred, plus A->ncol.
- *   
+ * 
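For illustration, a minimal use of the simple driver: one call factorizes A and overwrites the dense right-hand side B with the solution. It assumes A (SLU_NC), B (SLU_DN) and the permutation arrays are already set up, and it uses the standard helpers set_default_options() and StatInit()/StatFree(); solve_simple is a hypothetical name.

#include "slu_cdefs.h"

/* Sketch only: simple driver for A*X=B. */
static int solve_simple(SuperMatrix *A, SuperMatrix *B, int *perm_c, int *perm_r)
{
    superlu_options_t options;
    SuperLUStat_t stat;
    SuperMatrix L, U;
    int info;

    set_default_options(&options);     /* Fact = DOFACT, ColPerm = COLAMD, ... */
    StatInit(&stat);

    cgssv(&options, A, perm_c, perm_r, &L, &U, B, &stat, &info);
    /* info == 0:            B now holds X, L/U hold the factors.
     * 0 < info <= A->ncol:  U(info,info) is exactly zero, no solution.
     * info >  A->ncol:      memory ran out after info - A->ncol bytes. */

    StatFree(&stat);
    return info;
}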
*/ + +void +cgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, + SuperLUStat_t *stat, int *info ) +{ + DNformat *Bstore; SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ SuperMatrix AC; /* Matrix postmultiplied by Pc */ int lwork = 0, *etree, i; /* Set default values for some parameters */ - float drop_tol = 0.; int panel_size; /* panel size */ int relax; /* no of columns in a relaxed snodes */ int permc_spec; @@ -201,8 +206,8 @@ relax, panel_size, sp_ienv(3), sp_ienv(4));*/ t = SuperLU_timer_(); /* Compute the LU factorization of A. */ - cgstrf(options, &AC, drop_tol, relax, panel_size, - etree, NULL, lwork, perm_c, perm_r, L, U, stat, info); + cgstrf(options, &AC, relax, panel_size, etree, + NULL, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t; t = SuperLU_timer_(); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssvx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssvx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssvx.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgssvx.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,22 +1,19 @@ -/* +/*! @file cgssvx.c + * \brief Solves the system of linear equations A*X=B or A'*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "csp_defs.h" +#include "slu_cdefs.h" -void -cgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - int *etree, char *equed, float *R, float *C, - SuperMatrix *L, SuperMatrix *U, void *work, int lwork, - SuperMatrix *B, SuperMatrix *X, float *recip_pivot_growth, - float *rcond, float *ferr, float *berr, - mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -314,7 +311,7 @@
  *
  * stat   (output) SuperLUStat_t*
  *        Record the statistics on runtime and floating-point operation count.
- *        See util.h for the definition of 'SuperLUStat_t'.
+ *        See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info    (output) int*
  *         = 0: successful exit   
@@ -332,9 +329,19 @@
  *                    accurate than the value of RCOND would suggest.   
  *              > A->ncol+1: number of bytes allocated when memory allocation
  *                    failure occurred, plus A->ncol.
- *
+ * 
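A small, hedged fragment showing how the diagnostic outputs of the expert driver are typically inspected after the call, assuming it was made with options.PivotGrowth and options.ConditionNumber enabled; the helper name is illustrative.

#include <stdio.h>
#include "slu_cdefs.h"

/* Sketch only: report cgssvx diagnostics. */
static void report_cgssvx_diagnostics(const SuperMatrix *A, int info,
                                      float rcond, float rpg,
                                      const mem_usage_t *mem_usage)
{
    if (info < 0) {
        printf("argument %d of cgssvx had an illegal value\n", -info);
    } else if (info == 0 || info == A->ncol + 1) {
        printf("rcond = %e, reciprocal pivot growth = %e\n", rcond, rpg);
        printf("L\\U storage: %.0f bytes used, %.0f bytes needed in total\n",
               mem_usage->for_lu, mem_usage->total_needed);
        if (info == A->ncol + 1)
            printf("note: A is singular to working precision (rcond < eps)\n");
    }
}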
*/ +void +cgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, float *R, float *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, float *recip_pivot_growth, + float *rcond, float *ferr, float *berr, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) +{ + + DNformat *Bstore, *Xstore; complex *Bmat, *Xmat; int ldb, ldx, nrhs; @@ -346,13 +353,12 @@ int i, j, info1; float amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; int relax, panel_size; - float diag_pivot_thresh, drop_tol; + float diag_pivot_thresh; double t0; /* temporary time */ double *utime; /* External functions */ extern float clangs(char *, SuperMatrix *); - extern double slamch_(char *); Bstore = B->Store; Xstore = X->Store; @@ -443,7 +449,6 @@ panel_size = sp_ienv(1); relax = sp_ienv(2); diag_pivot_thresh = options->DiagPivotThresh; - drop_tol = 0.0; utime = stat->utime; @@ -455,7 +460,7 @@ Astore->nzval, Astore->colind, Astore->rowptr, SLU_NC, A->Dtype, A->Mtype); if ( notran ) { /* Reverse the transpose argument. */ - trant = CONJ; + trant = TRANS; notran = 0; } else { trant = NOTRANS; @@ -523,8 +528,8 @@ /* Compute the LU factorization of A*Pc. */ t0 = SuperLU_timer_(); - cgstrf(options, &AC, drop_tol, relax, panel_size, - etree, work, lwork, perm_c, perm_r, L, U, stat, info); + cgstrf(options, &AC, relax, panel_size, etree, + work, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t0; if ( lwork == -1 ) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrf.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,33 +1,32 @@ -/* +/*! @file cgstrf.c + * \brief Computes an LU factorization of a general sparse matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
+ * 
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "csp_defs.h" -void -cgstrf (superlu_options_t *options, SuperMatrix *A, float drop_tol, - int relax, int panel_size, int *etree, void *work, int lwork, - int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, - SuperLUStat_t *stat, int *info) -{ -/* +#include "slu_cdefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -53,11 +52,6 @@
  *          (A->nrow, A->ncol). The type of A can be:
  *          Stype = SLU_NCP; Dtype = SLU_C; Mtype = SLU_GE.
  *
- * drop_tol (input) float (NOT IMPLEMENTED)
- *	    Drop tolerance parameter. At step j of the Gaussian elimination,
- *          if abs(A_ij)/(max_i abs(A_ij)) < drop_tol, drop entry A_ij.
- *          0 <= drop_tol <= 1. The default value of drop_tol is 0.
- *
  * relax    (input) int
  *          To control degree of relaxing supernodes. If the number
  *          of nodes (columns) in a subtree of the elimination tree is less
@@ -117,7 +111,7 @@
  *
  * stat     (output) SuperLUStat_t*
  *          Record the statistics on runtime and floating-point operation count.
- *          See util.h for the definition of 'SuperLUStat_t'.
+ *          See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info     (output) int*
  *          = 0: successful exit
@@ -177,13 +171,20 @@
  *	    	   NOTE: there are W of them.
  *
  *   tempv[0:*]: real temporary used for dense numeric kernels;
- *	The size of this array is defined by NUM_TEMPV() in csp_defs.h.
- *
+ *	The size of this array is defined by NUM_TEMPV() in slu_cdefs.h.
+ * 
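Note for out-of-tree callers: this release drops the never-implemented drop_tol argument documented in the removed lines above, so existing call sites need a one-line change. Both forms below are taken from the cgssv.c hunk elsewhere in this patch; only the argument list differs.

/* SuperLU 3.0 (scipy 0.7.x) call site: */
cgstrf(options, &AC, drop_tol, relax, panel_size,
       etree, NULL, lwork, perm_c, perm_r, L, U, stat, info);

/* SuperLU 4.0 (this patch): drop_tol is gone, everything else is unchanged. */
cgstrf(options, &AC, relax, panel_size, etree,
       NULL, lwork, perm_c, perm_r, L, U, stat, info);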
*/ + +void +cgstrf (superlu_options_t *options, SuperMatrix *A, + int relax, int panel_size, int *etree, void *work, int lwork, + int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, + SuperLUStat_t *stat, int *info) +{ /* Local working arrays */ NCPformat *Astore; - int *iperm_r; /* inverse of perm_r; - used when options->Fact == SamePattern_SameRowPerm */ + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ int *iperm_c; /* inverse of perm_c */ int *iwork; complex *cwork; @@ -199,7 +200,8 @@ int *xsup, *supno; int *xlsub, *xlusup, *xusub; int nzlumax; - static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + float fill_ratio = sp_ienv(6); /* estimated fill ratio */ + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ /* Local scalars */ fact_t fact = options->Fact; @@ -230,7 +232,7 @@ /* Allocate storage common to the factor routines */ *info = cLUMemInit(fact, work, lwork, m, n, Astore->nnz, - panel_size, L, U, &Glu, &iwork, &cwork); + panel_size, fill_ratio, L, U, &Glu, &iwork, &cwork); if ( *info ) return; xsup = Glu.xsup; @@ -417,7 +419,7 @@ ((NCformat *)U->Store)->rowind = Glu.usub; ((NCformat *)U->Store)->colptr = Glu.xusub; } else { - cCreate_SuperNode_Matrix(L, A->nrow, A->ncol, nnzL, Glu.lusup, + cCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, Glu.xsup, SLU_SC, SLU_C, SLU_TRLU); cCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, @@ -425,6 +427,7 @@ } ops[FACT] += ops[TRSV] + ops[GEMV]; + stat->expansions = --(Glu.num_expansions); if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); SUPERLU_FREE (iperm_c); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cgstrs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,27 @@ -/* +/*! @file cgstrs.c + * \brief Solves a system using LU factorization + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "csp_defs.h" +#include "slu_cdefs.h" /* @@ -29,13 +31,9 @@ void clsolve(int, int, complex*, complex*); void cmatvec(int, int, int, complex*, complex*, complex*); - -void -cgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, SuperMatrix *B, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -85,8 +83,15 @@
  * info    (output) int*
  * 	   = 0: successful exit
  *	   < 0: if info = -i, the i-th argument had an illegal value
- *
+ * 
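For illustration, a sketch of re-using the factors for different systems; each call overwrites the dense right-hand side B with the corresponding solution, so the commented lines are alternatives rather than a sequence. The wrapper name is hypothetical.

#include "slu_cdefs.h"

/* Sketch only: triangular solves with existing L and U factors. */
static void solve_with_factors(SuperMatrix *L, SuperMatrix *U,
                               int *perm_c, int *perm_r,
                               SuperMatrix *B, SuperLUStat_t *stat, int *info)
{
    cgstrs(NOTRANS, L, U, perm_c, perm_r, B, stat, info);   /*        A * X = B */
 /* cgstrs(TRANS,   L, U, perm_c, perm_r, B, stat, info);           A' * X = B */
 /* cgstrs(CONJ,    L, U, perm_c, perm_r, B, stat, info);     conj(A') * X = B */
}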
*/ + +void +cgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, SuperMatrix *B, + SuperLUStat_t *stat, int *info) +{ + #ifdef _CRAY _fcd ftcs1, ftcs2, ftcs3, ftcs4; #endif @@ -293,7 +298,7 @@ stat->ops[SOLVE] = solve_ops; - } else { /* Solve A'*X=B */ + } else { /* Solve A'*X=B or CONJ(A)*X=B */ /* Permute right hand sides to form Pc'*B. */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; @@ -302,28 +307,23 @@ } stat->ops[SOLVE] = 0; - if (trans == TRANS) { - - for (k = 0; k < nrhs; ++k) { + for (k = 0; k < nrhs; ++k) { + /* Multiply by inv(U'). */ + sp_ctrsv("U", "T", "N", L, U, &Bmat[k*ldb], stat, info); - /* Multiply by inv(U'). */ - sp_ctrsv("U", "T", "N", L, U, &Bmat[k*ldb], stat, info); - - /* Multiply by inv(L'). */ - sp_ctrsv("L", "T", "U", L, U, &Bmat[k*ldb], stat, info); - } - } - else { + /* Multiply by inv(L'). */ + sp_ctrsv("L", "T", "U", L, U, &Bmat[k*ldb], stat, info); + } + } else { /* trans == CONJ */ for (k = 0; k < nrhs; ++k) { /* Multiply by conj(inv(U')). */ sp_ctrsv("U", "C", "N", L, U, &Bmat[k*ldb], stat, info); /* Multiply by conj(inv(L')). */ sp_ctrsv("L", "C", "U", L, U, &Bmat[k*ldb], stat, info); - } - } - + } + } /* Compute the final solution X := Pr'*X (=inv(Pr)*X) */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; @@ -331,7 +331,7 @@ for (k = 0; k < n; k++) rhs_work[k] = soln[k]; } - } + } SUPERLU_FREE(work); SUPERLU_FREE(soln); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clacon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clacon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clacon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clacon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,67 +1,74 @@ - -/* +/*! @file clacon.c + * \brief Estimates the 1-norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ #include -#include "Cnames.h" -#include "scomplex.h" +#include "slu_Cnames.h" +#include "slu_scomplex.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   CLACON estimates the 1-norm of a square matrix A.   
+ *   Reverse communication is used for evaluating matrix-vector products. 
+ * 
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   N      (input) INT
+ *          The order of the matrix.  N >= 1.   
+ *
+ *   V      (workspace) COMPLEX PRECISION array, dimension (N)   
+ *          On the final return, V = A*W,  where  EST = norm(V)/norm(W)   
+ *          (W is not returned).   
+ *
+ *   X      (input/output) COMPLEX PRECISION array, dimension (N)   
+ *          On an intermediate return, X should be overwritten by   
+ *                A * X,   if KASE=1,   
+ *                A' * X,  if KASE=2,
+ *          where A' is the conjugate transpose of A,
+ *         and CLACON must be re-called with all the other parameters   
+ *          unchanged.   
+ *
+ *
+ *   EST    (output) FLOAT PRECISION   
+ *          An estimate (a lower bound) for norm(A).   
+ *
+ *   KASE   (input/output) INT
+ *          On the initial call to CLACON, KASE should be 0.   
+ *          On an intermediate return, KASE will be 1 or 2, indicating   
+ *          whether X should be overwritten by A * X  or A' * X.   
+ *          On the final return from CLACON, KASE will again be 0.   
+ *
+ *   Further Details   
+ *   ======= =======   
+ *
+ *   Contributed by Nick Higham, University of Manchester.   
+ *   Originally named CONEST, dated March 16, 1988.   
+ *
+ *   Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of 
+ *   a real or complex matrix, with applications to condition estimation", 
+ *   ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988.   
+ *   ===================================================================== 
+ * 
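The reverse-communication protocol described above is easiest to see in a driver loop. The sketch below is a hedged illustration: apply_A and apply_AH stand for user code that overwrites x with A*x or A'*x (in SuperLU itself this role is played by the triangular solves inside cgscon); they are placeholders, not library routines.

#include "slu_cdefs.h"

extern void apply_A (int n, complex *x);    /* x := A  * x  (placeholder) */
extern void apply_AH(int n, complex *x);    /* x := A' * x  (placeholder) */

/* Sketch only: drive clacon_ until it returns KASE = 0. */
static float estimate_one_norm(int n, complex *v, complex *x)
{
    float est;
    int kase = 0;                    /* must be 0 on the first call */

    do {
        clacon_(&n, v, x, &est, &kase);
        if (kase == 1)
            apply_A(n, x);
        else if (kase == 2)
            apply_AH(n, x);
    } while (kase != 0);

    return est;                      /* a lower bound on norm1(A) */
}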
+ */ int clacon_(int *n, complex *v, complex *x, float *est, int *kase) { -/* - Purpose - ======= - - CLACON estimates the 1-norm of a square matrix A. - Reverse communication is used for evaluating matrix-vector products. - - - Arguments - ========= - - N (input) INT - The order of the matrix. N >= 1. - - V (workspace) COMPLEX PRECISION array, dimension (N) - On the final return, V = A*W, where EST = norm(V)/norm(W) - (W is not returned). - - X (input/output) COMPLEX PRECISION array, dimension (N) - On an intermediate return, X should be overwritten by - A * X, if KASE=1, - A' * X, if KASE=2, - where A' is the conjugate transpose of A, - and CLACON must be re-called with all the other parameters - unchanged. - - - EST (output) FLOAT PRECISION - An estimate (a lower bound) for norm(A). - - KASE (input/output) INT - On the initial call to CLACON, KASE should be 0. - On an intermediate return, KASE will be 1 or 2, indicating - whether X should be overwritten by A * X or A' * X. - On the final return from CLACON, KASE will again be 0. - - Further Details - ======= ======= - - Contributed by Nick Higham, University of Manchester. - Originally named CONEST, dated March 16, 1988. - - Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of - a real or complex matrix, with applications to condition estimation", - ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988. - ===================================================================== -*/ + /* Table of constant values */ int c__1 = 1; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clangs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clangs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clangs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/clangs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,58 +1,65 @@ - -/* +/*! @file clangs.c + * \brief Returns the value of the one norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from lapack routine CLANGE 
+ * 
*/ /* * File name: clangs.c * History: Modified from lapack routine CLANGE */ #include -#include "csp_defs.h" -#include "util.h" +#include "slu_cdefs.h" + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ *
+ *   CLANGS returns the value of the one norm, or the Frobenius norm, or 
+ *   the infinity norm, or the element of largest absolute value of a 
+ *   complex matrix A.   
+ *
+ *   Description   
+ *   ===========   
+ *
+ *   CLANGE returns the value   
+ *
+ *      CLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm'   
+ *               (   
+ *               ( norm1(A),         NORM = '1', 'O' or 'o'   
+ *               (   
+ *               ( normI(A),         NORM = 'I' or 'i'   
+ *               (   
+ *               ( normF(A),         NORM = 'F', 'f', 'E' or 'e'   
+ *
+ *   where  norm1  denotes the  one norm of a matrix (maximum column sum), 
+ *   normI  denotes the  infinity norm  of a matrix  (maximum row sum) and 
+ *   normF  denotes the  Frobenius norm of a matrix (square root of sum of 
+ *   squares).  Note that  max(abs(A(i,j)))  is not a  matrix norm.   
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   NORM    (input) CHARACTER*1   
+ *           Specifies the value to be returned in CLANGE as described above.   
+ *   A       (input) SuperMatrix*
+ *           The M by N sparse matrix A. 
+ *
+ *  =====================================================================
+ * 
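A one-line reminder of how the norm is selected; the snippet is illustrative only. The drivers above pass the resulting value straight to cgscon() when estimating the reciprocal condition number.

#include "slu_cdefs.h"

/* Sketch only: the norm is chosen by a single character, LAPACK-style. */
static void matrix_norms(SuperMatrix *A)
{
    float anorm_one = clangs("1", A);   /* one norm: maximum column sum   */
    float anorm_inf = clangs("I", A);   /* infinity norm: maximum row sum */
    (void)anorm_one; (void)anorm_inf;
}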
+ */ float clangs(char *norm, SuperMatrix *A) { -/* - Purpose - ======= - - CLANGS returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - real matrix A. - - Description - =========== - - CLANGE returns the value - - CLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in CLANGE as described above. - A (input) SuperMatrix* - The M by N sparse matrix A. - - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/claqgs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/claqgs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/claqgs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/claqgs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,80 +1,88 @@ - -/* +/*! @file claqgs.c + * \brief Equlibrates a general sprase matrix + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
+ * Modified from LAPACK routine CLAQGE
+ * 
*/ /* * File name: claqgs.c * History: Modified from LAPACK routine CLAQGE */ #include -#include "csp_defs.h" -#include "util.h" +#include "slu_cdefs.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   CLAQGS equilibrates a general sparse M by N matrix A using the row and   
+ *   scaling factors in the vectors R and C.   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input/output) SuperMatrix*
+ *           On exit, the equilibrated matrix.  See EQUED for the form of 
+ *           the equilibrated matrix. The type of A can be:
+ *	    Stype = NC; Dtype = SLU_C; Mtype = GE.
+ *	    
+ *   R       (input) float*, dimension (A->nrow)
+ *           The row scale factors for A.
+ *	    
+ *   C       (input) float*, dimension (A->ncol)
+ *           The column scale factors for A.
+ *	    
+ *   ROWCND  (input) float
+ *           Ratio of the smallest R(i) to the largest R(i).
+ *	    
+ *   COLCND  (input) float
+ *           Ratio of the smallest C(i) to the largest C(i).
+ *	    
+ *   AMAX    (input) float
+ *           Absolute value of largest matrix entry.
+ *	    
+ *   EQUED   (output) char*
+ *           Specifies the form of equilibration that was done.   
+ *           = 'N':  No equilibration   
+ *           = 'R':  Row equilibration, i.e., A has been premultiplied by  
+ *                   diag(R).   
+ *           = 'C':  Column equilibration, i.e., A has been postmultiplied  
+ *                   by diag(C).   
+ *           = 'B':  Both row and column equilibration, i.e., A has been
+ *                   replaced by diag(R) * A * diag(C).   
+ *
+ *   Internal Parameters   
+ *   ===================   
+ *
+ *   THRESH is a threshold value used to decide if row or column scaling   
+ *   should be done based on the ratio of the row or column scaling   
+ *   factors.  If ROWCND < THRESH, row scaling is done, and if   
+ *   COLCND < THRESH, column scaling is done.   
+ *
+ *   LARGE and SMALL are threshold values used to decide if row scaling   
+ *   should be done based on the absolute size of the largest matrix   
+ *   element.  If AMAX > LARGE or AMAX < SMALL, row scaling is done.   
+ *
+ *   ===================================================================== 
+ * 
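
To make the rule above concrete, here is a minimal sketch of the
diag(R) * A * diag(C) scaling driven by the ROWCND/COLCND thresholds.  It uses
a dense column-major real matrix and an invented function name, and it leaves
out the AMAX-based LARGE/SMALL test; it is not the SuperMatrix interface of
claqgs.

#include <stdio.h>

#define THRESH 0.1f    /* same role as the THRESH parameter described above */

/* Scale a dense column-major m-by-n matrix in place and return the EQUED
 * code ('N', 'R', 'C' or 'B') describing what was done. */
static char equilibrate(int m, int n, float *a, int lda,
                        const float *r, const float *c,
                        float rowcnd, float colcnd)
{
    int do_row = rowcnd < THRESH;
    int do_col = colcnd < THRESH;

    for (int j = 0; j < n; ++j)
        for (int i = 0; i < m; ++i) {
            float s = 1.0f;
            if (do_row) s *= r[i];
            if (do_col) s *= c[j];
            a[i + j*lda] *= s;
        }
    if (do_row && do_col) return 'B';
    if (do_row)           return 'R';
    if (do_col)           return 'C';
    return 'N';
}

int main(void)
{
    float a[4] = {1.0f, 1000.0f, 2.0f, 4000.0f};   /* 2-by-2, column major */
    float r[2] = {1.0f, 0.001f}, c[2] = {1.0f, 1.0f};
    char equed = equilibrate(2, 2, a, 2, r, c, 0.001f, 1.0f);
    printf("EQUED = %c, a(1,0) = %g\n", equed, a[1]);  /* EQUED = R, a(1,0) = 1 */
    return 0;
}
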
+ */ void claqgs(SuperMatrix *A, float *r, float *c, float rowcnd, float colcnd, float amax, char *equed) { -/* - Purpose - ======= - - CLAQGS equilibrates a general sparse M by N matrix A using the row and - scaling factors in the vectors R and C. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input/output) SuperMatrix* - On exit, the equilibrated matrix. See EQUED for the form of - the equilibrated matrix. The type of A can be: - Stype = NC; Dtype = SLU_C; Mtype = GE. - - R (input) float*, dimension (A->nrow) - The row scale factors for A. - - C (input) float*, dimension (A->ncol) - The column scale factors for A. - - ROWCND (input) float - Ratio of the smallest R(i) to the largest R(i). - - COLCND (input) float - Ratio of the smallest C(i) to the largest C(i). - - AMAX (input) float - Absolute value of largest matrix entry. - - EQUED (output) char* - Specifies the form of equilibration that was done. - = 'N': No equilibration - = 'R': Row equilibration, i.e., A has been premultiplied by - diag(R). - = 'C': Column equilibration, i.e., A has been postmultiplied - by diag(C). - = 'B': Both row and column equilibration, i.e., A has been - replaced by diag(R) * A * diag(C). - - Internal Parameters - =================== - - THRESH is a threshold value used to decide if row or column scaling - should be done based on the ratio of the row or column scaling - factors. If ROWCND < THRESH, row scaling is done, and if - COLCND < THRESH, column scaling is done. - - LARGE and SMALL are threshold values used to decide if row scaling - should be done based on the absolute size of the largest matrix - element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. - ===================================================================== -*/ #define THRESH (0.1) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cldperm.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cldperm.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cldperm.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cldperm.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,168 @@ + +/*! @file + * \brief Finds a row permutation so that the matrix has large entries on the diagonal + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ + +#include "slu_cdefs.h" + +extern void mc64id_(int_t*); +extern void mc64ad_(int_t*, int_t*, int_t*, int_t [], int_t [], double [], + int_t*, int_t [], int_t*, int_t[], int_t*, double [], + int_t [], int_t []); + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *   CLDPERM finds a row permutation so that the matrix has large
+ *   entries on the diagonal.
+ *
+ * Arguments
+ * =========
+ *
+ * job    (input) int
+ *        Control the action. Possible values for JOB are:
+ *        = 1 : Compute a row permutation of the matrix so that the
+ *              permuted matrix has as many entries on its diagonal as
+ *              possible. The values on the diagonal are of arbitrary size.
+ *              HSL subroutine MC21A/AD is used for this.
+ *        = 2 : Compute a row permutation of the matrix so that the smallest 
+ *              value on the diagonal of the permuted matrix is maximized.
+ *        = 3 : Compute a row permutation of the matrix so that the smallest
+ *              value on the diagonal of the permuted matrix is maximized.
+ *              The algorithm differs from the one used for JOB = 2 and may
+ *              have quite a different performance.
+ *        = 4 : Compute a row permutation of the matrix so that the sum
+ *              of the diagonal entries of the permuted matrix is maximized.
+ *        = 5 : Compute a row permutation of the matrix so that the product
+ *              of the diagonal entries of the permuted matrix is maximized
+ *              and vectors to scale the matrix so that the nonzero diagonal 
+ *              entries of the permuted matrix are one in absolute value and 
+ *              all the off-diagonal entries are less than or equal to one in 
+ *              absolute value.
+ *        Restriction: 1 <= JOB <= 5.
+ *
+ * n      (input) int
+ *        The order of the matrix.
+ *
+ * nnz    (input) int
+ *        The number of nonzeros in the matrix.
+ *
+ * adjncy (input) int*, of size nnz
+ *        The adjacency structure of the matrix, which contains the row
+ *        indices of the nonzeros.
+ *
+ * colptr (input) int*, of size n+1
+ *        The pointers to the beginning of each column in ADJNCY.
+ *
+ * nzval  (input) complex*, of size nnz
+ *        The nonzero values of the matrix. nzval[k] is the value of
+ *        the entry corresponding to adjncy[k].
+ *        It is not used if job = 1.
+ *
+ * perm   (output) int*, of size n
+ *        The permutation vector. perm[i] = j means row i in the
+ *        original matrix is in row j of the permuted matrix.
+ *
+ * u      (output) double*, of size n
+ *        If job = 5, the natural logarithms of the row scaling factors. 
+ *
+ * v      (output) double*, of size n
+ *        If job = 5, the natural logarithms of the column scaling factors. 
+ *        The scaled matrix B has entries b_ij = a_ij * exp(u_i + v_j).
+ * 
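
For the job = 5 case, a caller turns the logarithmic factors into the scaled
entries b_ij = a_ij * exp(u_i + v_j).  The sketch below shows that step on a
real-valued compressed-column matrix; the double-precision arrays and the
helper name are assumptions for illustration and do not match the
complex-valued interface declared here.

#include <math.h>
#include <stdio.h>

/* Apply b_ij = a_ij * exp(u_i + v_j): entry k of column j has row index
 * adjncy[k], exactly as in the colptr/adjncy layout described above. */
static void apply_scaling(int n, const int *colptr, const int *adjncy,
                          double *nzval, const double *u, const double *v)
{
    for (int j = 0; j < n; ++j)
        for (int k = colptr[j]; k < colptr[j+1]; ++k)
            nzval[k] *= exp(u[adjncy[k]] + v[j]);
}

int main(void)
{
    /* diag(2, 8) stored column by column; scale it to the unit diagonal. */
    int    colptr[] = {0, 1, 2}, adjncy[] = {0, 1};
    double nzval[]  = {2.0, 8.0};
    double u[]      = {log(0.5), log(0.25)};   /* row factors 1/2 and 1/4  */
    double v[]      = {0.0, log(0.5)};         /* column factors 1 and 1/2 */
    apply_scaling(2, colptr, adjncy, nzval, u, v);
    printf("scaled diagonal: %g %g\n", nzval[0], nzval[1]);   /* 1 1 */
    return 0;
}
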
+ */ + +int +cldperm(int_t job, int_t n, int_t nnz, int_t colptr[], int_t adjncy[], + complex nzval[], int_t *perm, float u[], float v[]) +{ + int_t i, liw, ldw, num; + int_t *iw, icntl[10], info[10]; + double *dw; + double *nzval_d = (double *) SUPERLU_MALLOC(nnz * sizeof(double)); + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Enter cldperm()"); +#endif + liw = 5*n; + if ( job == 3 ) liw = 10*n + nnz; + if ( !(iw = intMalloc(liw)) ) ABORT("Malloc fails for iw[]"); + ldw = 3*n + nnz; + if ( !(dw = (double*) SUPERLU_MALLOC(ldw * sizeof(double))) ) + ABORT("Malloc fails for dw[]"); + + /* Increment one to get 1-based indexing. */ + for (i = 0; i <= n; ++i) ++colptr[i]; + for (i = 0; i < nnz; ++i) ++adjncy[i]; +#if ( DEBUGlevel>=2 ) + printf("LDPERM(): n %d, nnz %d\n", n, nnz); + slu_PrintInt10("colptr", n+1, colptr); + slu_PrintInt10("adjncy", nnz, adjncy); +#endif + + /* + * NOTE: + * ===== + * + * MC64AD assumes that column permutation vector is defined as: + * perm(i) = j means column i of permuted A is in column j of original A. + * + * Since a symmetric permutation preserves the diagonal entries. Then + * by the following relation: + * P'(A*P')P = P'A + * we can apply inverse(perm) to rows of A to get large diagonal entries. + * But, since 'perm' defined in MC64AD happens to be the reverse of + * SuperLU's definition of permutation vector, therefore, it is already + * an inverse for our purpose. We will thus use it directly. + * + */ + mc64id_(icntl); +#if 0 + /* Suppress error and warning messages. */ + icntl[0] = -1; + icntl[1] = -1; +#endif + + for (i = 0; i < nnz; ++i) nzval_d[i] = slu_c_abs1(&nzval[i]); + mc64ad_(&job, &n, &nnz, colptr, adjncy, nzval_d, &num, perm, + &liw, iw, &ldw, dw, icntl, info); + +#if ( DEBUGlevel>=2 ) + slu_PrintInt10("perm", n, perm); + printf(".. After MC64AD info %d\tsize of matching %d\n", info[0], num); +#endif + if ( info[0] == 1 ) { /* Structurally singular */ + printf(".. The last %d permutations:\n", n-num); + slu_PrintInt10("perm", n-num, &perm[num]); + } + + /* Restore to 0-based indexing. */ + for (i = 0; i <= n; ++i) --colptr[i]; + for (i = 0; i < nnz; ++i) --adjncy[i]; + for (i = 0; i < n; ++i) --perm[i]; + + if ( job == 5 ) + for (i = 0; i < n; ++i) { + u[i] = dw[i]; + v[i] = dw[n+i]; + } + + SUPERLU_FREE(iw); + SUPERLU_FREE(dw); + SUPERLU_FREE(nzval_d); + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Exit cldperm()"); +#endif + + return info[0]; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cmemory.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cmemory.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cmemory.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cmemory.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,54 +1,32 @@ -/* - * -- SuperLU routine (version 3.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 +/*! @file cmemory.c + * \brief Memory details * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
*/ -#include "csp_defs.h" +#include "slu_cdefs.h" -/* Constants */ -#define NO_MEMTYPE 4 /* 0: lusup; - 1: ucol; - 2: lsub; - 3: usub */ -#define GluIntArray(n) (5 * (n) + 5) /* Internal prototypes */ void *cexpand (int *, MemType,int, int, GlobalLU_t *); -int cLUWorkInit (int, int, int, int **, complex **, LU_space_t); +int cLUWorkInit (int, int, int, int **, complex **, GlobalLU_t *); void copy_mem_complex (int, void *, void *); void cStackCompress (GlobalLU_t *); -void cSetupSpace (void *, int, LU_space_t *); -void *cuser_malloc (int, int); -void cuser_free (int, int); +void cSetupSpace (void *, int, GlobalLU_t *); +void *cuser_malloc (int, int, GlobalLU_t *); +void cuser_free (int, int, GlobalLU_t *); -/* External prototypes (in memory.c - prec-indep) */ +/* External prototypes (in memory.c - prec-independent) */ extern void copy_mem_int (int, void *, void *); extern void user_bcopy (char *, char *, int); -/* Headers for 4 types of dynamatically managed memory */ -typedef struct e_node { - int size; /* length of the memory that has been used */ - void *mem; /* pointer to the new malloc'd store */ -} ExpHeader; - -typedef struct { - int size; - int used; - int top1; /* grow upward, relative to &array[0] */ - int top2; /* grow downward */ - void *array; -} LU_stack_t; - -/* Variables local to this file */ -static ExpHeader *expanders = 0; /* Array of pointers to 4 types of memory */ -static LU_stack_t stack; -static int no_expand; /* Macros to manipulate stack */ -#define StackFull(x) ( x + stack.used >= stack.size ) +#define StackFull(x) ( x + Glu->stack.used >= Glu->stack.size ) #define NotDoubleAlign(addr) ( (long int)addr & 7 ) #define DoubleAlign(addr) ( ((long int)addr + 7) & ~7L ) #define TempSpace(m, w) ( (2*w + 4 + NO_MARKER) * m * sizeof(int) + \ @@ -58,66 +36,67 @@ -/* - * Setup the memory model to be used for factorization. +/*! \brief Setup the memory model to be used for factorization. + * * lwork = 0: use system malloc; * lwork > 0: use user-supplied work[] space. 
*/ -void cSetupSpace(void *work, int lwork, LU_space_t *MemModel) +void cSetupSpace(void *work, int lwork, GlobalLU_t *Glu) { if ( lwork == 0 ) { - *MemModel = SYSTEM; /* malloc/free */ + Glu->MemModel = SYSTEM; /* malloc/free */ } else if ( lwork > 0 ) { - *MemModel = USER; /* user provided space */ - stack.used = 0; - stack.top1 = 0; - stack.top2 = (lwork/4)*4; /* must be word addressable */ - stack.size = stack.top2; - stack.array = (void *) work; + Glu->MemModel = USER; /* user provided space */ + Glu->stack.used = 0; + Glu->stack.top1 = 0; + Glu->stack.top2 = (lwork/4)*4; /* must be word addressable */ + Glu->stack.size = Glu->stack.top2; + Glu->stack.array = (void *) work; } } -void *cuser_malloc(int bytes, int which_end) +void *cuser_malloc(int bytes, int which_end, GlobalLU_t *Glu) { void *buf; if ( StackFull(bytes) ) return (NULL); if ( which_end == HEAD ) { - buf = (char*) stack.array + stack.top1; - stack.top1 += bytes; + buf = (char*) Glu->stack.array + Glu->stack.top1; + Glu->stack.top1 += bytes; } else { - stack.top2 -= bytes; - buf = (char*) stack.array + stack.top2; + Glu->stack.top2 -= bytes; + buf = (char*) Glu->stack.array + Glu->stack.top2; } - stack.used += bytes; + Glu->stack.used += bytes; return buf; } -void cuser_free(int bytes, int which_end) +void cuser_free(int bytes, int which_end, GlobalLU_t *Glu) { if ( which_end == HEAD ) { - stack.top1 -= bytes; + Glu->stack.top1 -= bytes; } else { - stack.top2 += bytes; + Glu->stack.top2 += bytes; } - stack.used -= bytes; + Glu->stack.used -= bytes; } -/* +/*! \brief + * + *
  * mem_usage consists of the following fields:
  *    - for_lu (float)
  *      The amount of space used in bytes for the L\U data structures.
  *    - total_needed (float)
  *      The amount of space needed in bytes to perform factorization.
- *    - expansions (int)
- *      Number of memory expansions during the LU factorization.
+ * 
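
A caller-side sketch of consuming the two fields named above.  The struct here
is a stand-in with exactly those float members, not the real mem_usage_t from
the SuperLU headers, and the numbers are invented.

#include <stdio.h>

typedef struct { float for_lu; float total_needed; } mem_usage_sketch_t;

static void report_memory(const mem_usage_sketch_t *mu)
{
    printf("L\\U factors   : %.2f MB\n", mu->for_lu / 1e6);
    printf("peak required : %.2f MB\n", mu->total_needed / 1e6);
}

int main(void)
{
    mem_usage_sketch_t mu = { 3.2e6f, 5.6e6f };   /* invented byte counts */
    report_memory(&mu);
    return 0;
}
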
*/ int cQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) { @@ -132,33 +111,75 @@ dword = sizeof(complex); /* For LU factors */ - mem_usage->for_lu = (float)( (4*n + 3) * iword + Lstore->nzval_colptr[n] * - dword + Lstore->rowind_colptr[n] * iword ); - mem_usage->for_lu += (float)( (n + 1) * iword + + mem_usage->for_lu = (float)( (4.0*n + 3.0) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0) * iword + Ustore->colptr[n] * (dword + iword) ); /* Working storage to support factorization */ mem_usage->total_needed = mem_usage->for_lu + - (float)( (2 * panel_size + 4 + NO_MARKER) * n * iword + - (panel_size + 1) * n * dword ); - - mem_usage->expansions = --no_expand; + (float)( (2.0 * panel_size + 4.0 + NO_MARKER) * n * iword + + (panel_size + 1.0) * n * dword ); return 0; } /* cQuerySpace */ -/* - * Allocate storage for the data structures common to all factor routines. - * For those unpredictable size, make a guess as FILL * nnz(A). + +/*! \brief + * + *
+ * mem_usage consists of the following fields:
+ *    - for_lu (float)
+ *      The amount of space used in bytes for the L\U data structures.
+ *    - total_needed (float)
+ *      The amount of space needed in bytes to perform factorization.
+ * 
+ */ +int ilu_cQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) +{ + SCformat *Lstore; + NCformat *Ustore; + register int n, panel_size = sp_ienv(1); + register float iword, dword; + + Lstore = L->Store; + Ustore = U->Store; + n = L->ncol; + iword = sizeof(int); + dword = sizeof(double); + + /* For LU factors */ + mem_usage->for_lu = (float)( (4.0f * n + 3.0f) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0f) * iword + + Ustore->colptr[n] * (dword + iword) ); + + /* Working storage to support factorization. + ILU needs 5*n more integers than LU */ + mem_usage->total_needed = mem_usage->for_lu + + (float)( (2.0f * panel_size + 9.0f + NO_MARKER) * n * iword + + (panel_size + 1.0f) * n * dword ); + + return 0; +} /* ilu_cQuerySpace */ + + +/*! \brief Allocate storage for the data structures common to all factor routines. + * + *
+ * For those of unpredictable size, estimate as fill_ratio * nnz(A).
  * Return value:
  *     If lwork = -1, return the estimated amount of space required, plus n;
  *     otherwise, return the amount of space actually allocated when
  *     memory allocation failure occurred.
+ * 
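
The lwork = -1 convention is the usual two-pass workspace idiom: query the
size, allocate, then call again with user-supplied memory.  The stand-in
routine and sizes below are hypothetical; the real cLUMemInit takes many more
arguments.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical routine following the convention above: lwork == -1 returns
 * the estimated number of bytes needed (plus n); otherwise it would use the
 * caller-supplied work[] of lwork bytes and return 0 on success. */
static int estimate_or_run(void *work, int lwork, int n)
{
    if (lwork == -1)
        return 1024 * n + n;      /* invented estimate, plus n */
    (void)work;                   /* real factorization work would go here */
    return 0;
}

int main(void)
{
    int n = 100;
    int bytes = estimate_or_run(NULL, -1, n);    /* 1. query the size      */
    void *work = malloc((size_t)bytes);          /* 2. allocate it         */
    if (!work) return 1;
    estimate_or_run(work, bytes, n);             /* 3. run with user space */
    printf("workspace: %d bytes\n", bytes);
    free(work);
    return 0;
}
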
*/ int cLUMemInit(fact_t fact, void *work, int lwork, int m, int n, int annz, - int panel_size, SuperMatrix *L, SuperMatrix *U, GlobalLU_t *Glu, - int **iwork, complex **dwork) + int panel_size, float fill_ratio, SuperMatrix *L, SuperMatrix *U, + GlobalLU_t *Glu, int **iwork, complex **dwork) { int info, iword, dword; SCformat *Lstore; @@ -170,32 +191,33 @@ complex *ucol; int *usub, *xusub; int nzlmax, nzumax, nzlumax; - int FILL = sp_ienv(6); - Glu->n = n; - no_expand = 0; iword = sizeof(int); dword = sizeof(complex); + Glu->n = n; + Glu->num_expansions = 0; - if ( !expanders ) - expanders = (ExpHeader*)SUPERLU_MALLOC(NO_MEMTYPE * sizeof(ExpHeader)); - if ( !expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); + if ( !Glu->expanders ) + Glu->expanders = (ExpHeader*)SUPERLU_MALLOC( NO_MEMTYPE * + sizeof(ExpHeader) ); + if ( !Glu->expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); if ( fact != SamePattern_SameRowPerm ) { /* Guess for L\U factors */ - nzumax = nzlumax = FILL * annz; - nzlmax = SUPERLU_MAX(1, FILL/4.) * annz; + nzumax = nzlumax = fill_ratio * annz; + nzlmax = SUPERLU_MAX(1, fill_ratio/4.) * annz; if ( lwork == -1 ) { return ( GluIntArray(n) * iword + TempSpace(m, panel_size) + (nzlmax+nzumax)*iword + (nzlumax+nzumax)*dword + n ); } else { - cSetupSpace(work, lwork, &Glu->MemModel); + cSetupSpace(work, lwork, Glu); } -#ifdef DEBUG - printf("cLUMemInit() called: annz %d, MemModel %d\n", - annz, Glu->MemModel); +#if ( PRNTlevel >= 1 ) + printf("cLUMemInit() called: fill_ratio %ld, nzlmax %ld, nzumax %ld\n", + fill_ratio, nzlmax, nzumax); + fflush(stdout); #endif /* Integer pointers for L\U factors */ @@ -206,11 +228,11 @@ xlusup = intMalloc(n+1); xusub = intMalloc(n+1); } else { - xsup = (int *)cuser_malloc((n+1) * iword, HEAD); - supno = (int *)cuser_malloc((n+1) * iword, HEAD); - xlsub = (int *)cuser_malloc((n+1) * iword, HEAD); - xlusup = (int *)cuser_malloc((n+1) * iword, HEAD); - xusub = (int *)cuser_malloc((n+1) * iword, HEAD); + xsup = (int *)cuser_malloc((n+1) * iword, HEAD, Glu); + supno = (int *)cuser_malloc((n+1) * iword, HEAD, Glu); + xlsub = (int *)cuser_malloc((n+1) * iword, HEAD, Glu); + xlusup = (int *)cuser_malloc((n+1) * iword, HEAD, Glu); + xusub = (int *)cuser_malloc((n+1) * iword, HEAD, Glu); } lusup = (complex *) cexpand( &nzlumax, LUSUP, 0, 0, Glu ); @@ -225,7 +247,8 @@ SUPERLU_FREE(lsub); SUPERLU_FREE(usub); } else { - cuser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, HEAD); + cuser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, + HEAD, Glu); } nzlumax /= 2; nzumax /= 2; @@ -234,6 +257,11 @@ printf("Not enough memory to perform factorization.\n"); return (cmemory_usage(nzlmax, nzumax, nzlumax, n) + n); } +#if ( PRNTlevel >= 1) + printf("cLUMemInit() reduce size: nzlmax %ld, nzumax %ld\n", + nzlmax, nzumax); + fflush(stdout); +#endif lusup = (complex *) cexpand( &nzlumax, LUSUP, 0, 0, Glu ); ucol = (complex *) cexpand( &nzumax, UCOL, 0, 0, Glu ); lsub = (int *) cexpand( &nzlmax, LSUB, 0, 0, Glu ); @@ -260,18 +288,18 @@ Glu->MemModel = SYSTEM; } else { Glu->MemModel = USER; - stack.top2 = (lwork/4)*4; /* must be word-addressable */ - stack.size = stack.top2; + Glu->stack.top2 = (lwork/4)*4; /* must be word-addressable */ + Glu->stack.size = Glu->stack.top2; } - lsub = expanders[LSUB].mem = Lstore->rowind; - lusup = expanders[LUSUP].mem = Lstore->nzval; - usub = expanders[USUB].mem = Ustore->rowind; - ucol = expanders[UCOL].mem = Ustore->nzval;; - expanders[LSUB].size = nzlmax; - expanders[LUSUP].size = nzlumax; - expanders[USUB].size = 
nzumax; - expanders[UCOL].size = nzumax; + lsub = Glu->expanders[LSUB].mem = Lstore->rowind; + lusup = Glu->expanders[LUSUP].mem = Lstore->nzval; + usub = Glu->expanders[USUB].mem = Ustore->rowind; + ucol = Glu->expanders[UCOL].mem = Ustore->nzval;; + Glu->expanders[LSUB].size = nzlmax; + Glu->expanders[LUSUP].size = nzlumax; + Glu->expanders[USUB].size = nzumax; + Glu->expanders[UCOL].size = nzumax; } Glu->xsup = xsup; @@ -287,20 +315,20 @@ Glu->nzumax = nzumax; Glu->nzlumax = nzlumax; - info = cLUWorkInit(m, n, panel_size, iwork, dwork, Glu->MemModel); + info = cLUWorkInit(m, n, panel_size, iwork, dwork, Glu); if ( info ) return ( info + cmemory_usage(nzlmax, nzumax, nzlumax, n) + n); - ++no_expand; + ++Glu->num_expansions; return 0; } /* cLUMemInit */ -/* Allocate known working storage. Returns 0 if success, otherwise +/*! \brief Allocate known working storage. Returns 0 if success, otherwise returns the number of bytes allocated so far when failure occurred. */ int cLUWorkInit(int m, int n, int panel_size, int **iworkptr, - complex **dworkptr, LU_space_t MemModel) + complex **dworkptr, GlobalLU_t *Glu) { int isize, dsize, extra; complex *old_ptr; @@ -311,19 +339,19 @@ dsize = (m * panel_size + NUM_TEMPV(m,panel_size,maxsuper,rowblk)) * sizeof(complex); - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *iworkptr = (int *) intCalloc(isize/sizeof(int)); else - *iworkptr = (int *) cuser_malloc(isize, TAIL); + *iworkptr = (int *) cuser_malloc(isize, TAIL, Glu); if ( ! *iworkptr ) { fprintf(stderr, "cLUWorkInit: malloc fails for local iworkptr[]\n"); return (isize + n); } - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *dworkptr = (complex *) SUPERLU_MALLOC(dsize); else { - *dworkptr = (complex *) cuser_malloc(dsize, TAIL); + *dworkptr = (complex *) cuser_malloc(dsize, TAIL, Glu); if ( NotDoubleAlign(*dworkptr) ) { old_ptr = *dworkptr; *dworkptr = (complex*) DoubleAlign(*dworkptr); @@ -332,8 +360,8 @@ #ifdef DEBUG printf("cLUWorkInit: not aligned, extra %d\n", extra); #endif - stack.top2 -= extra; - stack.used += extra; + Glu->stack.top2 -= extra; + Glu->stack.used += extra; } } if ( ! *dworkptr ) { @@ -345,8 +373,7 @@ } -/* - * Set up pointers for real working arrays. +/*! \brief Set up pointers for real working arrays. */ void cSetRWork(int m, int panel_size, complex *dworkptr, @@ -362,8 +389,7 @@ cfill (*tempv, NUM_TEMPV(m,panel_size,maxsuper,rowblk), zero); } -/* - * Free the working storage used by factor routines. +/*! \brief Free the working storage used by factor routines. */ void cLUWorkFree(int *iwork, complex *dwork, GlobalLU_t *Glu) { @@ -371,18 +397,21 @@ SUPERLU_FREE (iwork); SUPERLU_FREE (dwork); } else { - stack.used -= (stack.size - stack.top2); - stack.top2 = stack.size; + Glu->stack.used -= (Glu->stack.size - Glu->stack.top2); + Glu->stack.top2 = Glu->stack.size; /* cStackCompress(Glu); */ } - SUPERLU_FREE (expanders); - expanders = 0; + SUPERLU_FREE (Glu->expanders); + Glu->expanders = NULL; } -/* Expand the data structures for L and U during the factorization. +/*! \brief Expand the data structures for L and U during the factorization. + * + *
  * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
*/ int cLUMemXpand(int jcol, @@ -446,8 +475,7 @@ for (i = 0; i < howmany; i++) dnew[i] = dold[i]; } -/* - * Expand the existing storage to accommodate more fill-ins. +/*! \brief Expand the existing storage to accommodate more fill-ins. */ void *cexpand ( @@ -463,12 +491,14 @@ float alpha; void *new_mem, *old_mem; int new_len, tries, lword, extra, bytes_to_copy; + ExpHeader *expanders = Glu->expanders; /* Array of 4 types of memory */ alpha = EXPAND; - if ( no_expand == 0 || keep_prev ) /* First time allocate requested */ + if ( Glu->num_expansions == 0 || keep_prev ) { + /* First time allocate requested */ new_len = *prev_len; - else { + } else { new_len = alpha * *prev_len; } @@ -476,9 +506,8 @@ else lword = sizeof(complex); if ( Glu->MemModel == SYSTEM ) { - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ - if ( no_expand != 0 ) { + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); + if ( Glu->num_expansions != 0 ) { tries = 0; if ( keep_prev ) { if ( !new_mem ) return (NULL); @@ -487,8 +516,7 @@ if ( ++tries > 10 ) return (NULL); alpha = Reduce(alpha); new_len = alpha * *prev_len; - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); } } if ( type == LSUB || type == USUB ) { @@ -501,8 +529,8 @@ expanders[type].mem = (void *) new_mem; } else { /* MemModel == USER */ - if ( no_expand == 0 ) { - new_mem = cuser_malloc(new_len * lword, HEAD); + if ( Glu->num_expansions == 0 ) { + new_mem = cuser_malloc(new_len * lword, HEAD, Glu); if ( NotDoubleAlign(new_mem) && (type == LUSUP || type == UCOL) ) { old_mem = new_mem; @@ -511,12 +539,11 @@ #ifdef DEBUG printf("expand(): not aligned, extra %d\n", extra); #endif - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; } expanders[type].mem = (void *) new_mem; - } - else { + } else { tries = 0; extra = (new_len - *prev_len) * lword; if ( keep_prev ) { @@ -532,7 +559,7 @@ if ( type != USUB ) { new_mem = (void*)((char*)expanders[type + 1].mem + extra); - bytes_to_copy = (char*)stack.array + stack.top1 + bytes_to_copy = (char*)Glu->stack.array + Glu->stack.top1 - (char*)expanders[type + 1].mem; user_bcopy(expanders[type+1].mem, new_mem, bytes_to_copy); @@ -548,11 +575,11 @@ Glu->ucol = expanders[UCOL].mem = (void*)((char*)expanders[UCOL].mem + extra); } - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; if ( type == UCOL ) { - stack.top1 += extra; /* Add same amount for USUB */ - stack.used += extra; + Glu->stack.top1 += extra; /* Add same amount for USUB */ + Glu->stack.used += extra; } } /* if ... */ @@ -562,15 +589,14 @@ expanders[type].size = new_len; *prev_len = new_len; - if ( no_expand ) ++no_expand; + if ( Glu->num_expansions ) ++Glu->num_expansions; return (void *) expanders[type].mem; } /* cexpand */ -/* - * Compress the work[] array to remove fragmentation. +/*! \brief Compress the work[] array to remove fragmentation. 
*/ void cStackCompress(GlobalLU_t *Glu) @@ -610,9 +636,9 @@ usub = ito; last = (char*)usub + xusub[ndim] * iword; - fragment = (char*) (((char*)stack.array + stack.top1) - last); - stack.used -= (long int) fragment; - stack.top1 -= (long int) fragment; + fragment = (char*) (((char*)Glu->stack.array + Glu->stack.top1) - last); + Glu->stack.used -= (long int) fragment; + Glu->stack.top1 -= (long int) fragment; Glu->ucol = ucol; Glu->lsub = lsub; @@ -626,8 +652,7 @@ } -/* - * Allocate storage for original matrix A +/*! \brief Allocate storage for original matrix A */ void callocateA(int n, int nnz, complex **a, int **asub, int **xa) @@ -641,7 +666,7 @@ complex *complexMalloc(int n) { complex *buf; - buf = (complex *) SUPERLU_MALLOC(n * sizeof(complex)); + buf = (complex *) SUPERLU_MALLOC((size_t)n * sizeof(complex)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in complexMalloc()\n"); } @@ -653,7 +678,7 @@ complex *buf; register int i; complex zero = {0.0, 0.0}; - buf = (complex *) SUPERLU_MALLOC(n * sizeof(complex)); + buf = (complex *) SUPERLU_MALLOC((size_t)n * sizeof(complex)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in complexCalloc()\n"); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/Cnames.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/Cnames.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/Cnames.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/Cnames.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,278 +0,0 @@ -/* - * -- SuperLU routine (version 2.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * November 1, 1997 - * - */ -#ifndef __SUPERLU_CNAMES /* allow multiple inclusions */ -#define __SUPERLU_CNAMES - -/* - * These macros define how C routines will be called. ADD_ assumes that - * they will be called by fortran, which expects C routines to have an - * underscore postfixed to the name (Suns, and the Intel expect this). - * NOCHANGE indicates that fortran will be calling, and that it expects - * the name called by fortran to be identical to that compiled by the C - * (RS6K's do this). UPCASE says it expects C routines called by fortran - * to be in all upcase (CRAY wants this). - */ - -#define ADD_ 0 -#define ADD__ 1 -#define NOCHANGE 2 -#define UPCASE 3 -#define C_CALL 4 - -#ifdef UpCase -#define F77_CALL_C UPCASE -#endif - -#ifdef NoChange -#define F77_CALL_C NOCHANGE -#endif - -#ifdef Add_ -#define F77_CALL_C ADD_ -#endif - -#ifdef Add__ -#define F77_CALL_C ADD__ -#endif - -/* Default */ -#ifndef F77_CALL_C -#define F77_CALL_C ADD_ -#endif - - -#if (F77_CALL_C == ADD_) -/* - * These defines set up the naming scheme required to have a fortran 77 - * routine call a C routine - * No redefinition necessary to have following Fortran to C interface: - * FORTRAN CALL C DECLARATION - * call dgemm(...) void dgemm_(...) - * - * This is the default. - */ - -#endif - -#if (F77_CALL_C == ADD__) -/* - * These defines set up the naming scheme required to have a fortran 77 - * routine call a C routine - * for following Fortran to C interface: - * FORTRAN CALL C DECLARATION - * call dgemm(...) void dgemm__(...) 
- */ -#define sasum_ sasum__ -#define isamax_ isamax__ -#define scopy_ scopy__ -#define sscal_ sscal__ -#define sger_ sger__ -#define snrm2_ snrm2__ -#define ssymv_ ssymv__ -#define sdot_ sdot__ -#define saxpy_ saxpy__ -#define ssyr2_ ssyr2__ -#define srot_ srot__ -#define sgemv_ sgemv__ -#define strsv_ strsv__ -#define sgemm_ sgemm__ -#define strsm_ strsm__ - -#define dasum_ dasum__ -#define idamax_ idamax__ -#define dcopy_ dcopy__ -#define dscal_ dscal__ -#define dger_ dger__ -#define dnrm2_ dnrm2__ -#define dsymv_ dsymv__ -#define ddot_ ddot__ -#define daxpy_ daxpy__ -#define dsyr2_ dsyr2__ -#define drot_ drot__ -#define dgemv_ dgemv__ -#define dtrsv_ dtrsv__ -#define dgemm_ dgemm__ -#define dtrsm_ dtrsm__ - -#define scasum_ scasum__ -#define icamax_ icamax__ -#define ccopy_ ccopy__ -#define cscal_ cscal__ -#define scnrm2_ scnrm2__ -#define caxpy_ caxpy__ -#define cgemv_ cgemv__ -#define ctrsv_ ctrsv__ -#define cgemm_ cgemm__ -#define ctrsm_ ctrsm__ -#define cgerc_ cgerc__ -#define chemv_ chemv__ -#define cher2_ cher2__ - -#define dzasum_ dzasum__ -#define izamax_ izamax__ -#define zcopy_ zcopy__ -#define zscal_ zscal__ -#define dznrm2_ dznrm2__ -#define zaxpy_ zaxpy__ -#define zgemv_ zgemv__ -#define ztrsv_ ztrsv__ -#define zgemm_ zgemm__ -#define ztrsm_ ztrsm__ -#define zgerc_ zgerc__ -#define zhemv_ zhemv__ -#define zher2_ zher2__ - -#define c_bridge_dgssv_ c_bridge_dgssv__ -#define c_fortran_dgssv_ c_fortran_dgssv__ -#endif - -#if (F77_CALL_C == UPCASE) -/* - * These defines set up the naming scheme required to have a fortran 77 - * routine call a C routine - * following Fortran to C interface: - * FORTRAN CALL C DECLARATION - * call dgemm(...) void DGEMM(...) - */ -#define sasum_ SASUM -#define isamax_ ISAMAX -#define scopy_ SCOPY -#define sscal_ SSCAL -#define sger_ SGER -#define snrm2_ SNRM2 -#define ssymv_ SSYMV -#define sdot_ SDOT -#define saxpy_ SAXPY -#define ssyr2_ SSYR2 -#define srot_ SROT -#define sgemv_ SGEMV -#define strsv_ STRSV -#define sgemm_ SGEMM -#define strsm_ STRSM - -#define dasum_ SASUM -#define idamax_ ISAMAX -#define dcopy_ SCOPY -#define dscal_ SSCAL -#define dger_ SGER -#define dnrm2_ SNRM2 -#define dsymv_ SSYMV -#define ddot_ SDOT -#define daxpy_ SAXPY -#define dsyr2_ SSYR2 -#define drot_ SROT -#define dgemv_ SGEMV -#define dtrsv_ STRSV -#define dgemm_ SGEMM -#define dtrsm_ STRSM - -#define scasum_ SCASUM -#define icamax_ ICAMAX -#define ccopy_ CCOPY -#define cscal_ CSCAL -#define scnrm2_ SCNRM2 -#define caxpy_ CAXPY -#define cgemv_ CGEMV -#define ctrsv_ CTRSV -#define cgemm_ CGEMM -#define ctrsm_ CTRSM -#define cgerc_ CGERC -#define chemv_ CHEMV -#define cher2_ CHER2 - -#define dzasum_ SCASUM -#define izamax_ ICAMAX -#define zcopy_ CCOPY -#define zscal_ CSCAL -#define dznrm2_ SCNRM2 -#define zaxpy_ CAXPY -#define zgemv_ CGEMV -#define ztrsv_ CTRSV -#define zgemm_ CGEMM -#define ztrsm_ CTRSM -#define zgerc_ CGERC -#define zhemv_ CHEMV -#define zher2_ CHER2 - -#define c_bridge_dgssv_ C_BRIDGE_DGSSV -#define c_fortran_dgssv_ C_FORTRAN_DGSSV -#endif - -#if (F77_CALL_C == NOCHANGE) -/* - * These defines set up the naming scheme required to have a fortran 77 - * routine call a C routine - * for following Fortran to C interface: - * FORTRAN CALL C DECLARATION - * call dgemm(...) void dgemm(...) 
- */ -#define sasum_ sasum -#define isamax_ isamax -#define scopy_ scopy -#define sscal_ sscal -#define sger_ sger -#define snrm2_ snrm2 -#define ssymv_ ssymv -#define sdot_ sdot -#define saxpy_ saxpy -#define ssyr2_ ssyr2 -#define srot_ srot -#define sgemv_ sgemv -#define strsv_ strsv -#define sgemm_ sgemm -#define strsm_ strsm - -#define dasum_ dasum -#define idamax_ idamax -#define dcopy_ dcopy -#define dscal_ dscal -#define dger_ dger -#define dnrm2_ dnrm2 -#define dsymv_ dsymv -#define ddot_ ddot -#define daxpy_ daxpy -#define dsyr2_ dsyr2 -#define drot_ drot -#define dgemv_ dgemv -#define dtrsv_ dtrsv -#define dgemm_ dgemm -#define dtrsm_ dtrsm - -#define scasum_ scasum -#define icamax_ icamax -#define ccopy_ ccopy -#define cscal_ cscal -#define scnrm2_ scnrm2 -#define caxpy_ caxpy -#define cgemv_ cgemv -#define ctrsv_ ctrsv -#define cgemm_ cgemm -#define ctrsm_ ctrsm -#define cgerc_ cgerc -#define chemv_ chemv -#define cher2_ cher2 - -#define dzasum_ dzasum -#define izamax_ izamax -#define zcopy_ zcopy -#define zscal_ zscal -#define dznrm2_ dznrm2 -#define zaxpy_ zaxpy -#define zgemv_ zgemv -#define ztrsv_ ztrsv -#define zgemm_ zgemm -#define ztrsm_ ztrsm -#define zgerc_ zgerc -#define zhemv_ zhemv -#define zher2_ zher2 - -#define c_bridge_dgssv_ c_bridge_dgssv -#define c_fortran_dgssv_ c_fortran_dgssv -#endif - -#endif /* __SUPERLU_CNAMES */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,9 +1,19 @@ -/* ========================================================================== */ -/* === colamd - a sparse matrix column ordering algorithm =================== */ -/* ========================================================================== */ +/*! @file colamd.c + *\brief A sparse matrix column ordering algorithm + +
+    ========================================================================== 
+    === colamd/symamd - a sparse matrix column ordering algorithm ============ 
+    ========================================================================== 
+
+
+    colamd:  an approximate minimum degree column ordering algorithm,
+    	for LU factorization of symmetric or unsymmetric matrices,
+	QR factorization, least squares, interior point methods for
+	linear programming problems, and other related problems.
 
-/*
-    colamd:  An approximate minimum degree column ordering algorithm.
+    symamd:  an approximate minimum degree ordering algorithm for Cholesky
+    	factorization of symmetric matrices.
 
     Purpose:
 
@@ -14,12 +24,16 @@
 	factorization, and P is computed during numerical factorization via
 	conventional partial pivoting with row interchanges.  Colamd is the
 	column ordering method used in SuperLU, part of the ScaLAPACK library.
-	It is also available as user-contributed software for Matlab 5.2,
+	It is also available as a built-in function in MATLAB Version 6,
 	available from MathWorks, Inc. (http://www.mathworks.com).  This
-	routine can be used in place of COLMMD in Matlab.  By default, the \
-	and / operators in Matlab perform a column ordering (using COLMMD)
-	prior to LU factorization using sparse partial pivoting, in the
-	built-in Matlab LU(A) routine.
+	routine can be used in place of colmmd in MATLAB.
+
+    	Symamd computes a permutation P of a symmetric matrix A such that the
+	Cholesky factorization of PAP' has less fill-in and requires fewer
+	floating point operations than A.  Symamd constructs a matrix M such
+	that M'M has the same nonzero pattern of A, and then orders the columns
+	of M using colmmd.  The column ordering of M is then returned as the
+	row and column ordering P of A. 
 
     Authors:
 
@@ -30,112 +44,124 @@
 
     Date:
 
-	August 3, 1998.  Version 1.0.
+	September 8, 2003.  Version 2.3.
 
     Acknowledgements:
 
 	This work was supported by the National Science Foundation, under
 	grants DMS-9504974 and DMS-9803599.
 
-    Notice:
+    Copyright and License:
 
-	Copyright (c) 1998 by the University of Florida.  All Rights Reserved.
+	Copyright (c) 1998-2003 by the University of Florida.
+	All Rights Reserved.
 
 	THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
 	EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
 
-	Permission is hereby granted to use or copy this program for any
-	purpose, provided the above notices are retained on all copies.
-	User documentation of any code that uses this code must cite the
-	Authors, the Copyright, and "Used by permission."  If this code is
-	accessible from within Matlab, then typing "help colamd" or "colamd"
-	(with no arguments) must cite the Authors.  Permission to modify the
-	code and to distribute modified code is granted, provided the above
-	notices are retained, and a notice that the code was modified is
-	included with the above copyright notice.  You must also retain the
-	Availability information below, of the original version.
-
-	This software is provided free of charge.
+	Permission is hereby granted to use, copy, modify, and/or distribute
+	this program, provided that the Copyright, this License, and the
+	Availability of the original version is retained on all copies and made
+	accessible to the end-user of any code or package that includes COLAMD
+	or any modified version of COLAMD. 
 
     Availability:
 
-	This file is located at
+	The colamd/symamd library is available at
 
-		http://www.cise.ufl.edu/~davis/colamd/colamd.c
+	    http://www.cise.ufl.edu/research/sparse/colamd/
 
-	The colamd.h file is required, located in the same directory.
-	The colamdmex.c file provides a Matlab interface for colamd.
-	The symamdmex.c file provides a Matlab interface for symamd, which is
-	a symmetric ordering based on this code, colamd.c.  All codes are
-	purely ANSI C compliant (they use no Unix-specific routines, include
-	files, etc.).
-*/
+	This is the http://www.cise.ufl.edu/research/sparse/colamd/colamd.c
+	file.  It requires the colamd.h file.  It is required by the colamdmex.c
+	and symamdmex.c files, for the MATLAB interface to colamd and symamd.
 
-/* ========================================================================== */
-/* === Description of user-callable routines ================================ */
-/* ========================================================================== */
+    See the ChangeLog file for changes since Version 1.0.
+
+    ========================================================================== 
+    === Description of user-callable routines ================================ 
+    ========================================================================== 
 
-/*
-    Each user-callable routine (declared as PUBLIC) is briefly described below.
-    Refer to the comments preceding each routine for more details.
 
     ----------------------------------------------------------------------------
     colamd_recommended:
     ----------------------------------------------------------------------------
 
-	Usage:
+	C syntax:
+
+	    #include "colamd.h"
+	    int colamd_recommended (int nnz, int n_row, int n_col) ;
 
-	    Alen = colamd_recommended (nnz, n_row, n_col) ;
+	    or as a C macro
+
+	    #include "colamd.h"
+	    Alen = COLAMD_RECOMMENDED (int nnz, int n_row, int n_col) ;
 
 	Purpose:
 
 	    Returns recommended value of Alen for use by colamd.  Returns -1
-	    if any input argument is negative.
+	    if any input argument is negative.  The use of this routine
+	    or macro is optional.  Note that the macro uses its arguments
+	    more than once, so be careful of side effects if you pass
+	    expressions as arguments to COLAMD_RECOMMENDED.  Not needed for
+	    symamd, which dynamically allocates its own memory.
 
-	Arguments:
+	Arguments (all input arguments):
 
 	    int nnz ;		Number of nonzeros in the matrix A.  This must
 				be the same value as p [n_col] in the call to
 				colamd - otherwise you will get a wrong value
 				of the recommended memory to use.
+
 	    int n_row ;		Number of rows in the matrix A.
+
 	    int n_col ;		Number of columns in the matrix A.
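
	As a quick illustration of the two forms, the fragment below sizes a
	working array both ways for an 11-entry, 5-by-4 matrix; it assumes
	only the prototype and macro quoted above from colamd.h.

#include <stdio.h>
#include "colamd.h"

int main(void)
{
    int nnz = 11, n_row = 5, n_col = 4;

    /* Function form: evaluated at run time. */
    int alen_fn = colamd_recommended(nnz, n_row, n_col);

    /* Macro form: note that it evaluates its arguments more than once. */
    int alen_macro = COLAMD_RECOMMENDED(nnz, n_row, n_col);

    if (alen_fn < 0) {               /* a negative argument was passed */
        fprintf(stderr, "bad arguments to colamd_recommended\n");
        return 1;
    }
    printf("recommended Alen: %d (function), %d (macro)\n",
           alen_fn, alen_macro);
    return 0;
}
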
 
     ----------------------------------------------------------------------------
     colamd_set_defaults:
     ----------------------------------------------------------------------------
 
-	Usage:
+	C syntax:
 
-	    colamd_set_defaults (knobs) ;
+	    #include "colamd.h"
+	    colamd_set_defaults (double knobs [COLAMD_KNOBS]) ;
 
 	Purpose:
 
-	    Sets the default parameters.
+	    Sets the default parameters.  The use of this routine is optional.
 
 	Arguments:
 
 	    double knobs [COLAMD_KNOBS] ;	Output only.
 
-		Rows with more than (knobs [COLAMD_DENSE_ROW] * n_col) entries
-		are removed prior to ordering.  Columns with more than
-		(knobs [COLAMD_DENSE_COL] * n_row) entries are removed
-		prior to ordering, and placed last in the output column
-		ordering.  Default values of these two knobs are both 0.5.
-		Currently, only knobs [0] and knobs [1] are used, but future
-		versions may use more knobs.  If so, they will be properly set
-		to their defaults by the future version of colamd_set_defaults,
-		so that the code that calls colamd will not need to change,
-		assuming that you either use colamd_set_defaults, or pass a
-		(double *) NULL pointer as the knobs array to colamd.
+		Colamd: rows with more than (knobs [COLAMD_DENSE_ROW] * n_col)
+		entries are removed prior to ordering.  Columns with more than
+		(knobs [COLAMD_DENSE_COL] * n_row) entries are removed prior to
+		ordering, and placed last in the output column ordering. 
+
+		Symamd: uses only knobs [COLAMD_DENSE_ROW], which is knobs [0].
+		Rows and columns with more than (knobs [COLAMD_DENSE_ROW] * n)
+		entries are removed prior to ordering, and placed last in the
+		output ordering.
+
+		COLAMD_DENSE_ROW and COLAMD_DENSE_COL are defined as 0 and 1,
+		respectively, in colamd.h.  Default values of these two knobs
+		are both 0.5.  Currently, only knobs [0] and knobs [1] are
+		used, but future versions may use more knobs.  If so, they will
+		be properly set to their defaults by the future version of
+		colamd_set_defaults, so that the code that calls colamd will
+		not need to change, assuming that you either use
+		colamd_set_defaults, or pass a (double *) NULL pointer as the
+		knobs array to colamd or symamd.
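
	A short sketch of the intended call sequence: take the defaults, then
	override one knob before ordering.  The 0.2 value is arbitrary and
	chosen only for illustration; COLAMD_KNOBS, COLAMD_DENSE_ROW and
	COLAMD_DENSE_COL are the colamd.h definitions referred to above.

#include <stdio.h>
#include "colamd.h"

int main(void)
{
    double knobs[COLAMD_KNOBS];

    colamd_set_defaults(knobs);        /* both thresholds start at 0.5      */
    knobs[COLAMD_DENSE_ROW] = 0.2;     /* treat rows > 0.2 * n_col as dense */
    /* knobs[COLAMD_DENSE_COL] keeps its default */

    printf("dense row knob %g, dense col knob %g\n",
           knobs[COLAMD_DENSE_ROW], knobs[COLAMD_DENSE_COL]);
    /* the knobs array would now be passed to colamd or symamd */
    return 0;
}
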
 
     ----------------------------------------------------------------------------
     colamd:
     ----------------------------------------------------------------------------
 
-	Usage:
+	C syntax:
 
-	    colamd (n_row, n_col, Alen, A, p, knobs) ;
+	    #include "colamd.h"
+	    int colamd (int n_row, int n_col, int Alen, int *A, int *p,
+	    	double knobs [COLAMD_KNOBS], int stats [COLAMD_STATS]) ;
 
 	Purpose:
 
@@ -143,34 +169,44 @@
 	    (AQ)'AQ=LL' have less fill-in and require fewer floating point
 	    operations than factorizing the unpermuted matrix A or A'A,
 	    respectively.
+	    
+	Returns:
+
+	    TRUE (1) if successful, FALSE (0) otherwise.
 
 	Arguments:
 
-	    int n_row ;
+	    int n_row ;		Input argument.
 
 		Number of rows in the matrix A.
 		Restriction:  n_row >= 0.
 		Colamd returns FALSE if n_row is negative.
 
-	    int n_col ;
+	    int n_col ;		Input argument.
 
 		Number of columns in the matrix A.
 		Restriction:  n_col >= 0.
 		Colamd returns FALSE if n_col is negative.
 
-	    int Alen ;
+	    int Alen ;		Input argument.
 
 		Restriction (see note):
-		Alen >= 2*nnz + 6*(n_col+1) + 4*(n_row+1) + n_col + COLAMD_STATS
+		Alen >= 2*nnz + 6*(n_col+1) + 4*(n_row+1) + n_col
 		Colamd returns FALSE if these conditions are not met.
 
 		Note:  this restriction makes a modest assumption regarding
-		the size of the two typedef'd structures, below.  We do,
-		however, guarantee that
-		Alen >= colamd_recommended (nnz, n_row, n_col)
+		the size of the two typedef'd structures in colamd.h.
+		We do, however, guarantee that
+
+			Alen >= colamd_recommended (nnz, n_row, n_col)
+		
+		or equivalently as a C preprocessor macro: 
+
+			Alen >= COLAMD_RECOMMENDED (nnz, n_row, n_col)
+
 		will be sufficient.
 
-	    int A [Alen] ;	Input argument, stats on output.
+	    int A [Alen] ;	Input argument, undefined on output.
 
 		A is an integer array of size Alen.  Alen must be at least as
 		large as the bare minimum value given above, but this is very
@@ -191,21 +227,8 @@
 		n_row-1, and columns are in the range 0 to n_col-1.  Colamd
 		returns FALSE if any row index is out of range.
 
-		The contents of A are modified during ordering, and are thus
-		undefined on output with the exception of a few statistics
-		about the ordering (A [0..COLAMD_STATS-1]):
-		A [0]:  number of dense or empty rows ignored.
-		A [1]:  number of dense or empty columns ignored (and ordered
-			last in the output permutation p)
-		A [2]:  number of garbage collections performed.
-		A [3]:  0, if all row indices in each column were in sorted
-			  order, and no duplicates were present.
-			1, otherwise (in which case colamd had to do more work)
-		Note that a row can become "empty" if it contains only
-		"dense" and/or "empty" columns, and similarly a column can
-		become "empty" if it only contains "dense" and/or "empty" rows.
-		Future versions may return more statistics in A, but the usage
-		of these 4 entries in A will remain unchanged.
+		The contents of A are modified during ordering, and are
+		undefined on output.
 
 	    int p [n_col+1] ;	Both input and output argument.
 
@@ -227,25 +250,334 @@
 		If colamd returns FALSE, then no permutation is returned, and
 		p is undefined on output.
 
-	    double knobs [COLAMD_KNOBS] ;	Input only.
+	    double knobs [COLAMD_KNOBS] ;	Input argument.
 
-		See colamd_set_defaults for a description.  If the knobs array
-		is not present (that is, if a (double *) NULL pointer is passed
-		in its place), then the default values of the parameters are
-		used instead.
+		See colamd_set_defaults for a description.
 
-*/
+	    int stats [COLAMD_STATS] ;		Output argument.
 
+		Statistics on the ordering, and error status.
+		See colamd.h for related definitions.
+		Colamd returns FALSE if stats is not present.
 
-/* ========================================================================== */
-/* === Include files ======================================================== */
-/* ========================================================================== */
+		stats [0]:  number of dense or empty rows ignored.
 
-/* limits.h:  the largest positive integer (INT_MAX) */
-#include <limits.h>
+		stats [1]:  number of dense or empty columns ignored (and
+				ordered last in the output permutation p)
+				Note that a row can become "empty" if it
+				contains only "dense" and/or "empty" columns,
+				and similarly a column can become "empty" if it
+				only contains "dense" and/or "empty" rows.
 
-/* colamd.h:  knob array size, stats output size, and global prototypes */
-#include "colamd.h"
+		stats [2]:  number of garbage collections performed.
+				This can be excessively high if Alen is close
+				to the minimum required value.
+
+		stats [3]:  status code.  < 0 is an error code.
+			    > 1 is a warning or notice.
+
+			0	OK.  Each column of the input matrix contained
+				row indices in increasing order, with no
+				duplicates.
+
+			1	OK, but columns of input matrix were jumbled
+				(unsorted columns or duplicate entries).  Colamd
+				had to do some extra work to sort the matrix
+				first and remove duplicate entries, but it
+				still was able to return a valid permutation
+				(return value of colamd was TRUE).
+
+					stats [4]: highest numbered column that
+						is unsorted or has duplicate
+						entries.
+					stats [5]: last seen duplicate or
+						unsorted row index.
+					stats [6]: number of duplicate or
+						unsorted row indices.
+
+			-1	A is a null pointer
+
+			-2	p is a null pointer
+
+			-3 	n_row is negative
+
+					stats [4]: n_row
+
+			-4	n_col is negative
+
+					stats [4]: n_col
+
+			-5	number of nonzeros in matrix is negative
+
+					stats [4]: number of nonzeros, p [n_col]
+
+			-6	p [0] is nonzero
+
+					stats [4]: p [0]
+
+			-7	A is too small
+
+					stats [4]: required size
+					stats [5]: actual size (Alen)
+
+			-8	a column has a negative number of entries
+
+					stats [4]: column with < 0 entries
+					stats [5]: number of entries in col
+
+			-9	a row index is out of bounds
+
+					stats [4]: column with bad row index
+					stats [5]: bad row index
+					stats [6]: n_row, # of rows of matrx
+
+			-10	(unused; see symamd.c)
+
+			-999	(unused; see symamd.c)
+
+		Future versions may return more statistics in the stats array.
+
+	Example:
+	
+	    See http://www.cise.ufl.edu/research/sparse/colamd/example.c
+	    for a complete example.
+
+	    To order the columns of a 5-by-4 matrix with 11 nonzero entries in
+	    the following nonzero pattern
+
+	    	x 0 x 0
+		x 0 x x
+		0 x x 0
+		0 0 x x
+		x x 0 0
+
+	    with default knobs and no output statistics, do the following:
+
+		#include "colamd.h"
+		#define ALEN COLAMD_RECOMMENDED (11, 5, 4)
+		int A [ALEN] = {1, 2, 5, 3, 5, 1, 2, 3, 4, 2, 4} ;
+		int p [ ] = {0, 3, 5, 9, 11} ;
+		int stats [COLAMD_STATS] ;
+		colamd (5, 4, ALEN, A, p, (double *) NULL, stats) ;
+
+	    The permutation is returned in the array p, and A is destroyed.
+
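
	Building on that example, the fragment below shows how a caller might
	check the return value and the stats array after the call, falling
	back to colamd_report on failure.  It uses a small made-up pattern
	with 0-based row indices, as required by the Arguments section above;
	it is a usage sketch, not part of the library.

#include <stdio.h>
#include <stdlib.h>
#include "colamd.h"

int main(void)
{
    /* 3-by-2 pattern: column 0 has rows 0 and 2, column 1 has rows 1 and 2. */
    int n_row = 3, n_col = 2;
    int row_ind[] = {0, 2, 1, 2};
    int p[]       = {0, 2, 4};
    int stats[COLAMD_STATS];

    int alen = colamd_recommended(p[n_col], n_row, n_col);
    int *A   = malloc((size_t)alen * sizeof(int));
    if (!A) return 1;
    for (int k = 0; k < p[n_col]; ++k) A[k] = row_ind[k];

    if (!colamd(n_row, n_col, alen, A, p, (double *)NULL, stats)) {
        colamd_report(stats);               /* explains the code in stats [3] */
        free(A);
        return 1;
    }
    if (stats[3] == 1)                      /* ordered, but input was jumbled */
        fprintf(stderr, "note: column %d had unsorted/duplicate entries\n",
                stats[4]);
    printf("ignored %d dense/empty rows, %d dense/empty columns\n",
           stats[0], stats[1]);
    printf("column order:");
    for (int j = 0; j < n_col; ++j) printf(" %d", p[j]);
    printf("\n");
    free(A);
    return 0;
}
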
+    ----------------------------------------------------------------------------
+    symamd:
+    ----------------------------------------------------------------------------
+
+	C syntax:
+
+	    #include "colamd.h"
+	    int symamd (int n, int *A, int *p, int *perm,
+	    	double knobs [COLAMD_KNOBS], int stats [COLAMD_STATS],
+		void (*allocate) (size_t, size_t), void (*release) (void *)) ;
+
+	Purpose:
+
+    	    The symamd routine computes an ordering P of a symmetric sparse
+	    matrix A such that the Cholesky factorization PAP' = LL' remains
+	    sparse.  It is based on a column ordering of a matrix M constructed
+	    so that the nonzero pattern of M'M is the same as A.  The matrix A
+	    is assumed to be symmetric; only the strictly lower triangular part
+	    is accessed.  You must pass your selected memory allocator (usually
+	    calloc/free or mxCalloc/mxFree) to symamd, for it to allocate
+	    memory for the temporary matrix M.
+
+	Returns:
+
+	    TRUE (1) if successful, FALSE (0) otherwise.
+
+	Arguments:
+
+	    int n ;		Input argument.
+
+	    	Number of rows and columns in the symmetrix matrix A.
+		Restriction:  n >= 0.
+		Symamd returns FALSE if n is negative.
+
+	    int A [nnz] ;	Input argument.
+
+	    	A is an integer array of size nnz, where nnz = p [n].
+		
+		The row indices of the entries in column c of the matrix are
+		held in A [(p [c]) ... (p [c+1]-1)].  The row indices in a
+		given column c need not be in ascending order, and duplicate
+		row indices may be present.  However, symamd will run faster
+		if the columns are in sorted order with no duplicate entries. 
+
+		The matrix is 0-based.  That is, rows are in the range 0 to
+		n-1, and columns are in the range 0 to n-1.  Symamd
+		returns FALSE if any row index is out of range.
+
+		The contents of A are not modified.
+
+	    int p [n+1] ;   	Input argument.
+
+		p is an integer array of size n+1.  On input, it holds the
+		"pointers" for the column form of the matrix A.  Column c of
+		the matrix A is held in A [(p [c]) ... (p [c+1]-1)].  The first
+		entry, p [0], must be zero, and p [c] <= p [c+1] must hold
+		for all c in the range 0 to n-1.  The value p [n] is
+		thus the total number of entries in the pattern of the matrix A.
+		Symamd returns FALSE if these conditions are not met.
+
+		The contents of p are not modified.
+
+	    int perm [n+1] ;   	Output argument.
+
+		On output, if symamd returns TRUE, the array perm holds the
+		permutation P, where perm [0] is the first index in the new
+		ordering, and perm [n-1] is the last.  That is, perm [k] = j
+		means that row and column j of A is the kth column in PAP',
+		where k is in the range 0 to n-1 (perm [0] = j means
+		that row and column j of A are the first row and column in
+		PAP').  The array is used as a workspace during the ordering,
+		which is why it must be of length n+1, not just n.
+
+	    double knobs [COLAMD_KNOBS] ;	Input argument.
+
+		See colamd_set_defaults for a description.
+
+	    int stats [COLAMD_STATS] ;		Output argument.
+
+		Statistics on the ordering, and error status.
+		See colamd.h for related definitions.
+		Symamd returns FALSE if stats is not present.
+
+		stats [0]:  number of dense or empty row and columns ignored
+				(and ordered last in the output permutation 
+				perm).  Note that a row/column can become
+				"empty" if it contains only "dense" and/or
+				"empty" columns/rows.
+
+		stats [1]:  (same as stats [0])
+
+		stats [2]:  number of garbage collections performed.
+
+		stats [3]:  status code.  < 0 is an error code.
+			    > 1 is a warning or notice.
+
+			0	OK.  Each column of the input matrix contained
+				row indices in increasing order, with no
+				duplicates.
+
+			1	OK, but columns of input matrix were jumbled
+				(unsorted columns or duplicate entries).  Symamd
+				had to do some extra work to sort the matrix
+				first and remove duplicate entries, but it
+				still was able to return a valid permutation
+				(return value of symamd was TRUE).
+
+					stats [4]: highest numbered column that
+						is unsorted or has duplicate
+						entries.
+					stats [5]: last seen duplicate or
+						unsorted row index.
+					stats [6]: number of duplicate or
+						unsorted row indices.
+
+			-1	A is a null pointer
+
+			-2	p is a null pointer
+
+			-3	(unused, see colamd.c)
+
+			-4 	n is negative
+
+					stats [4]: n
+
+			-5	number of nonzeros in matrix is negative
+
+					stats [4]: # of nonzeros (p [n]).
+
+			-6	p [0] is nonzero
+
+					stats [4]: p [0]
+
+			-7	(unused)
+
+			-8	a column has a negative number of entries
+
+					stats [4]: column with < 0 entries
+					stats [5]: number of entries in col
+
+			-9	a row index is out of bounds
+
+					stats [4]: column with bad row index
+					stats [5]: bad row index
+					stats [6]: n_row, # of rows of matrix
+
+			-10	out of memory (unable to allocate temporary
+				workspace for M or count arrays using the
+				"allocate" routine passed into symamd).
+
+			-999	internal error.  colamd failed to order the
+				matrix M, when it should have succeeded.  This
+				indicates a bug.  If this (and *only* this)
+				error code occurs, please contact the authors.
+				Don't contact the authors if you get any other
+				error code.
+
+		Future versions may return more statistics in the stats array.
+
+	    void * (*allocate) (size_t, size_t)
+
+	    	A pointer to a function providing memory allocation.  The
+		allocated memory must be returned initialized to zero.  For a
+		C application, this argument should normally be a pointer to
+		calloc.  For a MATLAB mexFunction, the routine mxCalloc is
+		passed instead.
+
+	    void (*release) (void *)
+
+	    	A pointer to a function that frees memory allocated by the
+		memory allocation routine above.  For a C application, this
+		argument should normally be a pointer to free.  For a MATLAB
+		mexFunction, the routine mxFree is passed instead.
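+
+	    For illustration only, a minimal sketch of calling symamd from
+	    ANSI C with these two pointers set to calloc and free.  The
+	    wrapper name order_symmetric is just an illustrative name; A and
+	    p are assumed to already hold the pattern of the n-by-n matrix in
+	    the column form described above, and perm has length n+1:
+
+		#include <stdlib.h>
+		#include "colamd.h"
+
+		int order_symmetric (int n, int A [], int p [], int perm [])
+		{
+		    int stats [COLAMD_STATS] ;
+		    int ok ;
+		    ok = symamd (n, A, p, perm, (double *) NULL, stats,
+			calloc, free) ;
+		    symamd_report (stats) ;
+		    return (ok) ;
+		}
+
+	    Passing a NULL knobs argument selects the default parameters, and
+	    symamd_report (described below) prints the resulting status and
+	    statistics.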
+
+
+    ----------------------------------------------------------------------------
+    colamd_report:
+    ----------------------------------------------------------------------------
+
+	C syntax:
+
+	    #include "colamd.h"
+	    colamd_report (int stats [COLAMD_STATS]) ;
+
+	Purpose:
+
+	    Prints the error status and statistics recorded in the stats
+	    array on the standard output (for a standard C routine)
+	    or on the MATLAB output (for a mexFunction).
+
+	Arguments:
+
+	    int stats [COLAMD_STATS] ;	Input only.  Statistics from colamd.
+
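+	    For illustration, a typical (sketch-only) calling sequence for
+	    the unsymmetric case.  Here A is assumed to be an int array of
+	    size Alen, with Alen at least colamd_recommended (nnz, n_row,
+	    n_col), whose first nnz entries hold the row indices, and p holds
+	    the column pointers; a NULL knobs argument selects the defaults,
+	    and on success the column permutation is returned in p:
+
+		int stats [COLAMD_STATS] ;
+
+		if (!colamd (n_row, n_col, Alen, A, p, (double *) NULL, stats))
+		{
+		    colamd_report (stats) ;
+		}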
+
+    ----------------------------------------------------------------------------
+    symamd_report:
+    ----------------------------------------------------------------------------
+
+	C syntax:
+
+	    #include "colamd.h"
+	    symamd_report (int stats [COLAMD_STATS]) ;
+
+	Purpose:
+
+	    Prints the error status and statistics recorded in the stats
+	    array on the standard output (for a standard C routine)
+	    or on the MATLAB output (for a mexFunction).
+
+	Arguments:
+
+	    int stats [COLAMD_STATS] ;	Input only.  Statistics from symamd.
+
+ 
+*/ /* ========================================================================== */ /* === Scaffolding code definitions ======================================== */ @@ -254,10 +586,7 @@ /* Ensure that debugging is turned off: */ #ifndef NDEBUG #define NDEBUG -#endif - -/* assert.h: the assert macro (no debugging if NDEBUG is defined) */ -#include +#endif /* NDEBUG */ /* Our "scaffolding code" philosophy: In our opinion, well-written library @@ -276,77 +605,62 @@ (3) (gasp!) for actually finding bugs. This code has been heavily tested and "should" be fully functional and bug-free ... but you never know... - To enable debugging, comment out the "#define NDEBUG" above. The code will - become outrageously slow when debugging is enabled. To control the level of - debugging output, set an environment variable D to 0 (little), 1 (some), - 2, 3, or 4 (lots). + To enable debugging, comment out the "#define NDEBUG" above. For a MATLAB + mexFunction, you will also need to modify mexopts.sh to remove the -DNDEBUG + definition. The code will become outrageously slow when debugging is + enabled. To control the level of debugging output, set an environment + variable D to 0 (little), 1 (some), 2, 3, or 4 (lots). When debugging, + you should see the following message on the standard output: + + colamd: debug version, D = 1 (THIS WILL BE SLOW!) + + or a similar message for symamd. If you don't, then debugging has not + been enabled. + */ /* ========================================================================== */ -/* === Row and Column structures ============================================ */ +/* === Include files ======================================================== */ /* ========================================================================== */ -typedef struct ColInfo_struct -{ - int start ; /* index for A of first row in this column, or DEAD */ - /* if column is dead */ - int length ; /* number of rows in this column */ - union - { - int thickness ; /* number of original columns represented by this */ - /* col, if the column is alive */ - int parent ; /* parent in parent tree super-column structure, if */ - /* the column is dead */ - } shared1 ; - union - { - int score ; /* the score used to maintain heap, if col is alive */ - int order ; /* pivot ordering of this column, if col is dead */ - } shared2 ; - union - { - int headhash ; /* head of a hash bucket, if col is at the head of */ - /* a degree list */ - int hash ; /* hash value, if col is not in a degree list */ - int prev ; /* previous column in degree list, if col is in a */ - /* degree list (but not at the head of a degree list) */ - } shared3 ; - union - { - int degree_next ; /* next column, if col is in a degree list */ - int hash_next ; /* next column, if col is in a hash list */ - } shared4 ; - -} ColInfo ; - -typedef struct RowInfo_struct -{ - int start ; /* index for A of first col in this row */ - int length ; /* number of principal columns in this row */ - union - { - int degree ; /* number of principal & non-principal columns in row */ - int p ; /* used as a row pointer in init_rows_cols () */ - } shared1 ; - union - { - int mark ; /* for computing set differences and marking dead rows*/ - int first_column ;/* first column in row (used in garbage collection) */ - } shared2 ; +#include "colamd.h" +#include -} RowInfo ; +#ifdef MATLAB_MEX_FILE +#include "mex.h" +#include "matrix.h" +#else +#include +#include +#endif /* MATLAB_MEX_FILE */ /* ========================================================================== */ /* === 
Definitions ========================================================== */ /* ========================================================================== */ +/* Routines are either PUBLIC (user-callable) or PRIVATE (not user-callable) */ +#define PUBLIC +#define PRIVATE static + #define MAX(a,b) (((a) > (b)) ? (a) : (b)) #define MIN(a,b) (((a) < (b)) ? (a) : (b)) #define ONES_COMPLEMENT(r) (-(r)-1) -#define TRUE (1) -#define FALSE (0) +/* -------------------------------------------------------------------------- */ +/* Change for version 2.1: define TRUE and FALSE only if not yet defined */ +/* -------------------------------------------------------------------------- */ + +#ifndef TRUE +#define TRUE (1) +#endif + +#ifndef FALSE +#define FALSE (0) +#endif + +/* -------------------------------------------------------------------------- */ + #define EMPTY (-1) /* Row and column status */ @@ -368,9 +682,29 @@ #define KILL_PRINCIPAL_COL(c) { Col [c].start = DEAD_PRINCIPAL ; } #define KILL_NON_PRINCIPAL_COL(c) { Col [c].start = DEAD_NON_PRINCIPAL ; } -/* Routines are either PUBLIC (user-callable) or PRIVATE (not user-callable) */ -#define PUBLIC -#define PRIVATE static +/* ========================================================================== */ +/* === Colamd reporting mechanism =========================================== */ +/* ========================================================================== */ + +#ifdef MATLAB_MEX_FILE + +/* use mexPrintf in a MATLAB mexFunction, for debugging and statistics output */ +#define PRINTF mexPrintf + +/* In MATLAB, matrices are 1-based to the user, but 0-based internally */ +#define INDEX(i) ((i)+1) + +#else + +/* Use printf in standard C environment, for debugging and statistics output. */ +/* Output is generated only if debugging is enabled at compile time, or if */ +/* the caller explicitly calls colamd_report or symamd_report. 
*/ +#define PRINTF printf + +/* In C, matrices are 0-based and indices are reported as such in *_report */ +#define INDEX(i) (i) + +#endif /* MATLAB_MEX_FILE */ /* ========================================================================== */ /* === Prototypes of PRIVATE routines ======================================= */ @@ -380,18 +714,19 @@ ( int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [], - int p [] + int p [], + int stats [COLAMD_STATS] ) ; PRIVATE void init_scoring ( int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [], int head [], double knobs [COLAMD_KNOBS], @@ -405,8 +740,8 @@ int n_row, int n_col, int Alen, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [], int head [], int n_col2, @@ -417,17 +752,19 @@ PRIVATE void order_children ( int n_col, - ColInfo Col [], + Colamd_Col Col [], int p [] ) ; PRIVATE void detect_super_cols ( + #ifndef NDEBUG int n_col, - RowInfo Row [], -#endif - ColInfo Col [], + Colamd_Row Row [], +#endif /* NDEBUG */ + + Colamd_Col Col [], int A [], int head [], int row_start, @@ -438,8 +775,8 @@ ( int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [], int *pfree ) ; @@ -447,29 +784,49 @@ PRIVATE int clear_mark ( int n_row, - RowInfo Row [] + Colamd_Row Row [] +) ; + +PRIVATE void print_report +( + char *method, + int stats [COLAMD_STATS] ) ; /* ========================================================================== */ -/* === Debugging definitions ================================================ */ +/* === Debugging prototypes and definitions ================================= */ /* ========================================================================== */ #ifndef NDEBUG -/* === With debugging ======================================================= */ +/* colamd_debug is the *ONLY* global variable, and is only */ +/* present when debugging */ + +PRIVATE int colamd_debug ; /* debug print level */ -/* stdlib.h: for getenv and atoi, to get debugging level from environment */ -#include +#define DEBUG0(params) { (void) PRINTF params ; } +#define DEBUG1(params) { if (colamd_debug >= 1) (void) PRINTF params ; } +#define DEBUG2(params) { if (colamd_debug >= 2) (void) PRINTF params ; } +#define DEBUG3(params) { if (colamd_debug >= 3) (void) PRINTF params ; } +#define DEBUG4(params) { if (colamd_debug >= 4) (void) PRINTF params ; } -/* stdio.h: for printf (no printing if debugging is turned off) */ -#include +#ifdef MATLAB_MEX_FILE +#define ASSERT(expression) (mxAssert ((expression), "")) +#else +#define ASSERT(expression) (assert (expression)) +#endif /* MATLAB_MEX_FILE */ + +PRIVATE void colamd_get_debug /* gets the debug print level from getenv */ +( + char *method +) ; PRIVATE void debug_deg_lists ( int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int head [], int min_score, int should, @@ -479,7 +836,7 @@ PRIVATE void debug_mark ( int n_row, - RowInfo Row [], + Colamd_Row Row [], int tag_mark, int max_mark ) ; @@ -488,8 +845,8 @@ ( int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [] ) ; @@ -497,24 +854,13 @@ ( int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [], int n_col2 ) ; -/* the following is the *ONLY* global variable in this file, and is only */ -/* present when debugging */ - 
-PRIVATE int debug_colamd ; /* debug print level */ - -#define DEBUG0(params) { (void) printf params ; } -#define DEBUG1(params) { if (debug_colamd >= 1) (void) printf params ; } -#define DEBUG2(params) { if (debug_colamd >= 2) (void) printf params ; } -#define DEBUG3(params) { if (debug_colamd >= 3) (void) printf params ; } -#define DEBUG4(params) { if (debug_colamd >= 4) (void) printf params ; } - -#else +#else /* NDEBUG */ /* === No debugging ========================================================= */ @@ -524,104 +870,426 @@ #define DEBUG3(params) ; #define DEBUG4(params) ; -#endif +#define ASSERT(expression) ((void) 0) + +#endif /* NDEBUG */ + +/* ========================================================================== */ + + + +/* ========================================================================== */ +/* === USER-CALLABLE ROUTINES: ============================================== */ +/* ========================================================================== */ + + +/* ========================================================================== */ +/* === colamd_recommended =================================================== */ +/* ========================================================================== */ + +/* + The colamd_recommended routine returns the suggested size for Alen. This + value has been determined to provide good balance between the number of + garbage collections and the memory requirements for colamd. If any + argument is negative, a -1 is returned as an error condition. This + function is also available as a macro defined in colamd.h, so that you + can use it for a statically-allocated array size. +*/ + +PUBLIC int colamd_recommended /* returns recommended value of Alen. */ +( + /* === Parameters ======================================================= */ + + int nnz, /* number of nonzeros in A */ + int n_row, /* number of rows in A */ + int n_col /* number of columns in A */ +) +{ + return (COLAMD_RECOMMENDED (nnz, n_row, n_col)) ; +} + + +/* ========================================================================== */ +/* === colamd_set_defaults ================================================== */ +/* ========================================================================== */ + +/* + The colamd_set_defaults routine sets the default values of the user- + controllable parameters for colamd: + + knobs [0] rows with knobs[0]*n_col entries or more are removed + prior to ordering in colamd. Rows and columns with + knobs[0]*n_col entries or more are removed prior to + ordering in symamd and placed last in the output + ordering. + + knobs [1] columns with knobs[1]*n_row entries or more are removed + prior to ordering in colamd, and placed last in the + column permutation. Symamd ignores this knob. 
+ + knobs [2..19] unused, but future versions might use this +*/ + +PUBLIC void colamd_set_defaults +( + /* === Parameters ======================================================= */ + + double knobs [COLAMD_KNOBS] /* knob array */ +) +{ + /* === Local variables ================================================== */ + + int i ; + + if (!knobs) + { + return ; /* no knobs to initialize */ + } + for (i = 0 ; i < COLAMD_KNOBS ; i++) + { + knobs [i] = 0 ; + } + knobs [COLAMD_DENSE_ROW] = 0.5 ; /* ignore rows over 50% dense */ + knobs [COLAMD_DENSE_COL] = 0.5 ; /* ignore columns over 50% dense */ +} + +/* ========================================================================== */ +/* === symamd =============================================================== */ /* ========================================================================== */ +PUBLIC int symamd /* return TRUE if OK, FALSE otherwise */ +( + /* === Parameters ======================================================= */ + + int n, /* number of rows and columns of A */ + int A [], /* row indices of A */ + int p [], /* column pointers of A */ + int perm [], /* output permutation, size n+1 */ + double knobs [COLAMD_KNOBS], /* parameters (uses defaults if NULL) */ + int stats [COLAMD_STATS], /* output statistics and error codes */ + void * (*allocate) (size_t, size_t), + /* pointer to calloc (ANSI C) or */ + /* mxCalloc (for MATLAB mexFunction) */ + void (*release) (void *) + /* pointer to free (ANSI C) or */ + /* mxFree (for MATLAB mexFunction) */ +) +{ + /* === Local variables ================================================== */ + + int *count ; /* length of each column of M, and col pointer*/ + int *mark ; /* mark array for finding duplicate entries */ + int *M ; /* row indices of matrix M */ + int Mlen ; /* length of M */ + int n_row ; /* number of rows in M */ + int nnz ; /* number of entries in A */ + int i ; /* row index of A */ + int j ; /* column index of A */ + int k ; /* row index of M */ + int mnz ; /* number of nonzeros in M */ + int pp ; /* index into a column of A */ + int last_row ; /* last row seen in the current column */ + int length ; /* number of nonzeros in a column */ + + double cknobs [COLAMD_KNOBS] ; /* knobs for colamd */ + double default_knobs [COLAMD_KNOBS] ; /* default knobs for colamd */ + int cstats [COLAMD_STATS] ; /* colamd stats */ + +#ifndef NDEBUG + colamd_get_debug ("symamd") ; +#endif /* NDEBUG */ + + /* === Check the input arguments ======================================== */ + + if (!stats) + { + DEBUG0 (("symamd: stats not present\n")) ; + return (FALSE) ; + } + for (i = 0 ; i < COLAMD_STATS ; i++) + { + stats [i] = 0 ; + } + stats [COLAMD_STATUS] = COLAMD_OK ; + stats [COLAMD_INFO1] = -1 ; + stats [COLAMD_INFO2] = -1 ; + + if (!A) + { + stats [COLAMD_STATUS] = COLAMD_ERROR_A_not_present ; + DEBUG0 (("symamd: A not present\n")) ; + return (FALSE) ; + } + + if (!p) /* p is not present */ + { + stats [COLAMD_STATUS] = COLAMD_ERROR_p_not_present ; + DEBUG0 (("symamd: p not present\n")) ; + return (FALSE) ; + } + + if (n < 0) /* n must be >= 0 */ + { + stats [COLAMD_STATUS] = COLAMD_ERROR_ncol_negative ; + stats [COLAMD_INFO1] = n ; + DEBUG0 (("symamd: n negative %d\n", n)) ; + return (FALSE) ; + } + + nnz = p [n] ; + if (nnz < 0) /* nnz must be >= 0 */ + { + stats [COLAMD_STATUS] = COLAMD_ERROR_nnz_negative ; + stats [COLAMD_INFO1] = nnz ; + DEBUG0 (("symamd: number of entries negative %d\n", nnz)) ; + return (FALSE) ; + } + + if (p [0] != 0) + { + stats [COLAMD_STATUS] = COLAMD_ERROR_p0_nonzero ; + 
stats [COLAMD_INFO1] = p [0] ; + DEBUG0 (("symamd: p[0] not zero %d\n", p [0])) ; + return (FALSE) ; + } + + /* === If no knobs, set default knobs =================================== */ + + if (!knobs) + { + colamd_set_defaults (default_knobs) ; + knobs = default_knobs ; + } + + /* === Allocate count and mark ========================================== */ + + count = (int *) ((*allocate) (n+1, sizeof (int))) ; + if (!count) + { + stats [COLAMD_STATUS] = COLAMD_ERROR_out_of_memory ; + DEBUG0 (("symamd: allocate count (size %d) failed\n", n+1)) ; + return (FALSE) ; + } + + mark = (int *) ((*allocate) (n+1, sizeof (int))) ; + if (!mark) + { + stats [COLAMD_STATUS] = COLAMD_ERROR_out_of_memory ; + (*release) ((void *) count) ; + DEBUG0 (("symamd: allocate mark (size %d) failed\n", n+1)) ; + return (FALSE) ; + } + + /* === Compute column counts of M, check if A is valid ================== */ + + stats [COLAMD_INFO3] = 0 ; /* number of duplicate or unsorted row indices*/ + + for (i = 0 ; i < n ; i++) + { + mark [i] = -1 ; + } + + for (j = 0 ; j < n ; j++) + { + last_row = -1 ; + + length = p [j+1] - p [j] ; + if (length < 0) + { + /* column pointers must be non-decreasing */ + stats [COLAMD_STATUS] = COLAMD_ERROR_col_length_negative ; + stats [COLAMD_INFO1] = j ; + stats [COLAMD_INFO2] = length ; + (*release) ((void *) count) ; + (*release) ((void *) mark) ; + DEBUG0 (("symamd: col %d negative length %d\n", j, length)) ; + return (FALSE) ; + } + + for (pp = p [j] ; pp < p [j+1] ; pp++) + { + i = A [pp] ; + if (i < 0 || i >= n) + { + /* row index i, in column j, is out of bounds */ + stats [COLAMD_STATUS] = COLAMD_ERROR_row_index_out_of_bounds ; + stats [COLAMD_INFO1] = j ; + stats [COLAMD_INFO2] = i ; + stats [COLAMD_INFO3] = n ; + (*release) ((void *) count) ; + (*release) ((void *) mark) ; + DEBUG0 (("symamd: row %d col %d out of bounds\n", i, j)) ; + return (FALSE) ; + } + + if (i <= last_row || mark [i] == j) + { + /* row index is unsorted or repeated (or both), thus col */ + /* is jumbled. This is a notice, not an error condition. */ + stats [COLAMD_STATUS] = COLAMD_OK_BUT_JUMBLED ; + stats [COLAMD_INFO1] = j ; + stats [COLAMD_INFO2] = i ; + (stats [COLAMD_INFO3]) ++ ; + DEBUG1 (("symamd: row %d col %d unsorted/duplicate\n", i, j)) ; + } + + if (i > j && mark [i] != j) + { + /* row k of M will contain column indices i and j */ + count [i]++ ; + count [j]++ ; + } + + /* mark the row as having been seen in this column */ + mark [i] = j ; + + last_row = i ; + } + } -/* ========================================================================== */ -/* === USER-CALLABLE ROUTINES: ============================================== */ -/* ========================================================================== */ + if (stats [COLAMD_STATUS] == COLAMD_OK) + { + /* if there are no duplicate entries, then mark is no longer needed */ + (*release) ((void *) mark) ; + } + /* === Compute column pointers of M ===================================== */ -/* ========================================================================== */ -/* === colamd_recommended =================================================== */ -/* ========================================================================== */ + /* use output permutation, perm, for column pointers of M */ + perm [0] = 0 ; + for (j = 1 ; j <= n ; j++) + { + perm [j] = perm [j-1] + count [j-1] ; + } + for (j = 0 ; j < n ; j++) + { + count [j] = perm [j] ; + } -/* - The colamd_recommended routine returns the suggested size for Alen. 
This - value has been determined to provide good balance between the number of - garbage collections and the memory requirements for colamd. -*/ + /* === Construct M ====================================================== */ -PUBLIC int colamd_recommended /* returns recommended value of Alen. */ -( - /* === Parameters ======================================================= */ + mnz = perm [n] ; + n_row = mnz / 2 ; + Mlen = colamd_recommended (mnz, n_row, n) ; + M = (int *) ((*allocate) (Mlen, sizeof (int))) ; + DEBUG0 (("symamd: M is %d-by-%d with %d entries, Mlen = %d\n", + n_row, n, mnz, Mlen)) ; - int nnz, /* number of nonzeros in A */ - int n_row, /* number of rows in A */ - int n_col /* number of columns in A */ -) -{ - /* === Local variables ================================================== */ + if (!M) + { + stats [COLAMD_STATUS] = COLAMD_ERROR_out_of_memory ; + (*release) ((void *) count) ; + (*release) ((void *) mark) ; + DEBUG0 (("symamd: allocate M (size %d) failed\n", Mlen)) ; + return (FALSE) ; + } - int minimum ; /* bare minimum requirements */ - int recommended ; /* recommended value of Alen */ + k = 0 ; - if (nnz < 0 || n_row < 0 || n_col < 0) + if (stats [COLAMD_STATUS] == COLAMD_OK) + { + /* Matrix is OK */ + for (j = 0 ; j < n ; j++) + { + ASSERT (p [j+1] - p [j] >= 0) ; + for (pp = p [j] ; pp < p [j+1] ; pp++) + { + i = A [pp] ; + ASSERT (i >= 0 && i < n) ; + if (i > j) + { + /* row k of M contains column indices i and j */ + M [count [i]++] = k ; + M [count [j]++] = k ; + k++ ; + } + } + } + } + else { - /* return -1 if any input argument is corrupted */ - DEBUG0 (("colamd_recommended error!")) ; - DEBUG0 ((" nnz: %d, n_row: %d, n_col: %d\n", nnz, n_row, n_col)) ; - return (-1) ; + /* Matrix is jumbled. Do not add duplicates to M. Unsorted cols OK. */ + DEBUG0 (("symamd: Duplicates in A.\n")) ; + for (i = 0 ; i < n ; i++) + { + mark [i] = -1 ; + } + for (j = 0 ; j < n ; j++) + { + ASSERT (p [j+1] - p [j] >= 0) ; + for (pp = p [j] ; pp < p [j+1] ; pp++) + { + i = A [pp] ; + ASSERT (i >= 0 && i < n) ; + if (i > j && mark [i] != j) + { + /* row k of M contains column indices i and j */ + M [count [i]++] = k ; + M [count [j]++] = k ; + k++ ; + mark [i] = j ; + } + } + } + (*release) ((void *) mark) ; } - minimum = - 2 * (nnz) /* for A */ - + (((n_col) + 1) * sizeof (ColInfo) / sizeof (int)) /* for Col */ - + (((n_row) + 1) * sizeof (RowInfo) / sizeof (int)) /* for Row */ - + n_col /* minimum elbow room to guarrantee success */ - + COLAMD_STATS ; /* for output statistics */ + /* count and mark no longer needed */ + (*release) ((void *) count) ; + ASSERT (k == n_row) ; - /* recommended is equal to the minumum plus enough memory to keep the */ - /* number garbage collections low */ - recommended = minimum + nnz/5 ; + /* === Adjust the knobs for M =========================================== */ - return (recommended) ; -} + for (i = 0 ; i < COLAMD_KNOBS ; i++) + { + cknobs [i] = knobs [i] ; + } + /* there are no dense rows in M */ + cknobs [COLAMD_DENSE_ROW] = 1.0 ; -/* ========================================================================== */ -/* === colamd_set_defaults ================================================== */ -/* ========================================================================== */ + if (n_row != 0 && n < n_row) + { + /* On input, the knob is a fraction of 1..n, the number of rows of A. */ + /* Convert it to a fraction of 1..n_row, of the number of rows of M. 
*/ + cknobs [COLAMD_DENSE_COL] = (knobs [COLAMD_DENSE_ROW] * n) / n_row ; + } + else + { + /* no dense columns in M */ + cknobs [COLAMD_DENSE_COL] = 1.0 ; + } -/* - The colamd_set_defaults routine sets the default values of the user- - controllable parameters for colamd: + DEBUG0 (("symamd: dense col knob for M: %g\n", cknobs [COLAMD_DENSE_COL])) ; - knobs [0] rows with knobs[0]*n_col entries or more are removed - prior to ordering. + /* === Order the columns of M =========================================== */ - knobs [1] columns with knobs[1]*n_row entries or more are removed - prior to ordering, and placed last in the column - permutation. + if (!colamd (n_row, n, Mlen, M, perm, cknobs, cstats)) + { + /* This "cannot" happen, unless there is a bug in the code. */ + stats [COLAMD_STATUS] = COLAMD_ERROR_internal_error ; + (*release) ((void *) M) ; + DEBUG0 (("symamd: internal error!\n")) ; + return (FALSE) ; + } - knobs [2..19] unused, but future versions might use this -*/ + /* Note that the output permutation is now in perm */ -PUBLIC void colamd_set_defaults -( - /* === Parameters ======================================================= */ + /* === get the statistics for symamd from colamd ======================== */ - double knobs [COLAMD_KNOBS] /* knob array */ -) -{ - /* === Local variables ================================================== */ + /* note that a dense column in colamd means a dense row and col in symamd */ + stats [COLAMD_DENSE_ROW] = cstats [COLAMD_DENSE_COL] ; + stats [COLAMD_DENSE_COL] = cstats [COLAMD_DENSE_COL] ; + stats [COLAMD_DEFRAG_COUNT] = cstats [COLAMD_DEFRAG_COUNT] ; - int i ; + /* === Free M =========================================================== */ - if (!knobs) - { - return ; /* no knobs to initialize */ - } - for (i = 0 ; i < COLAMD_KNOBS ; i++) - { - knobs [i] = 0 ; - } - knobs [COLAMD_DENSE_ROW] = 0.5 ; /* ignore rows over 50% dense */ - knobs [COLAMD_DENSE_COL] = 0.5 ; /* ignore columns over 50% dense */ -} + (*release) ((void *) M) ; + DEBUG0 (("symamd: done.\n")) ; + return (TRUE) ; +} /* ========================================================================== */ /* === colamd =============================================================== */ @@ -633,79 +1301,9 @@ selected via partial pivoting. The routine can also be viewed as providing a permutation Q such that the Cholesky factorization (AQ)'(AQ) = LL' remains sparse. - - On input, the nonzero patterns of the columns of A are stored in the - array A, in order 0 to n_col-1. A is held in 0-based form (rows in the - range 0 to n_row-1 and columns in the range 0 to n_col-1). Row indices - for column c are located in A [(p [c]) ... (p [c+1]-1)], where p [0] = 0, - and thus p [n_col] is the number of entries in A. The matrix is - destroyed on output. The row indices within each column do not have to - be sorted (from small to large row indices), and duplicate row indices - may be present. However, colamd will work a little faster if columns are - sorted and no duplicates are present. Matlab 5.2 always passes the matrix - with sorted columns, and no duplicates. - - The integer array A is of size Alen. Alen must be at least of size - (where nnz is the number of entries in A): - - nnz for the input column form of A - + nnz for a row form of A that colamd generates - + 6*(n_col+1) for a ColInfo Col [0..n_col] array - (this assumes sizeof (ColInfo) is 6 int's). - + 4*(n_row+1) for a RowInfo Row [0..n_row] array - (this assumes sizeof (RowInfo) is 4 int's). - + elbow_room must be at least n_col. 
We recommend at least - nnz/5 in addition to that. If sufficient, - changes in the elbow room affect the ordering - time only, not the ordering itself. - + COLAMD_STATS for the output statistics - - Colamd returns FALSE is memory is insufficient, or TRUE otherwise. - - On input, the caller must specify: - - n_row the number of rows of A - n_col the number of columns of A - Alen the size of the array A - A [0 ... nnz-1] the row indices, where nnz = p [n_col] - A [nnz ... Alen-1] (need not be initialized by the user) - p [0 ... n_col] the column pointers, p [0] = 0, and p [n_col] - is the number of entries in A. Column c of A - is stored in A [p [c] ... p [c+1]-1]. - knobs [0 ... 19] a set of parameters that control the behavior - of colamd. If knobs is a NULL pointer the - defaults are used. The user-callable - colamd_set_defaults routine sets the default - parameters. See that routine for a description - of the user-controllable parameters. - - If the return value of Colamd is TRUE, then on output: - - p [0 ... n_col-1] the column permutation. p [0] is the first - column index, and p [n_col-1] is the last. - That is, p [k] = j means that column j of A - is the kth column of AQ. - - A is undefined on output (the matrix pattern is - destroyed), except for the following statistics: - - A [0] the number of dense (or empty) rows ignored - A [1] the number of dense (or empty) columms. These - are ordered last, in their natural order. - A [2] the number of garbage collections performed. - If this is excessive, then you would have - gotten your results faster if Alen was larger. - A [3] 0, if all row indices in each column were in - sorted order and no duplicates were present. - 1, if there were unsorted or duplicate row - indices in the input. You would have gotten - your results faster if A [3] was returned as 0. - - If the return value of Colamd is FALSE, then A and p are undefined on - output. 
*/ -PUBLIC int colamd /* returns TRUE if successful */ +PUBLIC int colamd /* returns TRUE if successful, FALSE otherwise*/ ( /* === Parameters ======================================================= */ @@ -714,7 +1312,8 @@ int Alen, /* length of A */ int A [], /* row indices of A */ int p [], /* pointers to columns in A */ - double knobs [COLAMD_KNOBS] /* parameters (uses defaults if NULL) */ + double knobs [COLAMD_KNOBS],/* parameters (uses defaults if NULL) */ + int stats [COLAMD_STATS] /* output statistics and error codes */ ) { /* === Local variables ================================================== */ @@ -723,69 +1322,115 @@ int nnz ; /* nonzeros in A */ int Row_size ; /* size of Row [], in integers */ int Col_size ; /* size of Col [], in integers */ - int elbow_room ; /* remaining free space */ - RowInfo *Row ; /* pointer into A of Row [0..n_row] array */ - ColInfo *Col ; /* pointer into A of Col [0..n_col] array */ + int need ; /* minimum required length of A */ + Colamd_Row *Row ; /* pointer into A of Row [0..n_row] array */ + Colamd_Col *Col ; /* pointer into A of Col [0..n_col] array */ int n_col2 ; /* number of non-dense, non-empty columns */ int n_row2 ; /* number of non-dense, non-empty rows */ int ngarbage ; /* number of garbage collections performed */ int max_deg ; /* maximum row degree */ - double default_knobs [COLAMD_KNOBS] ; /* default knobs knobs array */ - int init_result ; /* return code from initialization */ + double default_knobs [COLAMD_KNOBS] ; /* default knobs array */ #ifndef NDEBUG - debug_colamd = 0 ; /* no debug printing */ - /* get "D" environment variable, which gives the debug printing level */ - if (getenv ("D")) debug_colamd = atoi (getenv ("D")) ; - DEBUG0 (("debug version, D = %d (THIS WILL BE SLOOOOW!)\n", debug_colamd)) ; -#endif + colamd_get_debug ("colamd") ; +#endif /* NDEBUG */ /* === Check the input arguments ======================================== */ - if (n_row < 0 || n_col < 0 || !A || !p) + if (!stats) + { + DEBUG0 (("colamd: stats not present\n")) ; + return (FALSE) ; + } + for (i = 0 ; i < COLAMD_STATS ; i++) + { + stats [i] = 0 ; + } + stats [COLAMD_STATUS] = COLAMD_OK ; + stats [COLAMD_INFO1] = -1 ; + stats [COLAMD_INFO2] = -1 ; + + if (!A) /* A is not present */ { - /* n_row and n_col must be non-negative, A and p must be present */ - DEBUG0 (("colamd error! %d %d %d\n", n_row, n_col, Alen)) ; + stats [COLAMD_STATUS] = COLAMD_ERROR_A_not_present ; + DEBUG0 (("colamd: A not present\n")) ; return (FALSE) ; } + + if (!p) /* p is not present */ + { + stats [COLAMD_STATUS] = COLAMD_ERROR_p_not_present ; + DEBUG0 (("colamd: p not present\n")) ; + return (FALSE) ; + } + + if (n_row < 0) /* n_row must be >= 0 */ + { + stats [COLAMD_STATUS] = COLAMD_ERROR_nrow_negative ; + stats [COLAMD_INFO1] = n_row ; + DEBUG0 (("colamd: nrow negative %d\n", n_row)) ; + return (FALSE) ; + } + + if (n_col < 0) /* n_col must be >= 0 */ + { + stats [COLAMD_STATUS] = COLAMD_ERROR_ncol_negative ; + stats [COLAMD_INFO1] = n_col ; + DEBUG0 (("colamd: ncol negative %d\n", n_col)) ; + return (FALSE) ; + } + nnz = p [n_col] ; - if (nnz < 0 || p [0] != 0) + if (nnz < 0) /* nnz must be >= 0 */ + { + stats [COLAMD_STATUS] = COLAMD_ERROR_nnz_negative ; + stats [COLAMD_INFO1] = nnz ; + DEBUG0 (("colamd: number of entries negative %d\n", nnz)) ; + return (FALSE) ; + } + + if (p [0] != 0) { - /* nnz must be non-negative, and p [0] must be zero */ - DEBUG0 (("colamd error! 
%d %d\n", nnz, p [0])) ; + stats [COLAMD_STATUS] = COLAMD_ERROR_p0_nonzero ; + stats [COLAMD_INFO1] = p [0] ; + DEBUG0 (("colamd: p[0] not zero %d\n", p [0])) ; return (FALSE) ; } - /* === If no knobs, set default parameters ============================== */ + /* === If no knobs, set default knobs =================================== */ if (!knobs) { + colamd_set_defaults (default_knobs) ; knobs = default_knobs ; - colamd_set_defaults (knobs) ; } /* === Allocate the Row and Col arrays from array A ===================== */ - Col_size = (n_col + 1) * sizeof (ColInfo) / sizeof (int) ; - Row_size = (n_row + 1) * sizeof (RowInfo) / sizeof (int) ; - elbow_room = Alen - (2*nnz + Col_size + Row_size) ; - if (elbow_room < n_col + COLAMD_STATS) + Col_size = COLAMD_C (n_col) ; + Row_size = COLAMD_R (n_row) ; + need = 2*nnz + n_col + Col_size + Row_size ; + + if (need > Alen) { /* not enough space in array A to perform the ordering */ - DEBUG0 (("colamd error! elbow_room %d, %d\n", elbow_room,n_col)) ; + stats [COLAMD_STATUS] = COLAMD_ERROR_A_too_small ; + stats [COLAMD_INFO1] = need ; + stats [COLAMD_INFO2] = Alen ; + DEBUG0 (("colamd: Need Alen >= %d, given only Alen = %d\n", need,Alen)); return (FALSE) ; } - Alen = 2*nnz + elbow_room ; - Col = (ColInfo *) &A [Alen] ; - Row = (RowInfo *) &A [Alen + Col_size] ; + + Alen -= Col_size + Row_size ; + Col = (Colamd_Col *) &A [Alen] ; + Row = (Colamd_Row *) &A [Alen + Col_size] ; /* === Construct the row and column data structures ===================== */ - init_result = init_rows_cols (n_row, n_col, Row, Col, A, p) ; - if (init_result == -1) + if (!init_rows_cols (n_row, n_col, Row, Col, A, p, stats)) { /* input matrix is invalid */ - DEBUG0 (("colamd error! matrix invalid\n")) ; + DEBUG0 (("colamd: Matrix invalid\n")) ; return (FALSE) ; } @@ -803,22 +1448,44 @@ order_children (n_col, Col, p) ; - /* === Return statistics in A =========================================== */ - - for (i = 0 ; i < COLAMD_STATS ; i++) - { - A [i] = 0 ; - } - A [COLAMD_DENSE_ROW] = n_row - n_row2 ; - A [COLAMD_DENSE_COL] = n_col - n_col2 ; - A [COLAMD_DEFRAG_COUNT] = ngarbage ; - A [COLAMD_JUMBLED_COLS] = init_result ; + /* === Return statistics in stats ======================================= */ + stats [COLAMD_DENSE_ROW] = n_row - n_row2 ; + stats [COLAMD_DENSE_COL] = n_col - n_col2 ; + stats [COLAMD_DEFRAG_COUNT] = ngarbage ; + DEBUG0 (("colamd: done.\n")) ; return (TRUE) ; } /* ========================================================================== */ +/* === colamd_report ======================================================== */ +/* ========================================================================== */ + +PUBLIC void colamd_report +( + int stats [COLAMD_STATS] +) +{ + print_report ("colamd", stats) ; +} + + +/* ========================================================================== */ +/* === symamd_report ======================================================== */ +/* ========================================================================== */ + +PUBLIC void symamd_report +( + int stats [COLAMD_STATS] +) +{ + print_report ("symamd", stats) ; +} + + + +/* ========================================================================== */ /* === NON-USER-CALLABLE ROUTINES: ========================================== */ /* ========================================================================== */ @@ -834,20 +1501,21 @@ matrix. Also, row and column attributes are stored in the Col and Row structs. 
If the columns are un-sorted or contain duplicate row indices, this routine will also sort and remove duplicate row indices from the - column form of the matrix. Returns -1 on error, 1 if columns jumbled, - or 0 if columns not jumbled. Not user-callable. + column form of the matrix. Returns FALSE if the matrix is invalid, + TRUE otherwise. Not user-callable. */ -PRIVATE int init_rows_cols /* returns status code */ +PRIVATE int init_rows_cols /* returns TRUE if OK, or FALSE otherwise */ ( /* === Parameters ======================================================= */ int n_row, /* number of rows of A */ int n_col, /* number of columns of A */ - RowInfo Row [], /* of size n_row+1 */ - ColInfo Col [], /* of size n_col+1 */ + Colamd_Row Row [], /* of size n_row+1 */ + Colamd_Col Col [], /* of size n_col+1 */ int A [], /* row indices of A, of size Alen */ - int p [] /* pointers to columns in A, of size n_col+1 */ + int p [], /* pointers to columns in A, of size n_col+1 */ + int stats [COLAMD_STATS] /* colamd statistics */ ) { /* === Local variables ================================================== */ @@ -858,44 +1526,36 @@ int *cp_end ; /* a pointer to the end of a column */ int *rp ; /* a row pointer */ int *rp_end ; /* a pointer to the end of a row */ - int last_start ; /* start index of previous column in A */ - int start ; /* start index of column in A */ int last_row ; /* previous row */ - int jumbled_columns ; /* indicates if columns are jumbled */ /* === Initialize columns, and check column pointers ==================== */ - last_start = 0 ; for (col = 0 ; col < n_col ; col++) { - start = p [col] ; - if (start < last_start) + Col [col].start = p [col] ; + Col [col].length = p [col+1] - p [col] ; + + if (Col [col].length < 0) { /* column pointers must be non-decreasing */ - DEBUG0 (("colamd error! last p %d p [col] %d\n",last_start,start)); - return (-1) ; + stats [COLAMD_STATUS] = COLAMD_ERROR_col_length_negative ; + stats [COLAMD_INFO1] = col ; + stats [COLAMD_INFO2] = Col [col].length ; + DEBUG0 (("colamd: col %d length %d < 0\n", col, Col [col].length)) ; + return (FALSE) ; } - Col [col].start = start ; - Col [col].length = p [col+1] - start ; + Col [col].shared1.thickness = 1 ; Col [col].shared2.score = 0 ; Col [col].shared3.prev = EMPTY ; Col [col].shared4.degree_next = EMPTY ; - last_start = start ; - } - /* must check the end pointer for last column */ - if (p [n_col] < last_start) - { - /* column pointers must be non-decreasing */ - DEBUG0 (("colamd error! last p %d p [n_col] %d\n",p[col],last_start)) ; - return (-1) ; } /* p [0..n_col] no longer needed, used as "head" in subsequent routines */ /* === Scan columns, compute row degrees, and check row indices ========= */ - jumbled_columns = FALSE ; + stats [COLAMD_INFO3] = 0 ; /* number of duplicate or unsorted row indices*/ for (row = 0 ; row < n_row ; row++) { @@ -917,22 +1577,28 @@ /* make sure row indices within range */ if (row < 0 || row >= n_row) { - DEBUG0 (("colamd error! col %d row %d last_row %d\n", - col, row, last_row)) ; - return (-1) ; + stats [COLAMD_STATUS] = COLAMD_ERROR_row_index_out_of_bounds ; + stats [COLAMD_INFO1] = col ; + stats [COLAMD_INFO2] = row ; + stats [COLAMD_INFO3] = n_row ; + DEBUG0 (("colamd: row %d col %d out of bounds\n", row, col)) ; + return (FALSE) ; + } + + if (row <= last_row || Row [row].shared2.mark == col) + { + /* row index are unsorted or repeated (or both), thus col */ + /* is jumbled. This is a notice, not an error condition. 
*/ + stats [COLAMD_STATUS] = COLAMD_OK_BUT_JUMBLED ; + stats [COLAMD_INFO1] = col ; + stats [COLAMD_INFO2] = row ; + (stats [COLAMD_INFO3]) ++ ; + DEBUG1 (("colamd: row %d col %d unsorted/duplicate\n",row,col)); } - else if (row <= last_row) - { - /* row indices are not sorted or repeated, thus cols */ - /* are jumbled */ - jumbled_columns = TRUE ; - } - /* prevent repeated row from being counted */ + if (Row [row].shared2.mark != col) { Row [row].length++ ; - Row [row].shared2.mark = col ; - last_row = row ; } else { @@ -940,6 +1606,11 @@ /* it will be removed */ Col [col].length-- ; } + + /* mark the row as having been seen in this column */ + Row [row].shared2.mark = col ; + + last_row = row ; } } @@ -959,7 +1630,7 @@ /* === Create row form ================================================== */ - if (jumbled_columns) + if (stats [COLAMD_STATUS] == COLAMD_OK_BUT_JUMBLED) { /* if cols jumbled, watch for repeated row indices */ for (col = 0 ; col < n_col ; col++) @@ -1001,8 +1672,9 @@ /* === See if we need to re-create columns ============================== */ - if (jumbled_columns) + if (stats [COLAMD_STATUS] == COLAMD_OK_BUT_JUMBLED) { + DEBUG0 (("colamd: reconstructing column form, matrix jumbled\n")) ; #ifndef NDEBUG /* make sure column lengths are correct */ @@ -1021,10 +1693,10 @@ } for (col = 0 ; col < n_col ; col++) { - assert (p [col] == 0) ; + ASSERT (p [col] == 0) ; } /* now p is all zero (different than when debugging is turned off) */ -#endif +#endif /* NDEBUG */ /* === Compute col pointers ========================================= */ @@ -1053,13 +1725,11 @@ A [(p [*rp++])++] = row ; } } - return (1) ; - } - else - { - /* no columns jumbled (this is faster) */ - return (0) ; } + + /* === Done. Matrix is not (or no longer) jumbled ====================== */ + + return (TRUE) ; } @@ -1078,8 +1748,8 @@ int n_row, /* number of rows of A */ int n_col, /* number of columns of A */ - RowInfo Row [], /* of size n_row+1 */ - ColInfo Col [], /* of size n_col+1 */ + Colamd_Row Row [], /* of size n_row+1 */ + Colamd_Col Col [], /* of size n_col+1 */ int A [], /* column form and row form of A */ int head [], /* of size n_col+1 */ double knobs [COLAMD_KNOBS],/* parameters */ @@ -1093,7 +1763,7 @@ int c ; /* a column index */ int r, row ; /* a row index */ int *cp ; /* a column pointer */ - int deg ; /* degree (# entries) of a row or column */ + int deg ; /* degree of a row or column */ int *cp_end ; /* a pointer to the end of a column */ int *new_cp ; /* new column pointer */ int col_length ; /* length of pruned column */ @@ -1105,22 +1775,23 @@ int min_score ; /* smallest column score */ int max_deg ; /* maximum row degree */ int next_col ; /* Used to add to degree list.*/ + #ifndef NDEBUG int debug_count ; /* debug only. */ -#endif +#endif /* NDEBUG */ /* === Extract knobs ==================================================== */ dense_row_count = MAX (0, MIN (knobs [COLAMD_DENSE_ROW] * n_col, n_col)) ; dense_col_count = MAX (0, MIN (knobs [COLAMD_DENSE_COL] * n_row, n_row)) ; - DEBUG0 (("densecount: %d %d\n", dense_row_count, dense_col_count)) ; + DEBUG1 (("colamd: densecount: %d %d\n", dense_row_count, dense_col_count)) ; max_deg = 0 ; n_col2 = n_col ; n_row2 = n_row ; /* === Kill empty columns =============================================== */ - /* Put the empty columns at the end in their natural, so that LU */ + /* Put the empty columns at the end in their natural order, so that LU */ /* factorization can proceed as far as possible. 
*/ for (c = n_col-1 ; c >= 0 ; c--) { @@ -1132,7 +1803,7 @@ KILL_PRINCIPAL_COL (c) ; } } - DEBUG0 (("null columns killed: %d\n", n_col - n_col2)) ; + DEBUG1 (("colamd: null columns killed: %d\n", n_col - n_col2)) ; /* === Kill dense columns =============================================== */ @@ -1159,14 +1830,14 @@ KILL_PRINCIPAL_COL (c) ; } } - DEBUG0 (("Dense and null columns killed: %d\n", n_col - n_col2)) ; + DEBUG1 (("colamd: Dense and null columns killed: %d\n", n_col - n_col2)) ; /* === Kill dense and empty rows ======================================== */ for (r = 0 ; r < n_row ; r++) { deg = Row [r].shared1.degree ; - assert (deg >= 0 && deg <= n_col) ; + ASSERT (deg >= 0 && deg <= n_col) ; if (deg > dense_row_count || deg == 0) { /* kill a dense or empty row */ @@ -1179,7 +1850,7 @@ max_deg = MAX (max_deg, deg) ; } } - DEBUG0 (("Dense and null rows killed: %d\n", n_row - n_row2)) ; + DEBUG1 (("colamd: Dense and null rows killed: %d\n", n_row - n_row2)) ; /* === Compute initial column scores ==================================== */ @@ -1222,20 +1893,21 @@ { /* a newly-made null column (all rows in this col are "dense" */ /* and have already been killed) */ - DEBUG0 (("Newly null killed: %d\n", c)) ; + DEBUG2 (("Newly null killed: %d\n", c)) ; Col [c].shared2.order = --n_col2 ; KILL_PRINCIPAL_COL (c) ; } else { /* set column length and set score */ - assert (score >= 0) ; - assert (score <= n_col) ; + ASSERT (score >= 0) ; + ASSERT (score <= n_col) ; Col [c].length = col_length ; Col [c].shared2.score = score ; } } - DEBUG0 (("Dense, null, and newly-null columns killed: %d\n",n_col-n_col2)) ; + DEBUG1 (("colamd: Dense, null, and newly-null columns killed: %d\n", + n_col-n_col2)) ; /* At this point, all empty rows and columns are dead. All live columns */ /* are "clean" (containing no dead rows) and simplicial (no supercolumns */ @@ -1244,13 +1916,13 @@ #ifndef NDEBUG debug_structures (n_row, n_col, Row, Col, A, n_col2) ; -#endif +#endif /* NDEBUG */ /* === Initialize degree lists ========================================== */ #ifndef NDEBUG debug_count = 0 ; -#endif +#endif /* NDEBUG */ /* clear the hash buckets */ for (c = 0 ; c <= n_col ; c++) @@ -1272,11 +1944,11 @@ score = Col [c].shared2.score ; - assert (min_score >= 0) ; - assert (min_score <= n_col) ; - assert (score >= 0) ; - assert (score <= n_col) ; - assert (head [score] >= EMPTY) ; + ASSERT (min_score >= 0) ; + ASSERT (min_score <= n_col) ; + ASSERT (score >= 0) ; + ASSERT (score <= n_col) ; + ASSERT (head [score] >= EMPTY) ; /* now add this column to dList at proper score location */ next_col = head [score] ; @@ -1296,16 +1968,17 @@ #ifndef NDEBUG debug_count++ ; -#endif +#endif /* NDEBUG */ + } } #ifndef NDEBUG - DEBUG0 (("Live cols %d out of %d, non-princ: %d\n", + DEBUG1 (("colamd: Live cols %d out of %d, non-princ: %d\n", debug_count, n_col, n_col-debug_count)) ; - assert (debug_count == n_col2) ; + ASSERT (debug_count == n_col2) ; debug_deg_lists (n_row, n_col, Row, Col, head, min_score, n_col2, max_deg) ; -#endif +#endif /* NDEBUG */ /* === Return number of remaining columns, and max row degree =========== */ @@ -1331,9 +2004,9 @@ int n_row, /* number of rows of A */ int n_col, /* number of columns of A */ - int Alen, /* size of A, 2*nnz + elbow_room or larger */ - RowInfo Row [], /* of size n_row+1 */ - ColInfo Col [], /* of size n_col+1 */ + int Alen, /* size of A, 2*nnz + n_col or larger */ + Colamd_Row Row [], /* of size n_row+1 */ + Colamd_Col Col [], /* of size n_col+1 */ int A [], /* column form and row form 
of A */ int head [], /* of size n_col+1 */ int n_col2, /* Remaining columns to order */ @@ -1351,8 +2024,8 @@ int *new_cp ; /* modified column pointer */ int *new_rp ; /* modified row pointer */ int pivot_row_start ; /* pointer to start of pivot row */ - int pivot_row_degree ; /* # of columns in pivot row */ - int pivot_row_length ; /* # of supercolumns in pivot row */ + int pivot_row_degree ; /* number of columns in pivot row */ + int pivot_row_length ; /* number of supercolumns in pivot row */ int pivot_col_score ; /* score of pivot column */ int needed_memory ; /* free space needed for pivot row */ int *cp_end ; /* pointer to the end of a column */ @@ -1368,16 +2041,17 @@ int row_mark ; /* Row [row].shared2.mark */ int set_difference ; /* set difference size of row with pivot row */ int min_score ; /* smallest column score */ - int col_thickness ; /* "thickness" (# of columns in a supercol) */ + int col_thickness ; /* "thickness" (no. of columns in a supercol) */ int max_mark ; /* maximum value of tag_mark */ int pivot_col_thickness ; /* number of columns represented by pivot col */ int prev_col ; /* Used by Dlist operations. */ int next_col ; /* Used by Dlist operations. */ int ngarbage ; /* number of garbage collections performed */ + #ifndef NDEBUG int debug_d ; /* debug loop counter */ int debug_step = 0 ; /* debug loop counter */ -#endif +#endif /* NDEBUG */ /* === Initialization and clear mark ==================================== */ @@ -1385,7 +2059,7 @@ tag_mark = clear_mark (n_row, Row) ; min_score = 0 ; ngarbage = 0 ; - DEBUG0 (("Ordering.. n_col2=%d\n", n_col2)) ; + DEBUG1 (("colamd: Ordering, n_col2=%d\n", n_col2)) ; /* === Order the columns ================================================ */ @@ -1395,31 +2069,31 @@ #ifndef NDEBUG if (debug_step % 100 == 0) { - DEBUG0 (("\n... Step k: %d out of n_col2: %d\n", k, n_col2)) ; + DEBUG2 (("\n... 
Step k: %d out of n_col2: %d\n", k, n_col2)) ; } else { - DEBUG1 (("\n----------Step k: %d out of n_col2: %d\n", k, n_col2)) ; + DEBUG3 (("\n----------Step k: %d out of n_col2: %d\n", k, n_col2)) ; } debug_step++ ; debug_deg_lists (n_row, n_col, Row, Col, head, min_score, n_col2-k, max_deg) ; debug_matrix (n_row, n_col, Row, Col, A) ; -#endif +#endif /* NDEBUG */ /* === Select pivot column, and order it ============================ */ /* make sure degree list isn't empty */ - assert (min_score >= 0) ; - assert (min_score <= n_col) ; - assert (head [min_score] >= EMPTY) ; + ASSERT (min_score >= 0) ; + ASSERT (min_score <= n_col) ; + ASSERT (head [min_score] >= EMPTY) ; #ifndef NDEBUG for (debug_d = 0 ; debug_d < min_score ; debug_d++) { - assert (head [debug_d] == EMPTY) ; + ASSERT (head [debug_d] == EMPTY) ; } -#endif +#endif /* NDEBUG */ /* get pivot column from head of minimum degree list */ while (head [min_score] == EMPTY && min_score < n_col) @@ -1427,7 +2101,7 @@ min_score++ ; } pivot_col = head [min_score] ; - assert (pivot_col >= 0 && pivot_col <= n_col) ; + ASSERT (pivot_col >= 0 && pivot_col <= n_col) ; next_col = Col [pivot_col].shared4.degree_next ; head [min_score] = next_col ; if (next_col != EMPTY) @@ -1435,7 +2109,7 @@ Col [next_col].shared3.prev = EMPTY ; } - assert (COL_IS_ALIVE (pivot_col)) ; + ASSERT (COL_IS_ALIVE (pivot_col)) ; DEBUG3 (("Pivot col: %d\n", pivot_col)) ; /* remember score for defrag check */ @@ -1447,7 +2121,7 @@ /* increment order count by column thickness */ pivot_col_thickness = Col [pivot_col].shared1.thickness ; k += pivot_col_thickness ; - assert (pivot_col_thickness > 0) ; + ASSERT (pivot_col_thickness > 0) ; /* === Garbage_collection, if necessary ============================= */ @@ -1457,12 +2131,13 @@ pfree = garbage_collection (n_row, n_col, Row, Col, A, &A [pfree]) ; ngarbage++ ; /* after garbage collection we will have enough */ - assert (pfree + needed_memory < Alen) ; + ASSERT (pfree + needed_memory < Alen) ; /* garbage collection has wiped out the Row[].shared2.mark array */ tag_mark = clear_mark (n_row, Row) ; + #ifndef NDEBUG debug_matrix (n_row, n_col, Row, Col, A) ; -#endif +#endif /* NDEBUG */ } /* === Compute pivot row pattern ==================================== */ @@ -1502,7 +2177,7 @@ { /* tag column in pivot row */ Col [col].shared1.thickness = -col_thickness ; - assert (pfree < Alen) ; + ASSERT (pfree < Alen) ; /* place column in pivot row */ A [pfree++] = col ; pivot_row_degree += col_thickness ; @@ -1517,7 +2192,7 @@ #ifndef NDEBUG DEBUG3 (("check2\n")) ; debug_mark (n_row, Row, tag_mark, max_mark) ; -#endif +#endif /* NDEBUG */ /* === Kill all rows used to construct pivot row ==================== */ @@ -1528,7 +2203,7 @@ { /* may be killing an already dead row */ row = *cp++ ; - DEBUG2 (("Kill row in pivot col: %d\n", row)) ; + DEBUG3 (("Kill row in pivot col: %d\n", row)) ; KILL_ROW (row) ; } @@ -1539,15 +2214,15 @@ { /* pick the "pivot" row arbitrarily (first row in col) */ pivot_row = A [Col [pivot_col].start] ; - DEBUG2 (("Pivotal row is %d\n", pivot_row)) ; + DEBUG3 (("Pivotal row is %d\n", pivot_row)) ; } else { /* there is no pivot row, since it is of zero length */ pivot_row = EMPTY ; - assert (pivot_row_length == 0) ; + ASSERT (pivot_row_length == 0) ; } - assert (Col [pivot_col].length > 0 || pivot_row_length == 0) ; + ASSERT (Col [pivot_col].length > 0 || pivot_row_length == 0) ; /* === Approximate degree computation =============================== */ @@ -1570,23 +2245,23 @@ /* === Compute set differences 
====================================== */ - DEBUG1 (("** Computing set differences phase. **\n")) ; + DEBUG3 (("** Computing set differences phase. **\n")) ; /* pivot row is currently dead - it will be revived later. */ - DEBUG2 (("Pivot row: ")) ; + DEBUG3 (("Pivot row: ")) ; /* for each column in pivot row */ rp = &A [pivot_row_start] ; rp_end = rp + pivot_row_length ; while (rp < rp_end) { col = *rp++ ; - assert (COL_IS_ALIVE (col) && col != pivot_col) ; - DEBUG2 (("Col: %d\n", col)) ; + ASSERT (COL_IS_ALIVE (col) && col != pivot_col) ; + DEBUG3 (("Col: %d\n", col)) ; /* clear tags used to construct pivot row pattern */ col_thickness = -Col [col].shared1.thickness ; - assert (col_thickness > 0) ; + ASSERT (col_thickness > 0) ; Col [col].shared1.thickness = col_thickness ; /* === Remove column from degree list =========================== */ @@ -1594,9 +2269,9 @@ cur_score = Col [col].shared2.score ; prev_col = Col [col].shared3.prev ; next_col = Col [col].shared4.degree_next ; - assert (cur_score >= 0) ; - assert (cur_score <= n_col) ; - assert (cur_score >= EMPTY) ; + ASSERT (cur_score >= 0) ; + ASSERT (cur_score <= n_col) ; + ASSERT (cur_score >= EMPTY) ; if (prev_col == EMPTY) { head [cur_score] = next_col ; @@ -1624,21 +2299,21 @@ { continue ; } - assert (row != pivot_row) ; + ASSERT (row != pivot_row) ; set_difference = row_mark - tag_mark ; /* check if the row has been seen yet */ if (set_difference < 0) { - assert (Row [row].shared1.degree <= max_deg) ; + ASSERT (Row [row].shared1.degree <= max_deg) ; set_difference = Row [row].shared1.degree ; } /* subtract column thickness from this row's set difference */ set_difference -= col_thickness ; - assert (set_difference >= 0) ; + ASSERT (set_difference >= 0) ; /* absorb this row if the set difference becomes zero */ if (set_difference == 0) { - DEBUG1 (("aggressive absorption. Row: %d\n", row)) ; + DEBUG3 (("aggressive absorption. Row: %d\n", row)) ; KILL_ROW (row) ; } else @@ -1652,11 +2327,11 @@ #ifndef NDEBUG debug_deg_lists (n_row, n_col, Row, Col, head, min_score, n_col2-k-pivot_row_degree, max_deg) ; -#endif +#endif /* NDEBUG */ /* === Add up set differences for each column ======================= */ - DEBUG1 (("** Adding set differences phase. **\n")) ; + DEBUG3 (("** Adding set differences phase. **\n")) ; /* for each column in pivot row */ rp = &A [pivot_row_start] ; @@ -1665,7 +2340,7 @@ { /* get a column */ col = *rp++ ; - assert (COL_IS_ALIVE (col) && col != pivot_col) ; + ASSERT (COL_IS_ALIVE (col) && col != pivot_col) ; hash = 0 ; cur_score = 0 ; cp = &A [Col [col].start] ; @@ -1673,20 +2348,20 @@ new_cp = cp ; cp_end = cp + Col [col].length ; - DEBUG2 (("Adding set diffs for Col: %d.\n", col)) ; + DEBUG4 (("Adding set diffs for Col: %d.\n", col)) ; while (cp < cp_end) { /* get a row */ row = *cp++ ; - assert(row >= 0 && row < n_row) ; + ASSERT(row >= 0 && row < n_row) ; row_mark = Row [row].shared2.mark ; /* skip if dead */ if (ROW_IS_MARKED_DEAD (row_mark)) { continue ; } - assert (row_mark > tag_mark) ; + ASSERT (row_mark > tag_mark) ; /* compact the column */ *new_cp++ = row ; /* compute hash function */ @@ -1704,11 +2379,11 @@ if (Col [col].length == 0) { - DEBUG1 (("further mass elimination. Col: %d\n", col)) ; + DEBUG4 (("further mass elimination. 
Col: %d\n", col)) ; /* nothing left but the pivot row in this column */ KILL_PRINCIPAL_COL (col) ; pivot_row_degree -= Col [col].shared1.thickness ; - assert (pivot_row_degree >= 0) ; + ASSERT (pivot_row_degree >= 0) ; /* order it */ Col [col].shared2.order = k ; /* increment order count by column thickness */ @@ -1718,7 +2393,7 @@ { /* === Prepare for supercolumn detection ==================== */ - DEBUG2 (("Preparing supercol detection for Col: %d.\n", col)) ; + DEBUG4 (("Preparing supercol detection for Col: %d.\n", col)) ; /* save score so far */ Col [col].shared2.score = cur_score ; @@ -1726,8 +2401,8 @@ /* add column to hash table, for supercolumn detection */ hash %= n_col + 1 ; - DEBUG2 ((" Hash = %d, n_col = %d.\n", hash, n_col)) ; - assert (hash <= n_col) ; + DEBUG4 ((" Hash = %d, n_col = %d.\n", hash, n_col)) ; + ASSERT (hash <= n_col) ; head_column = head [hash] ; if (head_column > EMPTY) @@ -1747,7 +2422,7 @@ /* save hash function in Col [col].shared3.hash */ Col [col].shared3.hash = (int) hash ; - assert (COL_IS_ALIVE (col)) ; + ASSERT (COL_IS_ALIVE (col)) ; } } @@ -1755,12 +2430,14 @@ /* === Supercolumn detection ======================================== */ - DEBUG1 (("** Supercolumn detection phase. **\n")) ; + DEBUG3 (("** Supercolumn detection phase. **\n")) ; detect_super_cols ( + #ifndef NDEBUG n_col, Row, -#endif +#endif /* NDEBUG */ + Col, A, head, pivot_row_start, pivot_row_length) ; /* === Kill the pivotal column ====================================== */ @@ -1772,17 +2449,18 @@ tag_mark += (max_deg + 1) ; if (tag_mark >= max_mark) { - DEBUG1 (("clearing tag_mark\n")) ; + DEBUG2 (("clearing tag_mark\n")) ; tag_mark = clear_mark (n_row, Row) ; } + #ifndef NDEBUG DEBUG3 (("check3\n")) ; debug_mark (n_row, Row, tag_mark, max_mark) ; -#endif +#endif /* NDEBUG */ /* === Finalize the new pivot row, and column scores ================ */ - DEBUG1 (("** Finalize scores phase. **\n")) ; + DEBUG3 (("** Finalize scores phase. **\n")) ; /* for each column in pivot row */ rp = &A [pivot_row_start] ; @@ -1816,18 +2494,18 @@ /* make sure score is less or equal than the max score */ cur_score = MIN (cur_score, max_score) ; - assert (cur_score >= 0) ; + ASSERT (cur_score >= 0) ; /* store updated score */ Col [col].shared2.score = cur_score ; /* === Place column back in degree list ========================= */ - assert (min_score >= 0) ; - assert (min_score <= n_col) ; - assert (cur_score >= 0) ; - assert (cur_score <= n_col) ; - assert (head [cur_score] >= EMPTY) ; + ASSERT (min_score >= 0) ; + ASSERT (min_score <= n_col) ; + ASSERT (cur_score >= 0) ; + ASSERT (cur_score <= n_col) ; + ASSERT (head [cur_score] >= EMPTY) ; next_col = head [cur_score] ; Col [col].shared4.degree_next = next_col ; Col [col].shared3.prev = EMPTY ; @@ -1845,7 +2523,7 @@ #ifndef NDEBUG debug_deg_lists (n_row, n_col, Row, Col, head, min_score, n_col2-k, max_deg) ; -#endif +#endif /* NDEBUG */ /* === Resurrect the new pivot row ================================== */ @@ -1889,7 +2567,7 @@ /* === Parameters ======================================================= */ int n_col, /* number of columns of A */ - ColInfo Col [], /* of size n_col+1 */ + Colamd_Col Col [], /* of size n_col+1 */ int p [] /* p [0 ... 
n_col-1] is the column permutation*/ ) { @@ -1905,7 +2583,7 @@ for (i = 0 ; i < n_col ; i++) { /* find an un-ordered non-principal column */ - assert (COL_IS_DEAD (i)) ; + ASSERT (COL_IS_DEAD (i)) ; if (!COL_IS_DEAD_PRINCIPAL (i) && Col [i].shared2.order == EMPTY) { parent = i ; @@ -1923,7 +2601,7 @@ do { - assert (Col [c].shared2.order == EMPTY) ; + ASSERT (Col [c].shared2.order == EMPTY) ; /* order this column */ Col [c].shared2.order = order++ ; @@ -1992,9 +2670,10 @@ #ifndef NDEBUG /* these two parameters are only needed when debugging is enabled: */ int n_col, /* number of columns of A */ - RowInfo Row [], /* of size n_row+1 */ -#endif - ColInfo Col [], /* of size n_col+1 */ + Colamd_Row Row [], /* of size n_row+1 */ +#endif /* NDEBUG */ + + Colamd_Col Col [], /* of size n_col+1 */ int A [], /* row indices of A */ int head [], /* head of degree lists and hash buckets */ int row_start, /* pointer to set of columns to check */ @@ -2003,7 +2682,7 @@ { /* === Local variables ================================================== */ - int hash ; /* hash # for a column */ + int hash ; /* hash value for a column */ int *rp ; /* pointer to a row */ int c ; /* a column index */ int super_c ; /* column index of the column to absorb into */ @@ -2031,7 +2710,7 @@ /* get hash number for this column */ hash = Col [col].shared3.hash ; - assert (hash <= n_col) ; + ASSERT (hash <= n_col) ; /* === Get the first column in this hash bucket ===================== */ @@ -2050,8 +2729,8 @@ for (super_c = first_col ; super_c != EMPTY ; super_c = Col [super_c].shared4.hash_next) { - assert (COL_IS_ALIVE (super_c)) ; - assert (Col [super_c].shared3.hash == hash) ; + ASSERT (COL_IS_ALIVE (super_c)) ; + ASSERT (Col [super_c].shared3.hash == hash) ; length = Col [super_c].length ; /* prev_c is the column preceding column c in the hash bucket */ @@ -2062,9 +2741,9 @@ for (c = Col [super_c].shared4.hash_next ; c != EMPTY ; c = Col [c].shared4.hash_next) { - assert (c != super_c) ; - assert (COL_IS_ALIVE (c)) ; - assert (Col [c].shared3.hash == hash) ; + ASSERT (c != super_c) ; + ASSERT (COL_IS_ALIVE (c)) ; + ASSERT (Col [c].shared3.hash == hash) ; /* not identical if lengths or scores are different */ if (Col [c].length != length || @@ -2081,8 +2760,8 @@ for (i = 0 ; i < length ; i++) { /* the columns are "clean" (no dead rows) */ - assert (ROW_IS_ALIVE (*cp1)) ; - assert (ROW_IS_ALIVE (*cp2)) ; + ASSERT (ROW_IS_ALIVE (*cp1)) ; + ASSERT (ROW_IS_ALIVE (*cp2)) ; /* row indices will same order for both supercols, */ /* no gather scatter nessasary */ if (*cp1++ != *cp2++) @@ -2100,7 +2779,7 @@ /* === Got it! two columns are identical =================== */ - assert (Col [c].shared2.score == Col [super_c].shared2.score) ; + ASSERT (Col [c].shared2.score == Col [super_c].shared2.score) ; Col [super_c].shared1.thickness += Col [c].shared1.thickness ; Col [c].shared1.parent = super_c ; @@ -2147,8 +2826,8 @@ int n_row, /* number of rows */ int n_col, /* number of columns */ - RowInfo Row [], /* row info */ - ColInfo Col [], /* column info */ + Colamd_Row Row [], /* row info */ + Colamd_Col Col [], /* column info */ int A [], /* A [0 ... Alen-1] holds the matrix */ int *pfree /* &A [0] ... 
pfree is in use */ ) @@ -2164,10 +2843,10 @@ #ifndef NDEBUG int debug_rows ; - DEBUG0 (("Defrag..\n")) ; - for (psrc = &A[0] ; psrc < pfree ; psrc++) assert (*psrc >= 0) ; + DEBUG2 (("Defrag..\n")) ; + for (psrc = &A[0] ; psrc < pfree ; psrc++) ASSERT (*psrc >= 0) ; debug_rows = 0 ; -#endif +#endif /* NDEBUG */ /* === Defragment the columns =========================================== */ @@ -2179,7 +2858,7 @@ psrc = &A [Col [c].start] ; /* move and compact the column */ - assert (pdest <= psrc) ; + ASSERT (pdest <= psrc) ; Col [c].start = (int) (pdest - &A [0]) ; length = Col [c].length ; for (j = 0 ; j < length ; j++) @@ -2203,7 +2882,7 @@ if (Row [r].length == 0) { /* this row is of zero length. cannot compact it, so kill it */ - DEBUG0 (("Defrag row kill\n")) ; + DEBUG3 (("Defrag row kill\n")) ; KILL_ROW (r) ; } else @@ -2211,12 +2890,14 @@ /* save first column index in Row [r].shared2.first_column */ psrc = &A [Row [r].start] ; Row [r].shared2.first_column = *psrc ; - assert (ROW_IS_ALIVE (r)) ; + ASSERT (ROW_IS_ALIVE (r)) ; /* flag the start of the row with the one's complement of row */ *psrc = ONES_COMPLEMENT (r) ; + #ifndef NDEBUG debug_rows++ ; -#endif +#endif /* NDEBUG */ + } } } @@ -2232,13 +2913,13 @@ psrc-- ; /* get the row index */ r = ONES_COMPLEMENT (*psrc) ; - assert (r >= 0 && r < n_row) ; + ASSERT (r >= 0 && r < n_row) ; /* restore first column index */ *psrc = Row [r].shared2.first_column ; - assert (ROW_IS_ALIVE (r)) ; + ASSERT (ROW_IS_ALIVE (r)) ; /* move and compact the row */ - assert (pdest <= psrc) ; + ASSERT (pdest <= psrc) ; Row [r].start = (int) (pdest - &A [0]) ; length = Row [r].length ; for (j = 0 ; j < length ; j++) @@ -2250,13 +2931,15 @@ } } Row [r].length = (int) (pdest - &A [Row [r].start]) ; + #ifndef NDEBUG debug_rows-- ; -#endif +#endif /* NDEBUG */ + } } /* ensure we found all the rows */ - assert (debug_rows == 0) ; + ASSERT (debug_rows == 0) ; /* === Return the new value of pfree ==================================== */ @@ -2278,14 +2961,13 @@ /* === Parameters ======================================================= */ int n_row, /* number of rows in A */ - RowInfo Row [] /* Row [0 ... n_row-1].shared2.mark is set to zero */ + Colamd_Row Row [] /* Row [0 ... n_row-1].shared2.mark is set to zero */ ) { /* === Local variables ================================================== */ int r ; - DEBUG0 (("Clear mark\n")) ; for (r = 0 ; r < n_row ; r++) { if (ROW_IS_ALIVE (r)) @@ -2298,7 +2980,139 @@ /* ========================================================================== */ -/* === debugging routines =================================================== */ +/* === print_report ========================================================= */ +/* ========================================================================== */ + +PRIVATE void print_report +( + char *method, + int stats [COLAMD_STATS] +) +{ + + int i1, i2, i3 ; + + if (!stats) + { + PRINTF ("%s: No statistics available.\n", method) ; + return ; + } + + i1 = stats [COLAMD_INFO1] ; + i2 = stats [COLAMD_INFO2] ; + i3 = stats [COLAMD_INFO3] ; + + if (stats [COLAMD_STATUS] >= 0) + { + PRINTF ("%s: OK. ", method) ; + } + else + { + PRINTF ("%s: ERROR. 
", method) ; + } + + switch (stats [COLAMD_STATUS]) + { + + case COLAMD_OK_BUT_JUMBLED: + + PRINTF ("Matrix has unsorted or duplicate row indices.\n") ; + + PRINTF ("%s: number of duplicate or out-of-order row indices: %d\n", + method, i3) ; + + PRINTF ("%s: last seen duplicate or out-of-order row index: %d\n", + method, INDEX (i2)) ; + + PRINTF ("%s: last seen in column: %d", + method, INDEX (i1)) ; + + /* no break - fall through to next case instead */ + + case COLAMD_OK: + + PRINTF ("\n") ; + + PRINTF ("%s: number of dense or empty rows ignored: %d\n", + method, stats [COLAMD_DENSE_ROW]) ; + + PRINTF ("%s: number of dense or empty columns ignored: %d\n", + method, stats [COLAMD_DENSE_COL]) ; + + PRINTF ("%s: number of garbage collections performed: %d\n", + method, stats [COLAMD_DEFRAG_COUNT]) ; + break ; + + case COLAMD_ERROR_A_not_present: + + PRINTF ("Array A (row indices of matrix) not present.\n") ; + break ; + + case COLAMD_ERROR_p_not_present: + + PRINTF ("Array p (column pointers for matrix) not present.\n") ; + break ; + + case COLAMD_ERROR_nrow_negative: + + PRINTF ("Invalid number of rows (%d).\n", i1) ; + break ; + + case COLAMD_ERROR_ncol_negative: + + PRINTF ("Invalid number of columns (%d).\n", i1) ; + break ; + + case COLAMD_ERROR_nnz_negative: + + PRINTF ("Invalid number of nonzero entries (%d).\n", i1) ; + break ; + + case COLAMD_ERROR_p0_nonzero: + + PRINTF ("Invalid column pointer, p [0] = %d, must be zero.\n", i1) ; + break ; + + case COLAMD_ERROR_A_too_small: + + PRINTF ("Array A too small.\n") ; + PRINTF (" Need Alen >= %d, but given only Alen = %d.\n", + i1, i2) ; + break ; + + case COLAMD_ERROR_col_length_negative: + + PRINTF + ("Column %d has a negative number of nonzero entries (%d).\n", + INDEX (i1), i2) ; + break ; + + case COLAMD_ERROR_row_index_out_of_bounds: + + PRINTF + ("Row index (row %d) out of bounds (%d to %d) in column %d.\n", + INDEX (i2), INDEX (0), INDEX (i3-1), INDEX (i1)) ; + break ; + + case COLAMD_ERROR_out_of_memory: + + PRINTF ("Out of memory.\n") ; + break ; + + case COLAMD_ERROR_internal_error: + + /* if this happens, there is a bug in the code */ + PRINTF + ("Internal error! Please contact authors (davis@cise.ufl.edu).\n") ; + break ; + } +} + + + + +/* ========================================================================== */ +/* === colamd debugging routines ============================================ */ /* ========================================================================== */ /* When debugging is disabled, the remainder of this file is ignored. 
*/ @@ -2323,8 +3137,8 @@ int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [], int n_col2 ) @@ -2351,21 +3165,21 @@ len = Col [c].length ; score = Col [c].shared2.score ; DEBUG4 (("initial live col %5d %5d %5d\n", c, len, score)) ; - assert (len > 0) ; - assert (score >= 0) ; - assert (Col [c].shared1.thickness == 1) ; + ASSERT (len > 0) ; + ASSERT (score >= 0) ; + ASSERT (Col [c].shared1.thickness == 1) ; cp = &A [Col [c].start] ; cp_end = cp + len ; while (cp < cp_end) { r = *cp++ ; - assert (ROW_IS_ALIVE (r)) ; + ASSERT (ROW_IS_ALIVE (r)) ; } } else { i = Col [c].shared2.order ; - assert (i >= n_col2 && i < n_col) ; + ASSERT (i >= n_col2 && i < n_col) ; } } @@ -2376,8 +3190,8 @@ i = 0 ; len = Row [r].length ; deg = Row [r].shared1.degree ; - assert (len > 0) ; - assert (deg > 0) ; + ASSERT (len > 0) ; + ASSERT (deg > 0) ; rp = &A [Row [r].start] ; rp_end = rp + len ; while (rp < rp_end) @@ -2388,7 +3202,7 @@ i++ ; } } - assert (i > 0) ; + ASSERT (i > 0) ; } } } @@ -2410,8 +3224,8 @@ int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int head [], int min_score, int should, @@ -2427,7 +3241,7 @@ /* === Check the degree lists =========================================== */ - if (n_col > 10000 && debug_colamd <= 0) + if (n_col > 10000 && colamd_debug <= 0) { return ; } @@ -2445,17 +3259,17 @@ { DEBUG4 ((" %d", col)) ; have += Col [col].shared1.thickness ; - assert (COL_IS_ALIVE (col)) ; + ASSERT (COL_IS_ALIVE (col)) ; col = Col [col].shared4.degree_next ; } DEBUG4 (("\n")) ; } DEBUG4 (("should %d have %d\n", should, have)) ; - assert (should == have) ; + ASSERT (should == have) ; /* === Check the row degrees ============================================ */ - if (n_row > 10000 && debug_colamd <= 0) + if (n_row > 10000 && colamd_debug <= 0) { return ; } @@ -2463,7 +3277,7 @@ { if (ROW_IS_ALIVE (row)) { - assert (Row [row].shared1.degree <= max_deg) ; + ASSERT (Row [row].shared1.degree <= max_deg) ; } } } @@ -2483,7 +3297,7 @@ /* === Parameters ======================================================= */ int n_row, - RowInfo Row [], + Colamd_Row Row [], int tag_mark, int max_mark ) @@ -2494,14 +3308,14 @@ /* === Check the Row marks ============================================== */ - assert (tag_mark > 0 && tag_mark <= max_mark) ; - if (n_row > 10000 && debug_colamd <= 0) + ASSERT (tag_mark > 0 && tag_mark <= max_mark) ; + if (n_row > 10000 && colamd_debug <= 0) { return ; } for (r = 0 ; r < n_row ; r++) { - assert (Row [r].shared2.mark < tag_mark) ; + ASSERT (Row [r].shared2.mark < tag_mark) ; } } @@ -2520,8 +3334,8 @@ int n_row, int n_col, - RowInfo Row [], - ColInfo Col [], + Colamd_Row Row [], + Colamd_Col Col [], int A [] ) { @@ -2536,7 +3350,7 @@ /* === Dump the rows and columns of the matrix ========================== */ - if (debug_colamd < 3) + if (colamd_debug < 3) { return ; } @@ -2555,7 +3369,7 @@ while (rp < rp_end) { c = *rp++ ; - DEBUG3 ((" %d col %d\n", COL_IS_ALIVE (c), c)) ; + DEBUG4 ((" %d col %d\n", COL_IS_ALIVE (c), c)) ; } } @@ -2574,10 +3388,27 @@ while (cp < cp_end) { r = *cp++ ; - DEBUG3 ((" %d row %d\n", ROW_IS_ALIVE (r), r)) ; + DEBUG4 ((" %d row %d\n", ROW_IS_ALIVE (r), r)) ; } } } -#endif +PRIVATE void colamd_get_debug +( + char *method +) +{ + colamd_debug = 0 ; /* no debug printing */ + + /* get "D" environment variable, which gives the debug printing level */ + if (getenv ("D")) + { + colamd_debug = atoi (getenv ("D")) ; + } + + DEBUG0 (("%s: debug version, D = %d 
(THIS WILL BE SLOW!)\n", + method, colamd_debug)) ; +} + +#endif /* NDEBUG */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/colamd.h 2010-07-26 15:48:34.000000000 +0100 @@ -1,49 +1,203 @@ -/* ========================================================================== */ -/* === colamd prototypes and definitions ==================================== */ -/* ========================================================================== */ +/*! @file colamd.h + \brief Colamd prototypes and definitions -/* - This is the colamd include file, +
 
+    ==========================================================================
+    === colamd/symamd prototypes and definitions =============================
+    ==========================================================================
+
+    You must include this file (colamd.h) in any routine that uses colamd,
+    symamd, or the related macros and definitions.
+
+    Authors:
+
+	The authors of the code itself are Stefan I. Larimore and Timothy A.
+	Davis (davis@cise.ufl.edu), University of Florida.  The algorithm was
+	developed in collaboration with John Gilbert, Xerox PARC, and Esmond
+	Ng, Oak Ridge National Laboratory.
+
+    Date:
+
+	September 8, 2003.  Version 2.3.
+
+    Acknowledgements:
+
+	This work was supported by the National Science Foundation, under
+	grants DMS-9504974 and DMS-9803599.
 
-	http://www.cise.ufl.edu/~davis/colamd/colamd.h
+    Notice:
 
-    for use in the colamd.c, colamdmex.c, and symamdmex.c files located at
+	Copyright (c) 1998-2003 by the University of Florida.
+	All Rights Reserved.
 
-	http://www.cise.ufl.edu/~davis/colamd/
+	THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+	EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
 
-    See those files for a description of colamd and symamd, and for the
-    copyright notice, which also applies to this file.
+	Permission is hereby granted to use, copy, modify, and/or distribute
+	this program, provided that the Copyright, this License, and the
+	Availability of the original version is retained on all copies and made
+	accessible to the end-user of any code or package that includes COLAMD
+	or any modified version of COLAMD. 
 
-    August 3, 1998.  Version 1.0.
+    Availability:
+
+	The colamd/symamd library is available at
+
+	    http://www.cise.ufl.edu/research/sparse/colamd/
+
+	This is the http://www.cise.ufl.edu/research/sparse/colamd/colamd.h
+	file.  It is required by the colamd.c, colamdmex.c, and symamdmex.c
+	files, and by any C code that calls the routines whose prototypes are
+	listed below, or that uses the colamd/symamd definitions listed below.
+ 
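The prototypes and macros declared below are easiest to understand from a caller's point of view. The following stand-alone sketch orders the columns of a small made-up 3-by-3 pattern with colamd(): COLAMD_RECOMMENDED() sizes the workspace, the row indices and column pointers are copied into A and p (both are overwritten), and stats[COLAMD_STATUS] is checked afterwards. Passing NULL for knobs to request the default parameter settings is an assumption about the library's behaviour, not something restated in this header.

#include <stdio.h>
#include <stdlib.h>
#include "colamd.h"

int main (void)
{
    /* made-up 3-by-3 pattern with 5 nonzeros, zero-based, column by column */
    int n_row = 3, n_col = 3, nnz = 5 ;
    int rows [5] = { 0, 1,   1, 2,   0 } ;   /* row indices, column by column */
    int cols [4] = { 0, 2, 4, 5 } ;          /* column pointers, p [0] must be 0 */

    int stats [COLAMD_STATS] ;
    int Alen = COLAMD_RECOMMENDED (nnz, n_row, n_col) ;
    int *A = (int *) malloc (Alen * sizeof (int)) ;
    int p [4] ;
    int i, ok ;

    if (!A) return 1 ;

    /* colamd overwrites its inputs, so work on copies */
    for (i = 0 ; i < nnz ; i++) A [i] = rows [i] ;
    for (i = 0 ; i <= n_col ; i++) p [i] = cols [i] ;

    ok = colamd (n_row, n_col, Alen, A, p, (double *) NULL, stats) ;

    if (!ok || stats [COLAMD_STATUS] < 0)
    {
        printf ("colamd failed, status %d\n", stats [COLAMD_STATUS]) ;
        free (A) ;
        return 1 ;
    }
    /* on success, p [0 ... n_col-1] holds the column ordering */
    for (i = 0 ; i < n_col ; i++)
        printf ("pivot %d: column %d\n", i, p [i]) ;
    free (A) ;
    return 0 ;
}

The same stats array can afterwards be handed to colamd_report(), declared further down, to print the messages implemented by print_report() in colamd.c.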
*/ +#ifndef COLAMD_H +#define COLAMD_H + +/* ========================================================================== */ +/* === Include files ======================================================== */ +/* ========================================================================== */ + +#include + /* ========================================================================== */ -/* === Definitions ========================================================== */ +/* === Knob and statistics definitions ====================================== */ /* ========================================================================== */ /* size of the knobs [ ] array. Only knobs [0..1] are currently used. */ #define COLAMD_KNOBS 20 -/* number of output statistics. Only A [0..2] are currently used. */ +/* number of output statistics. Only stats [0..6] are currently used. */ #define COLAMD_STATS 20 -/* knobs [0] and A [0]: dense row knob and output statistic. */ +/* knobs [0] and stats [0]: dense row knob and output statistic. */ #define COLAMD_DENSE_ROW 0 -/* knobs [1] and A [1]: dense column knob and output statistic. */ +/* knobs [1] and stats [1]: dense column knob and output statistic. */ #define COLAMD_DENSE_COL 1 -/* A [2]: memory defragmentation count output statistic */ +/* stats [2]: memory defragmentation count output statistic */ #define COLAMD_DEFRAG_COUNT 2 -/* A [3]: whether or not the input columns were jumbled or had duplicates */ -#define COLAMD_JUMBLED_COLS 3 +/* stats [3]: colamd status: zero OK, > 0 warning or notice, < 0 error */ +#define COLAMD_STATUS 3 + +/* stats [4..6]: error info, or info on jumbled columns */ +#define COLAMD_INFO1 4 +#define COLAMD_INFO2 5 +#define COLAMD_INFO3 6 + +/* error codes returned in stats [3]: */ +#define COLAMD_OK (0) +#define COLAMD_OK_BUT_JUMBLED (1) +#define COLAMD_ERROR_A_not_present (-1) +#define COLAMD_ERROR_p_not_present (-2) +#define COLAMD_ERROR_nrow_negative (-3) +#define COLAMD_ERROR_ncol_negative (-4) +#define COLAMD_ERROR_nnz_negative (-5) +#define COLAMD_ERROR_p0_nonzero (-6) +#define COLAMD_ERROR_A_too_small (-7) +#define COLAMD_ERROR_col_length_negative (-8) +#define COLAMD_ERROR_row_index_out_of_bounds (-9) +#define COLAMD_ERROR_out_of_memory (-10) +#define COLAMD_ERROR_internal_error (-999) + +/* ========================================================================== */ +/* === Row and Column structures ============================================ */ +/* ========================================================================== */ + +/* User code that makes use of the colamd/symamd routines need not directly */ +/* reference these structures. They are used only for the COLAMD_RECOMMENDED */ +/* macro. 
*/ + +typedef struct Colamd_Col_struct +{ + int start ; /* index for A of first row in this column, or DEAD */ + /* if column is dead */ + int length ; /* number of rows in this column */ + union + { + int thickness ; /* number of original columns represented by this */ + /* col, if the column is alive */ + int parent ; /* parent in parent tree super-column structure, if */ + /* the column is dead */ + } shared1 ; + union + { + int score ; /* the score used to maintain heap, if col is alive */ + int order ; /* pivot ordering of this column, if col is dead */ + } shared2 ; + union + { + int headhash ; /* head of a hash bucket, if col is at the head of */ + /* a degree list */ + int hash ; /* hash value, if col is not in a degree list */ + int prev ; /* previous column in degree list, if col is in a */ + /* degree list (but not at the head of a degree list) */ + } shared3 ; + union + { + int degree_next ; /* next column, if col is in a degree list */ + int hash_next ; /* next column, if col is in a hash list */ + } shared4 ; + +} Colamd_Col ; + +typedef struct Colamd_Row_struct +{ + int start ; /* index for A of first col in this row */ + int length ; /* number of principal columns in this row */ + union + { + int degree ; /* number of principal & non-principal columns in row */ + int p ; /* used as a row pointer in init_rows_cols () */ + } shared1 ; + union + { + int mark ; /* for computing set differences and marking dead rows*/ + int first_column ;/* first column in row (used in garbage collection) */ + } shared2 ; + +} Colamd_Row ; + +/* ========================================================================== */ +/* === Colamd recommended memory size ======================================= */ +/* ========================================================================== */ + +/* + The recommended length Alen of the array A passed to colamd is given by + the COLAMD_RECOMMENDED (nnz, n_row, n_col) macro. It returns -1 if any + argument is negative. 2*nnz space is required for the row and column + indices of the matrix. COLAMD_C (n_col) + COLAMD_R (n_row) space is + required for the Col and Row arrays, respectively, which are internal to + colamd. An additional n_col space is the minimal amount of "elbow room", + and nnz/5 more space is recommended for run time efficiency. + + This macro is not needed when using symamd. + + Explicit typecast to int added Sept. 23, 2002, COLAMD version 2.2, to avoid + gcc -pedantic warning messages. +*/ + +#define COLAMD_C(n_col) ((int) (((n_col) + 1) * sizeof (Colamd_Col) / sizeof (int))) +#define COLAMD_R(n_row) ((int) (((n_row) + 1) * sizeof (Colamd_Row) / sizeof (int))) + +#define COLAMD_RECOMMENDED(nnz, n_row, n_col) \ +( \ +((nnz) < 0 || (n_row) < 0 || (n_col) < 0) \ +? 
\ + (-1) \ +: \ + (2 * (nnz) + COLAMD_C (n_col) + COLAMD_R (n_row) + (n_col) + ((nnz) / 5)) \ +) /* ========================================================================== */ /* === Prototypes of user-callable routines ================================= */ /* ========================================================================== */ -int colamd_recommended /* returns recommended value of Alen */ +int colamd_recommended /* returns recommended value of Alen, */ + /* or (-1) if input arguments are erroneous */ ( int nnz, /* nonzeros in A */ int n_row, /* number of rows in A */ @@ -55,13 +209,41 @@ double knobs [COLAMD_KNOBS] /* parameter settings for colamd */ ) ; -int colamd /* returns TRUE if successful, FALSE otherwise*/ +int colamd /* returns (1) if successful, (0) otherwise*/ ( /* A and p arguments are modified on output */ int n_row, /* number of rows in A */ int n_col, /* number of columns in A */ int Alen, /* size of the array A */ int A [], /* row indices of A, of size Alen */ int p [], /* column pointers of A, of size n_col+1 */ - double knobs [COLAMD_KNOBS] /* parameter settings for colamd */ + double knobs [COLAMD_KNOBS],/* parameter settings for colamd */ + int stats [COLAMD_STATS] /* colamd output statistics and error codes */ +) ; + +int symamd /* return (1) if OK, (0) otherwise */ +( + int n, /* number of rows and columns of A */ + int A [], /* row indices of A */ + int p [], /* column pointers of A */ + int perm [], /* output permutation, size n_col+1 */ + double knobs [COLAMD_KNOBS], /* parameters (uses defaults if NULL) */ + int stats [COLAMD_STATS], /* output statistics and error codes */ + void * (*allocate) (size_t, size_t), + /* pointer to calloc (ANSI C) or */ + /* mxCalloc (for MATLAB mexFunction) */ + void (*release) (void *) + /* pointer to free (ANSI C) or */ + /* mxFree (for MATLAB mexFunction) */ +) ; + +void colamd_report +( + int stats [COLAMD_STATS] +) ; + +void symamd_report +( + int stats [COLAMD_STATS] ) ; +#endif /* COLAMD_H */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,32 @@ -/* +/*! @file cpanel_bmod.c + * \brief Performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ /* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. + */ #include #include -#include "csp_defs.h" +#include "slu_cdefs.h" /* * Function prototypes @@ -30,6 +35,25 @@ void cmatvec(int, int, int, complex *, complex *, complex *); extern void ccheck_tempv(); +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *    Performs numeric block updates (sup-panel) in topological order.
+ *    It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ *    Special processing on the supernodal portion of L\U[*,j]
+ *
+ *    Before entering this routine, the original nonzeros in the panel 
+ *    were already copied into the spa[m,w].
+ *
+ *    Updated/Output parameters-
+ *    dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned 
+ *    collectively in the m-by-w vector dense[*]. 
+ * 
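The dense[0:m-1,w] wording above is terse, so here is a minimal sketch of the indexing it implies. Treating dense[] as an m-by-w array stored column by column with leading dimension m is an assumption read off the "m-by-w vector" phrase; it is not spelled out in this hunk.

#include "slu_cdefs.h"

/* Illustrative only: if dense[] is laid out column-major with leading
 * dimension m, the updated entry in row i of panel column j+k is found at
 * dense[k*m + i]. */
static complex panel_entry (const complex *dense, int m, int i, int k)
{
    return dense [k * m + i] ;
}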
+ */ + void cpanel_bmod ( const int m, /* in - number of rows in the matrix */ @@ -44,22 +68,7 @@ SuperLUStat_t *stat /* output */ ) { -/* - * Purpose - * ======= - * - * Performs numeric block updates (sup-panel) in topological order. - * It features: col-col, 2cols-col, 3cols-col, and sup-col updates. - * Special processing on the supernodal portion of L\U[*,j] - * - * Before entering this routine, the original nonzeros in the panel - * were already copied into the spa[m,w]. - * - * Updated/Output parameters- - * dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned - * collectively in the m-by-w vector dense[*]. - * - */ + #ifdef USE_VENDOR_BLAS #ifdef _CRAY diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,48 +1,32 @@ - -/* +/*! @file cpanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "csp_defs.h" -#include "util.h" -void -cpanel_dfs ( - const int m, /* in - number of rows in the matrix */ - const int w, /* in */ - const int jcol, /* in */ - SuperMatrix *A, /* in - original matrix */ - int *perm_r, /* in */ - int *nseg, /* out */ - complex *dense, /* out */ - int *panel_lsub, /* out */ - int *segrep, /* out */ - int *repfnz, /* out */ - int *xprune, /* out */ - int *marker, /* out */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* +#include "slu_cdefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -68,8 +52,29 @@
  *   repfnz: SuperA-col --> PA-row
  *   parent: SuperA-col --> SuperA-col
  *   xplore: SuperA-col --> index to L-structure
- *
+ * 
*/ + +void +cpanel_dfs ( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + complex *dense, /* out */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *xprune, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + NCPformat *Astore; complex *a; int *asub; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotgrowth.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotgrowth.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotgrowth.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotgrowth.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,21 +1,20 @@ - -/* +/*! @file cpivotgrowth.c + * \brief Computes the reciprocal pivot growth factor + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ #include -#include "csp_defs.h" -#include "util.h" +#include "slu_cdefs.h" -float -cPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, - SuperMatrix *L, SuperMatrix *U) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -43,8 +42,14 @@
  *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
  *          storage scheme, i.e., U has types: Stype = NC;
  *          Dtype = SLU_C; Mtype = TRU.
- *
+ * 
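For context, the reciprocal pivot growth this routine's name refers to is commonly defined column-wise as the minimum over j of max_i |A(i,j)| / max_i |U(i,j)|; whether cPivotGrowth uses exactly this convention is an assumption, since the hunk only shows its interface. The fragment below computes that quantity for plain dense real arrays purely to illustrate the definition; it is not the routine's implementation, which works on the SuperMatrix factors described above.

#include <math.h>

/* Illustrative only: reciprocal pivot growth of a dense column-major A
 * (m rows, leading dimension lda) and upper-triangular U (leading dimension
 * ldu), taken as the smallest column-wise ratio over the leading ncols
 * columns. */
static float dense_rpg (int m, int ncols, const float *A, int lda,
                        const float *U, int ldu)
{
    float rpg = 1e30f ;
    int i, j ;
    for (j = 0 ; j < ncols ; j++)
    {
        float maxa = 0.0f, maxu = 0.0f ;
        for (i = 0 ; i < m ; i++)
            maxa = fmaxf (maxa, fabsf (A [j * lda + i])) ;
        for (i = 0 ; i <= j && i < m ; i++)
            maxu = fmaxf (maxu, fabsf (U [j * ldu + i])) ;
        if (maxu > 0.0f && maxa / maxu < rpg)
            rpg = maxa / maxu ;
    }
    return rpg ;
}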
*/ + +float +cPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, + SuperMatrix *L, SuperMatrix *U) +{ + NCformat *Astore; SCformat *Lstore; NCformat *Ustore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,44 +1,36 @@ -/* +/*! @file cpivotL.c + * \brief Performs numerical pivoting + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include #include -#include "csp_defs.h" +#include "slu_cdefs.h" #undef DEBUG -int -cpivotL( - const int jcol, /* in */ - const float u, /* in - diagonal pivoting threshold */ - int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ - int *perm_r, /* may be modified */ - int *iperm_r, /* in - inverse of perm_r */ - int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ - int *pivrow, /* out */ - GlobalLU_t *Glu, /* modified - global LU data structures */ - SuperLUStat_t *stat /* output */ - ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *   Performs the numerical pivoting on the current column of L,
@@ -57,8 +49,23 @@
  *
  *   Return value: 0      success;
  *                 i > 0  U(i,i) is exactly zero.
- *
+ * 
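For orientation, the pivot decision that the hunk below modifies can be summarised as follows. The pivmax < 0 branch mirrors the scipy-specific change visible in the code (pivmax is initialised to -1.0 instead of 0.0, so a negative value means the column scan found no candidate entries at all); the final threshold rule, keeping the diagonal whenever it is within a factor u of the largest candidate, is an assumption about the part of the routine that this hunk does not show.

/* Sketch only, not the routine itself: the pivot choice in cpivotL().
 * pivmax is the largest candidate magnitude in column jcol (-1.0 if the scan
 * saw no candidates, per the SCIPY_SPECIFIC_FIX below), diag_mag the
 * magnitude of the diagonal candidate, u the diagonal pivoting threshold
 * passed to cpivotL().  In both singular cases the routine returns jcol+1,
 * i.e. "U(i,i) is exactly zero" as stated above. */
typedef enum { PIVOT_SINGULAR, PIVOT_DIAGONAL, PIVOT_LARGEST } pivot_choice ;

static pivot_choice choose_pivot (double pivmax, double diag_mag, double u)
{
    if (pivmax < 0.0)            /* scipy fix: no candidate entries at all    */
        return PIVOT_SINGULAR ;
    if (pivmax == 0.0)           /* candidates exist but are all exactly zero */
        return PIVOT_SINGULAR ;
    if (diag_mag >= u * pivmax)  /* diagonal large enough: keep it (sparsity) */
        return PIVOT_DIAGONAL ;
    return PIVOT_LARGEST ;       /* otherwise take the largest-magnitude entry */
}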
*/ + +int +cpivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int *iperm_r, /* in - inverse of perm_r */ + int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ + int *pivrow, /* out */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + complex one = {1.0, 0.0}; int fsupc; /* first column in the supernode */ int nsupc; /* no of columns in the supernode */ @@ -101,7 +108,11 @@ Also search for user-specified pivot, and diagonal element. */ if ( *usepr ) *pivrow = iperm_r[jcol]; diagind = iperm_c[jcol]; +#ifdef SCIPY_SPECIFIC_FIX + pivmax = -1.0; +#else pivmax = 0.0; +#endif pivptr = nsupc; diag = EMPTY; old_pivptr = nsupc; @@ -116,9 +127,20 @@ } /* Test for singularity */ +#ifdef SCIPY_SPECIFIC_FIX + if (pivmax < 0.0) { + perm_r[diagind] = jcol; + *usepr = 0; + return (jcol+1); + } +#endif if ( pivmax == 0.0 ) { +#if 1 *pivrow = lsub_ptr[pivptr]; perm_r[*pivrow] = jcol; +#else + perm_r[diagind] = jcol; +#endif *usepr = 0; return (jcol+1); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpruneL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpruneL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpruneL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cpruneL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,38 @@ - -/* +/*! @file cpruneL.c + * \brief Prunes the L-structure + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "csp_defs.h" -#include "util.h" + +#include "slu_cdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *   Prunes the L-structure of supernodes whose L-structure
+ *   contains the current pivot row "pivrow"
+ * 
+ */ void cpruneL( @@ -35,13 +46,7 @@ GlobalLU_t *Glu /* modified - global LU data structures */ ) { -/* - * Purpose - * ======= - * Prunes the L-structure of supernodes whose L-structure - * contains the current pivot row "pivrow" - * - */ + complex utemp; int jsupno, irep, irep1, kmin, kmax, krow, movnum; int i, ktemp, minloc, maxloc; @@ -108,8 +113,8 @@ kmax--; else if ( perm_r[lsub[kmin]] != EMPTY ) kmin++; - else { /* kmin below pivrow, and kmax above pivrow: - * interchange the two subscripts + else { /* kmin below pivrow (not yet pivoted), and kmax + * above pivrow: interchange the two subscripts */ ktemp = lsub[kmin]; lsub[kmin] = lsub[kmax]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadhb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadhb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadhb.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadhb.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,18 +1,85 @@ - -/* +/*! @file creadhb.c + * \brief Read a matrix stored in Harwell-Boeing format + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Purpose
+ * =======
+ * 
+ * Read a COMPLEX PRECISION matrix stored in Harwell-Boeing format 
+ * as described below.
+ * 
+ * Line 1 (A72,A8) 
+ *  	Col. 1 - 72   Title (TITLE) 
+ *	Col. 73 - 80  Key (KEY) 
+ * 
+ * Line 2 (5I14) 
+ * 	Col. 1 - 14   Total number of lines excluding header (TOTCRD) 
+ * 	Col. 15 - 28  Number of lines for pointers (PTRCRD) 
+ * 	Col. 29 - 42  Number of lines for row (or variable) indices (INDCRD) 
+ * 	Col. 43 - 56  Number of lines for numerical values (VALCRD) 
+ *	Col. 57 - 70  Number of lines for right-hand sides (RHSCRD) 
+ *                    (including starting guesses and solution vectors 
+ *		       if present) 
+ *           	      (zero indicates no right-hand side data is present) 
+ *
+ * Line 3 (A3, 11X, 4I14) 
+ *   	Col. 1 - 3    Matrix type (see below) (MXTYPE) 
+ * 	Col. 15 - 28  Number of rows (or variables) (NROW) 
+ * 	Col. 29 - 42  Number of columns (or elements) (NCOL) 
+ *	Col. 43 - 56  Number of row (or variable) indices (NNZERO) 
+ *	              (equal to number of entries for assembled matrices) 
+ * 	Col. 57 - 70  Number of elemental matrix entries (NELTVL) 
+ *	              (zero in the case of assembled matrices) 
+ * Line 4 (2A16, 2A20) 
+ * 	Col. 1 - 16   Format for pointers (PTRFMT) 
+ *	Col. 17 - 32  Format for row (or variable) indices (INDFMT) 
+ *	Col. 33 - 52  Format for numerical values of coefficient matrix (VALFMT) 
+ * 	Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) 
+ *
+ * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present 
+ *    	Col. 1 	      Right-hand side type: 
+ *	         	  F for full storage or M for same format as matrix 
+ *    	Col. 2        G if a starting vector(s) (Guess) is supplied. (RHSTYP) 
+ *    	Col. 3        X if an exact solution vector(s) is supplied. 
+ *	Col. 15 - 28  Number of right-hand sides (NRHS) 
+ *	Col. 29 - 42  Number of row indices (NRHSIX) 
+ *          	      (ignored in case of unassembled matrices) 
+ *
+ * The three character type field on line 3 describes the matrix type. 
+ * The following table lists the permitted values for each of the three 
+ * characters. As an example of the type field, RSA denotes that the matrix 
+ * is real, symmetric, and assembled. 
+ *
+ * First Character: 
+ *	R Real matrix 
+ *	C Complex matrix 
+ *	P Pattern only (no numerical values supplied) 
+ *
+ * Second Character: 
+ *	S Symmetric 
+ *	U Unsymmetric 
+ *	H Hermitian 
+ *	Z Skew symmetric 
+ *	R Rectangular 
+ *
+ * Third Character: 
+ *	A Assembled 
+ *	E Elemental matrices (unassembled) 
+ *
+ * 
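As a small worked example of the three-character type field described above, the helper below expands an MXTYPE string such as "RSA" into words. The tables it encodes are exactly the ones listed in this comment; the function itself is illustrative and not part of SuperLU.

#include <stdio.h>

/* Decode a Harwell-Boeing matrix type field (MXTYPE), e.g. "RSA" ->
 * "real, symmetric, assembled", using the tables given above. */
static void print_mxtype (const char *mxtype)
{
    const char *value = "?", *structure = "?", *form = "?" ;

    switch (mxtype [0]) {
        case 'R': value = "real" ;          break ;
        case 'C': value = "complex" ;       break ;
        case 'P': value = "pattern only" ;  break ;
    }
    switch (mxtype [1]) {
        case 'S': structure = "symmetric" ;       break ;
        case 'U': structure = "unsymmetric" ;     break ;
        case 'H': structure = "Hermitian" ;       break ;
        case 'Z': structure = "skew symmetric" ;  break ;
        case 'R': structure = "rectangular" ;     break ;
    }
    switch (mxtype [2]) {
        case 'A': form = "assembled" ;                         break ;
        case 'E': form = "elemental matrices (unassembled)" ;  break ;
    }
    printf ("%s: %s, %s, %s\n", mxtype, value, structure, form) ;
}

int main (void)
{
    print_mxtype ("RSA") ;  /* the example used in the comment above    */
    print_mxtype ("CUA") ;  /* a complex, unsymmetric, assembled matrix */
    return 0 ;
}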
*/ #include #include -#include "csp_defs.h" +#include "slu_cdefs.h" -/* Eat up the rest of the current line */ +/*! \brief Eat up the rest of the current line */ int cDumpLine(FILE *fp) { register int c; @@ -60,7 +127,7 @@ return 0; } -int cReadVector(FILE *fp, int n, int *where, int perline, int persize) +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) { register int i, j, item; char tmp, buf[100]; @@ -80,7 +147,7 @@ return 0; } -/* Read complex numbers as pairs of (real, imaginary) */ +/*! \brief Read complex numbers as pairs of (real, imaginary) */ int cReadValues(FILE *fp, int n, complex *destination, int perline, int persize) { register int i, j, k, s, pair; @@ -118,72 +185,6 @@ creadhb(int *nrow, int *ncol, int *nonz, complex **nzval, int **rowind, int **colptr) { -/* - * Purpose - * ======= - * - * Read a COMPLEX PRECISION matrix stored in Harwell-Boeing format - * as described below. - * - * Line 1 (A72,A8) - * Col. 1 - 72 Title (TITLE) - * Col. 73 - 80 Key (KEY) - * - * Line 2 (5I14) - * Col. 1 - 14 Total number of lines excluding header (TOTCRD) - * Col. 15 - 28 Number of lines for pointers (PTRCRD) - * Col. 29 - 42 Number of lines for row (or variable) indices (INDCRD) - * Col. 43 - 56 Number of lines for numerical values (VALCRD) - * Col. 57 - 70 Number of lines for right-hand sides (RHSCRD) - * (including starting guesses and solution vectors - * if present) - * (zero indicates no right-hand side data is present) - * - * Line 3 (A3, 11X, 4I14) - * Col. 1 - 3 Matrix type (see below) (MXTYPE) - * Col. 15 - 28 Number of rows (or variables) (NROW) - * Col. 29 - 42 Number of columns (or elements) (NCOL) - * Col. 43 - 56 Number of row (or variable) indices (NNZERO) - * (equal to number of entries for assembled matrices) - * Col. 57 - 70 Number of elemental matrix entries (NELTVL) - * (zero in the case of assembled matrices) - * Line 4 (2A16, 2A20) - * Col. 1 - 16 Format for pointers (PTRFMT) - * Col. 17 - 32 Format for row (or variable) indices (INDFMT) - * Col. 33 - 52 Format for numerical values of coefficient matrix (VALFMT) - * Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) - * - * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present - * Col. 1 Right-hand side type: - * F for full storage or M for same format as matrix - * Col. 2 G if a starting vector(s) (Guess) is supplied. (RHSTYP) - * Col. 3 X if an exact solution vector(s) is supplied. - * Col. 15 - 28 Number of right-hand sides (NRHS) - * Col. 29 - 42 Number of row indices (NRHSIX) - * (ignored in case of unassembled matrices) - * - * The three character type field on line 3 describes the matrix type. - * The following table lists the permitted values for each of the three - * characters. As an example of the type field, RSA denotes that the matrix - * is real, symmetric, and assembled. 
- * - * First Character: - * R Real matrix - * C Complex matrix - * P Pattern only (no numerical values supplied) - * - * Second Character: - * S Symmetric - * U Unsymmetric - * H Hermitian - * Z Skew symmetric - * R Rectangular - * - * Third Character: - * A Assembled - * E Elemental matrices (unassembled) - * - */ register int i, numer_lines = 0, rhscrd = 0; int tmp, colnum, colsize, rownum, rowsize, valnum, valsize; @@ -254,8 +255,8 @@ printf("valnum %d, valsize %d\n", valnum, valsize); #endif - cReadVector(fp, *ncol+1, *colptr, colnum, colsize); - cReadVector(fp, *nonz, *rowind, rownum, rowsize); + ReadVector(fp, *ncol+1, *colptr, colnum, colsize); + ReadVector(fp, *nonz, *rowind, rownum, rowsize); if ( numer_lines ) { cReadValues(fp, *nonz, *nzval, valnum, valsize); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadrb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadrb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadrb.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/creadrb.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,246 @@ + +/*! @file creadrb.c + * \brief Read a matrix stored in Rutherford-Boeing format + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ * + * Purpose + * ======= + * + * Read a COMPLEX PRECISION matrix stored in Rutherford-Boeing format + * as described below. + * + * Line 1 (A72, A8) + * Col. 1 - 72 Title (TITLE) + * Col. 73 - 80 Matrix name / identifier (MTRXID) + * + * Line 2 (I14, 3(1X, I13)) + * Col. 1 - 14 Total number of lines excluding header (TOTCRD) + * Col. 16 - 28 Number of lines for pointers (PTRCRD) + * Col. 30 - 42 Number of lines for row (or variable) indices (INDCRD) + * Col. 44 - 56 Number of lines for numerical values (VALCRD) + * + * Line 3 (A3, 11X, 4(1X, I13)) + * Col. 1 - 3 Matrix type (see below) (MXTYPE) + * Col. 15 - 28 Compressed Column: Number of rows (NROW) + * Elemental: Largest integer used to index variable (MVAR) + * Col. 30 - 42 Compressed Column: Number of columns (NCOL) + * Elemental: Number of element matrices (NELT) + * Col. 44 - 56 Compressed Column: Number of entries (NNZERO) + * Elemental: Number of variable indeces (NVARIX) + * Col. 58 - 70 Compressed Column: Unused, explicitly zero + * Elemental: Number of elemental matrix entries (NELTVL) + * + * Line 4 (2A16, A20) + * Col. 1 - 16 Fortran format for pointers (PTRFMT) + * Col. 17 - 32 Fortran format for row (or variable) indices (INDFMT) + * Col. 33 - 52 Fortran format for numerical values of coefficient matrix + * (VALFMT) + * (blank in the case of matrix patterns) + * + * The three character type field on line 3 describes the matrix type. + * The following table lists the permitted values for each of the three + * characters. As an example of the type field, RSA denotes that the matrix + * is real, symmetric, and assembled. + * + * First Character: + * R Real matrix + * C Complex matrix + * I integer matrix + * P Pattern only (no numerical values supplied) + * Q Pattern only (numerical values supplied in associated auxiliary value + * file) + * + * Second Character: + * S Symmetric + * U Unsymmetric + * H Hermitian + * Z Skew symmetric + * R Rectangular + * + * Third Character: + * A Compressed column form + * E Elemental form + * + * + */ + +#include "slu_cdefs.h" + + +/*! \brief Eat up the rest of the current line */ +static int cDumpLine(FILE *fp) +{ + register int c; + while ((c = fgetc(fp)) != '\n') ; + return 0; +} + +static int cParseIntFormat(char *buf, int *num, int *size) +{ + char *tmp; + + tmp = buf; + while (*tmp++ != '(') ; + sscanf(tmp, "%d", num); + while (*tmp != 'I' && *tmp != 'i') ++tmp; + ++tmp; + sscanf(tmp, "%d", size); + return 0; +} + +static int cParseFloatFormat(char *buf, int *num, int *size) +{ + char *tmp, *period; + + tmp = buf; + while (*tmp++ != '(') ; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + while (*tmp != 'E' && *tmp != 'e' && *tmp != 'D' && *tmp != 'd' + && *tmp != 'F' && *tmp != 'f') { + /* May find kP before nE/nD/nF, like (1P6F13.6). In this case the + num picked up refers to P, which should be skipped. */ + if (*tmp=='p' || *tmp=='P') { + ++tmp; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + } else { + ++tmp; + } + } + ++tmp; + period = tmp; + while (*period != '.' && *period != ')') ++period ; + *period = '\0'; + *size = atoi(tmp); /*sscanf(tmp, "%2d", size);*/ + + return 0; +} + +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) +{ + register int i, j, item; + char tmp, buf[100]; + + i = 0; + while (i < n) { + fgets(buf, 100, fp); /* read a line at a time */ + for (j=0; j + * -- SuperLU routine (version 4.0) -- + * Lawrence Berkeley National Laboratory. 
+ * June 30, 2009 + * + */ + +#include "slu_cdefs.h" + + +void +creadtriple(int *m, int *n, int *nonz, + complex **nzval, int **rowind, int **colptr) +{ +/* + * Output parameters + * ================= + * (a,asub,xa): asub[*] contains the row subscripts of nonzeros + * in columns of matrix A; a[*] the numerical values; + * row i of A is given by a[k],k=xa[i],...,xa[i+1]-1. + * + */ + int j, k, jsize, nnz, nz; + complex *a, *val; + int *asub, *xa, *row, *col; + int zero_base = 0; + + /* Matrix format: + * First line: #rows, #cols, #non-zero + * Triplet in the rest of lines: + * row, col, value + */ + + scanf("%d%d", n, nonz); + *m = *n; + printf("m %d, n %d, nonz %d\n", *m, *n, *nonz); + callocateA(*n, *nonz, nzval, rowind, colptr); /* Allocate storage */ + a = *nzval; + asub = *rowind; + xa = *colptr; + + val = (complex *) SUPERLU_MALLOC(*nonz * sizeof(complex)); + row = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + col = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + + for (j = 0; j < *n; ++j) xa[j] = 0; + + /* Read into the triplet array from a file */ + for (nnz = 0, nz = 0; nnz < *nonz; ++nnz) { + scanf("%d%d%f%f\n", &row[nz], &col[nz], &val[nz].r, &val[nz].i); + + if ( nnz == 0 ) { /* first nonzero */ + if ( row[0] == 0 || col[0] == 0 ) { + zero_base = 1; + printf("triplet file: row/col indices are zero-based.\n"); + } else + printf("triplet file: row/col indices are one-based.\n"); + } + + if ( !zero_base ) { + /* Change to 0-based indexing. */ + --row[nz]; + --col[nz]; + } + + if (row[nz] < 0 || row[nz] >= *m || col[nz] < 0 || col[nz] >= *n + /*|| val[nz] == 0.*/) { + fprintf(stderr, "nz %d, (%d, %d) = (%e,%e) out of bound, removed\n", + nz, row[nz], col[nz], val[nz].r, val[nz].i); + exit(-1); + } else { + ++xa[col[nz]]; + ++nz; + } + } + + *nonz = nz; + + /* Initialize the array of column pointers */ + k = 0; + jsize = xa[0]; + xa[0] = 0; + for (j = 1; j < *n; ++j) { + k += jsize; + jsize = xa[j]; + xa[j] = k; + } + + /* Copy the triplets into the column oriented storage */ + for (nz = 0; nz < *nonz; ++nz) { + j = col[nz]; + k = xa[j]; + asub[k] = row[nz]; + a[k] = val[nz]; + ++xa[j]; + } + + /* Reset the column pointers to the beginning of each column */ + for (j = *n; j > 0; --j) + xa[j] = xa[j-1]; + xa[0] = 0; + + SUPERLU_FREE(val); + SUPERLU_FREE(row); + SUPERLU_FREE(col); + +#ifdef CHK_INPUT + { + int i; + for (i = 0; i < *n; i++) { + printf("Col %d, xa %d\n", i, xa[i]); + for (k = xa[i]; k < xa[i+1]; k++) + printf("%d\t%16.10f\n", asub[k], a[k]); + } + } +#endif + +} + + +void creadrhs(int m, complex *b) +{ + FILE *fp, *fopen(); + int i; + /*int j;*/ + + if ( !(fp = fopen("b.dat", "r")) ) { + fprintf(stderr, "dreadrhs: file does not exist\n"); + exit(-1); + } + for (i = 0; i < m; ++i) + fscanf(fp, "%f%f\n", &b[i].r, &b[i].i); + + /* readpair_(j, &b[i]);*/ + fclose(fp); +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,29 +1,31 @@ -/* +/*! @file csnode_bmod.c + * \brief Performs numeric block updates within the relaxed snode. + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "csp_defs.h" + +#include "slu_cdefs.h" -/* - * Performs numeric block updates within the relaxed snode. +/*! \brief Performs numeric block updates within the relaxed snode. */ int csnode_bmod ( diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,45 @@ - -/* +/*! @file csnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "csp_defs.h" -#include "util.h" + +#include "slu_cdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *    csnode_dfs() - Determine the union of the row structures of those 
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore, 
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when run out of memory.
+ * 
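A small illustration of the return convention stated above, purely for the caller's side; the helper name is hypothetical and not part of SuperLU.

#include <stdio.h>

/* Report the status returned by csnode_dfs(): 0 is success, a positive
 * value is the number of bytes that had been allocated when memory ran out. */
static void report_snode_dfs_status (int retval)
{
    if (retval == 0)
        printf ("csnode_dfs: ok\n") ;
    else
        printf ("csnode_dfs: out of memory after allocating %d bytes\n",
                retval) ;
}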
+ */ int csnode_dfs ( @@ -35,19 +53,7 @@ GlobalLU_t *Glu /* modified */ ) { -/* Purpose - * ======= - * csnode_dfs() - Determine the union of the row structures of those - * columns within the relaxed snode. - * Note: The relaxed snodes are leaves of the supernodal etree, therefore, - * the portion outside the rectangular supernode must be zero. - * - * Return value - * ============ - * 0 success; - * >0 number of bytes allocated when run out of memory. - * - */ + register int i, k, ifrom, ito, nextl, new_next; int nsuper, krow, kmark, mem_error; int *xsup, *supno; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas2.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas2.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas2.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas2.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,17 +1,20 @@ -/* +/*! @file csp_blas2.c + * \brief Sparse BLAS 2, using some dense BLAS 2 operations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ /* * File name: csp_blas2.c * Purpose: Sparse BLAS 2, using some dense BLAS 2 operations. */ -#include "csp_defs.h" +#include "slu_cdefs.h" /* * Function prototypes @@ -20,12 +23,9 @@ void clsolve(int, int, complex*, complex*); void cmatvec(int, int, int, complex*, complex*, complex*); - -int -sp_ctrsv(char *uplo, char *trans, char *diag, SuperMatrix *L, - SuperMatrix *U, complex *x, SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief Solves one of the systems of equations A*x = b, or A'*x = b + * + *
  *   Purpose
  *   =======
  *
@@ -49,8 +49,8 @@
  *             On entry, trans specifies the equations to be solved as   
  *             follows:   
  *                trans = 'N' or 'n'   A*x = b.   
- *                trans = 'T' or 't'   A'*x = b.   
- *                trans = 'C' or 'c'   A**H*x = b.   
+ *                trans = 'T' or 't'   A'*x = b.
+ *                trans = 'C' or 'c'   A^H*x = b.   
  *
  *   diag   - (input) char*
  *             On entry, diag specifies whether or not A is unit   
@@ -75,8 +75,12 @@
  *
  *   info    - (output) int*
  *             If *info = -i, the i-th argument had an illegal value.
- *
+ * 
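For orientation, a small sketch of a forward solve with this routine; the L and U factors and the statistics object are assumed to come from an earlier cgstrf() call, and only the call signature is taken from the documentation above:

    #include <stdio.h>
    #include "slu_cdefs.h"

    /* Sketch: overwrite x (which holds b on entry) with the solution of
     * L*x = b, where L is the unit lower triangular factor from cgstrf(). */
    void lower_solve_sketch(SuperMatrix *L, SuperMatrix *U,
                            complex *x, SuperLUStat_t *stat)
    {
        int info = 0;

        sp_ctrsv("L", "N", "U", L, U, x, stat, &info);
        if ( info < 0 )
            fprintf(stderr, "sp_ctrsv: argument %d had an illegal value\n",
                    -info);
    }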
*/ +int +sp_ctrsv(char *uplo, char *trans, char *diag, SuperMatrix *L, + SuperMatrix *U, complex *x, SuperLUStat_t *stat, int *info) +{ #ifdef _CRAY _fcd ftcs1 = _cptofcd("L", strlen("L")), ftcs2 = _cptofcd("N", strlen("N")), @@ -98,7 +102,8 @@ /* Test the input parameters */ *info = 0; if ( !lsame_(uplo,"L") && !lsame_(uplo, "U") ) *info = -1; - else if ( !lsame_(trans, "N") && !lsame_(trans, "T") && !lsame_(trans, "C")) *info = -2; + else if ( !lsame_(trans, "N") && !lsame_(trans, "T") && + !lsame_(trans, "C")) *info = -2; else if ( !lsame_(diag, "U") && !lsame_(diag, "N") ) *info = -3; else if ( L->nrow != L->ncol || L->nrow < 0 ) *info = -4; else if ( U->nrow != U->ncol || U->nrow < 0 ) *info = -5; @@ -131,7 +136,8 @@ luptr = L_NZ_START(fsupc); nrow = nsupr - nsupc; - solve_ops += 4 * nsupc * (nsupc - 1); + /* 1 c_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc - 1) + 10 * nsupc; solve_ops += 8 * nrow * nsupc; if ( nsupc == 1 ) { @@ -184,7 +190,8 @@ nsupc = L_FST_SUPC(k+1) - fsupc; luptr = L_NZ_START(fsupc); - solve_ops += 4 * nsupc * (nsupc + 1); + /* 1 c_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc + 1) + 10 * nsupc; if ( nsupc == 1 ) { c_div(&x[fsupc], &x[fsupc], &Lval[luptr]); @@ -219,7 +226,7 @@ } /* for k ... */ } - } else if (lsame_(trans, "T")) { /* Form x := inv(A')*x */ + } else if ( lsame_(trans, "T") ) { /* Form x := inv(A')*x */ if ( lsame_(uplo, "L") ) { /* Form x := inv(L')*x */ @@ -249,13 +256,13 @@ solve_ops += 4 * nsupc * (nsupc - 1); #ifdef _CRAY ftcs1 = _cptofcd("L", strlen("L")); - ftcs2 = _cptofcd(trans, strlen("T")); + ftcs2 = _cptofcd("T", strlen("T")); ftcs3 = _cptofcd("U", strlen("U")); CTRSV(ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #else - ctrsv_("L", trans, "U", &nsupc, &Lval[luptr], &nsupr, - &x[fsupc], &incx); + ctrsv_("L", "T", "U", &nsupc, &Lval[luptr], &nsupr, + &x[fsupc], &incx); #endif } } @@ -278,20 +285,21 @@ } } - solve_ops += 4 * nsupc * (nsupc + 1); + /* 1 c_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc + 1) + 10 * nsupc; if ( nsupc == 1 ) { c_div(&x[fsupc], &x[fsupc], &Lval[luptr]); } else { #ifdef _CRAY ftcs1 = _cptofcd("U", strlen("U")); - ftcs2 = _cptofcd(trans, strlen("T")); + ftcs2 = _cptofcd("T", strlen("T")); ftcs3 = _cptofcd("N", strlen("N")); CTRSV( ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #else - ctrsv_("U", trans, "N", &nsupc, &Lval[luptr], &nsupr, - &x[fsupc], &incx); + ctrsv_("U", "T", "N", &nsupc, &Lval[luptr], &nsupr, + &x[fsupc], &incx); #endif } } /* for k ... */ @@ -321,9 +329,9 @@ c_sub(&x[jcol], &x[jcol], &comp_zero); iptr++; } - } - - if ( nsupc > 1 ) { + } + + if ( nsupc > 1 ) { solve_ops += 4 * nsupc * (nsupc - 1); #ifdef _CRAY ftcs1 = _cptofcd("L", strlen("L")); @@ -357,8 +365,9 @@ } } - solve_ops += 4 * nsupc * (nsupc + 1); - + /* 1 c_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc + 1) + 10 * nsupc; + if ( nsupc == 1 ) { cc_conj(&temp, &Lval[luptr]); c_div(&x[fsupc], &x[fsupc], &temp); @@ -373,12 +382,11 @@ ctrsv_("U", trans, "N", &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #endif - } - } /* for k ... */ - } + } + } /* for k ... */ + } } - stat->ops[SOLVE] += solve_ops; SUPERLU_FREE(work); return 0; @@ -386,64 +394,68 @@ +/*! \brief Performs one of the matrix-vector operations y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   sp_cgemv()  performs one of the matrix-vector operations   
+ *      y := alpha*A*x + beta*y,   or   y := alpha*A'*x + beta*y,   
+ *   where alpha and beta are scalars, x and y are vectors and A is a
+ *   sparse A->nrow by A->ncol matrix.   
+ *
+ *   Parameters   
+ *   ==========   
+ *
+ *   TRANS  - (input) char*
+ *            On entry, TRANS specifies the operation to be performed as   
+ *            follows:   
+ *               TRANS = 'N' or 'n'   y := alpha*A*x + beta*y.   
+ *               TRANS = 'T' or 't'   y := alpha*A'*x + beta*y.   
+ *               TRANS = 'C' or 'c'   y := alpha*A'*x + beta*y.   
+ *
+ *   ALPHA  - (input) complex
+ *            On entry, ALPHA specifies the scalar alpha.   
+ *
+ *   A      - (input) SuperMatrix*
+ *            Before entry, the leading m by n part of the array A must   
+ *            contain the matrix of coefficients.   
+ *
+ *   X      - (input) complex*, array of DIMENSION at least   
+ *            ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n'   
+ *           and at least   
+ *            ( 1 + ( m - 1 )*abs( INCX ) ) otherwise.   
+ *            Before entry, the incremented array X must contain the   
+ *            vector x.   
+ * 
+ *   INCX   - (input) int
+ *            On entry, INCX specifies the increment for the elements of   
+ *            X. INCX must not be zero.   
+ *
+ *   BETA   - (input) complex
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then Y need not be set on input.   
+ *
+ *   Y      - (output) complex*,  array of DIMENSION at least   
+ *            ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n'   
+ *            and at least   
+ *            ( 1 + ( n - 1 )*abs( INCY ) ) otherwise.   
+ *            Before entry with BETA non-zero, the incremented array Y   
+ *            must contain the vector y. On exit, Y is overwritten by the 
+ *            updated vector y.
+ *	      
+ *   INCY   - (input) int
+ *            On entry, INCY specifies the increment for the elements of   
+ *            Y. INCY must not be zero.   
+ *
+ *    ==== Sparse Level 2 Blas routine.   
+ * 
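A self-contained sketch of the call; the 3-by-3 matrix values are invented for illustration, and the constructor and destructor used here are the ones declared in slu_cdefs.h:

    #include "slu_cdefs.h"

    /* Sketch: y := A*x for a 3-by-3 single-precision complex matrix stored
     * in compressed-column (SLU_NC) form.  All numeric values are made up. */
    int gemv_sketch(void)
    {
        SuperMatrix A;
        complex nzval[]  = { {1,0}, {2,0}, {3,0}, {4,0} };   /* 4 nonzeros */
        int     rowind[] = { 0, 1, 2, 0 };
        int     colptr[] = { 0, 1, 3, 4 };                   /* 3 columns  */
        complex x[3]     = { {1,0}, {1,0}, {1,0} };
        complex y[3]     = { {0,0}, {0,0}, {0,0} };
        complex alpha    = {1, 0}, beta = {0, 0};
        int info;

        cCreate_CompCol_Matrix(&A, 3, 3, 4, nzval, rowind, colptr,
                               SLU_NC, SLU_C, SLU_GE);
        info = sp_cgemv("N", alpha, &A, x, 1, beta, y, 1); /* y := alpha*A*x */

        Destroy_SuperMatrix_Store(&A);  /* the arrays above are stack-owned */
        return info;
    }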
+*/ int sp_cgemv(char *trans, complex alpha, SuperMatrix *A, complex *x, int incx, complex beta, complex *y, int incy) { -/* Purpose - ======= - - sp_cgemv() performs one of the matrix-vector operations - y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, - where alpha and beta are scalars, x and y are vectors and A is a - sparse A->nrow by A->ncol matrix. - - Parameters - ========== - - TRANS - (input) char* - On entry, TRANS specifies the operation to be performed as - follows: - TRANS = 'N' or 'n' y := alpha*A*x + beta*y. - TRANS = 'T' or 't' y := alpha*A'*x + beta*y. - TRANS = 'C' or 'c' y := alpha*A'*x + beta*y. - - ALPHA - (input) complex - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Before entry, the leading m by n part of the array A must - contain the matrix of coefficients. - - X - (input) complex*, array of DIMENSION at least - ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( m - 1 )*abs( INCX ) ) otherwise. - Before entry, the incremented array X must contain the - vector x. - - INCX - (input) int - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - - BETA - (input) complex - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - - Y - (output) complex*, array of DIMENSION at least - ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( n - 1 )*abs( INCY ) ) otherwise. - Before entry with BETA non-zero, the incremented array Y - must contain the vector y. On exit, Y is overwritten by the - updated vector y. - - INCY - (input) int - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - - ==== Sparse Level 2 Blas routine. -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas3.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas3.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas3.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_blas3.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,116 +1,122 @@ - -/* +/*! @file csp_blas3.c + * \brief Sparse BLAS3, using some dense BLAS3 operations + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
 */
 /*
  * File name: sp_blas3.c
  * Purpose: Sparse BLAS3, using some dense BLAS3 operations.
  */
-#include "csp_defs.h"
-#include "util.h"
+#include "slu_cdefs.h"
+
+/*! \brief
+ *
+ *
+ * Purpose   
+ *   =======   
+ * 
+ *   sp_c performs one of the matrix-matrix operations   
+ * 
+ *      C := alpha*op( A )*op( B ) + beta*C,   
+ * 
+ *   where  op( X ) is one of 
+ * 
+ *      op( X ) = X   or   op( X ) = X'   or   op( X ) = conjg( X' ),
+ * 
+ *   alpha and beta are scalars, and A, B and C are matrices, with op( A ) 
+ *   an m by k matrix,  op( B )  a  k by n matrix and  C an m by n matrix. 
+ *   
+ * 
+ *   Parameters   
+ *   ==========   
+ * 
+ *   TRANSA - (input) char*
+ *            On entry, TRANSA specifies the form of op( A ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSA = 'N' or 'n',  op( A ) = A.   
+ *               TRANSA = 'T' or 't',  op( A ) = A'.   
+ *               TRANSA = 'C' or 'c',  op( A ) = conjg( A' ).   
+ *            Unchanged on exit.   
+ * 
+ *   TRANSB - (input) char*
+ *            On entry, TRANSB specifies the form of op( B ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSB = 'N' or 'n',  op( B ) = B.   
+ *               TRANSB = 'T' or 't',  op( B ) = B'.   
+ *               TRANSB = 'C' or 'c',  op( B ) = conjg( B' ).   
+ *            Unchanged on exit.   
+ * 
+ *   M      - (input) int   
+ *            On entry,  M  specifies  the number of rows of the matrix 
+ *	     op( A ) and of the matrix C.  M must be at least zero. 
+ *	     Unchanged on exit.   
+ * 
+ *   N      - (input) int
+ *            On entry,  N specifies the number of columns of the matrix 
+ *	     op( B ) and the number of columns of the matrix C. N must be 
+ *	     at least zero.
+ *	     Unchanged on exit.   
+ * 
+ *   K      - (input) int
+ *            On entry, K specifies the number of columns of the matrix 
+ *	     op( A ) and the number of rows of the matrix op( B ). K must 
+ *	     be at least  zero.   
+ *           Unchanged on exit.
+ *      
+ *   ALPHA  - (input) complex
+ *            On entry, ALPHA specifies the scalar alpha.   
+ * 
+ *   A      - (input) SuperMatrix*
+ *            Matrix A with a sparse format, of dimension (A->nrow, A->ncol).
+ *            Currently, the type of A can be:
+ *                Stype = NC or NCP; Dtype = SLU_C; Mtype = GE. 
+ *            In the future, more general A can be handled.
+ * 
+ *   B      - COMPLEX PRECISION array of DIMENSION ( LDB, kb ), where kb is 
+ *            n when TRANSB = 'N' or 'n',  and is  k otherwise.   
+ *            Before entry with  TRANSB = 'N' or 'n',  the leading k by n 
+ *            part of the array B must contain the matrix B, otherwise 
+ *            the leading n by k part of the array B must contain the 
+ *            matrix B.   
+ *            Unchanged on exit.   
+ * 
+ *   LDB    - (input) int
+ *            On entry, LDB specifies the first dimension of B as declared 
+ *            in the calling (sub) program. LDB must be at least max( 1, n ).  
+ *            Unchanged on exit.   
+ * 
+ *   BETA   - (input) complex
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then C need not be set on input.   
+ *  
+ *   C      - COMPLEX PRECISION array of DIMENSION ( LDC, n ).   
+ *            Before entry, the leading m by n part of the array C must 
+ *            contain the matrix C,  except when beta is zero, in which 
+ *            case C need not be set on entry.   
+ *            On exit, the array C is overwritten by the m by n matrix 
+ *	     ( alpha*op( A )*B + beta*C ).   
+ *  
+ *   LDC    - (input) int
+ *            On entry, LDC specifies the first dimension of C as declared 
+ *            in the calling (sub)program. LDC must be at least max(1,m).   
+ *            Unchanged on exit.   
+ *  
+ *   ==== Sparse Level 3 Blas routine.   
+ * 
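A sketch of one possible call with a sparse A and dense column-major B and C; the dimensions and fill values are illustrative only, and complexMalloc()/SUPERLU_FREE() are the helpers declared in the same header:

    #include "slu_cdefs.h"

    /* Sketch: C := A*B with A an m-by-k sparse SLU_NC matrix built
     * elsewhere and B, C dense column-major arrays (leading dimensions
     * k and m as stored here). */
    void gemm_sketch(SuperMatrix *A, int m, int n, int k)
    {
        complex alpha = {1, 0}, beta = {0, 0};
        complex *B = complexMalloc(k * n);
        complex *C = complexMalloc(m * n);
        int i;

        for (i = 0; i < k * n; ++i) { B[i].r = 1.0; B[i].i = 0.0; }
        for (i = 0; i < m * n; ++i) { C[i].r = 0.0; C[i].i = 0.0; }

        sp_cgemm("N", "N", m, n, k, alpha, A, B, k, beta, C, m);

        SUPERLU_FREE(B);
        SUPERLU_FREE(C);
    }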
+ */ int sp_cgemm(char *transa, char *transb, int m, int n, int k, complex alpha, SuperMatrix *A, complex *b, int ldb, complex beta, complex *c, int ldc) { -/* Purpose - ======= - - sp_c performs one of the matrix-matrix operations - - C := alpha*op( A )*op( B ) + beta*C, - - where op( X ) is one of - - op( X ) = X or op( X ) = X' or op( X ) = conjg( X' ), - - alpha and beta are scalars, and A, B and C are matrices, with op( A ) - an m by k matrix, op( B ) a k by n matrix and C an m by n matrix. - - - Parameters - ========== - - TRANSA - (input) char* - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - TRANSA = 'N' or 'n', op( A ) = A. - TRANSA = 'T' or 't', op( A ) = A'. - TRANSA = 'C' or 'c', op( A ) = conjg( A' ). - Unchanged on exit. - - TRANSB - (input) char* - On entry, TRANSB specifies the form of op( B ) to be used in - the matrix multiplication as follows: - TRANSB = 'N' or 'n', op( B ) = B. - TRANSB = 'T' or 't', op( B ) = B'. - TRANSB = 'C' or 'c', op( B ) = conjg( B' ). - Unchanged on exit. - - M - (input) int - On entry, M specifies the number of rows of the matrix - op( A ) and of the matrix C. M must be at least zero. - Unchanged on exit. - - N - (input) int - On entry, N specifies the number of columns of the matrix - op( B ) and the number of columns of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - (input) int - On entry, K specifies the number of columns of the matrix - op( A ) and the number of rows of the matrix op( B ). K must - be at least zero. - Unchanged on exit. - - ALPHA - (input) complex - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Matrix A with a sparse format, of dimension (A->nrow, A->ncol). - Currently, the type of A can be: - Stype = NC or NCP; Dtype = SLU_C; Mtype = GE. - In the future, more general A can be handled. - - B - COMPLEX PRECISION array of DIMENSION ( LDB, kb ), where kb is - n when TRANSB = 'N' or 'n', and is k otherwise. - Before entry with TRANSB = 'N' or 'n', the leading k by n - part of the array B must contain the matrix B, otherwise - the leading n by k part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - (input) int - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least max( 1, n ). - Unchanged on exit. - - BETA - (input) complex - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then C need not be set on input. - - C - COMPLEX PRECISION array of DIMENSION ( LDC, n ). - Before entry, the leading m by n part of the array C must - contain the matrix C, except when beta is zero, in which - case C need not be set on entry. - On exit, the array C is overwritten by the m by n matrix - ( alpha*op( A )*B + beta*C ). - - LDC - (input) int - On entry, LDC specifies the first dimension of C as declared - in the calling (sub)program. LDC must be at least max(1,m). - Unchanged on exit. - - ==== Sparse Level 3 Blas routine. -*/ int incx = 1, incy = 1; int j; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_defs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_defs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_defs.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/csp_defs.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,237 +0,0 @@ - -/* - * -- SuperLU routine (version 3.0) -- - * Univ. 
of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 - * - */ -#ifndef __SUPERLU_cSP_DEFS /* allow multiple inclusions */ -#define __SUPERLU_cSP_DEFS - -/* - * File name: csp_defs.h - * Purpose: Sparse matrix types and function prototypes - * History: - */ - -#ifdef _CRAY -#include -#include -#endif - -/* Define my integer type int_t */ -typedef int int_t; /* default */ - -#include "Cnames.h" -#include "supermatrix.h" -#include "util.h" -#include "scomplex.h" - - -/* - * Global data structures used in LU factorization - - * - * nsuper: #supernodes = nsuper + 1, numbered [0, nsuper]. - * (xsup,supno): supno[i] is the supernode no to which i belongs; - * xsup(s) points to the beginning of the s-th supernode. - * e.g. supno 0 1 2 2 3 3 3 4 4 4 4 4 (n=12) - * xsup 0 1 2 4 7 12 - * Note: dfs will be performed on supernode rep. relative to the new - * row pivoting ordering - * - * (xlsub,lsub): lsub[*] contains the compressed subscript of - * rectangular supernodes; xlsub[j] points to the starting - * location of the j-th column in lsub[*]. Note that xlsub - * is indexed by column. - * Storage: original row subscripts - * - * During the course of sparse LU factorization, we also use - * (xlsub,lsub) for the purpose of symmetric pruning. For each - * supernode {s,s+1,...,t=s+r} with first column s and last - * column t, the subscript set - * lsub[j], j=xlsub[s], .., xlsub[s+1]-1 - * is the structure of column s (i.e. structure of this supernode). - * It is used for the storage of numerical values. - * Furthermore, - * lsub[j], j=xlsub[t], .., xlsub[t+1]-1 - * is the structure of the last column t of this supernode. - * It is for the purpose of symmetric pruning. Therefore, the - * structural subscripts can be rearranged without making physical - * interchanges among the numerical values. - * - * However, if the supernode has only one column, then we - * only keep one set of subscripts. For any subscript interchange - * performed, similar interchange must be done on the numerical - * values. - * - * The last column structures (for pruning) will be removed - * after the numercial LU factorization phase. - * - * (xlusup,lusup): lusup[*] contains the numerical values of the - * rectangular supernodes; xlusup[j] points to the starting - * location of the j-th column in storage vector lusup[*] - * Note: xlusup is indexed by column. - * Each rectangular supernode is stored by column-major - * scheme, consistent with Fortran 2-dim array storage. - * - * (xusub,ucol,usub): ucol[*] stores the numerical values of - * U-columns outside the rectangular supernodes. The row - * subscript of nonzero ucol[k] is stored in usub[k]. - * xusub[i] points to the starting location of column i in ucol. - * Storage: new row subscripts; that is subscripts of PA. 
- */ -typedef struct { - int *xsup; /* supernode and column mapping */ - int *supno; - int *lsub; /* compressed L subscripts */ - int *xlsub; - complex *lusup; /* L supernodes */ - int *xlusup; - complex *ucol; /* U columns */ - int *usub; - int *xusub; - int nzlmax; /* current max size of lsub */ - int nzumax; /* " " " ucol */ - int nzlumax; /* " " " lusup */ - int n; /* number of columns in the matrix */ - LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ -} GlobalLU_t; - -typedef struct { - float for_lu; - float total_needed; - int expansions; -} mem_usage_t; - -#ifdef __cplusplus -extern "C" { -#endif - -/* Driver routines */ -extern void -cgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); -extern void -cgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, - char *, float *, float *, SuperMatrix *, SuperMatrix *, - void *, int, SuperMatrix *, SuperMatrix *, - float *, float *, float *, float *, - mem_usage_t *, SuperLUStat_t *, int *); - -/* Supernodal LU factor related */ -extern void -cCreate_CompCol_Matrix(SuperMatrix *, int, int, int, complex *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -cCreate_CompRow_Matrix(SuperMatrix *, int, int, int, complex *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -cCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); -extern void -cCreate_Dense_Matrix(SuperMatrix *, int, int, complex *, int, - Stype_t, Dtype_t, Mtype_t); -extern void -cCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, complex *, - int *, int *, int *, int *, int *, - Stype_t, Dtype_t, Mtype_t); -extern void -cCopy_Dense_Matrix(int, int, complex *, int, complex *, int); - -extern void countnz (const int, int *, int *, int *, GlobalLU_t *); -extern void fixupL (const int, const int *, GlobalLU_t *); - -extern void callocateA (int, int, complex **, int **, int **); -extern void cgstrf (superlu_options_t*, SuperMatrix*, float, - int, int, int*, void *, int, int *, int *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); -extern int csnode_dfs (const int, const int, const int *, const int *, - const int *, int *, int *, GlobalLU_t *); -extern int csnode_bmod (const int, const int, const int, complex *, - complex *, GlobalLU_t *, SuperLUStat_t*); -extern void cpanel_dfs (const int, const int, const int, SuperMatrix *, - int *, int *, complex *, int *, int *, int *, - int *, int *, int *, int *, GlobalLU_t *); -extern void cpanel_bmod (const int, const int, const int, const int, - complex *, complex *, int *, int *, - GlobalLU_t *, SuperLUStat_t*); -extern int ccolumn_dfs (const int, const int, int *, int *, int *, int *, - int *, int *, int *, int *, int *, GlobalLU_t *); -extern int ccolumn_bmod (const int, const int, complex *, - complex *, int *, int *, int, - GlobalLU_t *, SuperLUStat_t*); -extern int ccopy_to_ucol (int, int, int *, int *, int *, - complex *, GlobalLU_t *); -extern int cpivotL (const int, const float, int *, int *, - int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); -extern void cpruneL (const int, const int *, const int, const int, - const int *, const int *, int *, GlobalLU_t *); -extern void creadmt (int *, int *, int *, complex **, int **, int **); -extern void cGenXtrue (int, int, complex *, int); -extern void cFillRHS (trans_t, int, complex *, int, SuperMatrix *, - SuperMatrix *); -extern void cgstrs (trans_t, SuperMatrix *, SuperMatrix *, int *, int *, - SuperMatrix *, SuperLUStat_t*, int *); - - -/* Driver related */ - 
-extern void cgsequ (SuperMatrix *, float *, float *, float *, - float *, float *, int *); -extern void claqgs (SuperMatrix *, float *, float *, float, - float, float, char *); -extern void cgscon (char *, SuperMatrix *, SuperMatrix *, - float, float *, SuperLUStat_t*, int *); -extern float cPivotGrowth(int, SuperMatrix *, int *, - SuperMatrix *, SuperMatrix *); -extern void cgsrfs (trans_t, SuperMatrix *, SuperMatrix *, - SuperMatrix *, int *, int *, char *, float *, - float *, SuperMatrix *, SuperMatrix *, - float *, float *, SuperLUStat_t*, int *); - -extern int sp_ctrsv (char *, char *, char *, SuperMatrix *, - SuperMatrix *, complex *, SuperLUStat_t*, int *); -extern int sp_cgemv (char *, complex, SuperMatrix *, complex *, - int, complex, complex *, int); - -extern int sp_cgemm (char *, char *, int, int, int, complex, - SuperMatrix *, complex *, int, complex, - complex *, int); - -/* Memory-related */ -extern int cLUMemInit (fact_t, void *, int, int, int, int, int, - SuperMatrix *, SuperMatrix *, - GlobalLU_t *, int **, complex **); -extern void cSetRWork (int, int, complex *, complex **, complex **); -extern void cLUWorkFree (int *, complex *, GlobalLU_t *); -extern int cLUMemXpand (int, int, MemType, int *, GlobalLU_t *); - -extern complex *complexMalloc(int); -extern complex *complexCalloc(int); -extern float *floatMalloc(int); -extern float *floatCalloc(int); -extern int cmemory_usage(const int, const int, const int, const int); -extern int cQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); - -/* Auxiliary routines */ -extern void creadhb(int *, int *, int *, complex **, int **, int **); -extern void cCompRow_to_CompCol(int, int, int, complex*, int*, int*, - complex **, int **, int **); -extern void cfill (complex *, int, complex); -extern void cinf_norm_error (int, SuperMatrix *, complex *); -extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, - complex, complex, complex *, complex *, char *); - -/* Routines for debugging */ -extern void cPrint_CompCol_Matrix(char *, SuperMatrix *); -extern void cPrint_SuperNode_Matrix(char *, SuperMatrix *); -extern void cPrint_Dense_Matrix(char *, SuperMatrix *); -extern void print_lu_col(char *, int, int, int *, GlobalLU_t *); -extern void check_tempv(int, complex *); - -#ifdef __cplusplus - } -#endif - -#endif /* __SUPERLU_cSP_DEFS */ - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cutil.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cutil.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cutil.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/cutil.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,26 +1,29 @@ -/* - * -- SuperLU routine (version 3.0) -- +/*! @file cutil.c + * \brief Matrix utility functions + * + *
+ * -- SuperLU routine (version 3.1) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
- * October 15, 2003
+ * August 1, 2008
+ *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include -#include "csp_defs.h" +#include "slu_cdefs.h" void cCreate_CompCol_Matrix(SuperMatrix *A, int m, int n, int nnz, @@ -64,7 +67,7 @@ Astore->rowptr = rowptr; } -/* Copy matrix A into matrix B. */ +/*! \brief Copy matrix A into matrix B. */ void cCopy_CompCol_Matrix(SuperMatrix *A, SuperMatrix *B) { @@ -108,12 +111,7 @@ cCopy_Dense_Matrix(int M, int N, complex *X, int ldx, complex *Y, int ldy) { -/* - * - * Purpose - * ======= - * - * Copies a two-dimensional matrix X to another matrix Y. +/*! \brief Copies a two-dimensional matrix X to another matrix Y. */ int i, j; @@ -150,8 +148,7 @@ } -/* - * Convert a row compressed storage into a column compressed storage. +/*! \brief Convert a row compressed storage into a column compressed storage. */ void cCompRow_to_CompCol(int m, int n, int nnz, @@ -240,7 +237,8 @@ for (j = c; j < c + nsup; ++j) { d = Astore->nzval_colptr[j]; for (i = rowind_colptr[c]; i < rowind_colptr[c+1]; ++i) { - printf("%d\t%d\t%e\t%e\n", rowind[i], j, dp[d++], dp[d++]); + printf("%d\t%d\t%e\t%e\n", rowind[i], j, dp[d], dp[d+1]); + d += 2; } } } @@ -266,23 +264,24 @@ void cPrint_Dense_Matrix(char *what, SuperMatrix *A) { - DNformat *Astore; - register int i; + DNformat *Astore = (DNformat *) A->Store; + register int i, j, lda = Astore->lda; float *dp; printf("\nDense matrix %s:\n", what); printf("Stype %d, Dtype %d, Mtype %d\n", A->Stype,A->Dtype,A->Mtype); - Astore = (DNformat *) A->Store; dp = (float *) Astore->nzval; - printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,Astore->lda); + printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,lda); printf("\nnzval: "); - for (i = 0; i < 2*A->nrow; ++i) printf("%f ", dp[i]); + for (j = 0; j < A->ncol; ++j) { + for (i = 0; i < 2*A->nrow; ++i) printf("%f ", dp[i + j*2*lda]); + printf("\n"); + } printf("\n"); fflush(stdout); } -/* - * Diagnostic print of column "jcol" in the U/L factor. +/*! \brief Diagnostic print of column "jcol" in the U/L factor. */ void cprint_lu_col(char *msg, int jcol, int pivrow, int *xprune, GlobalLU_t *Glu) @@ -324,9 +323,7 @@ } -/* - * Check whether tempv[] == 0. This should be true before and after - * calling any numeric routines, i.e., "panel_bmod" and "column_bmod". +/*! \brief Check whether tempv[] == 0. This should be true before and after calling any numeric routines, i.e., "panel_bmod" and "column_bmod". */ void ccheck_tempv(int n, complex *tempv) { @@ -353,8 +350,7 @@ } } -/* - * Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's +/*! \brief Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's */ void cFillRHS(trans_t trans, int nrhs, complex *x, int ldx, @@ -383,8 +379,7 @@ } -/* - * Fills a complex precision array with a given value. +/*! \brief Fills a complex precision array with a given value. */ void cfill(complex *a, int alen, complex dval) @@ -395,8 +390,7 @@ -/* - * Check the inf-norm of the error vector +/*! 
\brief Check the inf-norm of the error vector */ void cinf_norm_error(int nrhs, SuperMatrix *X, complex *xtrue) { @@ -424,7 +418,7 @@ -/* Print performance of the code. */ +/*! \brief Print performance of the code. */ void cPrintPerf(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage, float rpg, float rcond, float *ferr, @@ -452,9 +446,9 @@ printf("\tNo of nonzeros in factor U = %d\n", Ustore->nnz); printf("\tNo of nonzeros in L+U = %d\n", Lstore->nnz + Ustore->nnz); - printf("L\\U MB %.3f\ttotal MB needed %.3f\texpansions %d\n", - mem_usage->for_lu/1e6, mem_usage->total_needed/1e6, - mem_usage->expansions); + printf("L\\U MB %.3f\ttotal MB needed %.3f\n", + mem_usage->for_lu/1e6, mem_usage->total_needed/1e6); + printf("Number of memory expansions: %d\n", stat->expansions); printf("\tFactor\tMflops\tSolve\tMflops\tEtree\tEquil\tRcond\tRefine\n"); printf("PERF:%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f\n", diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,29 @@ -/* +/*! @file dcolumn_bmod.c + * \brief performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ *  Permission is hereby granted to use or copy this program for any
+ *  purpose, provided the above notices are retained on all copies.
+ *  Permission to modify the code and to distribute modified code is
+ *  granted, provided the above notices are retained, and a notice that
+ *  the code was modified is included with the above copyright notice.
+ * 
*/ #include #include -#include "dsp_defs.h" +#include "slu_ddefs.h" /* * Function prototypes @@ -32,8 +34,17 @@ -/* Return value: 0 - successful return +/*! \brief + * + *
+ * Purpose:
+ * ========
+ * Performs numeric block updates (sup-col) in topological order.
+ * It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ * Special processing on the supernodal portion of L\U[*,j]
+ * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
*/ int dcolumn_bmod ( @@ -48,14 +59,7 @@ SuperLUStat_t *stat /* output */ ) { -/* - * Purpose: - * ======== - * Performs numeric block updates (sup-col) in topological order. - * It features: col-col, 2cols-col, 3cols-col, and sup-col updates. - * Special processing on the supernodal portion of L\U[*,j] - * - */ + #ifdef _CRAY _fcd ftcs1 = _cptofcd("L", strlen("L")), ftcs2 = _cptofcd("N", strlen("N")), diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcolumn_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,50 +1,38 @@ - -/* +/*! @file dcolumn_dfs.c + * \brief Performs a symbolic factorization + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -#include "dsp_defs.h" +#include "slu_ddefs.h" -/* What type of supernodes we want */ +/*! \brief What type of supernodes we want */ #define T2_SUPER -int -dcolumn_dfs( - const int m, /* in - number of rows in the matrix */ - const int jcol, /* in */ - int *perm_r, /* in */ - int *nseg, /* modified - with new segments appended */ - int *lsub_col, /* in - defines the RHS vector to start the dfs */ - int *segrep, /* modified - with new segments appended */ - int *repfnz, /* modified */ - int *xprune, /* modified */ - int *marker, /* modified */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* + +/*! \brief + * + *
  * Purpose
  * =======
- *   "column_dfs" performs a symbolic factorization on column jcol, and
+ *   DCOLUMN_DFS performs a symbolic factorization on column jcol, and
  *   decide the supernode boundary.
  *
  *   This routine does not use numeric values, but only use the RHS 
@@ -72,8 +60,25 @@
  * ============
  *     0  success;
  *   > 0  number of bytes allocated when run out of space.
- *
+ * 
*/ +int +dcolumn_dfs( + const int m, /* in - number of rows in the matrix */ + const int jcol, /* in */ + int *perm_r, /* in */ + int *nseg, /* modified - with new segments appended */ + int *lsub_col, /* in - defines the RHS vector to start the dfs */ + int *segrep, /* modified - with new segments appended */ + int *repfnz, /* modified */ + int *xprune, /* modified */ + int *marker, /* modified */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + int jcolp1, jcolm1, jsuper, nsuper, nextl; int k, krep, krow, kmark, kperm; int *marker2; /* Used for small panel LU */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,20 +1,24 @@ -/* +/*! @file dcomplex.c + * \brief Common arithmetic for complex type + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
- */
-/*
  * This file defines common arithmetic operations for complex type.
+ * 
*/ + #include +#include #include -#include "dcomplex.h" +#include "slu_dcomplex.h" -/* Complex Division c = a/b */ +/*! \brief Complex Division c = a/b */ void z_div(doublecomplex *c, doublecomplex *a, doublecomplex *b) { double ratio, den; @@ -26,8 +30,8 @@ abi = - abi; if( abr <= abi ) { if (abi == 0) { - fprintf(stderr, "z_div.c: division by zero"); - exit (-1); + fprintf(stderr, "z_div.c: division by zero\n"); + exit(-1); } ratio = b->r / b->i ; den = b->i * (1 + ratio*ratio); @@ -43,7 +47,8 @@ c->i = ci; } -/* Returns sqrt(z.r^2 + z.i^2) */ + +/*! \brief Returns sqrt(z.r^2 + z.i^2) */ double z_abs(doublecomplex *z) { double temp; @@ -65,8 +70,7 @@ } -/* Approximates the abs */ -/* Returns abs(z.r) + abs(z.i) */ +/*! \brief Approximates the abs. Returns abs(z.r) + abs(z.i) */ double z_abs1(doublecomplex *z) { double real = z->r; @@ -78,7 +82,7 @@ return (real + imag); } -/* Return the exponentiation */ +/*! \brief Return the exponentiation */ void z_exp(doublecomplex *r, doublecomplex *z) { double expx; @@ -88,17 +92,56 @@ r->i = expx * sin(z->i); } -/* Return the complex conjugate */ +/*! \brief Return the complex conjugate */ void d_cnjg(doublecomplex *r, doublecomplex *z) { r->r = z->r; r->i = -z->i; } -/* Return the imaginary part */ +/*! \brief Return the imaginary part */ double d_imag(doublecomplex *z) { return (z->i); } +/*! \brief SIGN functions for complex number. Returns z/abs(z) */ +doublecomplex z_sgn(doublecomplex *z) +{ + register double t = z_abs(z); + register doublecomplex retval; + + if (t == 0.0) { + retval.r = 1.0, retval.i = 0.0; + } else { + retval.r = z->r / t, retval.i = z->i / t; + } + + return retval; +} + +/*! \brief Square-root of a complex number. */ +doublecomplex z_sqrt(doublecomplex *z) +{ + doublecomplex retval; + register double cr, ci, real, imag; + + real = z->r; + imag = z->i; + + if ( imag == 0.0 ) { + retval.r = sqrt(real); + retval.i = 0.0; + } else { + ci = (sqrt(real*real + imag*imag) - real) / 2.0; + ci = sqrt(ci); + cr = imag / (2.0 * ci); + retval.r = cr; + retval.i = ci; + } + + return retval; +} + + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcomplex.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,73 +0,0 @@ - - -/* - * -- SuperLU routine (version 2.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. 
- * November 15, 1997 - * - */ -#ifndef __SUPERLU_DCOMPLEX /* allow multiple inclusions */ -#define __SUPERLU_DCOMPLEX - -/* - * This header file is to be included in source files z*.c - */ -#ifndef DCOMPLEX_INCLUDE -#define DCOMPLEX_INCLUDE - -typedef struct { double r, i; } doublecomplex; - - -/* Macro definitions */ - -/* Complex Addition c = a + b */ -#define z_add(c, a, b) { (c)->r = (a)->r + (b)->r; \ - (c)->i = (a)->i + (b)->i; } - -/* Complex Subtraction c = a - b */ -#define z_sub(c, a, b) { (c)->r = (a)->r - (b)->r; \ - (c)->i = (a)->i - (b)->i; } - -/* Complex-Double Multiplication */ -#define zd_mult(c, a, b) { (c)->r = (a)->r * (b); \ - (c)->i = (a)->i * (b); } - -/* Complex-Complex Multiplication */ -#define zz_mult(c, a, b) { \ - double cr, ci; \ - cr = (a)->r * (b)->r - (a)->i * (b)->i; \ - ci = (a)->i * (b)->r + (a)->r * (b)->i; \ - (c)->r = cr; \ - (c)->i = ci; \ - } - -#define zz_conj(a, b) { \ - (a)->r = (b)->r; \ - (a)->i = -((b)->i); \ - } - -/* Complex equality testing */ -#define z_eq(a, b) ( (a)->r == (b)->r && (a)->i == (b)->i ) - - -#ifdef __cplusplus -extern "C" { -#endif - -/* Prototypes for functions in dcomplex.c */ -void z_div(doublecomplex *, doublecomplex *, doublecomplex *); -double z_abs(doublecomplex *); /* exact */ -double z_abs1(doublecomplex *); /* approximate */ -void z_exp(doublecomplex *, doublecomplex *); -void d_cnjg(doublecomplex *r, doublecomplex *z); -double d_imag(doublecomplex *); - - -#ifdef __cplusplus - } -#endif - -#endif - -#endif /* __SUPERLU_DCOMPLEX */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcopy_to_ucol.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcopy_to_ucol.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dcopy_to_ucol.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,26 @@ - -/* +/*! @file dcopy_to_ucol.c + * \brief Copy a computed column of U to the compressed data structure + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" -#include "util.h" +#include "slu_ddefs.h" int dcopy_to_ucol( @@ -47,7 +46,6 @@ double *ucol; int *usub, *xusub; int nzumax; - double zero = 0.0; xsup = Glu->xsup; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ddiagonal.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ddiagonal.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ddiagonal.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ddiagonal.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,129 @@ + +/*! @file ddiagonal.c + * \brief Auxiliary routines to work with diagonal elements + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * 
+ */ + +#include "slu_ddefs.h" + +int dfill_diag(int n, NCformat *Astore) +/* fill explicit zeros on the diagonal entries, so that the matrix is not + structurally singular. */ +{ + double *nzval = (double *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + double *nzval_new; + double zero = 0.0; + int *rowind_new; + int i, j, diag; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = doubleMalloc(nnz + fill); + rowind_new = intMalloc(nnz + fill); + fill = 0; + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + nzval_new[j + fill] = nzval[j]; + } + if (diag < 0) + { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill] = zero; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + Astore->nnz += fill; + return fill; +} + +int ddominate(int n, NCformat *Astore) +/* make the matrix diagonally dominant */ +{ + double *nzval = (double *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + double *nzval_new; + int *rowind_new; + int i, j, diag; + double s; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = doubleMalloc(nnz + fill); + rowind_new = intMalloc(nnz+ fill); + fill = 0; + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + s += fabs(nzval_new[j + fill] = nzval[j]); + } + if (diag >= 0) { + nzval_new[diag+fill] = s * 3.0; + } else { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill] = s * 3.0; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + else + { + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + { + if (rowind[j] == i) diag = j; + s += fabs(nzval[j]); + } + nzval[diag] = s * 3.0; + } + } + Astore->nnz += fill; + return fill; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dGetDiagU.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dGetDiagU.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dGetDiagU.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dGetDiagU.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,39 +1,38 @@ -/* +/*! @file dGetDiagU.c + * \brief Extracts main diagonal of matrix + * + *
 
  * -- Auxiliary routine in SuperLU (version 2.0) --
  * Lawrence Berkeley National Lab, Univ. of California Berkeley.
  * Xiaoye S. Li
  * September 11, 2003
  *
- */
-
-#include "dsp_defs.h"
-
+ *  Purpose
+ * =======
+ *
+ * GetDiagU extracts the main diagonal of matrix U of the LU factorization.
+ *  
+ * Arguments
+ * =========
+ *
+ * L      (input) SuperMatrix*
+ *        The factor L from the factorization Pr*A*Pc=L*U as computed by
+ *        dgstrf(). Use compressed row subscripts storage for supernodes,
+ *        i.e., L has types: Stype = SLU_SC, Dtype = SLU_D, Mtype = SLU_TRLU.
+ *
+ * diagU  (output) double*, dimension (n)
+ *        The main diagonal of matrix U.
+ *
+ * Note
+ * ====
+ * The diagonal blocks of the L and U matrices are stored in the L
+ * data structures.
+ * 
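A usage sketch; n is assumed to be the matrix order, doubleMalloc() and SUPERLU_FREE() are the allocation helpers from slu_ddefs.h, and the prototype is repeated locally in case it is not declared in that header:

    #include <stdio.h>
    #include "slu_ddefs.h"

    extern void dGetDiagU(SuperMatrix *, double *); /* prototype shown above */

    /* Sketch: print diag(U) after dgstrf()/dgssv(); L is the SLU_SC factor
     * and n the matrix order. */
    void print_diag_u(SuperMatrix *L, int n)
    {
        int i;
        double *diagU = doubleMalloc(n);

        dGetDiagU(L, diagU);
        for (i = 0; i < n; ++i)
            printf("U(%d,%d) = %e\n", i, i, diagU[i]);

        SUPERLU_FREE(diagU);
    }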
+*/ +#include void dGetDiagU(SuperMatrix *L, double *diagU) { - /* - * Purpose - * ======= - * - * GetDiagU extracts the main diagonal of matrix U of the LU factorization. - * - * Arguments - * ========= - * - * L (input) SuperMatrix* - * The factor L from the factorization Pr*A*Pc=L*U as computed by - * dgstrf(). Use compressed row subscripts storage for supernodes, - * i.e., L has types: Stype = SLU_SC, Dtype = SLU_D, Mtype = SLU_TRLU. - * - * diagU (output) double*, dimension (n) - * The main diagonal of matrix U. - * - * Note - * ==== - * The diagonal blocks of the L and U matrices are stored in the L - * data structures. - * - */ int_t i, k, nsupers; int_t fsupc, nsupr, nsupc, luptr; double *dblock, *Lval; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgscon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgscon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgscon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgscon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,69 +1,80 @@ -/* +/*! @file dgscon.c + * \brief Estimates reciprocal of the condition number of a general matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routines DGECON.
+ * 
 */
+
 /*
  * File name: dgscon.c
  * History: Modified from lapack routines DGECON.
  */
 #include 
-#include "dsp_defs.h"
+#include "slu_ddefs.h"
+
+/*! \brief
+ *
+ *
+ *   Purpose   
+ *   =======   
+ *
+ *   DGSCON estimates the reciprocal of the condition number of a general 
+ *   real matrix A, in either the 1-norm or the infinity-norm, using   
+ *   the LU factorization computed by DGETRF.
+ *
+ *   An estimate is obtained for norm(inv(A)), and the reciprocal of the   
+ *   condition number is computed as   
+ *      RCOND = 1 / ( norm(A) * norm(inv(A)) ).   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ * 
+ *   Arguments   
+ *   =========   
+ *
+ *    NORM    (input) char*
+ *            Specifies whether the 1-norm condition number or the   
+ *            infinity-norm condition number is required:   
+ *            = '1' or 'O':  1-norm;   
+ *            = 'I':         Infinity-norm.
+ *	    
+ *    L       (input) SuperMatrix*
+ *            The factor L from the factorization Pr*A*Pc=L*U as computed by
+ *            dgstrf(). Use compressed row subscripts storage for supernodes,
+ *            i.e., L has types: Stype = SLU_SC, Dtype = SLU_D, Mtype = SLU_TRLU.
+ * 
+ *    U       (input) SuperMatrix*
+ *            The factor U from the factorization Pr*A*Pc=L*U as computed by
+ *            dgstrf(). Use column-wise storage scheme, i.e., U has types:
+ *            Stype = SLU_NC, Dtype = SLU_D, Mtype = SLU_TRU.
+ *	    
+ *    ANORM   (input) double
+ *            If NORM = '1' or 'O', the 1-norm of the original matrix A.   
+ *            If NORM = 'I', the infinity-norm of the original matrix A.
+ *	    
+ *    RCOND   (output) double*
+ *           The reciprocal of the condition number of the matrix A,   
+ *           computed as RCOND = 1/(norm(A) * norm(inv(A))).
+ *	    
+ *    INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *
+ *    ===================================================================== 
+ * 
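A sketch of the intended call sequence; dlangs() (the sparse 1-norm routine from the same library) supplies ANORM, and A, L, U and stat are assumed to come from an earlier dgstrf() call:

    #include <stdio.h>
    #include "slu_ddefs.h"

    /* Sketch: estimate the reciprocal 1-norm condition number of A from
     * its LU factors.  A, L, U and stat are set up elsewhere. */
    void rcond_sketch(SuperMatrix *A, SuperMatrix *L, SuperMatrix *U,
                      SuperLUStat_t *stat)
    {
        double anorm, rcond;
        int info;

        anorm = dlangs("1", A);          /* ||A||_1 of the original matrix */
        dgscon("1", L, U, anorm, &rcond, stat, &info);
        if ( info == 0 )
            printf("estimated RCOND = %e\n", rcond);
    }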
+ */ void dgscon(char *norm, SuperMatrix *L, SuperMatrix *U, double anorm, double *rcond, SuperLUStat_t *stat, int *info) { -/* - Purpose - ======= - - DGSCON estimates the reciprocal of the condition number of a general - real matrix A, in either the 1-norm or the infinity-norm, using - the LU factorization computed by DGETRF. - - An estimate is obtained for norm(inv(A)), and the reciprocal of the - condition number is computed as - RCOND = 1 / ( norm(A) * norm(inv(A)) ). - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - NORM (input) char* - Specifies whether the 1-norm condition number or the - infinity-norm condition number is required: - = '1' or 'O': 1-norm; - = 'I': Infinity-norm. - - L (input) SuperMatrix* - The factor L from the factorization Pr*A*Pc=L*U as computed by - dgstrf(). Use compressed row subscripts storage for supernodes, - i.e., L has types: Stype = SLU_SC, Dtype = SLU_D, Mtype = SLU_TRLU. - - U (input) SuperMatrix* - The factor U from the factorization Pr*A*Pc=L*U as computed by - dgstrf(). Use column-wise storage scheme, i.e., U has types: - Stype = SLU_NC, Dtype = SLU_D, Mtype = TRU. - - ANORM (input) double - If NORM = '1' or 'O', the 1-norm of the original matrix A. - If NORM = 'I', the infinity-norm of the original matrix A. - - RCOND (output) double* - The reciprocal of the condition number of the matrix A, - computed as RCOND = 1/(norm(A) * norm(inv(A))). - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - ===================================================================== -*/ /* Local variables */ int kase, kase1, onenrm, i; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsequ.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsequ.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsequ.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsequ.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,81 +1,90 @@ - -/* +/*! @file dgsequ.c + * \brief Computes row and column scalings + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from LAPACK routine DGEEQU
+ * 
 */
 /*
  * File name: dgsequ.c
  * History: Modified from LAPACK routine DGEEQU
  */
 #include 
-#include "dsp_defs.h"
-#include "util.h"
+#include "slu_ddefs.h"
+
+
+/*! \brief
+ *
+ *
+ * Purpose   
+ *   =======   
+ *
+ *   DGSEQU computes row and column scalings intended to equilibrate an   
+ *   M-by-N sparse matrix A and reduce its condition number. R returns the row
+ *   scale factors and C the column scale factors, chosen to try to make   
+ *   the largest element in each row and column of the matrix B with   
+ *   elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.   
+ *
+ *   R(i) and C(j) are restricted to be between SMLNUM = smallest safe   
+ *   number and BIGNUM = largest safe number.  Use of these scaling   
+ *   factors is not guaranteed to reduce the condition number of A but   
+ *   works well in practice.   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input) SuperMatrix*
+ *           The matrix of dimension (A->nrow, A->ncol) whose equilibration
+ *           factors are to be computed. The type of A can be:
+ *           Stype = SLU_NC; Dtype = SLU_D; Mtype = SLU_GE.
+ *	    
+ *   R       (output) double*, size A->nrow
+ *           If INFO = 0 or INFO > M, R contains the row scale factors   
+ *           for A.
+ *	    
+ *   C       (output) double*, size A->ncol
+ *           If INFO = 0,  C contains the column scale factors for A.
+ *	    
+ *   ROWCND  (output) double*
+ *           If INFO = 0 or INFO > M, ROWCND contains the ratio of the   
+ *           smallest R(i) to the largest R(i).  If ROWCND >= 0.1 and   
+ *           AMAX is neither too large nor too small, it is not worth   
+ *           scaling by R.
+ *	    
+ *   COLCND  (output) double*
+ *           If INFO = 0, COLCND contains the ratio of the smallest   
+ *           C(i) to the largest C(i).  If COLCND >= 0.1, it is not   
+ *           worth scaling by C.
+ *	    
+ *   AMAX    (output) double*
+ *           Absolute value of largest matrix element.  If AMAX is very   
+ *           close to overflow or very close to underflow, the matrix   
+ *           should be scaled.
+ *	    
+ *   INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *           > 0:  if INFO = i,  and i is   
+ *                 <= A->nrow:  the i-th row of A is exactly zero   
+ *                 >  A->nrow:  the (i-M)-th column of A is exactly zero   
+ *
+ *   ===================================================================== 
+ * 
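A sketch of computing the scalings and then applying them with dlaqgs(), the companion routine in the same library that checks the ROWCND/COLCND/AMAX thresholds before overwriting A:

    #include "slu_ddefs.h"

    /* Sketch: equilibrate a general SLU_NC / SLU_D matrix A in place.
     * dlaqgs() applies the scalings only when the thresholds say it is
     * worthwhile; equed reports what was applied ('N','R','C' or 'B'). */
    void equilibrate_sketch(SuperMatrix *A)
    {
        double *r = doubleMalloc(A->nrow);
        double *c = doubleMalloc(A->ncol);
        double rowcnd, colcnd, amax;
        int info;
        char equed[1] = {'N'};

        dgsequ(A, r, c, &rowcnd, &colcnd, &amax, &info);
        if ( info == 0 )
            dlaqgs(A, r, c, rowcnd, colcnd, amax, equed);

        SUPERLU_FREE(r);
        SUPERLU_FREE(c);
    }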
+ */ void dgsequ(SuperMatrix *A, double *r, double *c, double *rowcnd, double *colcnd, double *amax, int *info) { -/* - Purpose - ======= - - DGSEQU computes row and column scalings intended to equilibrate an - M-by-N sparse matrix A and reduce its condition number. R returns the row - scale factors and C the column scale factors, chosen to try to make - the largest element in each row and column of the matrix B with - elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1. - - R(i) and C(j) are restricted to be between SMLNUM = smallest safe - number and BIGNUM = largest safe number. Use of these scaling - factors is not guaranteed to reduce the condition number of A but - works well in practice. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input) SuperMatrix* - The matrix of dimension (A->nrow, A->ncol) whose equilibration - factors are to be computed. The type of A can be: - Stype = SLU_NC; Dtype = SLU_D; Mtype = SLU_GE. - - R (output) double*, size A->nrow - If INFO = 0 or INFO > M, R contains the row scale factors - for A. - - C (output) double*, size A->ncol - If INFO = 0, C contains the column scale factors for A. - - ROWCND (output) double* - If INFO = 0 or INFO > M, ROWCND contains the ratio of the - smallest R(i) to the largest R(i). If ROWCND >= 0.1 and - AMAX is neither too large nor too small, it is not worth - scaling by R. - - COLCND (output) double* - If INFO = 0, COLCND contains the ratio of the smallest - C(i) to the largest C(i). If COLCND >= 0.1, it is not - worth scaling by C. - - AMAX (output) double* - Absolute value of largest matrix element. If AMAX is very - close to overflow or very close to underflow, the matrix - should be scaled. - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, and i is - <= A->nrow: the i-th row of A is exactly zero - > A->ncol: the (i-M)-th column of A is exactly zero - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsisx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsisx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsisx.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsisx.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,693 @@ + +/*! @file dgsisx.c + * \brief Gives the approximate solutions of linear equations A*X=B or A'*X=B + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ +#include "slu_ddefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * DGSISX gives the approximate solutions of linear equations A*X=B or A'*X=B,
+ * using the ILU factorization from dgsitrf(). An estimation of
+ * the condition number is provided. It performs the following steps:
+ *
+ *   1. If A is stored column-wise (A->Stype = SLU_NC):
+ *  
+ *	1.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A is
+ *	     overwritten by diag(R)*A*diag(C) and B by diag(R)*B
+ *	     (if options->Trans=NOTRANS) or diag(C)*B (if options->Trans
+ *	     = TRANS or CONJ).
+ *
+ *	1.2. Permute columns of A, forming A*Pc, where Pc is a permutation
+ *	     matrix that usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	1.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the matrix A (after equilibration if options->Equil = YES)
+ *	     as Pr*A*Pc = L*U, with Pr determined by partial pivoting.
+ *
+ *	1.4. Compute the reciprocal pivot growth factor.
+ *
+ *	1.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n),
+ *	     and info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	1.6. The system of equations is solved for X using the factored form
+ *	     of A.
+ *
+ *	1.7. options->IterRefine is not used
+ *
+ *	1.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *	1.9. options for ILU only
+ *	     1) If options->RowPerm = LargeDiag, MC64 is used to scale and
+ *		permute the matrix to an I-matrix, that is Pr*Dr*A*Dc has
+ *		entries of modulus 1 on the diagonal and off-diagonal entries
+ *		of modulus at most 1. If MC64 fails, dgsequ() is used to
+ *		equilibrate the system.
+ *	     2) options->ILU_DropTol = tau is the threshold for dropping.
+ *		For L, it is used directly (for the whole row in a supernode);
+ *		For U, ||A(:,i)||_oo * tau is used as the threshold
+ *	        for the	i-th column.
+ *		If a secondary dropping rule is required, tau will
+ *	        also be used to compute the second threshold.
+ *	     3) options->ILU_FillFactor = gamma, used as the initial guess
+ *		of memory growth.
+ *		If a secondary dropping rule is required, it will also
+ *              be used as an upper bound of the memory.
+ *	     4) options->ILU_DropRule specifies the dropping rule.
+ *		Option		Explanation
+ *		======		===========
+ *		DROP_BASIC:	Basic dropping rule, supernodal based ILU.
+ *		DROP_PROWS:	Supernodal based ILUTP, p = gamma * nnz(A) / n.
+ *		DROP_COLUMN:	Variation of ILUTP, for j-th column,
+ *				p = gamma * nnz(A(:,j)).
+ *		DROP_AREA:	Variation of ILUTP, for j-th column, use
+ *				nnz(F(:,1:j)) / nnz(A(:,1:j)) to control the
+ *				memory.
+ *		DROP_DYNAMIC:	Modify the threshold tau during the
+ *				factorization.
+ *				If nnz(L(:,1:j)) / nnz(A(:,1:j)) > gamma
+ *				    tau_L(j) := MIN(1, tau_L(j-1) * 2);
+ *				Otherwise
+ *				    tau_L(j) := MAX(tau, tau_L(j-1) * 0.5);
+ *				tau_U(j) uses a similar rule.
+ *				NOTE: the thresholds used by L and U are
+ *				independent.
+ *		DROP_INTERP:	Compute the second dropping threshold by
+ *				interpolation instead of sorting (default).
+ *				In this case, the actual fill ratio is not
+ *				guaranteed to be smaller than gamma.
+ *		DROP_PROWS, DROP_COLUMN and DROP_AREA are mutually exclusive.
+ *		( The default option is DROP_BASIC | DROP_AREA. )
+ *	     5) options->ILU_Norm is the criterion for computing the average
+ *		value of a row in L.
+ *		options->ILU_Norm	average(x[1:n])
+ *		=================	===============
+ *		ONE_NORM		||x||_1 / n
+ *		TWO_NORM		||x||_2 / sqrt(n)
+ *		INF_NORM		max{|x[i]|}
+ *	     6) options->ILU_MILU specifies which MILU variation to use.
+ *		= SILU (default): do not perform MILU;
+ *		= SMILU_1 (not recommended):
+ *		    U(i,i) := U(i,i) + sum(dropped entries);
+ *		= SMILU_2:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(dropped entries);
+ *		= SMILU_3:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(|dropped entries|);
+ *		NOTE: Even SMILU_1 does not preserve the column sum because of
+ *		late dropping.
+ *	     7) options->ILU_FillTol is used as the perturbation when
+ *		encountering zero pivots. If some U(i,i) = 0, so that U is
+ *		exactly singular, then
+ *		   U(i,i) := ||A(:,i)|| * options->ILU_FillTol ** (1 - i / n).
+ *
+ *   2. If A is stored row-wise (A->Stype = SLU_NR), apply the above algorithm
+ *	to the transpose of A:
+ *
+ *	2.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A' is
+ *	     overwritten by diag(R)*A'*diag(C) and B by diag(R)*B
+ *	     (if options->Trans = NOTRANS) or diag(C)*B (if options->Trans = TRANS or CONJ).
+ *
+ *	2.2. Permute columns of transpose(A) (rows of A),
+ *	     forming transpose(A)*Pc, where Pc is a permutation matrix that
+ *	     usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	2.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the transpose(A) (after equilibration if
+ *	     options->Equil = YES) as Pr*transpose(A)*Pc = L*U, with the
+ *	     permutation Pr determined by partial pivoting.
+ *
+ *	2.4. Compute the reciprocal pivot growth factor.
+ *
+ *	2.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		 U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n).
+ *	     And info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	2.6. The system of equations is solved for X using the factored form
+ *	     of transpose(A).
+ *
+ *	2.7. options->IterRefine is not used.
+ *
+ *	2.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the LU decomposition will be performed and how the
+ *	   system will be solved.
+ *
+ * A	   (input/output) SuperMatrix*
+ *	   Matrix A in A*X=B, of dimension (A->nrow, A->ncol). The number
+ *	   of linear equations is A->nrow. Currently, the type of A can be:
+ *	   Stype = SLU_NC or SLU_NR, Dtype = SLU_D, Mtype = SLU_GE.
+ *	   In the future, more general A may be handled.
+ *
+ *	   On entry, if options->Fact = FACTORED and equed is not 'N',
+ *	   then A must have been equilibrated by the scaling factors in
+ *	   R and/or C.
+ *	   On exit, A is not modified if options->Equil = NO, or if
+ *	   options->Equil = YES but equed = 'N' on exit.
+ *	   Otherwise, if options->Equil = YES and equed is not 'N',
+ *	   A is scaled as follows:
+ *	   If A->Stype = SLU_NC:
+ *	     equed = 'R':  A := diag(R) * A
+ *	     equed = 'C':  A := A * diag(C)
+ *	     equed = 'B':  A := diag(R) * A * diag(C).
+ *	   If A->Stype = SLU_NR:
+ *	     equed = 'R':  transpose(A) := diag(R) * transpose(A)
+ *	     equed = 'C':  transpose(A) := transpose(A) * diag(C)
+ *	     equed = 'B':  transpose(A) := diag(R) * transpose(A) * diag(C).
+ *
+ * perm_c  (input/output) int*
+ *	   If A->Stype = SLU_NC, Column permutation vector of size A->ncol,
+ *	   which defines the permutation matrix Pc; perm_c[i] = j means
+ *	   column i of A is in position j in A*Pc.
+ *	   On exit, perm_c may be overwritten by the product of the input
+ *	   perm_c and a permutation that postorders the elimination tree
+ *	   of Pc'*A'*A*Pc; perm_c is not changed if the elimination tree
+ *	   is already in postorder.
+ *
+ *	   If A->Stype = SLU_NR, column permutation vector of size A->nrow,
+ *	   which describes permutation of columns of transpose(A) 
+ *	   (rows of A) as described above.
+ *
+ * perm_r  (input/output) int*
+ *	   If A->Stype = SLU_NC, row permutation vector of size A->nrow, 
+ *	   which defines the permutation matrix Pr, and is determined
+ *	   by partial pivoting.  perm_r[i] = j means row i of A is in 
+ *	   position j in Pr*A.
+ *
+ *	   If A->Stype = SLU_NR, permutation vector of size A->ncol, which
+ *	   determines permutation of rows of transpose(A)
+ *	   (columns of A) as described above.
+ *
+ *	   If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	   will try to use the input perm_r, unless a certain threshold
+ *	   criterion is violated. In that case, perm_r is overwritten by a
+ *	   new permutation determined by partial pivoting or diagonal
+ *	   threshold pivoting.
+ *	   Otherwise, perm_r is an output argument.
+ *
+ * etree   (input/output) int*,  dimension (A->ncol)
+ *	   Elimination tree of Pc'*A'*A*Pc.
+ *	   If options->Fact != FACTORED and options->Fact != DOFACT,
+ *	   etree is an input argument, otherwise it is an output argument.
+ *	   Note: etree is a vector of parent pointers for a forest whose
+ *	   vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *
+ * equed   (input/output) char*
+ *	   Specifies the form of equilibration that was done.
+ *	   = 'N': No equilibration.
+ *	   = 'R': Row equilibration, i.e., A was premultiplied by diag(R).
+ *	   = 'C': Column equilibration, i.e., A was postmultiplied by diag(C).
+ *	   = 'B': Both row and column equilibration, i.e., A was replaced 
+ *		  by diag(R)*A*diag(C).
+ *	   If options->Fact = FACTORED, equed is an input argument,
+ *	   otherwise it is an output argument.
+ *
+ * R	   (input/output) double*, dimension (A->nrow)
+ *	   The row scale factors for A or transpose(A).
+ *	   If equed = 'R' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the left by diag(R).
+ *	   If equed = 'N' or 'C', R is not accessed.
+ *	   If options->Fact = FACTORED, R is an input argument,
+ *	       otherwise, R is output.
+ *	   If options->Fact = FACTORED and equed = 'R' or 'B', each element
+ *	       of R must be positive.
+ *
+ * C	   (input/output) double*, dimension (A->ncol)
+ *	   The column scale factors for A or transpose(A).
+ *	   If equed = 'C' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the right by diag(C).
+ *	   If equed = 'N' or 'R', C is not accessed.
+ *	   If options->Fact = FACTORED, C is an input argument,
+ *	       otherwise, C is output.
+ *	   If options->Fact = FACTORED and equed = 'C' or 'B', each element
+ *	       of C must be positive.
+ *
+ * L	   (output) SuperMatrix*
+ *	   The factor L from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses compressed row subscripts storage for supernodes, i.e.,
+ *	   L has types: Stype = SLU_SC, Dtype = SLU_D, Mtype = SLU_TRLU.
+ *
+ * U	   (output) SuperMatrix*
+ *	   The factor U from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses column-wise storage scheme, i.e., U has types:
+ *	   Stype = SLU_NC, Dtype = SLU_D, Mtype = SLU_TRU.
+ *
+ * work    (workspace/output) void*, size (lwork) (in bytes)
+ *	   User supplied workspace, should be large enough
+ *	   to hold data structures for factors L and U.
+ *	   On exit, if options->Fact is not FACTORED, L and U point to this array.
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 mem_usage->total_needed; no other side effects.
+ *
+ *	   See argument 'mem_usage' for memory usage statistics.
+ *
+ * B	   (input/output) SuperMatrix*
+ *	   B has types: Stype = SLU_DN, Dtype = SLU_D, Mtype = SLU_GE.
+ *	   On entry, the right hand side matrix.
+ *	   If B->ncol = 0, only LU decomposition is performed, the triangular
+ *			   solve is skipped.
+ *	   On exit,
+ *	      if equed = 'N', B is not modified; otherwise
+ *	      if A->Stype = SLU_NC:
+ *		 if options->Trans = NOTRANS and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *	      if A->Stype = SLU_NR:
+ *		 if options->Trans = NOTRANS and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B.
+ *
+ * X	   (output) SuperMatrix*
+ *	   X has types: Stype = SLU_DN, Dtype = SLU_D, Mtype = SLU_GE.
+ *	   If info = 0 or info = A->ncol+1, X contains the solution matrix
+ *	   to the original system of equations. Note that A and B are modified
+ *	   on exit if equed is not 'N', and the solution to the equilibrated
+ *	   system is inv(diag(C))*X if options->Trans = NOTRANS and
+ *	   equed = 'C' or 'B', or inv(diag(R))*X if options->Trans = TRANS or CONJ
+ *	   and equed = 'R' or 'B'.
+ *
+ * recip_pivot_growth (output) double*
+ *	   The reciprocal pivot growth factor max_j( norm(A_j)/norm(U_j) ).
+ *	   The infinity norm is used. If recip_pivot_growth is much less
+ *	   than 1, the stability of the LU factorization could be poor.
+ *
+ * rcond   (output) double*
+ *	   The estimate of the reciprocal condition number of the matrix A
+ *	   after equilibration (if done). If rcond is less than the machine
+ *	   precision (in particular, if rcond = 0), the matrix is singular
+ *	   to working precision. This condition is indicated by a return
+ *	   code of info > 0.
+ *
+ * mem_usage (output) mem_usage_t*
+ *	   Record the memory usage statistics, consisting of following fields:
+ *	   - for_lu (float)
+ *	     The amount of space used in bytes for L\U data structures.
+ *	   - total_needed (float)
+ *	     The amount of space needed in bytes to perform factorization.
+ *	   - expansions (int)
+ *	     The number of memory expansions during the LU factorization.
+ *
+ * stat   (output) SuperLUStat_t*
+ *	  Record the statistics on runtime and floating-point operation count.
+ *	  See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info    (output) int*
+ *	   = 0: successful exit
+ *	   < 0: if info = -i, the i-th argument had an illegal value
+ *	   > 0: if info = i, and i is
+ *		<= A->ncol: number of zero pivots. They are replaced by small
+ *		      entries due to options->ILU_FillTol.
+ *		= A->ncol+1: U is nonsingular, but RCOND is less than machine
+ *		      precision, meaning that the matrix is singular to
+ *		      working precision. Nevertheless, the solution and
+ *		      error bounds are computed because there are a number
+ *		      of situations where the computed solution can be more
+ *		      accurate than the value of RCOND would suggest.
+ *		> A->ncol+1: number of bytes allocated when memory allocation
+ *		      failure occurred, plus A->ncol.
+ * 
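A hedged sketch of a complete driver call built from the argument list documented above (not part of the patch). Here n, nrhs, and the SLU_NC matrix A and SLU_DN right-hand side B are assumed to be set up by the caller; ilu_set_default_options(), StatInit(), intMalloc()/doubleMalloc() and dCreate_Dense_Matrix() are assumed to be the usual helpers from slu_ddefs.h.

    superlu_options_t options;
    SuperLUStat_t stat;
    SuperMatrix L, U, X;
    mem_usage_t mem_usage;
    char equed[1];
    double rpg, rcond;
    int info;

    int    *perm_c = intMalloc(n);
    int    *perm_r = intMalloc(n);
    int    *etree  = intMalloc(n);
    double *R      = doubleMalloc(n);
    double *C      = doubleMalloc(n);
    double *xval   = doubleMalloc(n * nrhs);

    ilu_set_default_options(&options);   /* documented defaults: DROP_BASIC | DROP_AREA, SILU */
    options.ConditionNumber = YES;       /* so rcond is estimated, as described above */
    StatInit(&stat);
    dCreate_Dense_Matrix(&X, n, nrhs, xval, n, SLU_DN, SLU_D, SLU_GE);

    dgsisx(&options, &A, perm_c, perm_r, etree, equed, R, C,
           &L, &U, NULL, 0,              /* work = NULL, lwork = 0: internal malloc */
           &B, &X, &rpg, &rcond, &mem_usage, &stat, &info);

    if (info == 0 || info == n + 1) {
        /* X holds the approximate solution; rcond and info behave as
           documented above (info == n+1: U nonsingular but ill-conditioned). */
    }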
+ */ + +void +dgsisx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, double *R, double *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, + double *recip_pivot_growth, double *rcond, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info) +{ + + DNformat *Bstore, *Xstore; + double *Bmat, *Xmat; + int ldb, ldx, nrhs; + SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ + SuperMatrix AC; /* Matrix postmultiplied by Pc */ + int colequ, equil, nofact, notran, rowequ, permc_spec, mc64; + trans_t trant; + char norm[1]; + int i, j, info1; + double amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; + int relax, panel_size; + double diag_pivot_thresh; + double t0; /* temporary time */ + double *utime; + + int *perm = NULL; + + /* External functions */ + extern double dlangs(char *, SuperMatrix *); + + Bstore = B->Store; + Xstore = X->Store; + Bmat = Bstore->nzval; + Xmat = Xstore->nzval; + ldb = Bstore->lda; + ldx = Xstore->lda; + nrhs = B->ncol; + + *info = 0; + nofact = (options->Fact != FACTORED); + equil = (options->Equil == YES); + notran = (options->Trans == NOTRANS); + mc64 = (options->RowPerm == LargeDiag); + if ( nofact ) { + *(unsigned char *)equed = 'N'; + rowequ = FALSE; + colequ = FALSE; + } else { + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + smlnum = dlamch_("Safe minimum"); + bignum = 1. / smlnum; + } + + /* Test the input parameters */ + if (!nofact && options->Fact != DOFACT && options->Fact != SamePattern && + options->Fact != SamePattern_SameRowPerm && + !notran && options->Trans != TRANS && options->Trans != CONJ && + !equil && options->Equil != NO) + *info = -1; + else if ( A->nrow != A->ncol || A->nrow < 0 || + (A->Stype != SLU_NC && A->Stype != SLU_NR) || + A->Dtype != SLU_D || A->Mtype != SLU_GE ) + *info = -2; + else if (options->Fact == FACTORED && + !(rowequ || colequ || lsame_(equed, "N"))) + *info = -6; + else { + if (rowequ) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, R[j]); + rcmax = SUPERLU_MAX(rcmax, R[j]); + } + if (rcmin <= 0.) *info = -7; + else if ( A->nrow > 0) + rowcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else rowcnd = 1.; + } + if (colequ && *info == 0) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, C[j]); + rcmax = SUPERLU_MAX(rcmax, C[j]); + } + if (rcmin <= 0.) *info = -8; + else if (A->nrow > 0) + colcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else colcnd = 1.; + } + if (*info == 0) { + if ( lwork < -1 ) *info = -12; + else if ( B->ncol < 0 || Bstore->lda < SUPERLU_MAX(0, A->nrow) || + B->Stype != SLU_DN || B->Dtype != SLU_D || + B->Mtype != SLU_GE ) + *info = -13; + else if ( X->ncol < 0 || Xstore->lda < SUPERLU_MAX(0, A->nrow) || + (B->ncol != 0 && B->ncol != X->ncol) || + X->Stype != SLU_DN || + X->Dtype != SLU_D || X->Mtype != SLU_GE ) + *info = -14; + } + } + if (*info != 0) { + i = -(*info); + xerbla_("dgsisx", &i); + return; + } + + /* Initialization for factor parameters */ + panel_size = sp_ienv(1); + relax = sp_ienv(2); + diag_pivot_thresh = options->DiagPivotThresh; + + utime = stat->utime; + + /* Convert A to SLU_NC format when necessary. 
*/ + if ( A->Stype == SLU_NR ) { + NRformat *Astore = A->Store; + AA = (SuperMatrix *) SUPERLU_MALLOC( sizeof(SuperMatrix) ); + dCreate_CompCol_Matrix(AA, A->ncol, A->nrow, Astore->nnz, + Astore->nzval, Astore->colind, Astore->rowptr, + SLU_NC, A->Dtype, A->Mtype); + if ( notran ) { /* Reverse the transpose argument. */ + trant = TRANS; + notran = 0; + } else { + trant = NOTRANS; + notran = 1; + } + } else { /* A->Stype == SLU_NC */ + trant = options->Trans; + AA = A; + } + + if ( nofact ) { + register int i, j; + NCformat *Astore = AA->Store; + int nnz = Astore->nnz; + int *colptr = Astore->colptr; + int *rowind = Astore->rowind; + double *nzval = (double *)Astore->nzval; + int n = AA->nrow; + + if ( mc64 ) { + *equed = 'B'; + rowequ = colequ = 1; + t0 = SuperLU_timer_(); + if ((perm = intMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for perm[]"); + + info1 = dldperm(5, n, nnz, colptr, rowind, nzval, perm, R, C); + + if (info1 > 0) { /* MC64 fails, call dgsequ() later */ + mc64 = 0; + SUPERLU_FREE(perm); + perm = NULL; + } else { + for (i = 0; i < n; i++) { + R[i] = exp(R[i]); + C[i] = exp(C[i]); + } + /* permute and scale the matrix */ + for (j = 0; j < n; j++) { + for (i = colptr[j]; i < colptr[j + 1]; i++) { + nzval[i] *= R[rowind[i]] * C[j]; + rowind[i] = perm[rowind[i]]; + } + } + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + if ( !mc64 & equil ) { + t0 = SuperLU_timer_(); + /* Compute row and column scalings to equilibrate the matrix A. */ + dgsequ(AA, R, C, &rowcnd, &colcnd, &amax, &info1); + + if ( info1 == 0 ) { + /* Equilibrate matrix A. */ + dlaqgs(AA, R, C, rowcnd, colcnd, amax, equed); + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + } + + if ( nrhs > 0 ) { + /* Scale the right hand side if equilibration was performed. */ + if ( notran ) { + if ( rowequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Bmat[i + j*ldb] *= R[i]; + } + } + } else if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Bmat[i + j*ldb] *= C[i]; + } + } + } + + if ( nofact ) { + + t0 = SuperLU_timer_(); + /* + * Gnet column permutation vector perm_c[], according to permc_spec: + * permc_spec = NATURAL: natural ordering + * permc_spec = MMD_AT_PLUS_A: minimum degree on structure of A'+A + * permc_spec = MMD_ATA: minimum degree on structure of A'*A + * permc_spec = COLAMD: approximate minimum degree column ordering + * permc_spec = MY_PERMC: the ordering already supplied in perm_c[] + */ + permc_spec = options->ColPerm; + if ( permc_spec != MY_PERMC && options->Fact == DOFACT ) + get_perm_c(permc_spec, AA, perm_c); + utime[COLPERM] = SuperLU_timer_() - t0; + + t0 = SuperLU_timer_(); + sp_preorder(options, AA, perm_c, etree, &AC); + utime[ETREE] = SuperLU_timer_() - t0; + + /* Compute the LU factorization of A*Pc. */ + t0 = SuperLU_timer_(); + dgsitrf(options, &AC, relax, panel_size, etree, work, lwork, + perm_c, perm_r, L, U, stat, info); + utime[FACT] = SuperLU_timer_() - t0; + + if ( lwork == -1 ) { + mem_usage->total_needed = *info - A->ncol; + return; + } + } + + if ( options->PivotGrowth ) { + if ( *info > 0 ) return; + + /* Compute the reciprocal pivot growth factor *recip_pivot_growth. */ + *recip_pivot_growth = dPivotGrowth(A->ncol, AA, perm_c, L, U); + } + + if ( options->ConditionNumber ) { + /* Estimate the reciprocal of the condition number of A. 
*/ + t0 = SuperLU_timer_(); + if ( notran ) { + *(unsigned char *)norm = '1'; + } else { + *(unsigned char *)norm = 'I'; + } + anorm = dlangs(norm, AA); + dgscon(norm, L, U, anorm, rcond, stat, &info1); + utime[RCOND] = SuperLU_timer_() - t0; + } + + if ( nrhs > 0 ) { + /* Compute the solution matrix X. */ + for (j = 0; j < nrhs; j++) /* Save a copy of the right hand sides */ + for (i = 0; i < B->nrow; i++) + Xmat[i + j*ldx] = Bmat[i + j*ldb]; + + t0 = SuperLU_timer_(); + dgstrs (trant, L, U, perm_c, perm_r, X, stat, &info1); + utime[SOLVE] = SuperLU_timer_() - t0; + + /* Transform the solution matrix X to a solution of the original + system. */ + if ( notran ) { + if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Xmat[i + j*ldx] *= C[i]; + } + } + } else { + if ( rowequ ) { + if (perm) { + double *tmp; + int n = A->nrow; + + if ((tmp = doubleMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for tmp[]"); + for (j = 0; j < nrhs; j++) { + for (i = 0; i < n; i++) + tmp[i] = Xmat[i + j * ldx]; /*dcopy*/ + for (i = 0; i < n; i++) + Xmat[i + j * ldx] = R[i] * tmp[perm[i]]; + } + SUPERLU_FREE(tmp); + } else { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Xmat[i + j*ldx] *= R[i]; + } + } + } + } + } /* end if nrhs > 0 */ + + if ( options->ConditionNumber ) { + /* Set INFO = A->ncol+1 if the matrix is singular to working precision. */ + if ( *rcond < dlamch_("E") && *info == 0) *info = A->ncol + 1; + } + + if (perm) SUPERLU_FREE(perm); + + if ( nofact ) { + ilu_dQuerySpace(L, U, mem_usage); + Destroy_CompCol_Permuted(&AC); + } + if ( A->Stype == SLU_NR ) { + Destroy_SuperMatrix_Store(AA); + SUPERLU_FREE(AA); + } + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsitrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsitrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsitrf.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsitrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,625 @@ + +/*! @file dgsitf.c + * \brief Computes an ILU factorization of a general sparse matrix + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ + +#include "slu_ddefs.h" + +#ifdef DEBUG +int num_drop_L; +#endif + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * DGSITRF computes an ILU factorization of a general sparse m-by-n
+ * matrix A using partial pivoting with row interchanges.
+ * The factorization has the form
+ *     Pr * A = L * U
+ * where Pr is a row permutation matrix, L is lower triangular with unit
+ * diagonal elements (lower trapezoidal if A->nrow > A->ncol), and U is upper
+ * triangular (upper trapezoidal if A->nrow < A->ncol).
+ *
+ * See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the ILU decomposition will be performed.
+ *
+ * A	    (input) SuperMatrix*
+ *	    Original matrix A, permuted by columns, of dimension
+ *	    (A->nrow, A->ncol). The type of A can be:
+ *	    Stype = SLU_NCP; Dtype = SLU_D; Mtype = SLU_GE.
+ *
+ * relax    (input) int
+ *	    To control degree of relaxing supernodes. If the number
+ *	    of nodes (columns) in a subtree of the elimination tree is less
+ *	    than relax, this subtree is considered as one supernode,
+ *	    regardless of the row structures of those columns.
+ *
+ * panel_size (input) int
+ *	    A panel consists of at most panel_size consecutive columns.
+ *
+ * etree    (input) int*, dimension (A->ncol)
+ *	    Elimination tree of A'*A.
+ *	    Note: etree is a vector of parent pointers for a forest whose
+ *	    vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *	    On input, the columns of A should be permuted so that the
+ *	    etree is in a certain postorder.
+ *
+ * work     (input/output) void*, size (lwork) (in bytes)
+ *	    User-supplied work space and space for the output data structures.
+ *	    Not referenced if lwork = 0;
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 *info; no other side effects.
+ *
+ * perm_c   (input) int*, dimension (A->ncol)
+ *	    Column permutation vector, which defines the
+ *	    permutation matrix Pc; perm_c[i] = j means column i of A is
+ *	    in position j in A*Pc.
+ *	    When searching for diagonal, perm_c[*] is applied to the
+ *	    row subscripts of A, so that diagonal threshold pivoting
+ *	    can find the diagonal of A, rather than that of A*Pc.
+ *
+ * perm_r   (input/output) int*, dimension (A->nrow)
+ *	    Row permutation vector which defines the permutation matrix Pr,
+ *	    perm_r[i] = j means row i of A is in position j in Pr*A.
+ *	    If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	       will try to use the input perm_r, unless a certain threshold
+ *	       criterion is violated. In that case, perm_r is overwritten by
+ *	       a new permutation determined by partial pivoting or diagonal
+ *	       threshold pivoting.
+ *	    Otherwise, perm_r is an output argument.
+ *
+ * L	    (output) SuperMatrix*
+ *	    The factor L from the factorization Pr*A=L*U; use compressed row
+ *	    subscripts storage for supernodes, i.e., L has type:
+ *	    Stype = SLU_SC, Dtype = SLU_D, Mtype = SLU_TRLU.
+ *
+ * U	    (output) SuperMatrix*
+ *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
+ *	    storage scheme, i.e., U has types: Stype = SLU_NC,
+ *	    Dtype = SLU_D, Mtype = SLU_TRU.
+ *
+ * stat     (output) SuperLUStat_t*
+ *	    Record the statistics on runtime and floating-point operation count.
+ *	    See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info     (output) int*
+ *	    = 0: successful exit
+ *	    < 0: if info = -i, the i-th argument had an illegal value
+ *	    > 0: if info = i, and i is
+ *	       <= A->ncol: number of zero pivots. They are replaced by small
+ *		  entries according to options->ILU_FillTol.
+ *	       > A->ncol: number of bytes allocated when memory allocation
+ *		  failure occurred, plus A->ncol. If lwork = -1, it is
+ *		  the estimated amount of space needed, plus A->ncol.
+ *
+ * ======================================================================
+ *
+ * Local Working Arrays:
+ * ======================
+ *   m = number of rows in the matrix
+ *   n = number of columns in the matrix
+ *
+ *   marker[0:3*m-1]: marker[i] = j means that node i has been
+ *	reached when working on column j.
+ *	Storage: relative to original row subscripts
+ *	NOTE: There are 4 of them:
+ *	      marker/marker1 are used for panel dfs, see (ilu_)dpanel_dfs.c;
+ *	      marker2 is used for inner factorization, see (ilu_)dcolumn_dfs.c;
+ *	      marker_relax (has its own space) is used for relaxed supernodes.
+ *
+ *   parent[0:m-1]: parent vector used during dfs
+ *	Storage: relative to new row subscripts
+ *
+ *   xplore[0:m-1]: xplore[i] gives the location of the next (dfs)
+ *	unexplored neighbor of i in lsub[*]
+ *
+ *   segrep[0:nseg-1]: contains the list of supernodal representatives
+ *	in topological order of the dfs. A supernode representative is the
+ *	last column of a supernode.
+ *	The maximum size of segrep[] is n.
+ *
+ *   repfnz[0:W*m-1]: for a nonzero segment U[*,j] that ends at a
+ *	supernodal representative r, repfnz[r] is the location of the first
+ *	nonzero in this segment.  It is also used during the dfs: repfnz[r]>0
+ *	indicates the supernode r has been explored.
+ *	NOTE: There are W of them, each used for one column of a panel.
+ *
+ *   panel_lsub[0:W*m-1]: temporary for the nonzero row indices below
+ *	the panel diagonal. These are filled in during dpanel_dfs(), and are
+ *	used later in the inner LU factorization within the panel.
+ *	panel_lsub[]/dense[] pair forms the SPA data structure.
+ *	NOTE: There are W of them.
+ *
+ *   dense[0:W*m-1]: sparse accumulating (SPA) vector for intermediate values;
+ *		   NOTE: there are W of them.
+ *
+ *   tempv[0:*]: real temporary used for dense numeric kernels;
+ *	The size of this array is defined by NUM_TEMPV() in slu_util.h.
+ *	It is also used by the dropping routine ilu_ddrop_row().
+ * 
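For reference, a sketch of how the drop parameters consumed by this routine are usually configured on the options structure before the call (normally made through dgsisx() above). The field and enum names are the ones documented in dgsisx(); the numeric values are illustrative only, and ilu_set_default_options() is assumed to be provided by the SuperLU headers.

    superlu_options_t options;

    ilu_set_default_options(&options);

    options.ILU_DropRule   = DROP_BASIC | DROP_AREA;  /* documented default        */
    options.ILU_DropTol    = 1e-4;      /* tau: drop threshold for L and U         */
    options.ILU_FillFactor = 10.0;      /* gamma: initial guess of memory growth   */
    options.ILU_FillTol    = 1e-2;      /* perturbation applied to zero pivots     */
    options.ILU_MILU       = SILU;      /* plain ILU, no diagonal compensation     */
    options.ILU_Norm       = INF_NORM;  /* row-average criterion, see dgsisx()     */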
+ */ + +void +dgsitrf(superlu_options_t *options, SuperMatrix *A, int relax, int panel_size, + int *etree, void *work, int lwork, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperLUStat_t *stat, int *info) +{ + /* Local working arrays */ + NCPformat *Astore; + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ + int *iperm_c; /* inverse of perm_c */ + int *swap, *iswap; /* swap is used to store the row permutation + during the factorization. Initially, it is set + to iperm_c (row indeces of Pc*A*Pc'). + iswap is the inverse of swap. After the + factorization, it is equal to perm_r. */ + int *iwork; + double *dwork; + int *segrep, *repfnz, *parent, *xplore; + int *panel_lsub; /* dense[]/panel_lsub[] pair forms a w-wide SPA */ + int *marker, *marker_relax; + double *dense, *tempv; + int *relax_end, *relax_fsupc; + double *a; + int *asub; + int *xa_begin, *xa_end; + int *xsup, *supno; + int *xlsub, *xlusup, *xusub; + int nzlumax; + double *amax; + double drop_sum; + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + int *iwork2; /* used by the second dropping rule */ + + /* Local scalars */ + fact_t fact = options->Fact; + double diag_pivot_thresh = options->DiagPivotThresh; + double drop_tol = options->ILU_DropTol; /* tau */ + double fill_ini = options->ILU_FillTol; /* tau^hat */ + double gamma = options->ILU_FillFactor; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + double fill_tol; + int pivrow; /* pivotal row number in the original matrix A */ + int nseg1; /* no of segments in U-column above panel row jcol */ + int nseg; /* no of segments in each U-column */ + register int jcol; + register int kcol; /* end column of a relaxed snode */ + register int icol; + register int i, k, jj, new_next, iinfo; + int m, n, min_mn, jsupno, fsupc, nextlu, nextu; + int w_def; /* upper bound on panel width */ + int usepr, iperm_r_allocated = 0; + int nnzL, nnzU; + int *panel_histo = stat->panel_histo; + flops_t *ops = stat->ops; + + int last_drop;/* the last column which the dropping rules applied */ + int quota; + int nnzAj; /* number of nonzeros in A(:,1:j) */ + int nnzLj, nnzUj; + double tol_L = drop_tol, tol_U = drop_tol; + double zero = 0.0; + + /* Executable */ + iinfo = 0; + m = A->nrow; + n = A->ncol; + min_mn = SUPERLU_MIN(m, n); + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + + /* Allocate storage common to the factor routines */ + *info = dLUMemInit(fact, work, lwork, m, n, Astore->nnz, panel_size, + gamma, L, U, &Glu, &iwork, &dwork); + if ( *info ) return; + + xsup = Glu.xsup; + supno = Glu.supno; + xlsub = Glu.xlsub; + xlusup = Glu.xlusup; + xusub = Glu.xusub; + + SetIWork(m, n, panel_size, iwork, &segrep, &parent, &xplore, + &repfnz, &panel_lsub, &marker_relax, &marker); + dSetRWork(m, panel_size, dwork, &dense, &tempv); + + usepr = (fact == SamePattern_SameRowPerm); + if ( usepr ) { + /* Compute the inverse of perm_r */ + iperm_r = (int *) intMalloc(m); + for (k = 0; k < m; ++k) iperm_r[perm_r[k]] = k; + iperm_r_allocated = 1; + } + + iperm_c = (int *) intMalloc(n); + for (k = 0; k < n; ++k) iperm_c[perm_c[k]] = k; + swap = (int *)intMalloc(n); + for (k = 0; k < n; k++) swap[k] = iperm_c[k]; + iswap = (int *)intMalloc(n); + for (k = 0; k < n; k++) iswap[k] = perm_c[k]; + amax = (double *) doubleMalloc(panel_size); + if (drop_rule & DROP_SECONDARY) + iwork2 = (int *)intMalloc(n); + else + iwork2 = 
NULL; + + nnzAj = 0; + nnzLj = 0; + nnzUj = 0; + last_drop = SUPERLU_MAX(min_mn - 2 * sp_ienv(3), (int)(min_mn * 0.95)); + + /* Identify relaxed snodes */ + relax_end = (int *) intMalloc(n); + relax_fsupc = (int *) intMalloc(n); + if ( options->SymmetricMode == YES ) + ilu_heap_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + else + ilu_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + + ifill (perm_r, m, EMPTY); + ifill (marker, m * NO_MARKER, EMPTY); + supno[0] = -1; + xsup[0] = xlsub[0] = xusub[0] = xlusup[0] = 0; + w_def = panel_size; + + /* Mark the rows used by relaxed supernodes */ + ifill (marker_relax, m, EMPTY); + i = mark_relax(m, relax_end, relax_fsupc, xa_begin, xa_end, + asub, marker_relax); +#if ( PRNTlevel >= 1) + printf("%d relaxed supernodes.\n", i); +#endif + + /* + * Work on one "panel" at a time. A panel is one of the following: + * (a) a relaxed supernode at the bottom of the etree, or + * (b) panel_size contiguous columns, defined by the user + */ + for (jcol = 0; jcol < min_mn; ) { + + if ( relax_end[jcol] != EMPTY ) { /* start of a relaxed snode */ + kcol = relax_end[jcol]; /* end of the relaxed snode */ + panel_histo[kcol-jcol+1]++; + + /* Drop small rows in the previous supernode. */ + if (jcol > 0 && jcol < last_drop) { + int first = xsup[supno[jcol - 1]]; + int last = jcol - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / min_mn); + + /* Drop small rows */ + i = ilu_ddrop_row(options, first, last, tol_L, quota, &nnzLj, + &fill_tol, &Glu, tempv, iwork2, 0); + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } + + /* -------------------------------------- + * Factorize the relaxed supernode(jcol:kcol) + * -------------------------------------- */ + /* Determine the union of the row structure of the snode */ + if ( (*info = ilu_dsnode_dfs(jcol, kcol, asub, xa_begin, xa_end, + marker, &Glu)) != 0 ) + return; + + nextu = xusub[jcol]; + nextlu = xlusup[jcol]; + jsupno = supno[jcol]; + fsupc = xsup[jsupno]; + new_next = nextlu + (xlsub[fsupc+1]-xlsub[fsupc])*(kcol-jcol+1); + nzlumax = Glu.nzlumax; + while ( new_next > nzlumax ) { + if ((*info = dLUMemXpand(jcol, nextlu, LUSUP, &nzlumax, &Glu))) + return; + } + + for (icol = jcol; icol <= kcol; icol++) { + xusub[icol+1] = nextu; + + amax[0] = 0.0; + /* Scatter into SPA dense[*] */ + for (k = xa_begin[icol]; k < xa_end[icol]; k++) { + register double tmp = fabs(a[k]); + if (tmp > amax[0]) amax[0] = tmp; + dense[asub[k]] = a[k]; + } + nnzAj += xa_end[icol] - xa_begin[icol]; + if (amax[0] == 0.0) { + amax[0] = fill_ini; +#if ( PRNTlevel >= 1) + printf("Column %d is entirely zero!\n", icol); + fflush(stdout); +#endif + } + + /* Numeric update within the snode */ + dsnode_bmod(icol, jsupno, fsupc, dense, tempv, &Glu, stat); + + if (usepr) pivrow = iperm_r[icol]; + 
fill_tol = pow(fill_ini, 1.0 - (double)icol / (double)min_mn); + if ( (*info = ilu_dpivotL(icol, diag_pivot_thresh, &usepr, + perm_r, iperm_c[icol], swap, iswap, + marker_relax, &pivrow, + amax[0] * fill_tol, milu, zero, + &Glu, stat)) ) { + iinfo++; + marker[pivrow] = kcol; + } + + } + + jcol = kcol + 1; + + } else { /* Work on one panel of panel_size columns */ + + /* Adjust panel_size so that a panel won't overlap with the next + * relaxed snode. + */ + panel_size = w_def; + for (k = jcol + 1; k < SUPERLU_MIN(jcol+panel_size, min_mn); k++) + if ( relax_end[k] != EMPTY ) { + panel_size = k - jcol; + break; + } + if ( k == min_mn ) panel_size = min_mn - jcol; + panel_histo[panel_size]++; + + /* symbolic factor on a panel of columns */ + ilu_dpanel_dfs(m, panel_size, jcol, A, perm_r, &nseg1, + dense, amax, panel_lsub, segrep, repfnz, + marker, parent, xplore, &Glu); + + /* numeric sup-panel updates in topological order */ + dpanel_bmod(m, panel_size, jcol, nseg1, dense, + tempv, segrep, repfnz, &Glu, stat); + + /* Sparse LU within the panel, and below panel diagonal */ + for (jj = jcol; jj < jcol + panel_size; jj++) { + + k = (jj - jcol) * m; /* column index for w-wide arrays */ + + nseg = nseg1; /* Begin after all the panel segments */ + + nnzAj += xa_end[jj] - xa_begin[jj]; + + if ((*info = ilu_dcolumn_dfs(m, jj, perm_r, &nseg, + &panel_lsub[k], segrep, &repfnz[k], + marker, parent, xplore, &Glu))) + return; + + /* Numeric updates */ + if ((*info = dcolumn_bmod(jj, (nseg - nseg1), &dense[k], + tempv, &segrep[nseg1], &repfnz[k], + jcol, &Glu, stat)) != 0) return; + + /* Make a fill-in position if the column is entirely zero */ + if (xlsub[jj + 1] == xlsub[jj]) { + register int i, row; + int nextl; + int nzlmax = Glu.nzlmax; + int *lsub = Glu.lsub; + int *marker2 = marker + 2 * m; + + /* Allocate memory */ + nextl = xlsub[jj] + 1; + if (nextl >= nzlmax) { + int error = dLUMemXpand(jj, nextl, LSUB, &nzlmax, &Glu); + if (error) { *info = error; return; } + lsub = Glu.lsub; + } + xlsub[jj + 1]++; + assert(xlusup[jj]==xlusup[jj+1]); + xlusup[jj + 1]++; + Glu.lusup[xlusup[jj]] = zero; + + /* Choose a row index (pivrow) for fill-in */ + for (i = jj; i < n; i++) + if (marker_relax[swap[i]] <= jj) break; + row = swap[i]; + marker2[row] = jj; + lsub[xlsub[jj]] = row; +#ifdef DEBUG + printf("Fill col %d.\n", jj); + fflush(stdout); +#endif + } + + /* Computer the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * jj / m; + else if (drop_rule & DROP_COLUMN) + quota = gamma * (xa_end[jj] - xa_begin[jj]) * + (jj + 1) / m; + else if (drop_rule & DROP_AREA) + quota = gamma * 0.9 * nnzAj * 0.5 - nnzUj; + else + quota = m; + + /* Copy the U-segments to ucol[*] and drop small entries */ + if ((*info = ilu_dcopy_to_ucol(jj, nseg, segrep, &repfnz[k], + perm_r, &dense[k], drop_rule, + milu, amax[jj - jcol] * tol_U, + quota, &drop_sum, &nnzUj, &Glu, + iwork2)) != 0) + return; + + /* Reset the dropping threshold if required */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * 0.9 * nnzAj * 0.5 < nnzLj) + tol_U = SUPERLU_MIN(1.0, tol_U * 2.0); + else + tol_U = SUPERLU_MAX(drop_tol, tol_U * 0.5); + } + + drop_sum *= MILU_ALPHA; + if (usepr) pivrow = iperm_r[jj]; + fill_tol = pow(fill_ini, 1.0 - (double)jj / (double)min_mn); + if ( (*info = ilu_dpivotL(jj, diag_pivot_thresh, &usepr, perm_r, + iperm_c[jj], swap, iswap, + marker_relax, &pivrow, + amax[jj - jcol] * fill_tol, milu, + drop_sum, &Glu, stat)) ) { + iinfo++; + marker[m + pivrow] = jj; + marker[2 * m + pivrow] = jj; + } + + /* Reset 
repfnz[] for this column */ + resetrep_col (nseg, segrep, &repfnz[k]); + + /* Start a new supernode, drop the previous one */ + if (jj > 0 && supno[jj] > supno[jj - 1] && jj < last_drop) { + int first = xsup[supno[jj - 1]]; + int last = jj - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) + / m) - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / + (double)min_mn); + + /* Drop small rows */ + i = ilu_ddrop_row(options, first, last, tol_L, quota, + &nnzLj, &fill_tol, &Glu, tempv, iwork2, + 1); + + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } /* if start a new supernode */ + + } /* for */ + + jcol += panel_size; /* Move to the next panel */ + + } /* else */ + + } /* for */ + + *info = iinfo; + + if ( m > n ) { + k = 0; + for (i = 0; i < m; ++i) + if ( perm_r[i] == EMPTY ) { + perm_r[i] = n + k; + ++k; + } + } + + ilu_countnz(min_mn, &nnzL, &nnzU, &Glu); + fixupL(min_mn, perm_r, &Glu); + + dLUWorkFree(iwork, dwork, &Glu); /* Free work space and compress storage */ + + if ( fact == SamePattern_SameRowPerm ) { + /* L and U structures may have changed due to possibly different + pivoting, even though the storage is available. + There could also be memory expansions, so the array locations + may have changed, */ + ((SCformat *)L->Store)->nnz = nnzL; + ((SCformat *)L->Store)->nsuper = Glu.supno[n]; + ((SCformat *)L->Store)->nzval = Glu.lusup; + ((SCformat *)L->Store)->nzval_colptr = Glu.xlusup; + ((SCformat *)L->Store)->rowind = Glu.lsub; + ((SCformat *)L->Store)->rowind_colptr = Glu.xlsub; + ((NCformat *)U->Store)->nnz = nnzU; + ((NCformat *)U->Store)->nzval = Glu.ucol; + ((NCformat *)U->Store)->rowind = Glu.usub; + ((NCformat *)U->Store)->colptr = Glu.xusub; + } else { + dCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, + Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, + Glu.xsup, SLU_SC, SLU_D, SLU_TRLU); + dCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, + Glu.usub, Glu.xusub, SLU_NC, SLU_D, SLU_TRU); + } + + ops[FACT] += ops[TRSV] + ops[GEMV]; + + if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); + SUPERLU_FREE (iperm_c); + SUPERLU_FREE (relax_end); + SUPERLU_FREE (swap); + SUPERLU_FREE (iswap); + SUPERLU_FREE (relax_fsupc); + SUPERLU_FREE (amax); + if ( iwork2 ) SUPERLU_FREE (iwork2); + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsrfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsrfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsrfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgsrfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,26 @@ -/* +/*! @file dgsrfs.c + * \brief Improves computed solution to a system of inear equations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routine DGERFS
+ * 
*/ /* * File name: dgsrfs.c * History: Modified from lapack routine DGERFS */ #include -#include "dsp_defs.h" +#include "slu_ddefs.h" -void -dgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, char *equed, double *R, double *C, - SuperMatrix *B, SuperMatrix *X, double *ferr, double *berr, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  *   Purpose   
  *   =======   
  *
@@ -123,7 +124,15 @@
  *
  *    ITMAX is the maximum number of steps of iterative refinement.   
  *
- */  
+ * 
+ */ +void +dgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, char *equed, double *R, double *C, + SuperMatrix *B, SuperMatrix *X, double *ferr, double *berr, + SuperLUStat_t *stat, int *info) +{ + #define ITMAX 5 @@ -224,6 +233,8 @@ nz = A->ncol + 1; eps = dlamch_("Epsilon"); safmin = dlamch_("Safe minimum"); + /* Set SAFE1 essentially to be the underflow threshold times the + number of additions in each row. */ safe1 = nz * safmin; safe2 = safe1 / eps; @@ -274,7 +285,7 @@ where abs(Z) is the componentwise absolute value of the matrix or vector Z. If the i-th component of the denominator is less than SAFE2, then SAFE1 is added to the i-th component of the - numerator and denominator before dividing. */ + numerator before dividing. */ for (i = 0; i < A->nrow; ++i) rwork[i] = fabs( Bptr[i] ); @@ -297,11 +308,15 @@ } s = 0.; for (i = 0; i < A->nrow; ++i) { - if (rwork[i] > safe2) + if (rwork[i] > safe2) { s = SUPERLU_MAX( s, fabs(work[i]) / rwork[i] ); - else - s = SUPERLU_MAX( s, (fabs(work[i]) + safe1) / - (rwork[i] + safe1) ); + } else if ( rwork[i] != 0.0 ) { + /* Adding SAFE1 to the numerator guards against + spuriously zero residuals (underflow). */ + s = SUPERLU_MAX( s, (safe1 + fabs(work[i])) / rwork[i] ); + } + /* If rwork[i] is exactly 0.0, then we know the true + residual also must be exactly 0.0. */ } berr[j] = s; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssv.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssv.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssv.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssv.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,20 +1,19 @@ - -/* +/*! @file dgssv.c + * \brief Solves the system of linear equations A*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "dsp_defs.h" +#include "slu_ddefs.h" -void -dgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, - SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -127,15 +126,21 @@
  *                so the solution could not be computed.
  *             > A->ncol: number of bytes allocated when memory allocation
  *                failure occurred, plus A->ncol.
- *   
+ * 
*/ + +void +dgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, + SuperLUStat_t *stat, int *info ) +{ + DNformat *Bstore; SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ SuperMatrix AC; /* Matrix postmultiplied by Pc */ int lwork = 0, *etree, i; /* Set default values for some parameters */ - double drop_tol = 0.; int panel_size; /* panel size */ int relax; /* no of columns in a relaxed snodes */ int permc_spec; @@ -201,8 +206,8 @@ relax, panel_size, sp_ienv(3), sp_ienv(4));*/ t = SuperLU_timer_(); /* Compute the LU factorization of A. */ - dgstrf(options, &AC, drop_tol, relax, panel_size, - etree, NULL, lwork, perm_c, perm_r, L, U, stat, info); + dgstrf(options, &AC, relax, panel_size, etree, + NULL, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t; t = SuperLU_timer_(); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssvx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssvx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssvx.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgssvx.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,22 +1,19 @@ -/* +/*! @file dgssvx.c + * \brief Solves the system of linear equations A*X=B or A'*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "dsp_defs.h" +#include "slu_ddefs.h" -void -dgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - int *etree, char *equed, double *R, double *C, - SuperMatrix *L, SuperMatrix *U, void *work, int lwork, - SuperMatrix *B, SuperMatrix *X, double *recip_pivot_growth, - double *rcond, double *ferr, double *berr, - mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -314,7 +311,7 @@
  *
  * stat   (output) SuperLUStat_t*
  *        Record the statistics on runtime and floating-point operation count.
- *        See util.h for the definition of 'SuperLUStat_t'.
+ *        See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info    (output) int*
  *         = 0: successful exit   
@@ -332,9 +329,19 @@
  *                    accurate than the value of RCOND would suggest.   
  *              > A->ncol+1: number of bytes allocated when memory allocation
  *                    failure occurred, plus A->ncol.
- *
+ * 
*/ +void +dgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, double *R, double *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, double *recip_pivot_growth, + double *rcond, double *ferr, double *berr, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) +{ + + DNformat *Bstore, *Xstore; double *Bmat, *Xmat; int ldb, ldx, nrhs; @@ -346,13 +353,12 @@ int i, j, info1; double amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; int relax, panel_size; - double diag_pivot_thresh, drop_tol; + double diag_pivot_thresh; double t0; /* temporary time */ double *utime; /* External functions */ extern double dlangs(char *, SuperMatrix *); - extern double dlamch_(char *); Bstore = B->Store; Xstore = X->Store; @@ -443,7 +449,6 @@ panel_size = sp_ienv(1); relax = sp_ienv(2); diag_pivot_thresh = options->DiagPivotThresh; - drop_tol = 0.0; utime = stat->utime; @@ -523,8 +528,8 @@ /* Compute the LU factorization of A*Pc. */ t0 = SuperLU_timer_(); - dgstrf(options, &AC, drop_tol, relax, panel_size, - etree, work, lwork, perm_c, perm_r, L, U, stat, info); + dgstrf(options, &AC, relax, panel_size, etree, + work, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t0; if ( lwork == -1 ) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrf.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,33 +1,32 @@ -/* +/*! @file dgstrf.c + * \brief Computes an LU factorization of a general sparse matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
+ * 
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" -void -dgstrf (superlu_options_t *options, SuperMatrix *A, double drop_tol, - int relax, int panel_size, int *etree, void *work, int lwork, - int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, - SuperLUStat_t *stat, int *info) -{ -/* +#include "slu_ddefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -53,11 +52,6 @@
  *          (A->nrow, A->ncol). The type of A can be:
  *          Stype = SLU_NCP; Dtype = SLU_D; Mtype = SLU_GE.
  *
- * drop_tol (input) double (NOT IMPLEMENTED)
- *	    Drop tolerance parameter. At step j of the Gaussian elimination,
- *          if abs(A_ij)/(max_i abs(A_ij)) < drop_tol, drop entry A_ij.
- *          0 <= drop_tol <= 1. The default value of drop_tol is 0.
- *
  * relax    (input) int
  *          To control degree of relaxing supernodes. If the number
  *          of nodes (columns) in a subtree of the elimination tree is less
@@ -117,7 +111,7 @@
  *
  * stat     (output) SuperLUStat_t*
  *          Record the statistics on runtime and floating-point operation count.
- *          See util.h for the definition of 'SuperLUStat_t'.
+ *          See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info     (output) int*
  *          = 0: successful exit
@@ -177,13 +171,20 @@
  *	    	   NOTE: there are W of them.
  *
  *   tempv[0:*]: real temporary used for dense numeric kernels;
- *	The size of this array is defined by NUM_TEMPV() in dsp_defs.h.
- *
+ *	The size of this array is defined by NUM_TEMPV() in slu_ddefs.h.
+ * 
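With drop_tol removed by this hunk, existing callers of dgstrf() have to be updated to the shorter argument list. Below is a minimal sketch of a call against the new signature, modelled on the dgssvx() call site earlier in this diff; the wrapper name is illustrative, and AC, etree and the permutation vectors are assumed to have been prepared by the surrounding driver.

    #include "slu_ddefs.h"

    /* Sketch: factor a column-permuted matrix AC with the post-patch
     * dgstrf() signature (no drop_tol).  AC, etree, perm_c and perm_r are
     * assumed to have been set up by the caller, as dgssvx() does. */
    static void factor_sketch(superlu_options_t *options, SuperMatrix *AC,
                              int *etree, int *perm_c, int *perm_r,
                              SuperMatrix *L, SuperMatrix *U,
                              SuperLUStat_t *stat, int *info)
    {
        int panel_size = sp_ienv(1);   /* panel size, as queried in dgssvx()  */
        int relax      = sp_ienv(2);   /* supernode relaxation parameter      */

        dgstrf(options, AC, relax, panel_size, etree,
               NULL, 0,                /* lwork = 0: SuperLU mallocs workspace */
               perm_c, perm_r, L, U, stat, info);
    }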
*/ + +void +dgstrf (superlu_options_t *options, SuperMatrix *A, + int relax, int panel_size, int *etree, void *work, int lwork, + int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, + SuperLUStat_t *stat, int *info) +{ /* Local working arrays */ NCPformat *Astore; - int *iperm_r; /* inverse of perm_r; - used when options->Fact == SamePattern_SameRowPerm */ + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ int *iperm_c; /* inverse of perm_c */ int *iwork; double *dwork; @@ -199,7 +200,8 @@ int *xsup, *supno; int *xlsub, *xlusup, *xusub; int nzlumax; - static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + double fill_ratio = sp_ienv(6); /* estimated fill ratio */ + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ /* Local scalars */ fact_t fact = options->Fact; @@ -230,7 +232,7 @@ /* Allocate storage common to the factor routines */ *info = dLUMemInit(fact, work, lwork, m, n, Astore->nnz, - panel_size, L, U, &Glu, &iwork, &dwork); + panel_size, fill_ratio, L, U, &Glu, &iwork, &dwork); if ( *info ) return; xsup = Glu.xsup; @@ -417,7 +419,7 @@ ((NCformat *)U->Store)->rowind = Glu.usub; ((NCformat *)U->Store)->colptr = Glu.xusub; } else { - dCreate_SuperNode_Matrix(L, A->nrow, A->ncol, nnzL, Glu.lusup, + dCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, Glu.xsup, SLU_SC, SLU_D, SLU_TRLU); dCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, @@ -425,6 +427,7 @@ } ops[FACT] += ops[TRSV] + ops[GEMV]; + stat->expansions = --(Glu.num_expansions); if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); SUPERLU_FREE (iperm_c); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,27 @@ -/* +/*! @file dgstrs.c + * \brief Solves a system using LU factorization + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" +#include "slu_ddefs.h" /* @@ -29,13 +31,9 @@ void dlsolve(int, int, double*, double*); void dmatvec(int, int, int, double*, double*, double*); - -void -dgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, SuperMatrix *B, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -85,8 +83,15 @@
  * info    (output) int*
  * 	   = 0: successful exit
  *	   < 0: if info = -i, the i-th argument had an illegal value
- *
+ * 
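dgstrs() is the solve step that consumes the factors produced by dgstrf(). A minimal sketch of the call, using the signature shown in this hunk; B is assumed to be an SLU_DN right-hand-side matrix and is overwritten with the solution.

    #include "slu_ddefs.h"

    /* Sketch: solve A*X = B using factors L and U from dgstrf().
     * B is overwritten with the solution in place. */
    static void solve_sketch(SuperMatrix *L, SuperMatrix *U,
                             int *perm_c, int *perm_r, SuperMatrix *B,
                             SuperLUStat_t *stat, int *info)
    {
        dgstrs(NOTRANS, L, U, perm_c, perm_r, B, stat, info);
        /* Passing TRANS instead exercises the branch this hunk re-labels
         * as "Solve A'*X=B or CONJ(A)*X=B". */
    }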
*/ + +void +dgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, SuperMatrix *B, + SuperLUStat_t *stat, int *info) +{ + #ifdef _CRAY _fcd ftcs1, ftcs2, ftcs3, ftcs4; #endif @@ -288,7 +293,7 @@ stat->ops[SOLVE] = solve_ops; - } else { /* Solve A'*X=B */ + } else { /* Solve A'*X=B or CONJ(A)*X=B */ /* Permute right hand sides to form Pc'*B. */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; @@ -297,7 +302,6 @@ } stat->ops[SOLVE] = 0; - for (k = 0; k < nrhs; ++k) { /* Multiply by inv(U'). */ @@ -307,7 +311,6 @@ sp_dtrsv("L", "T", "U", L, U, &Bmat[k*ldb], stat, info); } - /* Compute the final solution X := Pr'*X (=inv(Pr)*X) */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,27 @@ - - -/* +/*! @file dgstrsL.c + * \brief Performs the L-solve using the LU factorization computed by DGSTRF + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * September 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" -#include "util.h" +#include "slu_ddefs.h" +#include "slu_util.h" /* @@ -31,15 +31,13 @@ void dlsolve(int, int, double*, double*); void dmatvec(int, int, int, double*, double*, double*); - -void -dgstrsL(char *trans, SuperMatrix *L, int *perm_r, SuperMatrix *B, int *info) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
- * DGSTRSL only performs the L-solve using the LU factorization computed
+ * dgstrsL only performs the L-solve using the LU factorization computed
  * by DGSTRF.
  *
  * See supermatrix.h for the definition of 'SuperMatrix' structure.
@@ -75,8 +73,11 @@
  * info    (output) int*
  * 	   = 0: successful exit
  *	   < 0: if info = -i, the i-th argument had an illegal value
- *
+ * 
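dgstrsL() covers only the forward (L) half of the solve; the new dgstrsU.c added later in this diff supplies the backward (U) half. A short sketch of invoking the two stages back to back, using the signatures shown in these files; the "N" no-transpose flag for dgstrsL() is an assumption, and no claim is made that this reproduces every permutation detail of the full dgstrs().

    #include "slu_ddefs.h"

    /* Prototypes as they appear in this diff. */
    extern void dgstrsL(char *trans, SuperMatrix *L, int *perm_r,
                        SuperMatrix *B, int *info);
    extern void dgstrsU(trans_t trans, SuperMatrix *L, SuperMatrix *U,
                        int *perm_c, int *perm_r, SuperMatrix *B,
                        SuperLUStat_t *stat, int *info);

    /* Sketch: run the L stage and the U stage as separate calls.
     * B is overwritten in place at each step. */
    static void split_solve_sketch(SuperMatrix *L, SuperMatrix *U,
                                   int *perm_c, int *perm_r, SuperMatrix *B,
                                   SuperLUStat_t *stat, int *info)
    {
        dgstrsL("N", L, perm_r, B, info);           /* forward solve with L    */
        if (*info == 0)
            dgstrsU(NOTRANS, L, U, perm_c, perm_r,  /* back solve with U, then */
                    B, stat, info);                 /* apply the column perm   */
    }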
*/ +void +dgstrsL(char *trans, SuperMatrix *L, int *perm_r, SuperMatrix *B, int *info) +{ #ifdef _CRAY _fcd ftcs1, ftcs2, ftcs3, ftcs4; #endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsU.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsU.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsU.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dgstrsU.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,224 @@ +/*! @file dgstrsU.c + * \brief Performs the U-solve using the LU factorization computed by DGSTRF + * + *
+ * -- SuperLU routine (version 3.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * October 15, 2003
+ * 
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
+ */ + + +#include "slu_ddefs.h" + + +/* + * Function prototypes + */ +void dusolve(int, int, double*, double*); +void dlsolve(int, int, double*, double*); +void dmatvec(int, int, int, double*, double*, double*); + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * dgstrsU only performs the U-solve using the LU factorization computed
+ * by DGSTRF.
+ *
+ * See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * trans   (input) trans_t
+ *          Specifies the form of the system of equations:
+ *          = NOTRANS: A * X = B  (No transpose)
+ *          = TRANS:   A'* X = B  (Transpose)
+ *          = CONJ:    A**H * X = B  (Conjugate transpose)
+ *
+ * L       (input) SuperMatrix*
+ *         The factor L from the factorization Pr*A*Pc=L*U as computed by
+ *         dgstrf(). Use compressed row subscripts storage for supernodes,
+ *         i.e., L has types: Stype = SLU_SC, Dtype = SLU_D, Mtype = SLU_TRLU.
+ *
+ * U       (input) SuperMatrix*
+ *         The factor U from the factorization Pr*A*Pc=L*U as computed by
+ *         dgstrf(). Use column-wise storage scheme, i.e., U has types:
+ *         Stype = SLU_NC, Dtype = SLU_D, Mtype = SLU_TRU.
+ *
+ * perm_c  (input) int*, dimension (L->ncol)
+ *	   Column permutation vector, which defines the 
+ *         permutation matrix Pc; perm_c[i] = j means column i of A is 
+ *         in position j in A*Pc.
+ *
+ * perm_r  (input) int*, dimension (L->nrow)
+ *         Row permutation vector, which defines the permutation matrix Pr; 
+ *         perm_r[i] = j means row i of A is in position j in Pr*A.
+ *
+ * B       (input/output) SuperMatrix*
+ *         B has types: Stype = SLU_DN, Dtype = SLU_D, Mtype = SLU_GE.
+ *         On entry, the right hand side matrix.
+ *         On exit, the solution matrix if info = 0;
+ *
+ * stat     (output) SuperLUStat_t*
+ *          Record the statistics on runtime and floating-point operation count.
+ *          See util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info    (output) int*
+ * 	   = 0: successful exit
+ *	   < 0: if info = -i, the i-th argument had an illegal value
+ * 
+ */ +void +dgstrsU(trans_t trans, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, SuperMatrix *B, + SuperLUStat_t *stat, int *info) +{ +#ifdef _CRAY + _fcd ftcs1, ftcs2, ftcs3, ftcs4; +#endif + int incx = 1, incy = 1; +#ifdef USE_VENDOR_BLAS + double alpha = 1.0, beta = 1.0; + double *work_col; +#endif + DNformat *Bstore; + double *Bmat; + SCformat *Lstore; + NCformat *Ustore; + double *Lval, *Uval; + int fsupc, nrow, nsupr, nsupc, luptr, istart, irow; + int i, j, k, iptr, jcol, n, ldb, nrhs; + double *rhs_work, *soln; + flops_t solve_ops; + void dprint_soln(); + + /* Test input parameters ... */ + *info = 0; + Bstore = B->Store; + ldb = Bstore->lda; + nrhs = B->ncol; + if ( trans != NOTRANS && trans != TRANS && trans != CONJ ) *info = -1; + else if ( L->nrow != L->ncol || L->nrow < 0 || + L->Stype != SLU_SC || L->Dtype != SLU_D || L->Mtype != SLU_TRLU ) + *info = -2; + else if ( U->nrow != U->ncol || U->nrow < 0 || + U->Stype != SLU_NC || U->Dtype != SLU_D || U->Mtype != SLU_TRU ) + *info = -3; + else if ( ldb < SUPERLU_MAX(0, L->nrow) || + B->Stype != SLU_DN || B->Dtype != SLU_D || B->Mtype != SLU_GE ) + *info = -6; + if ( *info ) { + i = -(*info); + xerbla_("dgstrs", &i); + return; + } + + n = L->nrow; + soln = doubleMalloc(n); + if ( !soln ) ABORT("Malloc fails for local soln[]."); + + Bmat = Bstore->nzval; + Lstore = L->Store; + Lval = Lstore->nzval; + Ustore = U->Store; + Uval = Ustore->nzval; + solve_ops = 0; + + if ( trans == NOTRANS ) { + /* + * Back solve Ux=y. + */ + for (k = Lstore->nsuper; k >= 0; k--) { + fsupc = L_FST_SUPC(k); + istart = L_SUB_START(fsupc); + nsupr = L_SUB_START(fsupc+1) - istart; + nsupc = L_FST_SUPC(k+1) - fsupc; + luptr = L_NZ_START(fsupc); + + solve_ops += nsupc * (nsupc + 1) * nrhs; + + if ( nsupc == 1 ) { + rhs_work = &Bmat[0]; + for (j = 0; j < nrhs; j++) { + rhs_work[fsupc] /= Lval[luptr]; + rhs_work += ldb; + } + } else { +#ifdef USE_VENDOR_BLAS +#ifdef _CRAY + ftcs1 = _cptofcd("L", strlen("L")); + ftcs2 = _cptofcd("U", strlen("U")); + ftcs3 = _cptofcd("N", strlen("N")); + STRSM( ftcs1, ftcs2, ftcs3, ftcs3, &nsupc, &nrhs, &alpha, + &Lval[luptr], &nsupr, &Bmat[fsupc], &ldb); +#else + dtrsm_("L", "U", "N", "N", &nsupc, &nrhs, &alpha, + &Lval[luptr], &nsupr, &Bmat[fsupc], &ldb); +#endif +#else + for (j = 0; j < nrhs; j++) + dusolve ( nsupr, nsupc, &Lval[luptr], &Bmat[fsupc+j*ldb] ); +#endif + } + + for (j = 0; j < nrhs; ++j) { + rhs_work = &Bmat[j*ldb]; + for (jcol = fsupc; jcol < fsupc + nsupc; jcol++) { + solve_ops += 2*(U_NZ_START(jcol+1) - U_NZ_START(jcol)); + for (i = U_NZ_START(jcol); i < U_NZ_START(jcol+1); i++ ){ + irow = U_SUB(i); + rhs_work[irow] -= rhs_work[jcol] * Uval[i]; + } + } + } + + } /* for U-solve */ + +#ifdef DEBUG + printf("After U-solve: x=\n"); + dprint_soln(n, nrhs, Bmat); +#endif + + /* Compute the final solution X := Pc*X. */ + for (i = 0; i < nrhs; i++) { + rhs_work = &Bmat[i*ldb]; + for (k = 0; k < n; k++) soln[k] = rhs_work[perm_c[k]]; + for (k = 0; k < n; k++) rhs_work[k] = soln[k]; + } + + stat->ops[SOLVE] = solve_ops; + + } else { /* Solve U'x = b */ + /* Permute right hand sides to form Pc'*B. */ + for (i = 0; i < nrhs; i++) { + rhs_work = &Bmat[i*ldb]; + for (k = 0; k < n; k++) soln[perm_c[k]] = rhs_work[k]; + for (k = 0; k < n; k++) rhs_work[k] = soln[k]; + } + + for (k = 0; k < nrhs; ++k) { + /* Multiply by inv(U'). 
*/ + sp_dtrsv("U", "T", "N", L, U, &Bmat[k*ldb], stat, info); + } + + } + + SUPERLU_FREE(soln); +} + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlacon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlacon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlacon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlacon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,66 +1,73 @@ - -/* +/*! @file dlacon.c + * \brief Estimates the 1-norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ #include -#include "Cnames.h" +#include "slu_Cnames.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   DLACON estimates the 1-norm of a square matrix A.   
+ *   Reverse communication is used for evaluating matrix-vector products. 
+ * 
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   N      (input) INT
+ *          The order of the matrix.  N >= 1.   
+ *
+ *   V      (workspace) DOUBLE PRECISION array, dimension (N)   
+ *          On the final return, V = A*W,  where  EST = norm(V)/norm(W)   
+ *          (W is not returned).   
+ *
+ *   X      (input/output) DOUBLE PRECISION array, dimension (N)   
+ *          On an intermediate return, X should be overwritten by   
+ *                A * X,   if KASE=1,   
+ *                A' * X,  if KASE=2,
+ *         and DLACON must be re-called with all the other parameters   
+ *          unchanged.   
+ *
+ *   ISGN   (workspace) INT array, dimension (N)
+ *
+ *   EST    (output) DOUBLE PRECISION   
+ *          An estimate (a lower bound) for norm(A).   
+ *
+ *   KASE   (input/output) INT
+ *          On the initial call to DLACON, KASE should be 0.   
+ *          On an intermediate return, KASE will be 1 or 2, indicating   
+ *          whether X should be overwritten by A * X  or A' * X.   
+ *          On the final return from DLACON, KASE will again be 0.   
+ *
+ *   Further Details   
+ *   ======= =======   
+ *
+ *   Contributed by Nick Higham, University of Manchester.   
+ *   Originally named CONEST, dated March 16, 1988.   
+ *
+ *   Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of 
+ *   a real or complex matrix, with applications to condition estimation", 
+ *   ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988.   
+ *   ===================================================================== 
+ * 
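Reverse communication means dlacon_() never sees A directly; the caller supplies the matrix-vector products in a loop around it. A minimal sketch of that driver loop follows; the callback parameters are illustrative stand-ins for whatever operator products the caller actually has (SuperLU's condition estimator uses triangular solves at this point).

    #include <stdlib.h>

    extern int dlacon_(int *n, double *v, double *x, int *isgn,
                       double *est, int *kase);     /* signature as above */

    /* Sketch: estimate norm1(A) via reverse communication.  apply_A and
     * apply_At are caller-supplied routines that overwrite x with A*x and
     * A'*x respectively. */
    static double one_norm_estimate_sketch(int n,
                                           void (*apply_A)(int, double *),
                                           void (*apply_At)(int, double *))
    {
        double *v    = (double *) malloc(n * sizeof(double));
        double *x    = (double *) malloc(n * sizeof(double));
        int    *isgn = (int *)    malloc(n * sizeof(int));
        double est   = 0.0;
        int    kase  = 0;               /* must be 0 on the initial call */

        do {
            dlacon_(&n, v, x, isgn, &est, &kase);
            if      (kase == 1) apply_A(n, x);    /* overwrite x with A  * x */
            else if (kase == 2) apply_At(n, x);   /* overwrite x with A' * x */
        } while (kase != 0);            /* kase == 0: est is the final bound */

        free(v); free(x); free(isgn);
        return est;
    }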
+ */ int dlacon_(int *n, double *v, double *x, int *isgn, double *est, int *kase) { -/* - Purpose - ======= - - DLACON estimates the 1-norm of a square matrix A. - Reverse communication is used for evaluating matrix-vector products. - - - Arguments - ========= - - N (input) INT - The order of the matrix. N >= 1. - - V (workspace) DOUBLE PRECISION array, dimension (N) - On the final return, V = A*W, where EST = norm(V)/norm(W) - (W is not returned). - - X (input/output) DOUBLE PRECISION array, dimension (N) - On an intermediate return, X should be overwritten by - A * X, if KASE=1, - A' * X, if KASE=2, - and DLACON must be re-called with all the other parameters - unchanged. - - ISGN (workspace) INT array, dimension (N) - - EST (output) DOUBLE PRECISION - An estimate (a lower bound) for norm(A). - - KASE (input/output) INT - On the initial call to DLACON, KASE should be 0. - On an intermediate return, KASE will be 1 or 2, indicating - whether X should be overwritten by A * X or A' * X. - On the final return from DLACON, KASE will again be 0. - - Further Details - ======= ======= - - Contributed by Nick Higham, University of Manchester. - Originally named CONEST, dated March 16, 1988. - - Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of - a real or complex matrix, with applications to condition estimation", - ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988. - ===================================================================== -*/ + /* Table of constant values */ int c__1 = 1; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlamch.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlamch.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlamch.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlamch.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,17 +1,26 @@ +/*! @file dlamch.c + * \brief Determines double precision machine parameters + * + *
+ *       -- LAPACK auxiliary routine (version 2.0) --   
+ *       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
+ *       Courant Institute, Argonne National Lab, and Rice University   
+ *       October 31, 1992   
+ * 
+ */ #include +#include "slu_Cnames.h" + #define TRUE_ (1) #define FALSE_ (0) #define abs(x) ((x) >= 0 ? (x) : -(x)) #define min(a,b) ((a) <= (b) ? (a) : (b)) #define max(a,b) ((a) >= (b) ? (a) : (b)) -double dlamch_(char *cmach) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 +/*! \brief + +
     Purpose   
     =======   
 
@@ -47,7 +56,11 @@
             rmax  = overflow threshold  - (base**emax)*(1-eps)   
 
    ===================================================================== 
+
*/ +double dlamch_(char *cmach) +{ + static int first = TRUE_; @@ -125,18 +138,11 @@ /* End of DLAMCH */ } /* dlamch_ */ - - -/* Subroutine */ int dlamc1_(int *beta, int *t, int *rnd, int - *ieee1) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - - Purpose +/* Subroutine */ +/*! \brief + +
+ Purpose   
     =======   
 
     DLAMC1 determines the machine parameters given by BETA, T, RND, and   
@@ -177,7 +183,11 @@
           Comms. of the ACM, 17, 276-277.   
 
    ===================================================================== 
+
*/ +int dlamc1_(int *beta, int *t, int *rnd, int + *ieee1) +{ /* Initialized data */ static int first = TRUE_; /* System generated locals */ @@ -337,16 +347,10 @@ } /* dlamc1_ */ -/* Subroutine */ int dlamc2_(int *beta, int *t, int *rnd, - double *eps, int *emin, double *rmin, int *emax, - double *rmax) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - +/* Subroutine */ +/*! \brief + +
     Purpose   
     =======   
 
@@ -402,7 +406,13 @@
     W. Kahan of the University of California at Berkeley.   
 
    ===================================================================== 
+
*/ +int dlamc2_(int *beta, int *t, int *rnd, + double *eps, int *emin, double *rmin, int *emax, + double *rmax) +{ + /* Table of constant values */ static int c__1 = 1; @@ -638,15 +648,9 @@ } /* dlamc2_ */ - -double dlamc3_(double *a, double *b) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - +/*! \brief + +
     Purpose   
     =======   
 
@@ -663,12 +667,19 @@
             The values A and B.   
 
    ===================================================================== 
+
*/ +double dlamc3_(double *a, double *b) +{ /* >>Start of File<< System generated locals */ - double ret_val; - - ret_val = *a + *b; + volatile double ret_val; + volatile double x; + volatile double y; + + x = *a; + y = *b; + ret_val = x + y; return ret_val; @@ -677,14 +688,10 @@ } /* dlamc3_ */ -/* Subroutine */ int dlamc4_(int *emin, double *start, int *base) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - +/* Subroutine */ +/*! \brief +
     Purpose   
     =======   
 
@@ -706,7 +713,11 @@
             The base of the machine.   
 
    ===================================================================== 
+
*/ + +int dlamc4_(int *emin, double *start, int *base) +{ /* System generated locals */ int i__1; double d__1; @@ -765,15 +776,10 @@ } /* dlamc4_ */ -/* Subroutine */ int dlamc5_(int *beta, int *p, int *emin, - int *ieee, int *emax, double *rmax) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - +/* Subroutine */ +/*! \brief + +
     Purpose   
     =======   
 
@@ -815,7 +821,13 @@
        First compute LEXP and UEXP, two powers of 2 that bound   
        abs(EMIN). We then assume that EMAX + abs(EMIN) will sum   
        approximately to the bound that is closest to abs(EMIN).   
-       (EMAX is the exponent of the required number RMAX). */
+       (EMAX is the exponent of the required number RMAX).
+
+*/ +int dlamc5_(int *beta, int *p, int *emin, + int *ieee, int *emax, double *rmax) +{ + /* Table of constant values */ static double c_b5 = 0.; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlangs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlangs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlangs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlangs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,58 +1,65 @@ - -/* +/*! @file dlangs.c + * \brief Returns the value of the one norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from lapack routine DLANGE 
+ * 
*/ /* * File name: dlangs.c * History: Modified from lapack routine DLANGE */ #include -#include "dsp_defs.h" -#include "util.h" +#include "slu_ddefs.h" + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ *
+ *   DLANGS returns the value of the one norm, or the Frobenius norm, or 
+ *   the infinity norm, or the element of largest absolute value of a 
+ *   real matrix A.   
+ *
+ *   Description   
+ *   ===========   
+ *
+ *   DLANGE returns the value   
+ *
+ *      DLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm'   
+ *               (   
+ *               ( norm1(A),         NORM = '1', 'O' or 'o'   
+ *               (   
+ *               ( normI(A),         NORM = 'I' or 'i'   
+ *               (   
+ *               ( normF(A),         NORM = 'F', 'f', 'E' or 'e'   
+ *
+ *   where  norm1  denotes the  one norm of a matrix (maximum column sum), 
+ *   normI  denotes the  infinity norm  of a matrix  (maximum row sum) and 
+ *   normF  denotes the  Frobenius norm of a matrix (square root of sum of 
+ *   squares).  Note that  max(abs(A(i,j)))  is not a  matrix norm.   
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   NORM    (input) CHARACTER*1   
+ *           Specifies the value to be returned in DLANGE as described above.   
+ *   A       (input) SuperMatrix*
+ *           The M by N sparse matrix A. 
+ *
+ *  =====================================================================
+ * 
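A small usage sketch of the norm selectors documented above, for an already-assembled compressed-column SuperMatrix; the helper name is illustrative.

    #include <stdio.h>
    #include "slu_ddefs.h"

    /* Sketch: query the norms described above.  The one norm is what the
     * drivers feed into condition estimation. */
    static void norm_report_sketch(SuperMatrix *A)
    {
        double anorm = dlangs("1", A);   /* norm1(A): maximum column sum     */
        double rnorm = dlangs("I", A);   /* normI(A): maximum row sum        */
        double amax  = dlangs("M", A);   /* largest |a_ij| (not a true norm) */

        printf("norm1 = %e  normI = %e  max|a_ij| = %e\n", anorm, rnorm, amax);
    }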
+ */ double dlangs(char *norm, SuperMatrix *A) { -/* - Purpose - ======= - - DLANGS returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - real matrix A. - - Description - =========== - - DLANGE returns the value - - DLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in DLANGE as described above. - A (input) SuperMatrix* - The M by N sparse matrix A. - - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlaqgs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlaqgs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlaqgs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dlaqgs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,79 +1,88 @@ - -/* +/*! @file dlaqgs.c + * \brief Equlibrates a general sprase matrix + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
+ * Modified from LAPACK routine DLAQGE
+ * 
*/ /* * File name: dlaqgs.c * History: Modified from LAPACK routine DLAQGE */ #include -#include "dsp_defs.h" +#include "slu_ddefs.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   DLAQGS equilibrates a general sparse M by N matrix A using the row and   
+ *   scaling factors in the vectors R and C.   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input/output) SuperMatrix*
+ *           On exit, the equilibrated matrix.  See EQUED for the form of 
+ *           the equilibrated matrix. The type of A can be:
+ *	    Stype = NC; Dtype = SLU_D; Mtype = GE.
+ *	    
+ *   R       (input) double*, dimension (A->nrow)
+ *           The row scale factors for A.
+ *	    
+ *   C       (input) double*, dimension (A->ncol)
+ *           The column scale factors for A.
+ *	    
+ *   ROWCND  (input) double
+ *           Ratio of the smallest R(i) to the largest R(i).
+ *	    
+ *   COLCND  (input) double
+ *           Ratio of the smallest C(i) to the largest C(i).
+ *	    
+ *   AMAX    (input) double
+ *           Absolute value of largest matrix entry.
+ *	    
+ *   EQUED   (output) char*
+ *           Specifies the form of equilibration that was done.   
+ *           = 'N':  No equilibration   
+ *           = 'R':  Row equilibration, i.e., A has been premultiplied by  
+ *                   diag(R).   
+ *           = 'C':  Column equilibration, i.e., A has been postmultiplied  
+ *                   by diag(C).   
+ *           = 'B':  Both row and column equilibration, i.e., A has been
+ *                   replaced by diag(R) * A * diag(C).   
+ *
+ *   Internal Parameters   
+ *   ===================   
+ *
+ *   THRESH is a threshold value used to decide if row or column scaling   
+ *   should be done based on the ratio of the row or column scaling   
+ *   factors.  If ROWCND < THRESH, row scaling is done, and if   
+ *   COLCND < THRESH, column scaling is done.   
+ *
+ *   LARGE and SMALL are threshold values used to decide if row scaling   
+ *   should be done based on the absolute size of the largest matrix   
+ *   element.  If AMAX > LARGE or AMAX < SMALL, row scaling is done.   
+ *
+ *   ===================================================================== 
+ * 
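dlaqgs() only applies scale factors; in the drivers they come from dgsequ(), the companion routine in the stock SuperLU sources (its signature below is an assumption, not part of this diff). A condensed sketch of the equilibration step as dgssvx() performs it.

    #include "slu_ddefs.h"

    /* Sketch: compute row/column scale factors with dgsequ() (assumed from
     * the standard SuperLU API) and apply them with dlaqgs().  On return,
     * equed is 'N', 'R', 'C' or 'B' as documented above. */
    static void equilibrate_sketch(SuperMatrix *A, double *r, double *c,
                                   char *equed, int *info)
    {
        double rowcnd, colcnd, amax;

        dgsequ(A, r, c, &rowcnd, &colcnd, &amax, info);
        if (*info == 0) {
            dlaqgs(A, r, c, rowcnd, colcnd, amax, equed);
            /* If equed reports scaling, the right-hand sides and the computed
             * solution must be scaled to match, as dgssvx() does. */
        }
    }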
+ */ void dlaqgs(SuperMatrix *A, double *r, double *c, double rowcnd, double colcnd, double amax, char *equed) { -/* - Purpose - ======= - - DLAQGS equilibrates a general sparse M by N matrix A using the row and - scaling factors in the vectors R and C. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input/output) SuperMatrix* - On exit, the equilibrated matrix. See EQUED for the form of - the equilibrated matrix. The type of A can be: - Stype = NC; Dtype = SLU_D; Mtype = GE. - - R (input) double*, dimension (A->nrow) - The row scale factors for A. - - C (input) double*, dimension (A->ncol) - The column scale factors for A. - - ROWCND (input) double - Ratio of the smallest R(i) to the largest R(i). - - COLCND (input) double - Ratio of the smallest C(i) to the largest C(i). - - AMAX (input) double - Absolute value of largest matrix entry. - - EQUED (output) char* - Specifies the form of equilibration that was done. - = 'N': No equilibration - = 'R': Row equilibration, i.e., A has been premultiplied by - diag(R). - = 'C': Column equilibration, i.e., A has been postmultiplied - by diag(C). - = 'B': Both row and column equilibration, i.e., A has been - replaced by diag(R) * A * diag(C). - - Internal Parameters - =================== - - THRESH is a threshold value used to decide if row or column scaling - should be done based on the ratio of the row or column scaling - factors. If ROWCND < THRESH, row scaling is done, and if - COLCND < THRESH, column scaling is done. - - LARGE and SMALL are threshold values used to decide if row scaling - should be done based on the absolute size of the largest matrix - element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. - ===================================================================== -*/ #define THRESH (0.1) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dldperm.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dldperm.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dldperm.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dldperm.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,165 @@ + +/*! @file + * \brief Finds a row permutation so that the matrix has large entries on the diagonal + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ + +#include "slu_ddefs.h" + +extern void mc64id_(int_t*); +extern void mc64ad_(int_t*, int_t*, int_t*, int_t [], int_t [], double [], + int_t*, int_t [], int_t*, int_t[], int_t*, double [], + int_t [], int_t []); + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *   DLDPERM finds a row permutation so that the matrix has large
+ *   entries on the diagonal.
+ *
+ * Arguments
+ * =========
+ *
+ * job    (input) int
+ *        Control the action. Possible values for JOB are:
+ *        = 1 : Compute a row permutation of the matrix so that the
+ *              permuted matrix has as many entries on its diagonal as
+ *              possible. The values on the diagonal are of arbitrary size.
+ *              HSL subroutine MC21A/AD is used for this.
+ *        = 2 : Compute a row permutation of the matrix so that the smallest 
+ *              value on the diagonal of the permuted matrix is maximized.
+ *        = 3 : Compute a row permutation of the matrix so that the smallest
+ *              value on the diagonal of the permuted matrix is maximized.
+ *              The algorithm differs from the one used for JOB = 2 and may
+ *              have quite a different performance.
+ *        = 4 : Compute a row permutation of the matrix so that the sum
+ *              of the diagonal entries of the permuted matrix is maximized.
+ *        = 5 : Compute a row permutation of the matrix so that the product
+ *              of the diagonal entries of the permuted matrix is maximized
+ *              and vectors to scale the matrix so that the nonzero diagonal 
+ *              entries of the permuted matrix are one in absolute value and 
+ *              all the off-diagonal entries are less than or equal to one in 
+ *              absolute value.
+ *        Restriction: 1 <= JOB <= 5.
+ *
+ * n      (input) int
+ *        The order of the matrix.
+ *
+ * nnz    (input) int
+ *        The number of nonzeros in the matrix.
+ *
+ * adjncy (input) int*, of size nnz
+ *        The adjacency structure of the matrix, which contains the row
+ *        indices of the nonzeros.
+ *
+ * colptr (input) int*, of size n+1
+ *        The pointers to the beginning of each column in ADJNCY.
+ *
+ * nzval  (input) double*, of size nnz
+ *        The nonzero values of the matrix. nzval[k] is the value of
+ *        the entry corresponding to adjncy[k].
+ *        It is not used if job = 1.
+ *
+ * perm   (output) int*, of size n
+ *        The permutation vector. perm[i] = j means row i in the
+ *        original matrix is in row j of the permuted matrix.
+ *
+ * u      (output) double*, of size n
+ *        If job = 5, the natural logarithms of the row scaling factors. 
+ *
+ * v      (output) double*, of size n
+ *        If job = 5, the natural logarithms of the column scaling factors. 
+ *        The scaled matrix B has entries b_ij = a_ij * exp(u_i + v_j).
+ * 
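For job = 5 the routine returns logarithmic scale factors alongside the permutation. A short sketch of applying them to a compressed-column matrix, following the b_ij = a_ij * exp(u_i + v_j) relation stated above; the wrapper and array names are illustrative.

    #include <math.h>
    #include "slu_ddefs.h"

    /* Prototype as it appears in this new file. */
    extern int dldperm(int_t job, int_t n, int_t nnz, int_t colptr[],
                       int_t adjncy[], double nzval[], int_t *perm,
                       double u[], double v[]);

    /* Sketch: job = 5 permutation plus scaling.  colptr/rowind/nzval hold a
     * compressed-column matrix of order n with nnz nonzeros; perm, u and v
     * must each provide n entries.  Returns the MC64 info value. */
    static int scale_for_diagonal_sketch(int_t n, int_t nnz, int_t *colptr,
                                         int_t *rowind, double *nzval,
                                         int_t *perm, double *u, double *v)
    {
        int info = dldperm(5, n, nnz, colptr, rowind, nzval, perm, u, v);

        if (info == 0) {
            int_t j, k;
            for (j = 0; j < n; ++j)
                for (k = colptr[j]; k < colptr[j+1]; ++k)
                    /* b_ij = a_ij * exp(u_i + v_j), with i = rowind[k] */
                    nzval[k] *= exp(u[rowind[k]] + v[j]);
        }
        return info;
    }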
+ */ + +int +dldperm(int_t job, int_t n, int_t nnz, int_t colptr[], int_t adjncy[], + double nzval[], int_t *perm, double u[], double v[]) +{ + int_t i, liw, ldw, num; + int_t *iw, icntl[10], info[10]; + double *dw; + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Enter dldperm()"); +#endif + liw = 5*n; + if ( job == 3 ) liw = 10*n + nnz; + if ( !(iw = intMalloc(liw)) ) ABORT("Malloc fails for iw[]"); + ldw = 3*n + nnz; + if ( !(dw = (double*) SUPERLU_MALLOC(ldw * sizeof(double))) ) + ABORT("Malloc fails for dw[]"); + + /* Increment one to get 1-based indexing. */ + for (i = 0; i <= n; ++i) ++colptr[i]; + for (i = 0; i < nnz; ++i) ++adjncy[i]; +#if ( DEBUGlevel>=2 ) + printf("LDPERM(): n %d, nnz %d\n", n, nnz); + slu_PrintInt10("colptr", n+1, colptr); + slu_PrintInt10("adjncy", nnz, adjncy); +#endif + + /* + * NOTE: + * ===== + * + * MC64AD assumes that column permutation vector is defined as: + * perm(i) = j means column i of permuted A is in column j of original A. + * + * Since a symmetric permutation preserves the diagonal entries. Then + * by the following relation: + * P'(A*P')P = P'A + * we can apply inverse(perm) to rows of A to get large diagonal entries. + * But, since 'perm' defined in MC64AD happens to be the reverse of + * SuperLU's definition of permutation vector, therefore, it is already + * an inverse for our purpose. We will thus use it directly. + * + */ + mc64id_(icntl); +#if 0 + /* Suppress error and warning messages. */ + icntl[0] = -1; + icntl[1] = -1; +#endif + + mc64ad_(&job, &n, &nnz, colptr, adjncy, nzval, &num, perm, + &liw, iw, &ldw, dw, icntl, info); + +#if ( DEBUGlevel>=2 ) + slu_PrintInt10("perm", n, perm); + printf(".. After MC64AD info %d\tsize of matching %d\n", info[0], num); +#endif + if ( info[0] == 1 ) { /* Structurally singular */ + printf(".. The last %d permutations:\n", n-num); + slu_PrintInt10("perm", n-num, &perm[num]); + } + + /* Restore to 0-based indexing. */ + for (i = 0; i <= n; ++i) --colptr[i]; + for (i = 0; i < nnz; ++i) --adjncy[i]; + for (i = 0; i < n; ++i) --perm[i]; + + if ( job == 5 ) + for (i = 0; i < n; ++i) { + u[i] = dw[i]; + v[i] = dw[n+i]; + } + + SUPERLU_FREE(iw); + SUPERLU_FREE(dw); + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Exit dldperm()"); +#endif + + return info[0]; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dmemory.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dmemory.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dmemory.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dmemory.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,54 +1,32 @@ -/* - * -- SuperLU routine (version 3.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 +/*! @file dmemory.c + * \brief Memory details * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
*/ -#include "dsp_defs.h" +#include "slu_ddefs.h" -/* Constants */ -#define NO_MEMTYPE 4 /* 0: lusup; - 1: ucol; - 2: lsub; - 3: usub */ -#define GluIntArray(n) (5 * (n) + 5) /* Internal prototypes */ void *dexpand (int *, MemType,int, int, GlobalLU_t *); -int dLUWorkInit (int, int, int, int **, double **, LU_space_t); +int dLUWorkInit (int, int, int, int **, double **, GlobalLU_t *); void copy_mem_double (int, void *, void *); void dStackCompress (GlobalLU_t *); -void dSetupSpace (void *, int, LU_space_t *); -void *duser_malloc (int, int); -void duser_free (int, int); +void dSetupSpace (void *, int, GlobalLU_t *); +void *duser_malloc (int, int, GlobalLU_t *); +void duser_free (int, int, GlobalLU_t *); -/* External prototypes (in memory.c - prec-indep) */ +/* External prototypes (in memory.c - prec-independent) */ extern void copy_mem_int (int, void *, void *); extern void user_bcopy (char *, char *, int); -/* Headers for 4 types of dynamatically managed memory */ -typedef struct e_node { - int size; /* length of the memory that has been used */ - void *mem; /* pointer to the new malloc'd store */ -} ExpHeader; - -typedef struct { - int size; - int used; - int top1; /* grow upward, relative to &array[0] */ - int top2; /* grow downward */ - void *array; -} LU_stack_t; - -/* Variables local to this file */ -static ExpHeader *expanders = 0; /* Array of pointers to 4 types of memory */ -static LU_stack_t stack; -static int no_expand; /* Macros to manipulate stack */ -#define StackFull(x) ( x + stack.used >= stack.size ) +#define StackFull(x) ( x + Glu->stack.used >= Glu->stack.size ) #define NotDoubleAlign(addr) ( (long int)addr & 7 ) #define DoubleAlign(addr) ( ((long int)addr + 7) & ~7L ) #define TempSpace(m, w) ( (2*w + 4 + NO_MARKER) * m * sizeof(int) + \ @@ -58,66 +36,67 @@ -/* - * Setup the memory model to be used for factorization. +/*! \brief Setup the memory model to be used for factorization. + * * lwork = 0: use system malloc; * lwork > 0: use user-supplied work[] space. 
*/ -void dSetupSpace(void *work, int lwork, LU_space_t *MemModel) +void dSetupSpace(void *work, int lwork, GlobalLU_t *Glu) { if ( lwork == 0 ) { - *MemModel = SYSTEM; /* malloc/free */ + Glu->MemModel = SYSTEM; /* malloc/free */ } else if ( lwork > 0 ) { - *MemModel = USER; /* user provided space */ - stack.used = 0; - stack.top1 = 0; - stack.top2 = (lwork/4)*4; /* must be word addressable */ - stack.size = stack.top2; - stack.array = (void *) work; + Glu->MemModel = USER; /* user provided space */ + Glu->stack.used = 0; + Glu->stack.top1 = 0; + Glu->stack.top2 = (lwork/4)*4; /* must be word addressable */ + Glu->stack.size = Glu->stack.top2; + Glu->stack.array = (void *) work; } } -void *duser_malloc(int bytes, int which_end) +void *duser_malloc(int bytes, int which_end, GlobalLU_t *Glu) { void *buf; if ( StackFull(bytes) ) return (NULL); if ( which_end == HEAD ) { - buf = (char*) stack.array + stack.top1; - stack.top1 += bytes; + buf = (char*) Glu->stack.array + Glu->stack.top1; + Glu->stack.top1 += bytes; } else { - stack.top2 -= bytes; - buf = (char*) stack.array + stack.top2; + Glu->stack.top2 -= bytes; + buf = (char*) Glu->stack.array + Glu->stack.top2; } - stack.used += bytes; + Glu->stack.used += bytes; return buf; } -void duser_free(int bytes, int which_end) +void duser_free(int bytes, int which_end, GlobalLU_t *Glu) { if ( which_end == HEAD ) { - stack.top1 -= bytes; + Glu->stack.top1 -= bytes; } else { - stack.top2 += bytes; + Glu->stack.top2 += bytes; } - stack.used -= bytes; + Glu->stack.used -= bytes; } -/* +/*! \brief + * + *
  * mem_usage consists of the following fields:
  *    - for_lu (float)
  *      The amount of space used in bytes for the L\U data structures.
  *    - total_needed (float)
  *      The amount of space needed in bytes to perform factorization.
- *    - expansions (int)
- *      Number of memory expansions during the LU factorization.
+ * 
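A small sketch of reporting the two fields documented above once L and U have been filled in by a factorization; the helper name is illustrative.

    #include <stdio.h>
    #include "slu_ddefs.h"

    /* Sketch: print the memory summary for computed factors L and U. */
    static void report_lu_memory_sketch(SuperMatrix *L, SuperMatrix *U)
    {
        mem_usage_t mem_usage;

        if (dQuerySpace(L, U, &mem_usage) == 0)
            printf("L\\U storage %.2f MB, factorization workspace %.2f MB\n",
                   mem_usage.for_lu / 1e6, mem_usage.total_needed / 1e6);
    }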
*/ int dQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) { @@ -132,33 +111,75 @@ dword = sizeof(double); /* For LU factors */ - mem_usage->for_lu = (float)( (4*n + 3) * iword + Lstore->nzval_colptr[n] * - dword + Lstore->rowind_colptr[n] * iword ); - mem_usage->for_lu += (float)( (n + 1) * iword + + mem_usage->for_lu = (float)( (4.0*n + 3.0) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0) * iword + Ustore->colptr[n] * (dword + iword) ); /* Working storage to support factorization */ mem_usage->total_needed = mem_usage->for_lu + - (float)( (2 * panel_size + 4 + NO_MARKER) * n * iword + - (panel_size + 1) * n * dword ); - - mem_usage->expansions = --no_expand; + (float)( (2.0 * panel_size + 4.0 + NO_MARKER) * n * iword + + (panel_size + 1.0) * n * dword ); return 0; } /* dQuerySpace */ -/* - * Allocate storage for the data structures common to all factor routines. - * For those unpredictable size, make a guess as FILL * nnz(A). + +/*! \brief + * + *
+ * mem_usage consists of the following fields:
+ *    - for_lu (float)
+ *      The amount of space used in bytes for the L\U data structures.
+ *    - total_needed (float)
+ *      The amount of space needed in bytes to perform factorization.
+ * 
+ */ +int ilu_dQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) +{ + SCformat *Lstore; + NCformat *Ustore; + register int n, panel_size = sp_ienv(1); + register float iword, dword; + + Lstore = L->Store; + Ustore = U->Store; + n = L->ncol; + iword = sizeof(int); + dword = sizeof(double); + + /* For LU factors */ + mem_usage->for_lu = (float)( (4.0f * n + 3.0f) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0f) * iword + + Ustore->colptr[n] * (dword + iword) ); + + /* Working storage to support factorization. + ILU needs 5*n more integers than LU */ + mem_usage->total_needed = mem_usage->for_lu + + (float)( (2.0f * panel_size + 9.0f + NO_MARKER) * n * iword + + (panel_size + 1.0f) * n * dword ); + + return 0; +} /* ilu_dQuerySpace */ + + +/*! \brief Allocate storage for the data structures common to all factor routines. + * + *
+ * For those unpredictable size, estimate as fill_ratio * nnz(A).
  * Return value:
  *     If lwork = -1, return the estimated amount of space required, plus n;
  *     otherwise, return the amount of space actually allocated when
  *     memory allocation failure occurred.
+ * 
*/ int dLUMemInit(fact_t fact, void *work, int lwork, int m, int n, int annz, - int panel_size, SuperMatrix *L, SuperMatrix *U, GlobalLU_t *Glu, - int **iwork, double **dwork) + int panel_size, double fill_ratio, SuperMatrix *L, SuperMatrix *U, + GlobalLU_t *Glu, int **iwork, double **dwork) { int info, iword, dword; SCformat *Lstore; @@ -170,32 +191,33 @@ double *ucol; int *usub, *xusub; int nzlmax, nzumax, nzlumax; - int FILL = sp_ienv(6); - Glu->n = n; - no_expand = 0; iword = sizeof(int); dword = sizeof(double); + Glu->n = n; + Glu->num_expansions = 0; - if ( !expanders ) - expanders = (ExpHeader*)SUPERLU_MALLOC(NO_MEMTYPE * sizeof(ExpHeader)); - if ( !expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); + if ( !Glu->expanders ) + Glu->expanders = (ExpHeader*)SUPERLU_MALLOC( NO_MEMTYPE * + sizeof(ExpHeader) ); + if ( !Glu->expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); if ( fact != SamePattern_SameRowPerm ) { /* Guess for L\U factors */ - nzumax = nzlumax = FILL * annz; - nzlmax = SUPERLU_MAX(1, FILL/4.) * annz; + nzumax = nzlumax = fill_ratio * annz; + nzlmax = SUPERLU_MAX(1, fill_ratio/4.) * annz; if ( lwork == -1 ) { return ( GluIntArray(n) * iword + TempSpace(m, panel_size) + (nzlmax+nzumax)*iword + (nzlumax+nzumax)*dword + n ); } else { - dSetupSpace(work, lwork, &Glu->MemModel); + dSetupSpace(work, lwork, Glu); } -#ifdef DEBUG - printf("dLUMemInit() called: annz %d, MemModel %d\n", - annz, Glu->MemModel); +#if ( PRNTlevel >= 1 ) + printf("dLUMemInit() called: fill_ratio %ld, nzlmax %ld, nzumax %ld\n", + fill_ratio, nzlmax, nzumax); + fflush(stdout); #endif /* Integer pointers for L\U factors */ @@ -206,11 +228,11 @@ xlusup = intMalloc(n+1); xusub = intMalloc(n+1); } else { - xsup = (int *)duser_malloc((n+1) * iword, HEAD); - supno = (int *)duser_malloc((n+1) * iword, HEAD); - xlsub = (int *)duser_malloc((n+1) * iword, HEAD); - xlusup = (int *)duser_malloc((n+1) * iword, HEAD); - xusub = (int *)duser_malloc((n+1) * iword, HEAD); + xsup = (int *)duser_malloc((n+1) * iword, HEAD, Glu); + supno = (int *)duser_malloc((n+1) * iword, HEAD, Glu); + xlsub = (int *)duser_malloc((n+1) * iword, HEAD, Glu); + xlusup = (int *)duser_malloc((n+1) * iword, HEAD, Glu); + xusub = (int *)duser_malloc((n+1) * iword, HEAD, Glu); } lusup = (double *) dexpand( &nzlumax, LUSUP, 0, 0, Glu ); @@ -225,7 +247,8 @@ SUPERLU_FREE(lsub); SUPERLU_FREE(usub); } else { - duser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, HEAD); + duser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, + HEAD, Glu); } nzlumax /= 2; nzumax /= 2; @@ -234,6 +257,11 @@ printf("Not enough memory to perform factorization.\n"); return (dmemory_usage(nzlmax, nzumax, nzlumax, n) + n); } +#if ( PRNTlevel >= 1) + printf("dLUMemInit() reduce size: nzlmax %ld, nzumax %ld\n", + nzlmax, nzumax); + fflush(stdout); +#endif lusup = (double *) dexpand( &nzlumax, LUSUP, 0, 0, Glu ); ucol = (double *) dexpand( &nzumax, UCOL, 0, 0, Glu ); lsub = (int *) dexpand( &nzlmax, LSUB, 0, 0, Glu ); @@ -260,18 +288,18 @@ Glu->MemModel = SYSTEM; } else { Glu->MemModel = USER; - stack.top2 = (lwork/4)*4; /* must be word-addressable */ - stack.size = stack.top2; + Glu->stack.top2 = (lwork/4)*4; /* must be word-addressable */ + Glu->stack.size = Glu->stack.top2; } - lsub = expanders[LSUB].mem = Lstore->rowind; - lusup = expanders[LUSUP].mem = Lstore->nzval; - usub = expanders[USUB].mem = Ustore->rowind; - ucol = expanders[UCOL].mem = Ustore->nzval;; - expanders[LSUB].size = nzlmax; - expanders[LUSUP].size = nzlumax; - expanders[USUB].size = 
nzumax; - expanders[UCOL].size = nzumax; + lsub = Glu->expanders[LSUB].mem = Lstore->rowind; + lusup = Glu->expanders[LUSUP].mem = Lstore->nzval; + usub = Glu->expanders[USUB].mem = Ustore->rowind; + ucol = Glu->expanders[UCOL].mem = Ustore->nzval;; + Glu->expanders[LSUB].size = nzlmax; + Glu->expanders[LUSUP].size = nzlumax; + Glu->expanders[USUB].size = nzumax; + Glu->expanders[UCOL].size = nzumax; } Glu->xsup = xsup; @@ -287,20 +315,20 @@ Glu->nzumax = nzumax; Glu->nzlumax = nzlumax; - info = dLUWorkInit(m, n, panel_size, iwork, dwork, Glu->MemModel); + info = dLUWorkInit(m, n, panel_size, iwork, dwork, Glu); if ( info ) return ( info + dmemory_usage(nzlmax, nzumax, nzlumax, n) + n); - ++no_expand; + ++Glu->num_expansions; return 0; } /* dLUMemInit */ -/* Allocate known working storage. Returns 0 if success, otherwise +/*! \brief Allocate known working storage. Returns 0 if success, otherwise returns the number of bytes allocated so far when failure occurred. */ int dLUWorkInit(int m, int n, int panel_size, int **iworkptr, - double **dworkptr, LU_space_t MemModel) + double **dworkptr, GlobalLU_t *Glu) { int isize, dsize, extra; double *old_ptr; @@ -311,19 +339,19 @@ dsize = (m * panel_size + NUM_TEMPV(m,panel_size,maxsuper,rowblk)) * sizeof(double); - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *iworkptr = (int *) intCalloc(isize/sizeof(int)); else - *iworkptr = (int *) duser_malloc(isize, TAIL); + *iworkptr = (int *) duser_malloc(isize, TAIL, Glu); if ( ! *iworkptr ) { fprintf(stderr, "dLUWorkInit: malloc fails for local iworkptr[]\n"); return (isize + n); } - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *dworkptr = (double *) SUPERLU_MALLOC(dsize); else { - *dworkptr = (double *) duser_malloc(dsize, TAIL); + *dworkptr = (double *) duser_malloc(dsize, TAIL, Glu); if ( NotDoubleAlign(*dworkptr) ) { old_ptr = *dworkptr; *dworkptr = (double*) DoubleAlign(*dworkptr); @@ -332,8 +360,8 @@ #ifdef DEBUG printf("dLUWorkInit: not aligned, extra %d\n", extra); #endif - stack.top2 -= extra; - stack.used += extra; + Glu->stack.top2 -= extra; + Glu->stack.used += extra; } } if ( ! *dworkptr ) { @@ -345,8 +373,7 @@ } -/* - * Set up pointers for real working arrays. +/*! \brief Set up pointers for real working arrays. */ void dSetRWork(int m, int panel_size, double *dworkptr, @@ -362,8 +389,7 @@ dfill (*tempv, NUM_TEMPV(m,panel_size,maxsuper,rowblk), zero); } -/* - * Free the working storage used by factor routines. +/*! \brief Free the working storage used by factor routines. */ void dLUWorkFree(int *iwork, double *dwork, GlobalLU_t *Glu) { @@ -371,18 +397,21 @@ SUPERLU_FREE (iwork); SUPERLU_FREE (dwork); } else { - stack.used -= (stack.size - stack.top2); - stack.top2 = stack.size; + Glu->stack.used -= (Glu->stack.size - Glu->stack.top2); + Glu->stack.top2 = Glu->stack.size; /* dStackCompress(Glu); */ } - SUPERLU_FREE (expanders); - expanders = 0; + SUPERLU_FREE (Glu->expanders); + Glu->expanders = NULL; } -/* Expand the data structures for L and U during the factorization. +/*! \brief Expand the data structures for L and U during the factorization. + * + *
  * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
*/ int dLUMemXpand(int jcol, @@ -446,8 +475,7 @@ for (i = 0; i < howmany; i++) dnew[i] = dold[i]; } -/* - * Expand the existing storage to accommodate more fill-ins. +/*! \brief Expand the existing storage to accommodate more fill-ins. */ void *dexpand ( @@ -463,12 +491,14 @@ float alpha; void *new_mem, *old_mem; int new_len, tries, lword, extra, bytes_to_copy; + ExpHeader *expanders = Glu->expanders; /* Array of 4 types of memory */ alpha = EXPAND; - if ( no_expand == 0 || keep_prev ) /* First time allocate requested */ + if ( Glu->num_expansions == 0 || keep_prev ) { + /* First time allocate requested */ new_len = *prev_len; - else { + } else { new_len = alpha * *prev_len; } @@ -476,9 +506,8 @@ else lword = sizeof(double); if ( Glu->MemModel == SYSTEM ) { - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ - if ( no_expand != 0 ) { + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); + if ( Glu->num_expansions != 0 ) { tries = 0; if ( keep_prev ) { if ( !new_mem ) return (NULL); @@ -487,8 +516,7 @@ if ( ++tries > 10 ) return (NULL); alpha = Reduce(alpha); new_len = alpha * *prev_len; - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); } } if ( type == LSUB || type == USUB ) { @@ -501,8 +529,8 @@ expanders[type].mem = (void *) new_mem; } else { /* MemModel == USER */ - if ( no_expand == 0 ) { - new_mem = duser_malloc(new_len * lword, HEAD); + if ( Glu->num_expansions == 0 ) { + new_mem = duser_malloc(new_len * lword, HEAD, Glu); if ( NotDoubleAlign(new_mem) && (type == LUSUP || type == UCOL) ) { old_mem = new_mem; @@ -511,12 +539,11 @@ #ifdef DEBUG printf("expand(): not aligned, extra %d\n", extra); #endif - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; } expanders[type].mem = (void *) new_mem; - } - else { + } else { tries = 0; extra = (new_len - *prev_len) * lword; if ( keep_prev ) { @@ -532,7 +559,7 @@ if ( type != USUB ) { new_mem = (void*)((char*)expanders[type + 1].mem + extra); - bytes_to_copy = (char*)stack.array + stack.top1 + bytes_to_copy = (char*)Glu->stack.array + Glu->stack.top1 - (char*)expanders[type + 1].mem; user_bcopy(expanders[type+1].mem, new_mem, bytes_to_copy); @@ -548,11 +575,11 @@ Glu->ucol = expanders[UCOL].mem = (void*)((char*)expanders[UCOL].mem + extra); } - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; if ( type == UCOL ) { - stack.top1 += extra; /* Add same amount for USUB */ - stack.used += extra; + Glu->stack.top1 += extra; /* Add same amount for USUB */ + Glu->stack.used += extra; } } /* if ... */ @@ -562,15 +589,14 @@ expanders[type].size = new_len; *prev_len = new_len; - if ( no_expand ) ++no_expand; + if ( Glu->num_expansions ) ++Glu->num_expansions; return (void *) expanders[type].mem; } /* dexpand */ -/* - * Compress the work[] array to remove fragmentation. +/*! \brief Compress the work[] array to remove fragmentation. 
*/ void dStackCompress(GlobalLU_t *Glu) @@ -610,9 +636,9 @@ usub = ito; last = (char*)usub + xusub[ndim] * iword; - fragment = (char*) (((char*)stack.array + stack.top1) - last); - stack.used -= (long int) fragment; - stack.top1 -= (long int) fragment; + fragment = (char*) (((char*)Glu->stack.array + Glu->stack.top1) - last); + Glu->stack.used -= (long int) fragment; + Glu->stack.top1 -= (long int) fragment; Glu->ucol = ucol; Glu->lsub = lsub; @@ -626,8 +652,7 @@ } -/* - * Allocate storage for original matrix A +/*! \brief Allocate storage for original matrix A */ void dallocateA(int n, int nnz, double **a, int **asub, int **xa) @@ -641,7 +666,7 @@ double *doubleMalloc(int n) { double *buf; - buf = (double *) SUPERLU_MALLOC(n * sizeof(double)); + buf = (double *) SUPERLU_MALLOC((size_t)n * sizeof(double)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in doubleMalloc()\n"); } @@ -653,7 +678,7 @@ double *buf; register int i; double zero = 0.0; - buf = (double *) SUPERLU_MALLOC(n * sizeof(double)); + buf = (double *) SUPERLU_MALLOC((size_t)n * sizeof(double)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in doubleCalloc()\n"); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,32 @@ -/* +/*! @file dpanel_bmod.c + * \brief Performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ /* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. + */ #include #include -#include "dsp_defs.h" +#include "slu_ddefs.h" /* * Function prototypes @@ -30,6 +35,25 @@ void dmatvec(int, int, int, double *, double *, double *); extern void dcheck_tempv(); +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *    Performs numeric block updates (sup-panel) in topological order.
+ *    It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ *    Special processing on the supernodal portion of L\U[*,j]
+ *
+ *    Before entering this routine, the original nonzeros in the panel 
+ *    were already copied into the spa[m,w].
+ *
+ *    Updated/Output parameters-
+ *    dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned 
+ *    collectively in the m-by-w vector dense[*]. 
+ *
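For orientation, the heart of a col-col update described above is a sparse scatter into the dense SPA column. A minimal sketch under simplified assumptions (plain arrays rather than the supernodal L storage; all names here are illustrative, not SuperLU's):

/* Illustrative only: subtract ukj * L(:,k) from the dense SPA column.
 * rowind[]/lval[] hold the nnz nonzeros of the sparse source column;
 * dense[] is one of the w working columns mentioned above.            */
static void colcol_update_sketch(int nnz, const int *rowind,
                                 const double *lval, double ukj,
                                 double *dense)
{
    int i;
    for (i = 0; i < nnz; ++i)
        dense[rowind[i]] -= ukj * lval[i];
}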
+ */ + void dpanel_bmod ( const int m, /* in - number of rows in the matrix */ @@ -44,22 +68,7 @@ SuperLUStat_t *stat /* output */ ) { -/* - * Purpose - * ======= - * - * Performs numeric block updates (sup-panel) in topological order. - * It features: col-col, 2cols-col, 3cols-col, and sup-col updates. - * Special processing on the supernodal portion of L\U[*,j] - * - * Before entering this routine, the original nonzeros in the panel - * were already copied into the spa[m,w]. - * - * Updated/Output parameters- - * dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned - * collectively in the m-by-w vector dense[*]. - * - */ + #ifdef USE_VENDOR_BLAS #ifdef _CRAY diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,48 +1,32 @@ - -/* +/*! @file dpanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" -#include "util.h" -void -dpanel_dfs ( - const int m, /* in - number of rows in the matrix */ - const int w, /* in */ - const int jcol, /* in */ - SuperMatrix *A, /* in - original matrix */ - int *perm_r, /* in */ - int *nseg, /* out */ - double *dense, /* out */ - int *panel_lsub, /* out */ - int *segrep, /* out */ - int *repfnz, /* out */ - int *xprune, /* out */ - int *marker, /* out */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* +#include "slu_ddefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -68,8 +52,29 @@
  *   repfnz: SuperA-col --> PA-row
  *   parent: SuperA-col --> SuperA-col
  *   xplore: SuperA-col --> index to L-structure
- *
+ *
*/ + +void +dpanel_dfs ( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + double *dense, /* out */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *xprune, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + NCPformat *Astore; double *a; int *asub; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotgrowth.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotgrowth.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotgrowth.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotgrowth.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,21 +1,20 @@ - -/* +/*! @file dpivotgrowth.c + * \brief Computes the reciprocal pivot growth factor + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ *
*/ #include -#include "dsp_defs.h" -#include "util.h" +#include "slu_ddefs.h" -double -dPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, - SuperMatrix *L, SuperMatrix *U) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -43,8 +42,14 @@
  *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
  *          storage scheme, i.e., U has types: Stype = NC;
  *          Dtype = SLU_D; Mtype = TRU.
- *
+ *
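As a reference point, the reciprocal pivot growth factor this routine reports is conventionally the minimum over columns j of max_i |A(i,j)| / max_i |U(i,j)|. A dense, purely illustrative sketch of that quantity (the real code walks the sparse NC/SC structures and the column permutation):

#include <math.h>

/* Column-major A and U of order n with leading dimension n; sketch only. */
static double pivot_growth_sketch(int n, const double *A, const double *U)
{
    double rpg = HUGE_VAL;
    int i, j;
    for (j = 0; j < n; ++j) {
        double maxa = 0.0, maxu = 0.0;
        for (i = 0; i < n; ++i) {
            if (fabs(A[i + j*n]) > maxa) maxa = fabs(A[i + j*n]);
            if (fabs(U[i + j*n]) > maxu) maxu = fabs(U[i + j*n]);
        }
        if (maxu > 0.0 && maxa / maxu < rpg) rpg = maxa / maxu;
    }
    return rpg;
}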
*/ + +double +dPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, + SuperMatrix *L, SuperMatrix *U) +{ + NCformat *Astore; SCformat *Lstore; NCformat *Ustore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,44 +1,36 @@ -/* +/*! @file dpivotL.c + * \brief Performs numerical pivoting + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include #include -#include "dsp_defs.h" +#include "slu_ddefs.h" #undef DEBUG -int -dpivotL( - const int jcol, /* in */ - const double u, /* in - diagonal pivoting threshold */ - int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ - int *perm_r, /* may be modified */ - int *iperm_r, /* in - inverse of perm_r */ - int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ - int *pivrow, /* out */ - GlobalLU_t *Glu, /* modified - global LU data structures */ - SuperLUStat_t *stat /* output */ - ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *   Performs the numerical pivoting on the current column of L,
@@ -57,8 +49,23 @@
  *
  *   Return value: 0      success;
  *                 i > 0  U(i,i) is exactly zero.
- *
+ *
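The "diagonal pivoting threshold" u mentioned in the prototype works as sketched below on a single dense column (illustrative names only); the SCIPY_SPECIFIC_FIX hunk further on tightens the singularity test around this selection:

#include <math.h>

/* Accept the diagonal candidate whenever |a_diag| >= u * max_i |a_i|,
 * otherwise take the entry of largest magnitude; -1 signals the
 * "exactly zero" case the routine reports by returning jcol+1.        */
static int choose_pivot_sketch(int n, const double *col, int diag, double u)
{
    int i, imax = 0;
    double pivmax = 0.0;
    for (i = 0; i < n; ++i)
        if (fabs(col[i]) > pivmax) { pivmax = fabs(col[i]); imax = i; }
    if (pivmax == 0.0) return -1;
    if (fabs(col[diag]) >= u * pivmax) return diag;
    return imax;
}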
*/ + +int +dpivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int *iperm_r, /* in - inverse of perm_r */ + int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ + int *pivrow, /* out */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + int fsupc; /* first column in the supernode */ int nsupc; /* no of columns in the supernode */ int nsupr; /* no of rows in the supernode */ @@ -100,7 +107,11 @@ Also search for user-specified pivot, and diagonal element. */ if ( *usepr ) *pivrow = iperm_r[jcol]; diagind = iperm_c[jcol]; +#ifdef SCIPY_SPECIFIC_FIX + pivmax = -1.0; +#else pivmax = 0.0; +#endif pivptr = nsupc; diag = EMPTY; old_pivptr = nsupc; @@ -115,9 +126,20 @@ } /* Test for singularity */ +#ifdef SCIPY_SPECIFIC_FIX + if (pivmax < 0.0) { + perm_r[diagind] = jcol; + *usepr = 0; + return (jcol+1); + } +#endif if ( pivmax == 0.0 ) { +#if 1 *pivrow = lsub_ptr[pivptr]; perm_r[*pivrow] = jcol; +#else + perm_r[diagind] = jcol; +#endif *usepr = 0; return (jcol+1); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpruneL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpruneL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpruneL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dpruneL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,38 @@ - -/* +/*! @file dpruneL.c + * \brief Prunes the L-structure + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" -#include "util.h" + +#include "slu_ddefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *   Prunes the L-structure of supernodes whose L-structure
+ *   contains the current pivot row "pivrow"
+ *
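The pruning pass reorders L subscripts so that rows already pivoted are grouped in front of the pruned boundary; a self-contained sketch of the kmin/kmax interchange that the hunk further below comments on (EMPTY_SKETCH standing in for SuperLU's EMPTY = -1 sentinel):

#define EMPTY_SKETCH (-1)

/* Partition lsub[kmin..kmax] so entries whose row is already pivoted
 * (perm_r[row] != EMPTY) come first; illustrative only.               */
static void prune_partition_sketch(int *lsub, int kmin, int kmax,
                                   const int *perm_r)
{
    while (kmin <= kmax) {
        if (perm_r[lsub[kmax]] == EMPTY_SKETCH)      kmax--;
        else if (perm_r[lsub[kmin]] != EMPTY_SKETCH) kmin++;
        else {
            int ktemp = lsub[kmin];        /* unpivoted meets pivoted: swap */
            lsub[kmin++] = lsub[kmax];
            lsub[kmax--] = ktemp;
        }
    }
}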
+ */ void dpruneL( @@ -35,13 +46,7 @@ GlobalLU_t *Glu /* modified - global LU data structures */ ) { -/* - * Purpose - * ======= - * Prunes the L-structure of supernodes whose L-structure - * contains the current pivot row "pivrow" - * - */ + double utemp; int jsupno, irep, irep1, kmin, kmax, krow, movnum; int i, ktemp, minloc, maxloc; @@ -108,8 +113,8 @@ kmax--; else if ( perm_r[lsub[kmin]] != EMPTY ) kmin++; - else { /* kmin below pivrow, and kmax above pivrow: - * interchange the two subscripts + else { /* kmin below pivrow (not yet pivoted), and kmax + * above pivrow: interchange the two subscripts */ ktemp = lsub[kmin]; lsub[kmin] = lsub[kmax]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadhb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadhb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadhb.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadhb.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,18 +1,85 @@ - -/* +/*! @file dreadhb.c + * \brief Read a matrix stored in Harwell-Boeing format + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Purpose
+ * =======
+ * 
+ * Read a DOUBLE PRECISION matrix stored in Harwell-Boeing format 
+ * as described below.
+ * 
+ * Line 1 (A72,A8) 
+ *  	Col. 1 - 72   Title (TITLE) 
+ *	Col. 73 - 80  Key (KEY) 
+ * 
+ * Line 2 (5I14) 
+ * 	Col. 1 - 14   Total number of lines excluding header (TOTCRD) 
+ * 	Col. 15 - 28  Number of lines for pointers (PTRCRD) 
+ * 	Col. 29 - 42  Number of lines for row (or variable) indices (INDCRD) 
+ * 	Col. 43 - 56  Number of lines for numerical values (VALCRD) 
+ *	Col. 57 - 70  Number of lines for right-hand sides (RHSCRD) 
+ *                    (including starting guesses and solution vectors 
+ *		       if present) 
+ *           	      (zero indicates no right-hand side data is present) 
+ *
+ * Line 3 (A3, 11X, 4I14) 
+ *   	Col. 1 - 3    Matrix type (see below) (MXTYPE) 
+ * 	Col. 15 - 28  Number of rows (or variables) (NROW) 
+ * 	Col. 29 - 42  Number of columns (or elements) (NCOL) 
+ *	Col. 43 - 56  Number of row (or variable) indices (NNZERO) 
+ *	              (equal to number of entries for assembled matrices) 
+ * 	Col. 57 - 70  Number of elemental matrix entries (NELTVL) 
+ *	              (zero in the case of assembled matrices) 
+ * Line 4 (2A16, 2A20) 
+ * 	Col. 1 - 16   Format for pointers (PTRFMT) 
+ *	Col. 17 - 32  Format for row (or variable) indices (INDFMT) 
+ *	Col. 33 - 52  Format for numerical values of coefficient matrix (VALFMT) 
+ * 	Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) 
+ *
+ * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present 
+ *    	Col. 1 	      Right-hand side type: 
+ *	         	  F for full storage or M for same format as matrix 
+ *    	Col. 2        G if a starting vector(s) (Guess) is supplied. (RHSTYP) 
+ *    	Col. 3        X if an exact solution vector(s) is supplied. 
+ *	Col. 15 - 28  Number of right-hand sides (NRHS) 
+ *	Col. 29 - 42  Number of row indices (NRHSIX) 
+ *          	      (ignored in case of unassembled matrices) 
+ *
+ * The three character type field on line 3 describes the matrix type. 
+ * The following table lists the permitted values for each of the three 
+ * characters. As an example of the type field, RSA denotes that the matrix 
+ * is real, symmetric, and assembled. 
+ *
+ * First Character: 
+ *	R Real matrix 
+ *	C Complex matrix 
+ *	P Pattern only (no numerical values supplied) 
+ *
+ * Second Character: 
+ *	S Symmetric 
+ *	U Unsymmetric 
+ *	H Hermitian 
+ *	Z Skew symmetric 
+ *	R Rectangular 
+ *
+ * Third Character: 
+ *	A Assembled 
+ *	E Elemental matrices (unassembled) 
+ *
+ *
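A minimal sketch of consuming the first three header cards laid out above (illustrative only; the real dreadhb() also interprets the Fortran formats on Line 4, the optional right-hand-side card, and then the pointer/index/value sections):

#include <stdio.h>
#include <string.h>

/* Read cards 1-3 and return the matrix shape; *rhscrd tells the caller
 * whether a Line 5 card follows.  Field widths follow the layout above. */
static int read_hb_header_sketch(FILE *fp, char mxtype[4],
                                 int *nrow, int *ncol, int *nnz, int *rhscrd)
{
    char line[132];

    if (!fgets(line, sizeof line, fp)) return -1;          /* Line 1: TITLE/KEY  */
    if (!fgets(line, sizeof line, fp)) return -1;          /* Line 2: card counts */
    *rhscrd = 0;
    sscanf(line, "%*14d%*14d%*14d%*14d%14d", rhscrd);      /* keep RHSCRD only   */
    if (!fgets(line, sizeof line, fp)) return -1;          /* Line 3: type, sizes */
    strncpy(mxtype, line, 3);  mxtype[3] = '\0';           /* e.g. "RUA"          */
    return sscanf(line + 14, "%14d%14d%14d", nrow, ncol, nnz) == 3 ? 0 : -1;
}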
*/ #include #include -#include "dsp_defs.h" +#include "slu_ddefs.h" -/* Eat up the rest of the current line */ +/*! \brief Eat up the rest of the current line */ int dDumpLine(FILE *fp) { register int c; @@ -60,7 +127,7 @@ return 0; } -int dReadVector(FILE *fp, int n, int *where, int perline, int persize) +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) { register int i, j, item; char tmp, buf[100]; @@ -108,72 +175,6 @@ dreadhb(int *nrow, int *ncol, int *nonz, double **nzval, int **rowind, int **colptr) { -/* - * Purpose - * ======= - * - * Read a DOUBLE PRECISION matrix stored in Harwell-Boeing format - * as described below. - * - * Line 1 (A72,A8) - * Col. 1 - 72 Title (TITLE) - * Col. 73 - 80 Key (KEY) - * - * Line 2 (5I14) - * Col. 1 - 14 Total number of lines excluding header (TOTCRD) - * Col. 15 - 28 Number of lines for pointers (PTRCRD) - * Col. 29 - 42 Number of lines for row (or variable) indices (INDCRD) - * Col. 43 - 56 Number of lines for numerical values (VALCRD) - * Col. 57 - 70 Number of lines for right-hand sides (RHSCRD) - * (including starting guesses and solution vectors - * if present) - * (zero indicates no right-hand side data is present) - * - * Line 3 (A3, 11X, 4I14) - * Col. 1 - 3 Matrix type (see below) (MXTYPE) - * Col. 15 - 28 Number of rows (or variables) (NROW) - * Col. 29 - 42 Number of columns (or elements) (NCOL) - * Col. 43 - 56 Number of row (or variable) indices (NNZERO) - * (equal to number of entries for assembled matrices) - * Col. 57 - 70 Number of elemental matrix entries (NELTVL) - * (zero in the case of assembled matrices) - * Line 4 (2A16, 2A20) - * Col. 1 - 16 Format for pointers (PTRFMT) - * Col. 17 - 32 Format for row (or variable) indices (INDFMT) - * Col. 33 - 52 Format for numerical values of coefficient matrix (VALFMT) - * Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) - * - * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present - * Col. 1 Right-hand side type: - * F for full storage or M for same format as matrix - * Col. 2 G if a starting vector(s) (Guess) is supplied. (RHSTYP) - * Col. 3 X if an exact solution vector(s) is supplied. - * Col. 15 - 28 Number of right-hand sides (NRHS) - * Col. 29 - 42 Number of row indices (NRHSIX) - * (ignored in case of unassembled matrices) - * - * The three character type field on line 3 describes the matrix type. - * The following table lists the permitted values for each of the three - * characters. As an example of the type field, RSA denotes that the matrix - * is real, symmetric, and assembled. 
- * - * First Character: - * R Real matrix - * C Complex matrix - * P Pattern only (no numerical values supplied) - * - * Second Character: - * S Symmetric - * U Unsymmetric - * H Hermitian - * Z Skew symmetric - * R Rectangular - * - * Third Character: - * A Assembled - * E Elemental matrices (unassembled) - * - */ register int i, numer_lines = 0, rhscrd = 0; int tmp, colnum, colsize, rownum, rowsize, valnum, valsize; @@ -244,8 +245,8 @@ printf("valnum %d, valsize %d\n", valnum, valsize); #endif - dReadVector(fp, *ncol+1, *colptr, colnum, colsize); - dReadVector(fp, *nonz, *rowind, rownum, rowsize); + ReadVector(fp, *ncol+1, *colptr, colnum, colsize); + ReadVector(fp, *nonz, *rowind, rownum, rowsize); if ( numer_lines ) { dReadValues(fp, *nonz, *nzval, valnum, valsize); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadrb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadrb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadrb.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dreadrb.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,237 @@ + +/*! @file dreadrb.c + * \brief Read a matrix stored in Rutherford-Boeing format + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ *
+ * + * Purpose + * ======= + * + * Read a DOUBLE PRECISION matrix stored in Rutherford-Boeing format + * as described below. + * + * Line 1 (A72, A8) + * Col. 1 - 72 Title (TITLE) + * Col. 73 - 80 Matrix name / identifier (MTRXID) + * + * Line 2 (I14, 3(1X, I13)) + * Col. 1 - 14 Total number of lines excluding header (TOTCRD) + * Col. 16 - 28 Number of lines for pointers (PTRCRD) + * Col. 30 - 42 Number of lines for row (or variable) indices (INDCRD) + * Col. 44 - 56 Number of lines for numerical values (VALCRD) + * + * Line 3 (A3, 11X, 4(1X, I13)) + * Col. 1 - 3 Matrix type (see below) (MXTYPE) + * Col. 15 - 28 Compressed Column: Number of rows (NROW) + * Elemental: Largest integer used to index variable (MVAR) + * Col. 30 - 42 Compressed Column: Number of columns (NCOL) + * Elemental: Number of element matrices (NELT) + * Col. 44 - 56 Compressed Column: Number of entries (NNZERO) + * Elemental: Number of variable indeces (NVARIX) + * Col. 58 - 70 Compressed Column: Unused, explicitly zero + * Elemental: Number of elemental matrix entries (NELTVL) + * + * Line 4 (2A16, A20) + * Col. 1 - 16 Fortran format for pointers (PTRFMT) + * Col. 17 - 32 Fortran format for row (or variable) indices (INDFMT) + * Col. 33 - 52 Fortran format for numerical values of coefficient matrix + * (VALFMT) + * (blank in the case of matrix patterns) + * + * The three character type field on line 3 describes the matrix type. + * The following table lists the permitted values for each of the three + * characters. As an example of the type field, RSA denotes that the matrix + * is real, symmetric, and assembled. + * + * First Character: + * R Real matrix + * C Complex matrix + * I integer matrix + * P Pattern only (no numerical values supplied) + * Q Pattern only (numerical values supplied in associated auxiliary value + * file) + * + * Second Character: + * S Symmetric + * U Unsymmetric + * H Hermitian + * Z Skew symmetric + * R Rectangular + * + * Third Character: + * A Compressed column form + * E Elemental form + * + * + */ + +#include "slu_ddefs.h" + + +/*! \brief Eat up the rest of the current line */ +static int dDumpLine(FILE *fp) +{ + register int c; + while ((c = fgetc(fp)) != '\n') ; + return 0; +} + +static int dParseIntFormat(char *buf, int *num, int *size) +{ + char *tmp; + + tmp = buf; + while (*tmp++ != '(') ; + sscanf(tmp, "%d", num); + while (*tmp != 'I' && *tmp != 'i') ++tmp; + ++tmp; + sscanf(tmp, "%d", size); + return 0; +} + +static int dParseFloatFormat(char *buf, int *num, int *size) +{ + char *tmp, *period; + + tmp = buf; + while (*tmp++ != '(') ; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + while (*tmp != 'E' && *tmp != 'e' && *tmp != 'D' && *tmp != 'd' + && *tmp != 'F' && *tmp != 'f') { + /* May find kP before nE/nD/nF, like (1P6F13.6). In this case the + num picked up refers to P, which should be skipped. */ + if (*tmp=='p' || *tmp=='P') { + ++tmp; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + } else { + ++tmp; + } + } + ++tmp; + period = tmp; + while (*period != '.' && *period != ')') ++period ; + *period = '\0'; + *size = atoi(tmp); /*sscanf(tmp, "%2d", size);*/ + + return 0; +} + +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) +{ + register int i, j, item; + char tmp, buf[100]; + + i = 0; + while (i < n) { + fgets(buf, 100, fp); /* read a line at a time */ + for (j=0; j + * -- SuperLU routine (version 4.0) -- + * Lawrence Berkeley National Laboratory. 
+ * June 30, 2009 + * + */ + +#include "slu_ddefs.h" + + +void +dreadtriple(int *m, int *n, int *nonz, + double **nzval, int **rowind, int **colptr) +{ +/* + * Output parameters + * ================= + * (a,asub,xa): asub[*] contains the row subscripts of nonzeros + * in columns of matrix A; a[*] the numerical values; + * row i of A is given by a[k],k=xa[i],...,xa[i+1]-1. + * + */ + int j, k, jsize, nnz, nz; + double *a, *val; + int *asub, *xa, *row, *col; + int zero_base = 0; + + /* Matrix format: + * First line: #rows, #cols, #non-zero + * Triplet in the rest of lines: + * row, col, value + */ + + scanf("%d%d", n, nonz); + *m = *n; + printf("m %d, n %d, nonz %d\n", *m, *n, *nonz); + dallocateA(*n, *nonz, nzval, rowind, colptr); /* Allocate storage */ + a = *nzval; + asub = *rowind; + xa = *colptr; + + val = (double *) SUPERLU_MALLOC(*nonz * sizeof(double)); + row = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + col = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + + for (j = 0; j < *n; ++j) xa[j] = 0; + + /* Read into the triplet array from a file */ + for (nnz = 0, nz = 0; nnz < *nonz; ++nnz) { + scanf("%d%d%lf\n", &row[nz], &col[nz], &val[nz]); + + if ( nnz == 0 ) { /* first nonzero */ + if ( row[0] == 0 || col[0] == 0 ) { + zero_base = 1; + printf("triplet file: row/col indices are zero-based.\n"); + } else + printf("triplet file: row/col indices are one-based.\n"); + } + + if ( !zero_base ) { + /* Change to 0-based indexing. */ + --row[nz]; + --col[nz]; + } + + if (row[nz] < 0 || row[nz] >= *m || col[nz] < 0 || col[nz] >= *n + /*|| val[nz] == 0.*/) { + fprintf(stderr, "nz %d, (%d, %d) = %e out of bound, removed\n", + nz, row[nz], col[nz], val[nz]); + exit(-1); + } else { + ++xa[col[nz]]; + ++nz; + } + } + + *nonz = nz; + + /* Initialize the array of column pointers */ + k = 0; + jsize = xa[0]; + xa[0] = 0; + for (j = 1; j < *n; ++j) { + k += jsize; + jsize = xa[j]; + xa[j] = k; + } + + /* Copy the triplets into the column oriented storage */ + for (nz = 0; nz < *nonz; ++nz) { + j = col[nz]; + k = xa[j]; + asub[k] = row[nz]; + a[k] = val[nz]; + ++xa[j]; + } + + /* Reset the column pointers to the beginning of each column */ + for (j = *n; j > 0; --j) + xa[j] = xa[j-1]; + xa[0] = 0; + + SUPERLU_FREE(val); + SUPERLU_FREE(row); + SUPERLU_FREE(col); + +#ifdef CHK_INPUT + { + int i; + for (i = 0; i < *n; i++) { + printf("Col %d, xa %d\n", i, xa[i]); + for (k = xa[i]; k < xa[i+1]; k++) + printf("%d\t%16.10f\n", asub[k], a[k]); + } + } +#endif + +} + + +void dreadrhs(int m, double *b) +{ + FILE *fp, *fopen(); + int i; + /*int j;*/ + + if ( !(fp = fopen("b.dat", "r")) ) { + fprintf(stderr, "dreadrhs: file does not exist\n"); + exit(-1); + } + for (i = 0; i < m; ++i) + fscanf(fp, "%lf\n", &b[i]); + + /* readpair_(j, &b[i]);*/ + fclose(fp); +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,29 +1,31 @@ -/* +/*! @file dsnode_bmod.c + * \brief Performs numeric block updates within the relaxed snode. + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" + +#include "slu_ddefs.h" -/* - * Performs numeric block updates within the relaxed snode. +/*! \brief Performs numeric block updates within the relaxed snode. */ int dsnode_bmod ( diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,45 @@ - -/* +/*! @file dsnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "dsp_defs.h" -#include "util.h" + +#include "slu_ddefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *    dsnode_dfs() - Determine the union of the row structures of those 
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore, 
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when run out of memory.
+ *
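The "union of the row structures" can be pictured with a marker array over plain compressed-column data; an illustrative sketch (the real routine additionally assigns supernode numbers and grows the L subscript storage on demand):

/* Collect each distinct row index appearing in columns jcol..kcol of
 * (colptr, rowind).  marker[] must hold values different from kcol on
 * entry (e.g. all -1); returns the number of distinct rows written.   */
static int snode_row_union_sketch(int jcol, int kcol,
                                  const int *colptr, const int *rowind,
                                  int *marker, int *rows)
{
    int j, k, nextl = 0;
    for (j = jcol; j <= kcol; ++j)
        for (k = colptr[j]; k < colptr[j+1]; ++k) {
            int krow = rowind[k];
            if (marker[krow] != kcol) {        /* first visit to this row */
                marker[krow] = kcol;
                rows[nextl++] = krow;
            }
        }
    return nextl;
}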
+ */ int dsnode_dfs ( @@ -35,19 +53,7 @@ GlobalLU_t *Glu /* modified */ ) { -/* Purpose - * ======= - * dsnode_dfs() - Determine the union of the row structures of those - * columns within the relaxed snode. - * Note: The relaxed snodes are leaves of the supernodal etree, therefore, - * the portion outside the rectangular supernode must be zero. - * - * Return value - * ============ - * 0 success; - * >0 number of bytes allocated when run out of memory. - * - */ + register int i, k, ifrom, ito, nextl, new_next; int nsuper, krow, kmark, mem_error; int *xsup, *supno; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas2.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas2.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas2.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas2.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,17 +1,20 @@ -/* +/*! @file dsp_blas2.c + * \brief Sparse BLAS 2, using some dense BLAS 2 operations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ *
*/ /* * File name: dsp_blas2.c * Purpose: Sparse BLAS 2, using some dense BLAS 2 operations. */ -#include "dsp_defs.h" +#include "slu_ddefs.h" /* * Function prototypes @@ -20,12 +23,9 @@ void dlsolve(int, int, double*, double*); void dmatvec(int, int, int, double*, double*, double*); - -int -sp_dtrsv(char *uplo, char *trans, char *diag, SuperMatrix *L, - SuperMatrix *U, double *x, SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief Solves one of the systems of equations A*x = b, or A'*x = b + * + *
  *   Purpose
  *   =======
  *
@@ -49,7 +49,7 @@
  *             On entry, trans specifies the equations to be solved as   
  *             follows:   
  *                trans = 'N' or 'n'   A*x = b.   
- *                trans = 'T' or 't'   A'*x = b.   
+ *                trans = 'T' or 't'   A'*x = b.
  *                trans = 'C' or 'c'   A'*x = b.   
  *
  *   diag   - (input) char*
@@ -75,8 +75,12 @@
  *
  *   info    - (output) int*
  *             If *info = -i, the i-th argument had an illegal value.
- *
+ *
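The functional change in this hunk is that trans = 'C' is now accepted by the argument screening (and solved like 'T', as the table above already documents). A tiny self-contained sketch of the new check, with an illustrative name:

#include <ctype.h>

/* Returns 0 if trans selects a supported operation, -2 otherwise
 * (mirroring the *info = -2 convention described above).           */
static int check_trans_sketch(char trans)
{
    char t = (char) toupper((unsigned char) trans);
    return (t == 'N' || t == 'T' || t == 'C') ? 0 : -2;
}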
*/ +int +sp_dtrsv(char *uplo, char *trans, char *diag, SuperMatrix *L, + SuperMatrix *U, double *x, SuperLUStat_t *stat, int *info) +{ #ifdef _CRAY _fcd ftcs1 = _cptofcd("L", strlen("L")), ftcs2 = _cptofcd("N", strlen("N")), @@ -96,7 +100,8 @@ /* Test the input parameters */ *info = 0; if ( !lsame_(uplo,"L") && !lsame_(uplo, "U") ) *info = -1; - else if ( !lsame_(trans, "N") && !lsame_(trans, "T") ) *info = -2; + else if ( !lsame_(trans, "N") && !lsame_(trans, "T") && + !lsame_(trans, "C")) *info = -2; else if ( !lsame_(diag, "U") && !lsame_(diag, "N") ) *info = -3; else if ( L->nrow != L->ncol || L->nrow < 0 ) *info = -4; else if ( U->nrow != U->ncol || U->nrow < 0 ) *info = -5; @@ -298,68 +303,71 @@ +/*! \brief Performs one of the matrix-vector operations y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   sp_dgemv()  performs one of the matrix-vector operations   
+ *      y := alpha*A*x + beta*y,   or   y := alpha*A'*x + beta*y,   
+ *   where alpha and beta are scalars, x and y are vectors and A is a
+ *   sparse A->nrow by A->ncol matrix.   
+ *
+ *   Parameters   
+ *   ==========   
+ *
+ *   TRANS  - (input) char*
+ *            On entry, TRANS specifies the operation to be performed as   
+ *            follows:   
+ *               TRANS = 'N' or 'n'   y := alpha*A*x + beta*y.   
+ *               TRANS = 'T' or 't'   y := alpha*A'*x + beta*y.   
+ *               TRANS = 'C' or 'c'   y := alpha*A'*x + beta*y.   
+ *
+ *   ALPHA  - (input) double
+ *            On entry, ALPHA specifies the scalar alpha.   
+ *
+ *   A      - (input) SuperMatrix*
+ *            Matrix A with a sparse format, of dimension (A->nrow, A->ncol).
+ *            Currently, the type of A can be:
+ *                Stype = NC or NCP; Dtype = SLU_D; Mtype = GE. 
+ *            In the future, more general A can be handled.
+ *
+ *   X      - (input) double*, array of DIMENSION at least   
+ *            ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n'   
+ *            and at least   
+ *            ( 1 + ( m - 1 )*abs( INCX ) ) otherwise.   
+ *            Before entry, the incremented array X must contain the   
+ *            vector x.   
+ *
+ *   INCX   - (input) int
+ *            On entry, INCX specifies the increment for the elements of   
+ *            X. INCX must not be zero.   
+ *
+ *   BETA   - (input) double
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then Y need not be set on input.   
+ *
+ *   Y      - (output) double*,  array of DIMENSION at least   
+ *            ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n'   
+ *            and at least   
+ *            ( 1 + ( n - 1 )*abs( INCY ) ) otherwise.   
+ *            Before entry with BETA non-zero, the incremented array Y   
+ *            must contain the vector y. On exit, Y is overwritten by the 
+ *            updated vector y.
+ *	     
+ *   INCY   - (input) int
+ *            On entry, INCY specifies the increment for the elements of   
+ *            Y. INCY must not be zero.   
+ *
+ *   ==== Sparse Level 2 Blas routine.   
+ *
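For orientation, the TRANS = 'N', unit-stride case over a plain compressed-column (NC-style) matrix reduces to the loop below; this is an illustrative sketch, not the routine's actual body:

/* y := alpha*A*x + beta*y for an m-by-n CSC matrix (colptr, rowind, nzval);
 * when beta == 0, y is written without being read, matching the BETA note. */
static void csc_gemv_sketch(int m, int n, double alpha,
                            const int *colptr, const int *rowind,
                            const double *nzval,
                            const double *x, double beta, double *y)
{
    int i, j, k;
    for (i = 0; i < m; ++i) y[i] = (beta == 0.0) ? 0.0 : beta * y[i];
    for (j = 0; j < n; ++j)
        for (k = colptr[j]; k < colptr[j+1]; ++k)
            y[rowind[k]] += alpha * nzval[k] * x[j];
}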
+ */ int sp_dgemv(char *trans, double alpha, SuperMatrix *A, double *x, int incx, double beta, double *y, int incy) { -/* Purpose - ======= - - sp_dgemv() performs one of the matrix-vector operations - y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, - where alpha and beta are scalars, x and y are vectors and A is a - sparse A->nrow by A->ncol matrix. - - Parameters - ========== - - TRANS - (input) char* - On entry, TRANS specifies the operation to be performed as - follows: - TRANS = 'N' or 'n' y := alpha*A*x + beta*y. - TRANS = 'T' or 't' y := alpha*A'*x + beta*y. - TRANS = 'C' or 'c' y := alpha*A'*x + beta*y. - - ALPHA - (input) double - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Matrix A with a sparse format, of dimension (A->nrow, A->ncol). - Currently, the type of A can be: - Stype = NC or NCP; Dtype = SLU_D; Mtype = GE. - In the future, more general A can be handled. - - X - (input) double*, array of DIMENSION at least - ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( m - 1 )*abs( INCX ) ) otherwise. - Before entry, the incremented array X must contain the - vector x. - - INCX - (input) int - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - - BETA - (input) double - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - - Y - (output) double*, array of DIMENSION at least - ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( n - 1 )*abs( INCY ) ) otherwise. - Before entry with BETA non-zero, the incremented array Y - must contain the vector y. On exit, Y is overwritten by the - updated vector y. - - INCY - (input) int - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - - ==== Sparse Level 2 Blas routine. -*/ - /* Local variables */ NCformat *Astore; double *Aval; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas3.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas3.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas3.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_blas3.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,116 +1,122 @@ - -/* +/*! @file dsp_blas3.c + * \brief Sparse BLAS3, using some dense BLAS3 operations + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ *
*/ /* * File name: sp_blas3.c * Purpose: Sparse BLAS3, using some dense BLAS3 operations. */ -#include "dsp_defs.h" -#include "util.h" +#include "slu_ddefs.h" + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ * 
+ *   sp_d performs one of the matrix-matrix operations   
+ * 
+ *      C := alpha*op( A )*op( B ) + beta*C,   
+ * 
+ *   where  op( X ) is one of 
+ * 
+ *      op( X ) = X   or   op( X ) = X'   or   op( X ) = conjg( X' ),
+ * 
+ *   alpha and beta are scalars, and A, B and C are matrices, with op( A ) 
+ *   an m by k matrix,  op( B )  a  k by n matrix and  C an m by n matrix. 
+ *   
+ * 
+ *   Parameters   
+ *   ==========   
+ * 
+ *   TRANSA - (input) char*
+ *            On entry, TRANSA specifies the form of op( A ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSA = 'N' or 'n',  op( A ) = A.   
+ *               TRANSA = 'T' or 't',  op( A ) = A'.   
+ *               TRANSA = 'C' or 'c',  op( A ) = conjg( A' ).   
+ *            Unchanged on exit.   
+ * 
+ *   TRANSB - (input) char*
+ *            On entry, TRANSB specifies the form of op( B ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSB = 'N' or 'n',  op( B ) = B.   
+ *               TRANSB = 'T' or 't',  op( B ) = B'.   
+ *               TRANSB = 'C' or 'c',  op( B ) = conjg( B' ).   
+ *            Unchanged on exit.   
+ * 
+ *   M      - (input) int   
+ *            On entry,  M  specifies  the number of rows of the matrix 
+ *	     op( A ) and of the matrix C.  M must be at least zero. 
+ *	     Unchanged on exit.   
+ * 
+ *   N      - (input) int
+ *            On entry,  N specifies the number of columns of the matrix 
+ *	     op( B ) and the number of columns of the matrix C. N must be 
+ *	     at least zero.
+ *	     Unchanged on exit.   
+ * 
+ *   K      - (input) int
+ *            On entry, K specifies the number of columns of the matrix 
+ *	     op( A ) and the number of rows of the matrix op( B ). K must 
+ *	     be at least  zero.   
+ *           Unchanged on exit.
+ *      
+ *   ALPHA  - (input) double
+ *            On entry, ALPHA specifies the scalar alpha.   
+ * 
+ *   A      - (input) SuperMatrix*
+ *            Matrix A with a sparse format, of dimension (A->nrow, A->ncol).
+ *            Currently, the type of A can be:
+ *                Stype = NC or NCP; Dtype = SLU_D; Mtype = GE. 
+ *            In the future, more general A can be handled.
+ * 
+ *   B      - DOUBLE PRECISION array of DIMENSION ( LDB, kb ), where kb is 
+ *            n when TRANSB = 'N' or 'n',  and is  k otherwise.   
+ *            Before entry with  TRANSB = 'N' or 'n',  the leading k by n 
+ *            part of the array B must contain the matrix B, otherwise 
+ *            the leading n by k part of the array B must contain the 
+ *            matrix B.   
+ *            Unchanged on exit.   
+ * 
+ *   LDB    - (input) int
+ *            On entry, LDB specifies the first dimension of B as declared 
+ *            in the calling (sub) program. LDB must be at least max( 1, n ).  
+ *            Unchanged on exit.   
+ * 
+ *   BETA   - (input) double
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then C need not be set on input.   
+ *  
+ *   C      - DOUBLE PRECISION array of DIMENSION ( LDC, n ).   
+ *            Before entry, the leading m by n part of the array C must 
+ *            contain the matrix C,  except when beta is zero, in which 
+ *            case C need not be set on entry.   
+ *            On exit, the array C is overwritten by the m by n matrix 
+ *	     ( alpha*op( A )*B + beta*C ).   
+ *  
+ *   LDC    - (input) int
+ *            On entry, LDC specifies the first dimension of C as declared 
+ *            in the calling (sub)program. LDC must be at least max(1,m).   
+ *            Unchanged on exit.   
+ *  
+ *   ==== Sparse Level 3 Blas routine.   
+ *
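Since op(B) is applied column by column, the operation above amounts to n sparse matrix-vector products. A sketch of that reduction in terms of the sp_dgemv() declared earlier in this file (assumes the slu_ddefs.h types, TRANSB = 'N', and column-major B and C with leading dimensions ldb and ldc; illustrative, not the routine's exact body):

static void sp_dgemm_by_columns_sketch(char *transa, int n, double alpha,
                                       SuperMatrix *A, double *b, int ldb,
                                       double beta, double *c, int ldc)
{
    int j;
    for (j = 0; j < n; ++j)   /* C(:,j) := alpha*op(A)*B(:,j) + beta*C(:,j) */
        sp_dgemv(transa, alpha, A, &b[j*ldb], 1, beta, &c[j*ldc], 1);
}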
+ */ int sp_dgemm(char *transa, char *transb, int m, int n, int k, double alpha, SuperMatrix *A, double *b, int ldb, double beta, double *c, int ldc) { -/* Purpose - ======= - - sp_d performs one of the matrix-matrix operations - - C := alpha*op( A )*op( B ) + beta*C, - - where op( X ) is one of - - op( X ) = X or op( X ) = X' or op( X ) = conjg( X' ), - - alpha and beta are scalars, and A, B and C are matrices, with op( A ) - an m by k matrix, op( B ) a k by n matrix and C an m by n matrix. - - - Parameters - ========== - - TRANSA - (input) char* - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - TRANSA = 'N' or 'n', op( A ) = A. - TRANSA = 'T' or 't', op( A ) = A'. - TRANSA = 'C' or 'c', op( A ) = conjg( A' ). - Unchanged on exit. - - TRANSB - (input) char* - On entry, TRANSB specifies the form of op( B ) to be used in - the matrix multiplication as follows: - TRANSB = 'N' or 'n', op( B ) = B. - TRANSB = 'T' or 't', op( B ) = B'. - TRANSB = 'C' or 'c', op( B ) = conjg( B' ). - Unchanged on exit. - - M - (input) int - On entry, M specifies the number of rows of the matrix - op( A ) and of the matrix C. M must be at least zero. - Unchanged on exit. - - N - (input) int - On entry, N specifies the number of columns of the matrix - op( B ) and the number of columns of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - (input) int - On entry, K specifies the number of columns of the matrix - op( A ) and the number of rows of the matrix op( B ). K must - be at least zero. - Unchanged on exit. - - ALPHA - (input) double - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Matrix A with a sparse format, of dimension (A->nrow, A->ncol). - Currently, the type of A can be: - Stype = NC or NCP; Dtype = SLU_D; Mtype = GE. - In the future, more general A can be handled. - - B - DOUBLE PRECISION array of DIMENSION ( LDB, kb ), where kb is - n when TRANSB = 'N' or 'n', and is k otherwise. - Before entry with TRANSB = 'N' or 'n', the leading k by n - part of the array B must contain the matrix B, otherwise - the leading n by k part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - (input) int - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least max( 1, n ). - Unchanged on exit. - - BETA - (input) double - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then C need not be set on input. - - C - DOUBLE PRECISION array of DIMENSION ( LDC, n ). - Before entry, the leading m by n part of the array C must - contain the matrix C, except when beta is zero, in which - case C need not be set on entry. - On exit, the array C is overwritten by the m by n matrix - ( alpha*op( A )*B + beta*C ). - - LDC - (input) int - On entry, LDC specifies the first dimension of C as declared - in the calling (sub)program. LDC must be at least max(1,m). - Unchanged on exit. - - ==== Sparse Level 3 Blas routine. -*/ int incx = 1, incy = 1; int j; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_defs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_defs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_defs.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dsp_defs.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,234 +0,0 @@ - -/* - * -- SuperLU routine (version 3.0) -- - * Univ. 
of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 - * - */ -#ifndef __SUPERLU_dSP_DEFS /* allow multiple inclusions */ -#define __SUPERLU_dSP_DEFS - -/* - * File name: dsp_defs.h - * Purpose: Sparse matrix types and function prototypes - * History: - */ - -#ifdef _CRAY -#include -#include -#endif - -/* Define my integer type int_t */ -typedef int int_t; /* default */ - -#include "Cnames.h" -#include "supermatrix.h" -#include "util.h" - - -/* - * Global data structures used in LU factorization - - * - * nsuper: #supernodes = nsuper + 1, numbered [0, nsuper]. - * (xsup,supno): supno[i] is the supernode no to which i belongs; - * xsup(s) points to the beginning of the s-th supernode. - * e.g. supno 0 1 2 2 3 3 3 4 4 4 4 4 (n=12) - * xsup 0 1 2 4 7 12 - * Note: dfs will be performed on supernode rep. relative to the new - * row pivoting ordering - * - * (xlsub,lsub): lsub[*] contains the compressed subscript of - * rectangular supernodes; xlsub[j] points to the starting - * location of the j-th column in lsub[*]. Note that xlsub - * is indexed by column. - * Storage: original row subscripts - * - * During the course of sparse LU factorization, we also use - * (xlsub,lsub) for the purpose of symmetric pruning. For each - * supernode {s,s+1,...,t=s+r} with first column s and last - * column t, the subscript set - * lsub[j], j=xlsub[s], .., xlsub[s+1]-1 - * is the structure of column s (i.e. structure of this supernode). - * It is used for the storage of numerical values. - * Furthermore, - * lsub[j], j=xlsub[t], .., xlsub[t+1]-1 - * is the structure of the last column t of this supernode. - * It is for the purpose of symmetric pruning. Therefore, the - * structural subscripts can be rearranged without making physical - * interchanges among the numerical values. - * - * However, if the supernode has only one column, then we - * only keep one set of subscripts. For any subscript interchange - * performed, similar interchange must be done on the numerical - * values. - * - * The last column structures (for pruning) will be removed - * after the numercial LU factorization phase. - * - * (xlusup,lusup): lusup[*] contains the numerical values of the - * rectangular supernodes; xlusup[j] points to the starting - * location of the j-th column in storage vector lusup[*] - * Note: xlusup is indexed by column. - * Each rectangular supernode is stored by column-major - * scheme, consistent with Fortran 2-dim array storage. - * - * (xusub,ucol,usub): ucol[*] stores the numerical values of - * U-columns outside the rectangular supernodes. The row - * subscript of nonzero ucol[k] is stored in usub[k]. - * xusub[i] points to the starting location of column i in ucol. - * Storage: new row subscripts; that is subscripts of PA. 
- */ -typedef struct { - int *xsup; /* supernode and column mapping */ - int *supno; - int *lsub; /* compressed L subscripts */ - int *xlsub; - double *lusup; /* L supernodes */ - int *xlusup; - double *ucol; /* U columns */ - int *usub; - int *xusub; - int nzlmax; /* current max size of lsub */ - int nzumax; /* " " " ucol */ - int nzlumax; /* " " " lusup */ - int n; /* number of columns in the matrix */ - LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ -} GlobalLU_t; - -typedef struct { - float for_lu; - float total_needed; - int expansions; -} mem_usage_t; - -#ifdef __cplusplus -extern "C" { -#endif - -/* Driver routines */ -extern void -dgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); -extern void -dgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, - char *, double *, double *, SuperMatrix *, SuperMatrix *, - void *, int, SuperMatrix *, SuperMatrix *, - double *, double *, double *, double *, - mem_usage_t *, SuperLUStat_t *, int *); - -/* Supernodal LU factor related */ -extern void -dCreate_CompCol_Matrix(SuperMatrix *, int, int, int, double *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -dCreate_CompRow_Matrix(SuperMatrix *, int, int, int, double *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -dCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); -extern void -dCreate_Dense_Matrix(SuperMatrix *, int, int, double *, int, - Stype_t, Dtype_t, Mtype_t); -extern void -dCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, double *, - int *, int *, int *, int *, int *, - Stype_t, Dtype_t, Mtype_t); -extern void -dCopy_Dense_Matrix(int, int, double *, int, double *, int); - -extern void countnz (const int, int *, int *, int *, GlobalLU_t *); -extern void fixupL (const int, const int *, GlobalLU_t *); - -extern void dallocateA (int, int, double **, int **, int **); -extern void dgstrf (superlu_options_t*, SuperMatrix*, double, - int, int, int*, void *, int, int *, int *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); -extern int dsnode_dfs (const int, const int, const int *, const int *, - const int *, int *, int *, GlobalLU_t *); -extern int dsnode_bmod (const int, const int, const int, double *, - double *, GlobalLU_t *, SuperLUStat_t*); -extern void dpanel_dfs (const int, const int, const int, SuperMatrix *, - int *, int *, double *, int *, int *, int *, - int *, int *, int *, int *, GlobalLU_t *); -extern void dpanel_bmod (const int, const int, const int, const int, - double *, double *, int *, int *, - GlobalLU_t *, SuperLUStat_t*); -extern int dcolumn_dfs (const int, const int, int *, int *, int *, int *, - int *, int *, int *, int *, int *, GlobalLU_t *); -extern int dcolumn_bmod (const int, const int, double *, - double *, int *, int *, int, - GlobalLU_t *, SuperLUStat_t*); -extern int dcopy_to_ucol (int, int, int *, int *, int *, - double *, GlobalLU_t *); -extern int dpivotL (const int, const double, int *, int *, - int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); -extern void dpruneL (const int, const int *, const int, const int, - const int *, const int *, int *, GlobalLU_t *); -extern void dreadmt (int *, int *, int *, double **, int **, int **); -extern void dGenXtrue (int, int, double *, int); -extern void dFillRHS (trans_t, int, double *, int, SuperMatrix *, - SuperMatrix *); -extern void dgstrs (trans_t, SuperMatrix *, SuperMatrix *, int *, int *, - SuperMatrix *, SuperLUStat_t*, int *); - - -/* Driver related */ - -extern void 
dgsequ (SuperMatrix *, double *, double *, double *, - double *, double *, int *); -extern void dlaqgs (SuperMatrix *, double *, double *, double, - double, double, char *); -extern void dgscon (char *, SuperMatrix *, SuperMatrix *, - double, double *, SuperLUStat_t*, int *); -extern double dPivotGrowth(int, SuperMatrix *, int *, - SuperMatrix *, SuperMatrix *); -extern void dgsrfs (trans_t, SuperMatrix *, SuperMatrix *, - SuperMatrix *, int *, int *, char *, double *, - double *, SuperMatrix *, SuperMatrix *, - double *, double *, SuperLUStat_t*, int *); - -extern int sp_dtrsv (char *, char *, char *, SuperMatrix *, - SuperMatrix *, double *, SuperLUStat_t*, int *); -extern int sp_dgemv (char *, double, SuperMatrix *, double *, - int, double, double *, int); - -extern int sp_dgemm (char *, char *, int, int, int, double, - SuperMatrix *, double *, int, double, - double *, int); - -/* Memory-related */ -extern int dLUMemInit (fact_t, void *, int, int, int, int, int, - SuperMatrix *, SuperMatrix *, - GlobalLU_t *, int **, double **); -extern void dSetRWork (int, int, double *, double **, double **); -extern void dLUWorkFree (int *, double *, GlobalLU_t *); -extern int dLUMemXpand (int, int, MemType, int *, GlobalLU_t *); - -extern double *doubleMalloc(int); -extern double *doubleCalloc(int); -extern int dmemory_usage(const int, const int, const int, const int); -extern int dQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); - -/* Auxiliary routines */ -extern void dreadhb(int *, int *, int *, double **, int **, int **); -extern void dCompRow_to_CompCol(int, int, int, double*, int*, int*, - double **, int **, int **); -extern void dfill (double *, int, double); -extern void dinf_norm_error (int, SuperMatrix *, double *); -extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, - double, double, double *, double *, char *); - -/* Routines for debugging */ -extern void dPrint_CompCol_Matrix(char *, SuperMatrix *); -extern void dPrint_SuperNode_Matrix(char *, SuperMatrix *); -extern void dPrint_Dense_Matrix(char *, SuperMatrix *); -extern void print_lu_col(char *, int, int, int *, GlobalLU_t *); -extern void check_tempv(int, double *); - -#ifdef __cplusplus - } -#endif - -#endif /* __SUPERLU_dSP_DEFS */ - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dutil.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dutil.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dutil.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dutil.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,26 +1,29 @@ -/* - * -- SuperLU routine (version 3.0) -- +/*! @file dutil.c + * \brief Matrix utility functions + * + *
+ * -- SuperLU routine (version 3.1) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
- * October 15, 2003
+ * August 1, 2008
+ *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include -#include "dsp_defs.h" +#include "slu_ddefs.h" void dCreate_CompCol_Matrix(SuperMatrix *A, int m, int n, int nnz, @@ -64,7 +67,7 @@ Astore->rowptr = rowptr; } -/* Copy matrix A into matrix B. */ +/*! \brief Copy matrix A into matrix B. */ void dCopy_CompCol_Matrix(SuperMatrix *A, SuperMatrix *B) { @@ -108,12 +111,7 @@ dCopy_Dense_Matrix(int M, int N, double *X, int ldx, double *Y, int ldy) { -/* - * - * Purpose - * ======= - * - * Copies a two-dimensional matrix X to another matrix Y. +/*! \brief Copies a two-dimensional matrix X to another matrix Y. */ int i, j; @@ -150,8 +148,7 @@ } -/* - * Convert a row compressed storage into a column compressed storage. +/*! \brief Convert a row compressed storage into a column compressed storage. */ void dCompRow_to_CompCol(int m, int n, int nnz, @@ -266,23 +263,24 @@ void dPrint_Dense_Matrix(char *what, SuperMatrix *A) { - DNformat *Astore; - register int i; + DNformat *Astore = (DNformat *) A->Store; + register int i, j, lda = Astore->lda; double *dp; printf("\nDense matrix %s:\n", what); printf("Stype %d, Dtype %d, Mtype %d\n", A->Stype,A->Dtype,A->Mtype); - Astore = (DNformat *) A->Store; dp = (double *) Astore->nzval; - printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,Astore->lda); + printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,lda); printf("\nnzval: "); - for (i = 0; i < A->nrow; ++i) printf("%f ", dp[i]); + for (j = 0; j < A->ncol; ++j) { + for (i = 0; i < A->nrow; ++i) printf("%f ", dp[i + j*lda]); + printf("\n"); + } printf("\n"); fflush(stdout); } -/* - * Diagnostic print of column "jcol" in the U/L factor. +/*! \brief Diagnostic print of column "jcol" in the U/L factor. */ void dprint_lu_col(char *msg, int jcol, int pivrow, int *xprune, GlobalLU_t *Glu) @@ -324,9 +322,7 @@ } -/* - * Check whether tempv[] == 0. This should be true before and after - * calling any numeric routines, i.e., "panel_bmod" and "column_bmod". +/*! \brief Check whether tempv[] == 0. This should be true before and after calling any numeric routines, i.e., "panel_bmod" and "column_bmod". */ void dcheck_tempv(int n, double *tempv) { @@ -352,8 +348,7 @@ } } -/* - * Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's +/*! \brief Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's */ void dFillRHS(trans_t trans, int nrhs, double *x, int ldx, @@ -382,8 +377,7 @@ } -/* - * Fills a double precision array with a given value. +/*! \brief Fills a double precision array with a given value. */ void dfill(double *a, int alen, double dval) @@ -394,8 +388,7 @@ -/* - * Check the inf-norm of the error vector +/*! \brief Check the inf-norm of the error vector */ void dinf_norm_error(int nrhs, SuperMatrix *X, double *xtrue) { @@ -421,7 +414,7 @@ -/* Print performance of the code. */ +/*! \brief Print performance of the code. 
*/ void dPrintPerf(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage, double rpg, double rcond, double *ferr, @@ -449,9 +442,9 @@ printf("\tNo of nonzeros in factor U = %d\n", Ustore->nnz); printf("\tNo of nonzeros in L+U = %d\n", Lstore->nnz + Ustore->nnz); - printf("L\\U MB %.3f\ttotal MB needed %.3f\texpansions %d\n", - mem_usage->for_lu/1e6, mem_usage->total_needed/1e6, - mem_usage->expansions); + printf("L\\U MB %.3f\ttotal MB needed %.3f\n", + mem_usage->for_lu/1e6, mem_usage->total_needed/1e6); + printf("Number of memory expansions: %d\n", stat->expansions); printf("\tFactor\tMflops\tSolve\tMflops\tEtree\tEquil\tRcond\tRefine\n"); printf("PERF:%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f\n", diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dzsum1.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dzsum1.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dzsum1.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/dzsum1.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,12 +1,20 @@ -#include "dcomplex.h" +/*! @file dzsum1.c + * \brief Takes sum of the absolute values of a complex vector and returns a double precision result + * + *
+ *     -- LAPACK auxiliary routine (version 2.0) --   
+ *     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
+ *     Courant Institute, Argonne National Lab, and Rice University   
+ *     October 31, 1992   
+ * </pre>
+ */
-double dzsum1_(int *n, doublecomplex *cx, int *incx)
-{
-/*  -- LAPACK auxiliary routine (version 2.0) --   
-       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
-       Courant Institute, Argonne National Lab, and Rice University   
-       October 31, 1992   
+#include "slu_dcomplex.h"
+#include "slu_Cnames.h"
+
+/*! \brief
+ <pre>
     Purpose   
     =======   
 
@@ -31,7 +39,10 @@
             The spacing between successive values of CX.  INCX > 0.   
 
     ===================================================================== 
+</pre>
 */
+double dzsum1_(int *n, doublecomplex *cx, int *incx)
+{
     /* Builtin functions */
     double z_abs(doublecomplex *);
diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/get_perm_c.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/get_perm_c.c
--- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/get_perm_c.c 2010-04-05 08:55:23.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/get_perm_c.c 2010-07-26 15:48:34.000000000 +0100
@@ -1,11 +1,14 @@
-/*
- * -- SuperLU routine (version 2.0) --
+/*! @file get_perm_c.c
+ * \brief Matrix permutation operations
+ *
+ * <pre>
+ * -- SuperLU routine (version 3.1) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
- * November 15, 1997
- *
+ * August 1, 2008
+ * </pre>
*/ -#include "dsp_defs.h" +#include "slu_ddefs.h" #include "colamd.h" extern int genmmd_(int *, int *, int *, int *, int *, int *, int *, @@ -22,12 +25,11 @@ ) { int Alen, *A, i, info, *p; - double *knobs; + double knobs[COLAMD_KNOBS]; + int stats[COLAMD_STATS]; Alen = colamd_recommended(nnz, m, n); - if ( !(knobs = (double *) SUPERLU_MALLOC(COLAMD_KNOBS * sizeof(double))) ) - ABORT("Malloc fails for knobs"); colamd_set_defaults(knobs); if (!(A = (int *) SUPERLU_MALLOC(Alen * sizeof(int))) ) @@ -36,29 +38,17 @@ ABORT("Malloc fails for p[]"); for (i = 0; i <= n; ++i) p[i] = colptr[i]; for (i = 0; i < nnz; ++i) A[i] = rowind[i]; - info = colamd(m, n, Alen, A, p, knobs); + info = colamd(m, n, Alen, A, p, knobs, stats); if ( info == FALSE ) ABORT("COLAMD failed"); for (i = 0; i < n; ++i) perm_c[p[i]] = i; - SUPERLU_FREE(knobs); SUPERLU_FREE(A); SUPERLU_FREE(p); } - -void -getata( - const int m, /* number of rows in matrix A. */ - const int n, /* number of columns in matrix A. */ - const int nz, /* number of nonzeros in matrix A */ - int *colptr, /* column pointer of size n+1 for matrix A. */ - int *rowind, /* row indices of size nz for matrix A. */ - int *atanz, /* out - on exit, returns the actual number of - nonzeros in matrix A'*A. */ - int **ata_colptr, /* out - size n+1 */ - int **ata_rowind /* out - size *atanz */ - ) -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -75,8 +65,20 @@
  * =========
  *     o  Do I need to withhold the *dense* rows?
  *     o  How do I know the number of nonzeros in A'*A?
- * 
+ * </pre>
*/ +void +getata( + const int m, /* number of rows in matrix A. */ + const int n, /* number of columns in matrix A. */ + const int nz, /* number of nonzeros in matrix A */ + int *colptr, /* column pointer of size n+1 for matrix A. */ + int *rowind, /* row indices of size nz for matrix A. */ + int *atanz, /* out - on exit, returns the actual number of + nonzeros in matrix A'*A. */ + int **ata_colptr, /* out - size n+1 */ + int **ata_rowind /* out - size *atanz */ + ) { register int i, j, k, col, num_nz, ti, trow; int *marker, *b_colptr, *b_rowind; @@ -188,6 +190,18 @@ } +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * Form the structure of A'+A. A is an n-by-n matrix in column oriented
+ * format represented by (colptr, rowind). The output A'+A is in column
+ * oriented format (symmetrically, also row oriented), represented by
+ * (b_colptr, b_rowind).
+ * </pre>
+ */ void at_plus_a( const int n, /* number of columns in matrix A. */ @@ -200,16 +214,6 @@ int **b_rowind /* out - size *bnz */ ) { -/* - * Purpose - * ======= - * - * Form the structure of A'+A. A is an n-by-n matrix in column oriented - * format represented by (colptr, rowind). The output A'+A is in column - * oriented format (symmetrically, also row oriented), represented by - * (b_colptr, b_rowind). - * - */ register int i, j, k, col, num_nz; int *t_colptr, *t_rowind; /* a column oriented form of T = A' */ int *marker; @@ -324,9 +328,9 @@ SUPERLU_FREE(t_rowind); } -void -get_perm_c(int ispec, SuperMatrix *A, int *perm_c) -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -356,11 +360,13 @@
  *	   Column permutation vector of size A->ncol, which defines the 
  *         permutation matrix Pc; perm_c[i] = j means column i of A is 
  *         in position j in A*Pc.
- *
+ * </pre>
*/ +void +get_perm_c(int ispec, SuperMatrix *A, int *perm_c) { NCformat *Astore = A->Store; - int m, n, bnz, *b_colptr, i; + int m, n, bnz = 0, *b_colptr, i; int delta, maxint, nofsub, *invp; int *b_rowind, *dhead, *qsize, *llist, *marker; double t, SuperLU_timer_(); @@ -372,12 +378,16 @@ switch ( ispec ) { case 0: /* Natural ordering */ for (i = 0; i < n; ++i) perm_c[i] = i; - /*printf("Use natural column ordering.\n");*/ +#if ( PRNTlevel>=1 ) + printf("Use natural column ordering.\n"); +#endif return; case 1: /* Minimum degree ordering on A'*A */ getata(m, n, Astore->nnz, Astore->colptr, Astore->rowind, &bnz, &b_colptr, &b_rowind); - /*printf("Use minimum degree ordering on A'*A.\n");*/ +#if ( PRNTlevel>=1 ) + printf("Use minimum degree ordering on A'*A.\n"); +#endif t = SuperLU_timer_() - t; /*printf("Form A'*A time = %8.3f\n", t);*/ break; @@ -385,14 +395,18 @@ if ( m != n ) ABORT("Matrix is not square"); at_plus_a(n, Astore->nnz, Astore->colptr, Astore->rowind, &bnz, &b_colptr, &b_rowind); - /*printf("Use minimum degree ordering on A'+A.\n");*/ +#if ( PRNTlevel>=1 ) + printf("Use minimum degree ordering on A'+A.\n"); +#endif t = SuperLU_timer_() - t; /*printf("Form A'+A time = %8.3f\n", t);*/ break; case 3: /* Approximate minimum degree column ordering. */ get_colamd(m, n, Astore->nnz, Astore->colptr, Astore->rowind, perm_c); - /*printf(".. Use approximate minimum degree column ordering.\n");*/ +#if ( PRNTlevel>=1 ) + printf(".. Use approximate minimum degree column ordering.\n"); +#endif return; default: ABORT("Invalid ISPEC"); @@ -420,19 +434,18 @@ for (i = 0; i <= n; ++i) ++b_colptr[i]; for (i = 0; i < bnz; ++i) ++b_rowind[i]; - genmmd_(&n, b_colptr, b_rowind, invp, perm_c, &delta, dhead, + genmmd_(&n, b_colptr, b_rowind, perm_c, invp, &delta, dhead, qsize, llist, marker, &maxint, &nofsub); /* Transform perm_c into 0-based indexing. */ for (i = 0; i < n; ++i) --perm_c[i]; - SUPERLU_FREE(b_colptr); - SUPERLU_FREE(b_rowind); SUPERLU_FREE(invp); SUPERLU_FREE(dhead); SUPERLU_FREE(qsize); SUPERLU_FREE(llist); SUPERLU_FREE(marker); + SUPERLU_FREE(b_rowind); t = SuperLU_timer_() - t; /* printf("call GENMMD time = %8.3f\n", t);*/ @@ -441,4 +454,5 @@ for (i = 0; i < n; ++i) perm_c[i] = i; } + SUPERLU_FREE(b_colptr); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/heap_relax_snode.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/heap_relax_snode.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/heap_relax_snode.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/heap_relax_snode.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,24 +1,36 @@ -/* +/*! @file heap_relax_snode.c + * \brief Identify the initial relaxed supernodes + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * </pre>
 */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
-
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
-
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
-*/
-#include "dsp_defs.h"
+#include "slu_ddefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    relax_snode() - Identify the initial relaxed supernodes, assuming that 
+ *    the matrix has been reordered according to the postorder of the etree.
+ * </pre>
+ */
 void
 heap_relax_snode (
@@ -31,13 +43,6 @@
               int *relax_end      /* last column in a supernode */
               )
 {
-/*
- * Purpose
- * =======
- *    relax_snode() - Identify the initial relaxed supernodes, assuming that
- *    the matrix has been reordered according to the postorder of the etree.
- *
- */
     register int i, j, k, l, parent;
     register int snode_start;  /* beginning of a snode */
     int *et_save, *post, *inv_post, *iwork;
@@ -91,7 +96,10 @@
     } else {
         for (i = snode_start; i <= j; ++i) {
             l = inv_post[i];
-            if ( descendants[i] == 0 ) relax_end[l] = l;
+            if ( descendants[i] == 0 ) {
+                relax_end[l] = l;
+                ++nsuper_et;
+            }
         }
     }
     j++;
diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/html_mainpage.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/html_mainpage.h
--- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/html_mainpage.h 1970-01-01 01:00:00.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/html_mainpage.h 2010-07-26 15:48:34.000000000 +0100
@@ -0,0 +1,9 @@
+/*! \mainpage SuperLU Documentation
+
+  SuperLU is a sequential library for the direct solution of large,
+  sparse, nonsymmetric systems of linear equations on high performance
+  machines. It also provides threshold-based ILU factorization
+  preconditioner. The library is written in C and is callable from either
+  C or Fortran.
+
+ */
diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/icmax1.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/icmax1.c
--- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/icmax1.c 2010-04-05 08:55:23.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/icmax1.c 2010-07-26 15:48:34.000000000 +0100
@@ -1,14 +1,20 @@
+/*! @file icmax1.c
+ * \brief Finds the index of the element whose real part has maximum absolute value
+ *
+ * <pre>
+ *     -- LAPACK auxiliary routine (version 2.0) --   
+ *     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
+ *     Courant Institute, Argonne National Lab, and Rice University   
+ *     October 31, 1992   
+ * </pre>
+ */
 #include <math.h>
-#include "scomplex.h"
-
-int icmax1_(int *n, complex *cx, int *incx)
-{
-/*  -- LAPACK auxiliary routine (version 2.0) --   
-       Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
-       Courant Institute, Argonne National Lab, and Rice University   
-       September 30, 1994   
+#include "slu_scomplex.h"
+#include "slu_Cnames.h"
+/*! \brief
+ <pre>
     Purpose   
     =======   
 
@@ -33,9 +39,11 @@
             The spacing between successive values of CX.  INCX >= 1.   
 
    ===================================================================== 
-  
-
-
+  
+*/
+int icmax1_(int *n, complex *cx, int *incx)
+{
+/* NEXT LINE IS THE ONLY MODIFICATION.
diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccolumn_dfs.c
--- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccolumn_dfs.c 1970-01-01 01:00:00.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccolumn_dfs.c 2010-07-26 15:48:34.000000000 +0100
@@ -0,0 +1,258 @@
+
+/*! @file ilu_ccolumn_dfs.c
+ * \brief Performs a symbolic factorization
+ *
+ * <pre>
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+*/
+
+#include "slu_cdefs.h"
+
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *   ILU_CCOLUMN_DFS performs a symbolic factorization on column jcol, and
+ *   decide the supernode boundary.
+ *
+ *   This routine does not use numeric values, but only use the RHS
+ *   row indices to start the dfs.
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives. The routine returns a list of such supernodal
+ *   representatives in topological order of the dfs that generates them.
+ *   The location of the first nonzero in each such supernodal segment
+ *   (supernodal entry location) is also returned.
+ *
+ * Local parameters
+ * ================
+ *   nseg: no of segments in current U[*,j]
+ *   jsuper: jsuper=EMPTY if column j does not belong to the same
+ *	supernode as j-1. Otherwise, jsuper=nsuper.
+ *
+ *   marker2: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ *
+ * Return value
+ * ============
+ *     0  success;
+ *   > 0  number of bytes allocated when run out of space.
+ * </pre>
+ */ +int +ilu_ccolumn_dfs( + const int m, /* in - number of rows in the matrix */ + const int jcol, /* in */ + int *perm_r, /* in */ + int *nseg, /* modified - with new segments appended */ + int *lsub_col, /* in - defines the RHS vector to start the + dfs */ + int *segrep, /* modified - with new segments appended */ + int *repfnz, /* modified */ + int *marker, /* modified */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + + int jcolp1, jcolm1, jsuper, nsuper, nextl; + int k, krep, krow, kmark, kperm; + int *marker2; /* Used for small panel LU */ + int fsupc; /* First column of a snode */ + int myfnz; /* First nonz column of a U-segment */ + int chperm, chmark, chrep, kchild; + int xdfs, maxdfs, kpar, oldrep; + int jptr, jm1ptr; + int ito, ifrom; /* Used to compress row subscripts */ + int mem_error; + int *xsup, *supno, *lsub, *xlsub; + int nzlmax; + static int first = 1, maxsuper; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + if ( first ) { + maxsuper = sp_ienv(3); + first = 0; + } + jcolp1 = jcol + 1; + jcolm1 = jcol - 1; + nsuper = supno[jcol]; + jsuper = nsuper; + nextl = xlsub[jcol]; + marker2 = &marker[2*m]; + + + /* For each nonzero in A[*,jcol] do dfs */ + for (k = 0; lsub_col[k] != EMPTY; k++) { + + krow = lsub_col[k]; + lsub_col[k] = EMPTY; + kmark = marker2[krow]; + + /* krow was visited before, go to the next nonzero */ + if ( kmark == jcol ) continue; + + /* For each unmarked nbr krow of jcol + * krow is in L: place it in structure of L[*,jcol] + */ + marker2[krow] = jcol; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + lsub[nextl++] = krow; /* krow is indexed into A */ + if ( nextl >= nzlmax ) { + if ((mem_error = cLUMemXpand(jcol, nextl, LSUB, &nzlmax, Glu))) + return (mem_error); + lsub = Glu->lsub; + } + if ( kmark != jcolm1 ) jsuper = EMPTY;/* Row index subset testing */ + } else { + /* krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz[krep]; + + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > kperm ) repfnz[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker2[kchild]; + + if ( chmark != jcol ) { /* Not reached yet */ + marker2[kchild] = jcol; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,k] */ + if ( chperm == EMPTY ) { + lsub[nextl++] = kchild; + if ( nextl >= nzlmax ) { + if ( (mem_error = cLUMemXpand(jcol,nextl, + LSUB,&nzlmax,Glu)) ) + return (mem_error); + lsub = Glu->lsub; + } + if ( chmark != jcolm1 ) jsuper = EMPTY; + } else { + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz[chrep]; + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz[chrep] = chperm; + } else { + /* Continue dfs at super-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L^t) */ + parent[krep] = oldrep; + repfnz[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + } /* else */ + + } /* else */ + + } /* if */ + + } /* while */ + + /* krow has no more unexplored nbrs; + * place supernode-rep krep in postorder DFS. + * backtrack dfs to its parent + */ + segrep[*nseg] = krep; + ++(*nseg); + kpar = parent[krep]; /* Pop from stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + + } while ( kpar != EMPTY ); /* Until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonzero ... */ + + /* Check to see if j belongs in the same supernode as j-1 */ + if ( jcol == 0 ) { /* Do nothing for column 0 */ + nsuper = supno[0] = 0; + } else { + fsupc = xsup[nsuper]; + jptr = xlsub[jcol]; /* Not compressed yet */ + jm1ptr = xlsub[jcolm1]; + + if ( (nextl-jptr != jptr-jm1ptr-1) ) jsuper = EMPTY; + + /* Always start a new supernode for a singular column */ + if ( nextl == jptr ) jsuper = EMPTY; + + /* Make sure the number of columns in a supernode doesn't + exceed threshold. */ + if ( jcol - fsupc >= maxsuper ) jsuper = EMPTY; + + /* If jcol starts a new supernode, reclaim storage space in + * lsub from the previous supernode. Note we only store + * the subscript set of the first columns of the supernode. + */ + if ( jsuper == EMPTY ) { /* starts a new supernode */ + if ( (fsupc < jcolm1) ) { /* >= 2 columns in nsuper */ +#ifdef CHK_COMPRESS + printf(" Compress lsub[] at super %d-%d\n", fsupc, jcolm1); +#endif + ito = xlsub[fsupc+1]; + xlsub[jcolm1] = ito; + xlsub[jcol] = ito; + for (ifrom = jptr; ifrom < nextl; ++ifrom, ++ito) + lsub[ito] = lsub[ifrom]; + nextl = ito; + } + nsuper++; + supno[jcol] = nsuper; + } /* if a new supernode */ + + } /* else: jcol > 0 */ + + /* Tidy up the pointers before exit */ + xsup[nsuper+1] = jcolp1; + supno[jcolp1] = nsuper; + xlsub[jcolp1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccopy_to_ucol.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccopy_to_ucol.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ccopy_to_ucol.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,202 @@ + +/*! @file ilu_ccopy_to_ucol.c + * \brief Copy a computed column of U to the compressed data structure + * and drop some small entries + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */ + +#include "slu_cdefs.h" + +#ifdef DEBUG +int num_drop_U; +#endif + +static complex *A; /* used in _compare_ only */ +static int _compare_(const void *a, const void *b) +{ + register int *x = (int *)a, *y = (int *)b; + register float xx = slu_c_abs1(&A[*x]), yy = slu_c_abs1(&A[*y]); + if (xx > yy) return -1; + else if (xx < yy) return 1; + else return 0; +} + + +int +ilu_ccopy_to_ucol( + int jcol, /* in */ + int nseg, /* in */ + int *segrep, /* in */ + int *repfnz, /* in */ + int *perm_r, /* in */ + complex *dense, /* modified - reset to zero on return */ + int drop_rule,/* in */ + milu_t milu, /* in */ + double drop_tol, /* in */ + int quota, /* maximum nonzero entries allowed */ + complex *sum, /* out - the sum of dropped entries */ + int *nnzUj, /* in - out */ + GlobalLU_t *Glu, /* modified */ + int *work /* working space with minimum size n, + * used by the second dropping rule */ + ) +{ +/* + * Gather from SPA dense[*] to global ucol[*]. + */ + int ksub, krep, ksupno; + int i, k, kfnz, segsze; + int fsupc, isub, irow; + int jsupno, nextu; + int new_next, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + complex *ucol; + int *usub, *xusub; + int nzumax; + int m; /* number of entries in the nonzero U-segments */ + register float d_max = 0.0, d_min = 1.0 / dlamch_("Safe minimum"); + register double tmp; + complex zero = {0.0, 0.0}; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + ucol = Glu->ucol; + usub = Glu->usub; + xusub = Glu->xusub; + nzumax = Glu->nzumax; + + *sum = zero; + if (drop_rule == NODROP) { + drop_tol = -1.0, quota = Glu->n; + } + + jsupno = supno[jcol]; + nextu = xusub[jcol]; + k = nseg - 1; + for (ksub = 0; ksub < nseg; ksub++) { + krep = segrep[k--]; + ksupno = supno[krep]; + + if ( ksupno != jsupno ) { /* Should go into ucol[] */ + kfnz = repfnz[krep]; + if ( kfnz != EMPTY ) { /* Nonzero U-segment */ + + fsupc = xsup[ksupno]; + isub = xlsub[fsupc] + kfnz - fsupc; + segsze = krep - kfnz + 1; + + new_next = nextu + segsze; + while ( new_next > nzumax ) { + if ((mem_error = cLUMemXpand(jcol, nextu, UCOL, &nzumax, + Glu)) != 0) + return (mem_error); + ucol = Glu->ucol; + if ((mem_error = cLUMemXpand(jcol, nextu, USUB, &nzumax, + Glu)) != 0) + return (mem_error); + usub = Glu->usub; + lsub = Glu->lsub; + } + + for (i = 0; i < segsze; i++) { + irow = lsub[isub++]; + tmp = slu_c_abs1(&dense[irow]); + + /* first dropping rule */ + if (quota > 0 && tmp >= drop_tol) { + if (tmp > d_max) d_max = tmp; + if (tmp < d_min) d_min = tmp; + usub[nextu] = perm_r[irow]; + ucol[nextu] = dense[irow]; + nextu++; + } else { + switch (milu) { + case SMILU_1: + case SMILU_2: + c_add(sum, sum, &dense[irow]); + break; + case SMILU_3: + /* *sum += fabs(dense[irow]);*/ + sum->r += tmp; + break; + case SILU: + default: + break; + } +#ifdef DEBUG + num_drop_U++; +#endif + } + dense[irow] = zero; + } + + } + + } + + } /* for each segment... 
*/ + + xusub[jcol + 1] = nextu; /* Close U[*,jcol] */ + m = xusub[jcol + 1] - xusub[jcol]; + + /* second dropping rule */ + if (drop_rule & DROP_SECONDARY && m > quota) { + register double tol = d_max; + register int m0 = xusub[jcol] + m - 1; + + if (quota > 0) { + if (drop_rule & DROP_INTERP) { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / m); + } else { + A = &ucol[xusub[jcol]]; + for (i = 0; i < m; i++) work[i] = i; + qsort(work, m, sizeof(int), _compare_); + tol = fabs(usub[xusub[jcol] + work[quota]]); + } + } + for (i = xusub[jcol]; i <= m0; ) { + if (slu_c_abs1(&ucol[i]) <= tol) { + switch (milu) { + case SMILU_1: + case SMILU_2: + c_add(sum, sum, &ucol[i]); + break; + case SMILU_3: + sum->r += tmp; + break; + case SILU: + default: + break; + } + ucol[i] = ucol[m0]; + usub[i] = usub[m0]; + m0--; + m--; +#ifdef DEBUG + num_drop_U++; +#endif + xusub[jcol + 1]--; + continue; + } + i++; + } + } + + if (milu == SMILU_2) { + sum->r = slu_c_abs1(sum); sum->i = 0.0; + } + if (milu == SMILU_3) sum->i = 0.0; + + *nnzUj += m; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cdrop_row.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cdrop_row.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cdrop_row.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cdrop_row.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,321 @@ + +/*! @file ilu_cdrop_row.c + * \brief Drop small rows from L + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * <\pre>
+ */
+
+#include <math.h>
+#include <stdlib.h>
+#include "slu_cdefs.h"
+
+extern void cswap_(int *, complex [], int *, complex [], int *);
+extern void caxpy_(int *, complex *, complex [], int *, complex [], int *);
+
+static float *A;  /* used in _compare_ only */
+static int _compare_(const void *a, const void *b)
+{
+    register int *x = (int *)a, *y = (int *)b;
+    if (A[*x] - A[*y] > 0.0) return -1;
+    else if (A[*x] - A[*y] < 0.0) return 1;
+    else return 0;
+}
+
+/*! \brief
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_cdrop_row() - Drop some small rows from the previous 
+ *    supernode (L-part only).
+ * </pre>
+ */ +int ilu_cdrop_row( + superlu_options_t *options, /* options */ + int first, /* index of the first column in the supernode */ + int last, /* index of the last column in the supernode */ + double drop_tol, /* dropping parameter */ + int quota, /* maximum nonzero entries allowed */ + int *nnzLj, /* in/out number of nonzeros in L(:, 1:last) */ + double *fill_tol, /* in/out - on exit, fill_tol=-num_zero_pivots, + * does not change if options->ILU_MILU != SMILU1 */ + GlobalLU_t *Glu, /* modified */ + float swork[], /* working space with minimum size last-first+1 */ + int iwork[], /* working space with minimum size m - n, + * used by the second dropping rule */ + int lastc /* if lastc == 0, there is nothing after the + * working supernode [first:last]; + * if lastc == 1, there is one more column after + * the working supernode. */ ) +{ + register int i, j, k, m1; + register int nzlc; /* number of nonzeros in column last+1 */ + register int xlusup_first, xlsub_first; + int m, n; /* m x n is the size of the supernode */ + int r = 0; /* number of dropped rows */ + register float *temp; + register complex *lusup = Glu->lusup; + register int *lsub = Glu->lsub; + register int *xlsub = Glu->xlsub; + register int *xlusup = Glu->xlusup; + register float d_max = 0.0, d_min = 1.0; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + norm_t nrm = options->ILU_Norm; + complex zero = {0.0, 0.0}; + complex one = {1.0, 0.0}; + complex none = {-1.0, 0.0}; + int inc_diag; /* inc_diag = m + 1 */ + int nzp = 0; /* number of zero pivots */ + + xlusup_first = xlusup[first]; + xlsub_first = xlsub[first]; + m = xlusup[first + 1] - xlusup_first; + n = last - first + 1; + m1 = m - 1; + inc_diag = m + 1; + nzlc = lastc ? (xlusup[last + 2] - xlusup[last + 1]) : 0; + temp = swork - n; + + /* Quick return if nothing to do. 
*/ + if (m == 0 || m == n || drop_rule == NODROP) + { + *nnzLj += m * n; + return 0; + } + + /* basic dropping: ILU(tau) */ + for (i = n; i <= m1; ) + { + /* the average abs value of ith row */ + switch (nrm) + { + case ONE_NORM: + temp[i] = scasum_(&n, &lusup[xlusup_first + i], &m) / (double)n; + break; + case TWO_NORM: + temp[i] = scnrm2_(&n, &lusup[xlusup_first + i], &m) + / sqrt((double)n); + break; + case INF_NORM: + default: + k = icamax_(&n, &lusup[xlusup_first + i], &m) - 1; + temp[i] = slu_c_abs1(&lusup[xlusup_first + i + m * k]); + break; + } + + /* drop small entries due to drop_tol */ + if (drop_rule & DROP_BASIC && temp[i] < drop_tol) + { + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + caxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m].r += + slu_c_abs1(&lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + ccopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + cswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m].r = + slu_c_abs1(&lusup[xlusup_first + m1 + j * m]); + lusup[xlusup_first + m1 + j * m].i = 0.0; + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + continue; + } /* if dropping */ + else + { + if (temp[i] > d_max) d_max = temp[i]; + if (temp[i] < d_min) d_min = temp[i]; + } + i++; + } /* for */ + + /* Secondary dropping: drop more rows according to the quota. 
*/ + quota = ceil((double)quota / (double)n); + if (drop_rule & DROP_SECONDARY && m - r > quota) + { + register double tol = d_max; + + /* Calculate the second dropping tolerance */ + if (quota > n) + { + if (drop_rule & DROP_INTERP) /* by interpolation */ + { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / (m - n - r)); + } + else /* by quick sort */ + { + register int *itemp = iwork - n; + A = temp; + for (i = n; i <= m1; i++) itemp[i] = i; + qsort(iwork, m1 - n + 1, sizeof(int), _compare_); + tol = temp[iwork[quota]]; + } + } + + for (i = n; i <= m1; ) + { + if (temp[i] <= tol) + { + register int j; + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + caxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m].r += + slu_c_abs1(&lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + ccopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + cswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m].r = + slu_c_abs1(&lusup[xlusup_first + m1 + j * m]); + lusup[xlusup_first + m1 + j * m].i = 0.0; + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + temp[i] = temp[m1]; + + continue; + } + i++; + + } /* for */ + + } /* if secondary dropping */ + + for (i = n; i < m; i++) temp[i] = 0.0; + + if (r == 0) + { + *nnzLj += m * n; + return 0; + } + + /* add dropped entries to the diagnal */ + if (milu != SILU) + { + register int j; + complex t; + for (j = 0; j < n; j++) + { + cs_mult(&t, &lusup[xlusup_first + (m - 1) + j * m], + MILU_ALPHA); + switch (milu) + { + case SMILU_1: + if ( !(c_eq(&t, &none)) ) { + c_add(&t, &t, &one); + cc_mult(&lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + &t); + } + else + { + cs_mult( + &lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + *fill_tol); +#ifdef DEBUG + printf("[1] ZERO PIVOT: FILL col %d.\n", first + j); + fflush(stdout); +#endif + nzp++; + } + break; + case SMILU_2: + cs_mult(&lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + 1.0 + slu_c_abs1(&t)); + break; + case SMILU_3: + c_add(&t, &t, &one); + cc_mult(&lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + &t); + break; + case SILU: + default: + break; + } + } + if (nzp > 0) *fill_tol = -nzp; + } + + /* Remove dropped entries from the memory and fix the pointers. 
*/ + m1 = m - r; + for (j = 1; j < n; j++) + { + register int tmp1, tmp2; + tmp1 = xlusup_first + j * m1; + tmp2 = xlusup_first + j * m; + for (i = 0; i < m1; i++) + lusup[i + tmp1] = lusup[i + tmp2]; + } + for (i = 0; i < nzlc; i++) + lusup[xlusup_first + i + n * m1] = lusup[xlusup_first + i + n * m]; + for (i = 0; i < nzlc; i++) + lsub[xlsub[last + 1] - r + i] = lsub[xlsub[last + 1] + i]; + for (i = first + 1; i <= last + 1; i++) + { + xlusup[i] -= r * (i - first); + xlsub[i] -= r; + } + if (lastc) + { + xlusup[last + 2] -= r * n; + xlsub[last + 2] -= r; + } + + *nnzLj += (m - r) * n; + return r; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpanel_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,248 @@ + +/*! @file ilu_cpanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols and + * record the entries with maximum absolute value in each column + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_cdefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *
+ *   Performs a symbolic factorization on a panel of columns [jcol, jcol+w).
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives.
+ *
+ *   The routine returns one list of the supernodal representatives
+ *   in topological order of the dfs that generates them. This list is
+ *   a superset of the topological order of each individual column within
+ *   the panel.
+ *   The location of the first nonzero in each supernodal segment
+ *   (supernodal entry location) is also returned. Each column has a
+ *   separate list for this purpose.
+ *
+ *   Two marker arrays are used for dfs:
+ *     marker[i] == jj, if i was visited during dfs of current column jj;
+ *     marker1[i] >= jcol, if i was visited by earlier columns in this panel;
+ *
+ *   marker: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ * </pre>
+ */ +void +ilu_cpanel_dfs( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + complex *dense, /* out */ + float *amax, /* out - max. abs. value of each column in panel */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ +) +{ + + NCPformat *Astore; + complex *a; + int *asub; + int *xa_begin, *xa_end; + int krep, chperm, chmark, chrep, oldrep, kchild, myfnz; + int k, krow, kmark, kperm; + int xdfs, maxdfs, kpar; + int jj; /* index through each column in the panel */ + int *marker1; /* marker1[jj] >= jcol if vertex jj was visited + by a previous column within this panel. */ + int *repfnz_col; /* start of each column in the panel */ + complex *dense_col; /* start of each column in the panel */ + int nextl_col; /* next available position in panel_lsub[*,jj] */ + int *xsup, *supno; + int *lsub, *xlsub; + float *amax_col; + register double tmp; + + /* Initialize pointers */ + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + marker1 = marker + m; + repfnz_col = repfnz; + dense_col = dense; + amax_col = amax; + *nseg = 0; + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + + /* For each column in the panel */ + for (jj = jcol; jj < jcol + w; jj++) { + nextl_col = (jj - jcol) * m; + +#ifdef CHK_DFS + printf("\npanel col %d: ", jj); +#endif + + *amax_col = 0.0; + /* For each nonz in A[*,jj] do dfs */ + for (k = xa_begin[jj]; k < xa_end[jj]; k++) { + krow = asub[k]; + tmp = slu_c_abs1(&a[k]); + if (tmp > *amax_col) *amax_col = tmp; + dense_col[krow] = a[k]; + kmark = marker[krow]; + if ( kmark == jj ) + continue; /* krow visited before, go to the next nonzero */ + + /* For each unmarked nbr krow of jj + * krow is in L: place it in structure of L[*,jj] + */ + marker[krow] = jj; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + panel_lsub[nextl_col++] = krow; /* krow is indexed into A */ + } + /* + * krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + else { + + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz_col[krep]; + +#ifdef CHK_DFS + printf("krep %d, myfnz %d, perm_r[%d] %d\n", krep, myfnz, krow, kperm); +#endif + if ( myfnz != EMPTY ) { /* Representative visited before */ + if ( myfnz > kperm ) repfnz_col[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz_col[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker[kchild]; + + if ( chmark != jj ) { /* Not reached yet */ + marker[kchild] = jj; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,j] */ + if ( chperm == EMPTY ) { + panel_lsub[nextl_col++] = kchild; + } + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + else { + + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz_col[chrep]; +#ifdef CHK_DFS + printf("chrep %d,myfnz %d,perm_r[%d] %d\n",chrep,myfnz,kchild,chperm); +#endif + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz_col[chrep] = chperm; + } + else { + /* Cont. dfs at snode-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L) */ + parent[krep] = oldrep; + repfnz_col[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } /* else */ + + } /* else */ + + } /* if... */ + + } /* while xdfs < maxdfs */ + + /* krow has no more unexplored nbrs: + * Place snode-rep krep in postorder DFS, if this + * segment is seen for the first time. (Note that + * "repfnz[krep]" may change later.) + * Backtrack dfs to its parent. + */ + if ( marker1[krep] < jcol ) { + segrep[*nseg] = krep; + ++(*nseg); + marker1[krep] = jj; + } + + kpar = parent[krep]; /* Pop stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" pop stack: krep %d,xdfs %d,maxdfs %d: ", krep,xdfs,maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } while ( kpar != EMPTY ); /* do-while - until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonz in A[*,jj] */ + + repfnz_col += m; /* Move to next column */ + dense_col += m; + amax_col++; + + } /* for jj ... */ + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpivotL.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_cpivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,282 @@ + +/*! @file ilu_cpivotL.c + * \brief Performs numerical pivoting + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+
+#include <math.h>
+#include <stdlib.h>
+#include "slu_cdefs.h"
+
+#ifndef SGN
+#define SGN(x) ((x)>=0?1:-1)
+#endif
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *   Performs the numerical pivoting on the current column of L,
+ *   and the CDIV operation.
+ *
+ *   Pivot policy:
+ *   (1) Compute thresh = u * max_(i>=j) abs(A_ij);
+ *   (2) IF user specifies pivot row k and abs(A_kj) >= thresh THEN
+ *	     pivot row = k;
+ *	 ELSE IF abs(A_jj) >= thresh THEN
+ *	     pivot row = j;
+ *	 ELSE
+ *	     pivot row = m;
+ *
+ *   Note: If you absolutely want to use a given pivot order, then set u=0.0.
+ *
+ *   Return value: 0	  success;
+ *		   i > 0  U(i,i) is exactly zero.
+ * </pre>
+ */ + +int +ilu_cpivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by + * perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int diagind, /* diagonal of Pc*A*Pc' */ + int *swap, /* in/out record the row permutation */ + int *iswap, /* in/out inverse of swap, it is the same as + perm_r after the factorization */ + int *marker, /* in */ + int *pivrow, /* in/out, as an input if *usepr!=0 */ + double fill_tol, /* in - fill tolerance of current column + * used for a singular column */ + milu_t milu, /* in */ + complex drop_sum, /* in - computed in ilu_ccopy_to_ucol() + (MILU only) */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + + int n; /* number of columns */ + int fsupc; /* first column in the supernode */ + int nsupc; /* no of columns in the supernode */ + int nsupr; /* no of rows in the supernode */ + int lptr; /* points to the starting subscript of the supernode */ + register int pivptr; + int old_pivptr, diag, ptr0; + register float pivmax, rtemp; + float thresh; + complex temp; + complex *lu_sup_ptr; + complex *lu_col_ptr; + int *lsub_ptr; + register int isub, icol, k, itemp; + int *lsub, *xlsub; + complex *lusup; + int *xlusup; + flops_t *ops = stat->ops; + int info; + complex one = {1.0, 0.0}; + + /* Initialize pointers */ + n = Glu->n; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + lusup = Glu->lusup; + xlusup = Glu->xlusup; + fsupc = (Glu->xsup)[(Glu->supno)[jcol]]; + nsupc = jcol - fsupc; /* excluding jcol; nsupc >= 0 */ + lptr = xlsub[fsupc]; + nsupr = xlsub[fsupc+1] - lptr; + lu_sup_ptr = &lusup[xlusup[fsupc]]; /* start of the current supernode */ + lu_col_ptr = &lusup[xlusup[jcol]]; /* start of jcol in the supernode */ + lsub_ptr = &lsub[lptr]; /* start of row indices of the supernode */ + + /* Determine the largest abs numerical value for partial pivoting; + Also search for user-specified pivot, and diagonal element. */ + pivmax = -1.0; + pivptr = nsupc; + diag = EMPTY; + old_pivptr = nsupc; + ptr0 = EMPTY; + for (isub = nsupc; isub < nsupr; ++isub) { + if (marker[lsub_ptr[isub]] > jcol) + continue; /* do not overlap with a later relaxed supernode */ + + switch (milu) { + case SMILU_1: + c_add(&temp, &lu_col_ptr[isub], &drop_sum); + rtemp = slu_c_abs1(&temp); + break; + case SMILU_2: + case SMILU_3: + /* In this case, drop_sum contains the sum of the abs. 
value */ + rtemp = slu_c_abs1(&lu_col_ptr[isub]); + break; + case SILU: + default: + rtemp = slu_c_abs1(&lu_col_ptr[isub]); + break; + } + if (rtemp > pivmax) { pivmax = rtemp; pivptr = isub; } + if (*usepr && lsub_ptr[isub] == *pivrow) old_pivptr = isub; + if (lsub_ptr[isub] == diagind) diag = isub; + if (ptr0 == EMPTY) ptr0 = isub; + } + + if (milu == SMILU_2 || milu == SMILU_3) pivmax += drop_sum.r; + + /* Test for singularity */ + if (pivmax < 0.0) { +#if SCIPY_SPECIFIC_FIX + ABORT("[0]: matrix is singular"); +#else + fprintf(stderr, "[0]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + if ( pivmax == 0.0 ) { + if (diag != EMPTY) + *pivrow = lsub_ptr[pivptr = diag]; + else if (ptr0 != EMPTY) + *pivrow = lsub_ptr[pivptr = ptr0]; + else { + /* look for the first row which does not + belong to any later supernodes */ + for (icol = jcol; icol < n; icol++) + if (marker[swap[icol]] <= jcol) break; + if (icol >= n) { +#if SCIPY_SPECIFIC_FIX + ABORT("[1]: matrix is singular"); +#else + fprintf(stderr, "[1]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + + *pivrow = swap[icol]; + + /* pick up the pivot row */ + for (isub = nsupc; isub < nsupr; ++isub) + if ( lsub_ptr[isub] == *pivrow ) { pivptr = isub; break; } + } + pivmax = fill_tol; + lu_col_ptr[pivptr].r = pivmax; + lu_col_ptr[pivptr].i = 0.0; + *usepr = 0; +#ifdef DEBUG + printf("[0] ZERO PIVOT: FILL (%d, %d).\n", *pivrow, jcol); + fflush(stdout); +#endif + info =jcol + 1; + } /* if (*pivrow == 0.0) */ + else { + thresh = u * pivmax; + + /* Choose appropriate pivotal element by our policy. */ + if ( *usepr ) { + switch (milu) { + case SMILU_1: + c_add(&temp, &lu_col_ptr[old_pivptr], &drop_sum); + rtemp = slu_c_abs1(&temp); + break; + case SMILU_2: + case SMILU_3: + rtemp = slu_c_abs1(&lu_col_ptr[old_pivptr]) + drop_sum.r; + break; + case SILU: + default: + rtemp = slu_c_abs1(&lu_col_ptr[old_pivptr]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = old_pivptr; + else *usepr = 0; + } + if ( *usepr == 0 ) { + /* Use diagonal pivot? */ + if ( diag >= 0 ) { /* diagonal exists */ + switch (milu) { + case SMILU_1: + c_add(&temp, &lu_col_ptr[diag], &drop_sum); + rtemp = slu_c_abs1(&temp); + break; + case SMILU_2: + case SMILU_3: + rtemp = slu_c_abs1(&lu_col_ptr[diag]) + drop_sum.r; + break; + case SILU: + default: + rtemp = slu_c_abs1(&lu_col_ptr[diag]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = diag; + } + *pivrow = lsub_ptr[pivptr]; + } + info = 0; + + /* Reset the diagonal */ + switch (milu) { + case SMILU_1: + c_add(&lu_col_ptr[pivptr], &lu_col_ptr[pivptr], &drop_sum); + break; + case SMILU_2: + case SMILU_3: + temp = c_sgn(&lu_col_ptr[pivptr]); + cc_mult(&temp, &temp, &drop_sum); + c_add(&lu_col_ptr[pivptr], &lu_col_ptr[pivptr], &drop_sum); + break; + case SILU: + default: + break; + } + + } /* else */ + + /* Record pivot row */ + perm_r[*pivrow] = jcol; + if (jcol < n - 1) { + register int t1, t2, t; + t1 = iswap[*pivrow]; t2 = jcol; + if (t1 != t2) { + t = swap[t1]; swap[t1] = swap[t2]; swap[t2] = t; + t1 = swap[t1]; t2 = t; + t = iswap[t1]; iswap[t1] = iswap[t2]; iswap[t2] = t; + } + } /* if (jcol < n - 1) */ + + /* Interchange row subscripts */ + if ( pivptr != nsupc ) { + itemp = lsub_ptr[pivptr]; + lsub_ptr[pivptr] = lsub_ptr[nsupc]; + lsub_ptr[nsupc] = itemp; + + /* Interchange numerical values as well, for the whole snode, such + * that L is indexed the same way as A. 
+ */ + for (icol = 0; icol <= nsupc; icol++) { + itemp = pivptr + icol * nsupr; + temp = lu_sup_ptr[itemp]; + lu_sup_ptr[itemp] = lu_sup_ptr[nsupc + icol*nsupr]; + lu_sup_ptr[nsupc + icol*nsupr] = temp; + } + } /* if */ + + /* cdiv operation */ + ops[FACT] += 10 * (nsupr - nsupc); + c_div(&temp, &one, &lu_col_ptr[nsupc]); + for (k = nsupc+1; k < nsupr; k++) + cc_mult(&lu_col_ptr[k], &lu_col_ptr[k], &temp); + + return info; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_csnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_csnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_csnode_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_csnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,90 @@ + +/*! @file ilu_csnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_cdefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_csnode_dfs() - Determine the union of the row structures of those
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore,
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when run out of memory.
+ * </pre>
+ */ + +int +ilu_csnode_dfs( + const int jcol, /* in - start of the supernode */ + const int kcol, /* in - end of the supernode */ + const int *asub, /* in */ + const int *xa_begin, /* in */ + const int *xa_end, /* in */ + int *marker, /* modified */ + GlobalLU_t *Glu /* modified */ + ) +{ + + register int i, k, nextl; + int nsuper, krow, kmark, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + int nzlmax; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + nsuper = ++supno[jcol]; /* Next available supernode number */ + nextl = xlsub[jcol]; + + for (i = jcol; i <= kcol; i++) + { + /* For each nonzero in A[*,i] */ + for (k = xa_begin[i]; k < xa_end[i]; k++) + { + krow = asub[k]; + kmark = marker[krow]; + if ( kmark != kcol ) + { /* First time visit krow */ + marker[krow] = kcol; + lsub[nextl++] = krow; + if ( nextl >= nzlmax ) + { + if ( (mem_error = cLUMemXpand(jcol, nextl, LSUB, &nzlmax, + Glu)) != 0) + return (mem_error); + lsub = Glu->lsub; + } + } + } + supno[i] = nsuper; + } + + /* Supernode > 1 */ + if ( jcol < kcol ) + for (i = jcol+1; i <= kcol; i++) xlsub[i] = nextl; + + xsup[nsuper+1] = kcol + 1; + supno[kcol+1] = nsuper; + xlsub[kcol+1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcolumn_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcolumn_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcolumn_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,258 @@ + +/*! @file ilu_dcolumn_dfs.c + * \brief Performs a symbolic factorization + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+*/
+
+#include "slu_ddefs.h"
+
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *   ILU_DCOLUMN_DFS performs a symbolic factorization on column jcol, and
+ *   decide the supernode boundary.
+ *
+ *   This routine does not use numeric values, but only use the RHS
+ *   row indices to start the dfs.
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives. The routine returns a list of such supernodal
+ *   representatives in topological order of the dfs that generates them.
+ *   The location of the first nonzero in each such supernodal segment
+ *   (supernodal entry location) is also returned.
+ *
+ * Local parameters
+ * ================
+ *   nseg: no of segments in current U[*,j]
+ *   jsuper: jsuper=EMPTY if column j does not belong to the same
+ *	supernode as j-1. Otherwise, jsuper=nsuper.
+ *
+ *   marker2: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ *
+ * Return value
+ * ============
+ *     0  success;
+ *   > 0  number of bytes allocated when run out of space.
+ * </pre>
+ */ +int +ilu_dcolumn_dfs( + const int m, /* in - number of rows in the matrix */ + const int jcol, /* in */ + int *perm_r, /* in */ + int *nseg, /* modified - with new segments appended */ + int *lsub_col, /* in - defines the RHS vector to start the + dfs */ + int *segrep, /* modified - with new segments appended */ + int *repfnz, /* modified */ + int *marker, /* modified */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + + int jcolp1, jcolm1, jsuper, nsuper, nextl; + int k, krep, krow, kmark, kperm; + int *marker2; /* Used for small panel LU */ + int fsupc; /* First column of a snode */ + int myfnz; /* First nonz column of a U-segment */ + int chperm, chmark, chrep, kchild; + int xdfs, maxdfs, kpar, oldrep; + int jptr, jm1ptr; + int ito, ifrom; /* Used to compress row subscripts */ + int mem_error; + int *xsup, *supno, *lsub, *xlsub; + int nzlmax; + static int first = 1, maxsuper; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + if ( first ) { + maxsuper = sp_ienv(3); + first = 0; + } + jcolp1 = jcol + 1; + jcolm1 = jcol - 1; + nsuper = supno[jcol]; + jsuper = nsuper; + nextl = xlsub[jcol]; + marker2 = &marker[2*m]; + + + /* For each nonzero in A[*,jcol] do dfs */ + for (k = 0; lsub_col[k] != EMPTY; k++) { + + krow = lsub_col[k]; + lsub_col[k] = EMPTY; + kmark = marker2[krow]; + + /* krow was visited before, go to the next nonzero */ + if ( kmark == jcol ) continue; + + /* For each unmarked nbr krow of jcol + * krow is in L: place it in structure of L[*,jcol] + */ + marker2[krow] = jcol; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + lsub[nextl++] = krow; /* krow is indexed into A */ + if ( nextl >= nzlmax ) { + if ((mem_error = dLUMemXpand(jcol, nextl, LSUB, &nzlmax, Glu))) + return (mem_error); + lsub = Glu->lsub; + } + if ( kmark != jcolm1 ) jsuper = EMPTY;/* Row index subset testing */ + } else { + /* krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz[krep]; + + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > kperm ) repfnz[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker2[kchild]; + + if ( chmark != jcol ) { /* Not reached yet */ + marker2[kchild] = jcol; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,k] */ + if ( chperm == EMPTY ) { + lsub[nextl++] = kchild; + if ( nextl >= nzlmax ) { + if ( (mem_error = dLUMemXpand(jcol,nextl, + LSUB,&nzlmax,Glu)) ) + return (mem_error); + lsub = Glu->lsub; + } + if ( chmark != jcolm1 ) jsuper = EMPTY; + } else { + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz[chrep]; + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz[chrep] = chperm; + } else { + /* Continue dfs at super-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L^t) */ + parent[krep] = oldrep; + repfnz[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + } /* else */ + + } /* else */ + + } /* if */ + + } /* while */ + + /* krow has no more unexplored nbrs; + * place supernode-rep krep in postorder DFS. + * backtrack dfs to its parent + */ + segrep[*nseg] = krep; + ++(*nseg); + kpar = parent[krep]; /* Pop from stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + + } while ( kpar != EMPTY ); /* Until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonzero ... */ + + /* Check to see if j belongs in the same supernode as j-1 */ + if ( jcol == 0 ) { /* Do nothing for column 0 */ + nsuper = supno[0] = 0; + } else { + fsupc = xsup[nsuper]; + jptr = xlsub[jcol]; /* Not compressed yet */ + jm1ptr = xlsub[jcolm1]; + + if ( (nextl-jptr != jptr-jm1ptr-1) ) jsuper = EMPTY; + + /* Always start a new supernode for a singular column */ + if ( nextl == jptr ) jsuper = EMPTY; + + /* Make sure the number of columns in a supernode doesn't + exceed threshold. */ + if ( jcol - fsupc >= maxsuper ) jsuper = EMPTY; + + /* If jcol starts a new supernode, reclaim storage space in + * lsub from the previous supernode. Note we only store + * the subscript set of the first columns of the supernode. + */ + if ( jsuper == EMPTY ) { /* starts a new supernode */ + if ( (fsupc < jcolm1) ) { /* >= 2 columns in nsuper */ +#ifdef CHK_COMPRESS + printf(" Compress lsub[] at super %d-%d\n", fsupc, jcolm1); +#endif + ito = xlsub[fsupc+1]; + xlsub[jcolm1] = ito; + xlsub[jcol] = ito; + for (ifrom = jptr; ifrom < nextl; ++ifrom, ++ito) + lsub[ito] = lsub[ifrom]; + nextl = ito; + } + nsuper++; + supno[jcol] = nsuper; + } /* if a new supernode */ + + } /* else: jcol > 0 */ + + /* Tidy up the pointers before exit */ + xsup[nsuper+1] = jcolp1; + supno[jcolp1] = nsuper; + xlsub[jcolp1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcopy_to_ucol.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcopy_to_ucol.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dcopy_to_ucol.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,199 @@ + +/*! @file ilu_dcopy_to_ucol.c + * \brief Copy a computed column of U to the compressed data structure + * and drop some small entries + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */ + +#include "slu_ddefs.h" + +#ifdef DEBUG +int num_drop_U; +#endif + +static double *A; /* used in _compare_ only */ +static int _compare_(const void *a, const void *b) +{ + register int *x = (int *)a, *y = (int *)b; + register double xx = fabs(A[*x]), yy = fabs(A[*y]); + if (xx > yy) return -1; + else if (xx < yy) return 1; + else return 0; +} + + +int +ilu_dcopy_to_ucol( + int jcol, /* in */ + int nseg, /* in */ + int *segrep, /* in */ + int *repfnz, /* in */ + int *perm_r, /* in */ + double *dense, /* modified - reset to zero on return */ + int drop_rule,/* in */ + milu_t milu, /* in */ + double drop_tol, /* in */ + int quota, /* maximum nonzero entries allowed */ + double *sum, /* out - the sum of dropped entries */ + int *nnzUj, /* in - out */ + GlobalLU_t *Glu, /* modified */ + int *work /* working space with minimum size n, + * used by the second dropping rule */ + ) +{ +/* + * Gather from SPA dense[*] to global ucol[*]. + */ + int ksub, krep, ksupno; + int i, k, kfnz, segsze; + int fsupc, isub, irow; + int jsupno, nextu; + int new_next, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + double *ucol; + int *usub, *xusub; + int nzumax; + int m; /* number of entries in the nonzero U-segments */ + register double d_max = 0.0, d_min = 1.0 / dlamch_("Safe minimum"); + register double tmp; + double zero = 0.0; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + ucol = Glu->ucol; + usub = Glu->usub; + xusub = Glu->xusub; + nzumax = Glu->nzumax; + + *sum = zero; + if (drop_rule == NODROP) { + drop_tol = -1.0, quota = Glu->n; + } + + jsupno = supno[jcol]; + nextu = xusub[jcol]; + k = nseg - 1; + for (ksub = 0; ksub < nseg; ksub++) { + krep = segrep[k--]; + ksupno = supno[krep]; + + if ( ksupno != jsupno ) { /* Should go into ucol[] */ + kfnz = repfnz[krep]; + if ( kfnz != EMPTY ) { /* Nonzero U-segment */ + + fsupc = xsup[ksupno]; + isub = xlsub[fsupc] + kfnz - fsupc; + segsze = krep - kfnz + 1; + + new_next = nextu + segsze; + while ( new_next > nzumax ) { + if ((mem_error = dLUMemXpand(jcol, nextu, UCOL, &nzumax, + Glu)) != 0) + return (mem_error); + ucol = Glu->ucol; + if ((mem_error = dLUMemXpand(jcol, nextu, USUB, &nzumax, + Glu)) != 0) + return (mem_error); + usub = Glu->usub; + lsub = Glu->lsub; + } + + for (i = 0; i < segsze; i++) { + irow = lsub[isub++]; + tmp = fabs(dense[irow]); + + /* first dropping rule */ + if (quota > 0 && tmp >= drop_tol) { + if (tmp > d_max) d_max = tmp; + if (tmp < d_min) d_min = tmp; + usub[nextu] = perm_r[irow]; + ucol[nextu] = dense[irow]; + nextu++; + } else { + switch (milu) { + case SMILU_1: + case SMILU_2: + *sum += dense[irow]; + break; + case SMILU_3: + /* *sum += fabs(dense[irow]);*/ + *sum += tmp; + break; + case SILU: + default: + break; + } +#ifdef DEBUG + num_drop_U++; +#endif + } + dense[irow] = zero; + } + + } + + } + + } /* for each segment... 
*/ + + xusub[jcol + 1] = nextu; /* Close U[*,jcol] */ + m = xusub[jcol + 1] - xusub[jcol]; + + /* second dropping rule */ + if (drop_rule & DROP_SECONDARY && m > quota) { + register double tol = d_max; + register int m0 = xusub[jcol] + m - 1; + + if (quota > 0) { + if (drop_rule & DROP_INTERP) { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / m); + } else { + A = &ucol[xusub[jcol]]; + for (i = 0; i < m; i++) work[i] = i; + qsort(work, m, sizeof(int), _compare_); + tol = fabs(usub[xusub[jcol] + work[quota]]); + } + } + for (i = xusub[jcol]; i <= m0; ) { + if (fabs(ucol[i]) <= tol) { + switch (milu) { + case SMILU_1: + case SMILU_2: + *sum += ucol[i]; + break; + case SMILU_3: + *sum += fabs(ucol[i]); + break; + case SILU: + default: + break; + } + ucol[i] = ucol[m0]; + usub[i] = usub[m0]; + m0--; + m--; +#ifdef DEBUG + num_drop_U++; +#endif + xusub[jcol + 1]--; + continue; + } + i++; + } + } + + if (milu == SMILU_2) *sum = fabs(*sum); + + *nnzUj += m; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ddrop_row.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ddrop_row.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ddrop_row.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ddrop_row.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,307 @@ + +/*! @file ilu_ddrop_row.c + * \brief Drop small rows from L + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * <\pre>
+ */
+
+#include <math.h>
+#include <stdlib.h>
+#include "slu_ddefs.h"
+
+extern void dswap_(int *, double [], int *, double [], int *);
+extern void daxpy_(int *, double *, double [], int *, double [], int *);
+
+static double *A;  /* used in _compare_ only */
+static int _compare_(const void *a, const void *b)
+{
+    register int *x = (int *)a, *y = (int *)b;
+    if (A[*x] - A[*y] > 0.0) return -1;
+    else if (A[*x] - A[*y] < 0.0) return 1;
+    else return 0;
+}
+
+/*! \brief
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_ddrop_row() - Drop some small rows from the previous
+ *    supernode (L-part only).
+ * </pre>
+ */ +int ilu_ddrop_row( + superlu_options_t *options, /* options */ + int first, /* index of the first column in the supernode */ + int last, /* index of the last column in the supernode */ + double drop_tol, /* dropping parameter */ + int quota, /* maximum nonzero entries allowed */ + int *nnzLj, /* in/out number of nonzeros in L(:, 1:last) */ + double *fill_tol, /* in/out - on exit, fill_tol=-num_zero_pivots, + * does not change if options->ILU_MILU != SMILU1 */ + GlobalLU_t *Glu, /* modified */ + double dwork[], /* working space with minimum size last-first+1 */ + int iwork[], /* working space with minimum size m - n, + * used by the second dropping rule */ + int lastc /* if lastc == 0, there is nothing after the + * working supernode [first:last]; + * if lastc == 1, there is one more column after + * the working supernode. */ ) +{ + register int i, j, k, m1; + register int nzlc; /* number of nonzeros in column last+1 */ + register int xlusup_first, xlsub_first; + int m, n; /* m x n is the size of the supernode */ + int r = 0; /* number of dropped rows */ + register double *temp; + register double *lusup = Glu->lusup; + register int *lsub = Glu->lsub; + register int *xlsub = Glu->xlsub; + register int *xlusup = Glu->xlusup; + register double d_max = 0.0, d_min = 1.0; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + norm_t nrm = options->ILU_Norm; + double zero = 0.0; + double one = 1.0; + double none = -1.0; + int inc_diag; /* inc_diag = m + 1 */ + int nzp = 0; /* number of zero pivots */ + + xlusup_first = xlusup[first]; + xlsub_first = xlsub[first]; + m = xlusup[first + 1] - xlusup_first; + n = last - first + 1; + m1 = m - 1; + inc_diag = m + 1; + nzlc = lastc ? (xlusup[last + 2] - xlusup[last + 1]) : 0; + temp = dwork - n; + + /* Quick return if nothing to do. 
*/ + if (m == 0 || m == n || drop_rule == NODROP) + { + *nnzLj += m * n; + return 0; + } + + /* basic dropping: ILU(tau) */ + for (i = n; i <= m1; ) + { + /* the average abs value of ith row */ + switch (nrm) + { + case ONE_NORM: + temp[i] = dasum_(&n, &lusup[xlusup_first + i], &m) / (double)n; + break; + case TWO_NORM: + temp[i] = dnrm2_(&n, &lusup[xlusup_first + i], &m) + / sqrt((double)n); + break; + case INF_NORM: + default: + k = idamax_(&n, &lusup[xlusup_first + i], &m) - 1; + temp[i] = fabs(lusup[xlusup_first + i + m * k]); + break; + } + + /* drop small entries due to drop_tol */ + if (drop_rule & DROP_BASIC && temp[i] < drop_tol) + { + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + daxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m] += + fabs(lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + dcopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + dswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m] = + fabs(lusup[xlusup_first + m1 + j * m]); + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + continue; + } /* if dropping */ + else + { + if (temp[i] > d_max) d_max = temp[i]; + if (temp[i] < d_min) d_min = temp[i]; + } + i++; + } /* for */ + + /* Secondary dropping: drop more rows according to the quota. */ + quota = ceil((double)quota / (double)n); + if (drop_rule & DROP_SECONDARY && m - r > quota) + { + register double tol = d_max; + + /* Calculate the second dropping tolerance */ + if (quota > n) + { + if (drop_rule & DROP_INTERP) /* by interpolation */ + { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / (m - n - r)); + } + else /* by quick sort */ + { + register int *itemp = iwork - n; + A = temp; + for (i = n; i <= m1; i++) itemp[i] = i; + qsort(iwork, m1 - n + 1, sizeof(int), _compare_); + tol = temp[iwork[quota]]; + } + } + + for (i = n; i <= m1; ) + { + if (temp[i] <= tol) + { + register int j; + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + daxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m] += + fabs(lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + dcopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + dswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m] = + fabs(lusup[xlusup_first + m1 + j * m]); + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + temp[i] = temp[m1]; + + continue; + } + i++; + + } /* for */ + + } /* if secondary dropping */ + + for (i = n; i < m; i++) temp[i] = 0.0; + + if (r == 0) + { + *nnzLj += m * n; + return 0; + } + + /* add dropped entries to the diagnal */ + if (milu != SILU) + { + register int j; + 
double t; + for (j = 0; j < n; j++) + { + t = lusup[xlusup_first + (m - 1) + j * m] * MILU_ALPHA; + switch (milu) + { + case SMILU_1: + if (t != none) { + lusup[xlusup_first + j * inc_diag] *= (one + t); + } + else + { + lusup[xlusup_first + j * inc_diag] *= *fill_tol; +#ifdef DEBUG + printf("[1] ZERO PIVOT: FILL col %d.\n", first + j); + fflush(stdout); +#endif + nzp++; + } + break; + case SMILU_2: + lusup[xlusup_first + j * inc_diag] *= (1.0 + fabs(t)); + break; + case SMILU_3: + lusup[xlusup_first + j * inc_diag] *= (one + t); + break; + case SILU: + default: + break; + } + } + if (nzp > 0) *fill_tol = -nzp; + } + + /* Remove dropped entries from the memory and fix the pointers. */ + m1 = m - r; + for (j = 1; j < n; j++) + { + register int tmp1, tmp2; + tmp1 = xlusup_first + j * m1; + tmp2 = xlusup_first + j * m; + for (i = 0; i < m1; i++) + lusup[i + tmp1] = lusup[i + tmp2]; + } + for (i = 0; i < nzlc; i++) + lusup[xlusup_first + i + n * m1] = lusup[xlusup_first + i + n * m]; + for (i = 0; i < nzlc; i++) + lsub[xlsub[last + 1] - r + i] = lsub[xlsub[last + 1] + i]; + for (i = first + 1; i <= last + 1; i++) + { + xlusup[i] -= r * (i - first); + xlsub[i] -= r; + } + if (lastc) + { + xlusup[last + 2] -= r * n; + xlsub[last + 2] -= r; + } + + *nnzLj += (m - r) * n; + return r; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpanel_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,248 @@ + +/*! @file ilu_dpanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols and + * record the entries with maximum absolute value in each column + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_ddefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *
+ *   Performs a symbolic factorization on a panel of columns [jcol, jcol+w).
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives.
+ *
+ *   The routine returns one list of the supernodal representatives
+ *   in topological order of the dfs that generates them. This list is
+ *   a superset of the topological order of each individual column within
+ *   the panel.
+ *   The location of the first nonzero in each supernodal segment
+ *   (supernodal entry location) is also returned. Each column has a
+ *   separate list for this purpose.
+ *
+ *   Two marker arrays are used for dfs:
+ *     marker[i] == jj, if i was visited during dfs of current column jj;
+ *     marker1[i] >= jcol, if i was visited by earlier columns in this panel;
+ *
+ *   marker: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ * </pre>
+ */ +void +ilu_dpanel_dfs( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + double *dense, /* out */ + double *amax, /* out - max. abs. value of each column in panel */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ +) +{ + + NCPformat *Astore; + double *a; + int *asub; + int *xa_begin, *xa_end; + int krep, chperm, chmark, chrep, oldrep, kchild, myfnz; + int k, krow, kmark, kperm; + int xdfs, maxdfs, kpar; + int jj; /* index through each column in the panel */ + int *marker1; /* marker1[jj] >= jcol if vertex jj was visited + by a previous column within this panel. */ + int *repfnz_col; /* start of each column in the panel */ + double *dense_col; /* start of each column in the panel */ + int nextl_col; /* next available position in panel_lsub[*,jj] */ + int *xsup, *supno; + int *lsub, *xlsub; + double *amax_col; + register double tmp; + + /* Initialize pointers */ + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + marker1 = marker + m; + repfnz_col = repfnz; + dense_col = dense; + amax_col = amax; + *nseg = 0; + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + + /* For each column in the panel */ + for (jj = jcol; jj < jcol + w; jj++) { + nextl_col = (jj - jcol) * m; + +#ifdef CHK_DFS + printf("\npanel col %d: ", jj); +#endif + + *amax_col = 0.0; + /* For each nonz in A[*,jj] do dfs */ + for (k = xa_begin[jj]; k < xa_end[jj]; k++) { + krow = asub[k]; + tmp = fabs(a[k]); + if (tmp > *amax_col) *amax_col = tmp; + dense_col[krow] = a[k]; + kmark = marker[krow]; + if ( kmark == jj ) + continue; /* krow visited before, go to the next nonzero */ + + /* For each unmarked nbr krow of jj + * krow is in L: place it in structure of L[*,jj] + */ + marker[krow] = jj; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + panel_lsub[nextl_col++] = krow; /* krow is indexed into A */ + } + /* + * krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + else { + + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz_col[krep]; + +#ifdef CHK_DFS + printf("krep %d, myfnz %d, perm_r[%d] %d\n", krep, myfnz, krow, kperm); +#endif + if ( myfnz != EMPTY ) { /* Representative visited before */ + if ( myfnz > kperm ) repfnz_col[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz_col[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker[kchild]; + + if ( chmark != jj ) { /* Not reached yet */ + marker[kchild] = jj; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,j] */ + if ( chperm == EMPTY ) { + panel_lsub[nextl_col++] = kchild; + } + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + else { + + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz_col[chrep]; +#ifdef CHK_DFS + printf("chrep %d,myfnz %d,perm_r[%d] %d\n",chrep,myfnz,kchild,chperm); +#endif + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz_col[chrep] = chperm; + } + else { + /* Cont. dfs at snode-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L) */ + parent[krep] = oldrep; + repfnz_col[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } /* else */ + + } /* else */ + + } /* if... */ + + } /* while xdfs < maxdfs */ + + /* krow has no more unexplored nbrs: + * Place snode-rep krep in postorder DFS, if this + * segment is seen for the first time. (Note that + * "repfnz[krep]" may change later.) + * Backtrack dfs to its parent. + */ + if ( marker1[krep] < jcol ) { + segrep[*nseg] = krep; + ++(*nseg); + marker1[krep] = jj; + } + + kpar = parent[krep]; /* Pop stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" pop stack: krep %d,xdfs %d,maxdfs %d: ", krep,xdfs,maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } while ( kpar != EMPTY ); /* do-while - until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonz in A[*,jj] */ + + repfnz_col += m; /* Move to next column */ + dense_col += m; + amax_col++; + + } /* for jj ... */ + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpivotL.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dpivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,274 @@ + +/*! @file ilu_dpivotL.c + * \brief Performs numerical pivoting + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+
+#include <math.h>
+#include <stdlib.h>
+#include "slu_ddefs.h"
+
+#ifndef SGN
+#define SGN(x) ((x)>=0?1:-1)
+#endif
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *   Performs the numerical pivoting on the current column of L,
+ *   and the CDIV operation.
+ *
+ *   Pivot policy:
+ *   (1) Compute thresh = u * max_(i>=j) abs(A_ij);
+ *   (2) IF user specifies pivot row k and abs(A_kj) >= thresh THEN
+ *	     pivot row = k;
+ *	 ELSE IF abs(A_jj) >= thresh THEN
+ *	     pivot row = j;
+ *	 ELSE
+ *	     pivot row = m;
+ *
+ *   Note: If you absolutely want to use a given pivot order, then set u=0.0.
+ *
+ *   Return value: 0	  success;
+ *		   i > 0  U(i,i) is exactly zero.
+ * </pre>
+ */ + +int +ilu_dpivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by + * perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int diagind, /* diagonal of Pc*A*Pc' */ + int *swap, /* in/out record the row permutation */ + int *iswap, /* in/out inverse of swap, it is the same as + perm_r after the factorization */ + int *marker, /* in */ + int *pivrow, /* in/out, as an input if *usepr!=0 */ + double fill_tol, /* in - fill tolerance of current column + * used for a singular column */ + milu_t milu, /* in */ + double drop_sum, /* in - computed in ilu_dcopy_to_ucol() + (MILU only) */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + + int n; /* number of columns */ + int fsupc; /* first column in the supernode */ + int nsupc; /* no of columns in the supernode */ + int nsupr; /* no of rows in the supernode */ + int lptr; /* points to the starting subscript of the supernode */ + register int pivptr; + int old_pivptr, diag, ptr0; + register double pivmax, rtemp; + double thresh; + double temp; + double *lu_sup_ptr; + double *lu_col_ptr; + int *lsub_ptr; + register int isub, icol, k, itemp; + int *lsub, *xlsub; + double *lusup; + int *xlusup; + flops_t *ops = stat->ops; + int info; + + /* Initialize pointers */ + n = Glu->n; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + lusup = Glu->lusup; + xlusup = Glu->xlusup; + fsupc = (Glu->xsup)[(Glu->supno)[jcol]]; + nsupc = jcol - fsupc; /* excluding jcol; nsupc >= 0 */ + lptr = xlsub[fsupc]; + nsupr = xlsub[fsupc+1] - lptr; + lu_sup_ptr = &lusup[xlusup[fsupc]]; /* start of the current supernode */ + lu_col_ptr = &lusup[xlusup[jcol]]; /* start of jcol in the supernode */ + lsub_ptr = &lsub[lptr]; /* start of row indices of the supernode */ + + /* Determine the largest abs numerical value for partial pivoting; + Also search for user-specified pivot, and diagonal element. */ + pivmax = -1.0; + pivptr = nsupc; + diag = EMPTY; + old_pivptr = nsupc; + ptr0 = EMPTY; + for (isub = nsupc; isub < nsupr; ++isub) { + if (marker[lsub_ptr[isub]] > jcol) + continue; /* do not overlap with a later relaxed supernode */ + + switch (milu) { + case SMILU_1: + rtemp = fabs(lu_col_ptr[isub] + drop_sum); + break; + case SMILU_2: + case SMILU_3: + /* In this case, drop_sum contains the sum of the abs. 
value */ + rtemp = fabs(lu_col_ptr[isub]); + break; + case SILU: + default: + rtemp = fabs(lu_col_ptr[isub]); + break; + } + if (rtemp > pivmax) { pivmax = rtemp; pivptr = isub; } + if (*usepr && lsub_ptr[isub] == *pivrow) old_pivptr = isub; + if (lsub_ptr[isub] == diagind) diag = isub; + if (ptr0 == EMPTY) ptr0 = isub; + } + + if (milu == SMILU_2 || milu == SMILU_3) pivmax += drop_sum; + + /* Test for singularity */ + if (pivmax < 0.0) { +#if SCIPY_SPECIFIC_FIX + ABORT("[0]: matrix is singular"); +#else + fprintf(stderr, "[0]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + if ( pivmax == 0.0 ) { + if (diag != EMPTY) + *pivrow = lsub_ptr[pivptr = diag]; + else if (ptr0 != EMPTY) + *pivrow = lsub_ptr[pivptr = ptr0]; + else { + /* look for the first row which does not + belong to any later supernodes */ + for (icol = jcol; icol < n; icol++) + if (marker[swap[icol]] <= jcol) break; + if (icol >= n) { +#if SCIPY_SPECIFIC_FIX + ABORT("[1]: matrix is singular"); +#else + fprintf(stderr, "[1]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + + *pivrow = swap[icol]; + + /* pick up the pivot row */ + for (isub = nsupc; isub < nsupr; ++isub) + if ( lsub_ptr[isub] == *pivrow ) { pivptr = isub; break; } + } + pivmax = fill_tol; + lu_col_ptr[pivptr] = pivmax; + *usepr = 0; +#ifdef DEBUG + printf("[0] ZERO PIVOT: FILL (%d, %d).\n", *pivrow, jcol); + fflush(stdout); +#endif + info =jcol + 1; + } /* if (*pivrow == 0.0) */ + else { + thresh = u * pivmax; + + /* Choose appropriate pivotal element by our policy. */ + if ( *usepr ) { + switch (milu) { + case SMILU_1: + rtemp = fabs(lu_col_ptr[old_pivptr] + drop_sum); + break; + case SMILU_2: + case SMILU_3: + rtemp = fabs(lu_col_ptr[old_pivptr]) + drop_sum; + break; + case SILU: + default: + rtemp = fabs(lu_col_ptr[old_pivptr]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = old_pivptr; + else *usepr = 0; + } + if ( *usepr == 0 ) { + /* Use diagonal pivot? */ + if ( diag >= 0 ) { /* diagonal exists */ + switch (milu) { + case SMILU_1: + rtemp = fabs(lu_col_ptr[diag] + drop_sum); + break; + case SMILU_2: + case SMILU_3: + rtemp = fabs(lu_col_ptr[diag]) + drop_sum; + break; + case SILU: + default: + rtemp = fabs(lu_col_ptr[diag]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = diag; + } + *pivrow = lsub_ptr[pivptr]; + } + info = 0; + + /* Reset the diagonal */ + switch (milu) { + case SMILU_1: + lu_col_ptr[pivptr] += drop_sum; + break; + case SMILU_2: + case SMILU_3: + lu_col_ptr[pivptr] += SGN(lu_col_ptr[pivptr]) * drop_sum; + break; + case SILU: + default: + break; + } + + } /* else */ + + /* Record pivot row */ + perm_r[*pivrow] = jcol; + if (jcol < n - 1) { + register int t1, t2, t; + t1 = iswap[*pivrow]; t2 = jcol; + if (t1 != t2) { + t = swap[t1]; swap[t1] = swap[t2]; swap[t2] = t; + t1 = swap[t1]; t2 = t; + t = iswap[t1]; iswap[t1] = iswap[t2]; iswap[t2] = t; + } + } /* if (jcol < n - 1) */ + + /* Interchange row subscripts */ + if ( pivptr != nsupc ) { + itemp = lsub_ptr[pivptr]; + lsub_ptr[pivptr] = lsub_ptr[nsupc]; + lsub_ptr[nsupc] = itemp; + + /* Interchange numerical values as well, for the whole snode, such + * that L is indexed the same way as A. 
+ */ + for (icol = 0; icol <= nsupc; icol++) { + itemp = pivptr + icol * nsupr; + temp = lu_sup_ptr[itemp]; + lu_sup_ptr[itemp] = lu_sup_ptr[nsupc + icol*nsupr]; + lu_sup_ptr[nsupc + icol*nsupr] = temp; + } + } /* if */ + + /* cdiv operation */ + ops[FACT] += nsupr - nsupc; + temp = 1.0 / lu_col_ptr[nsupc]; + for (k = nsupc+1; k < nsupr; k++) lu_col_ptr[k] *= temp; + + return info; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dsnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dsnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dsnode_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_dsnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,90 @@ + +/*! @file ilu_dsnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_ddefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_dsnode_dfs() - Determine the union of the row structures of those
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore,
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when run out of memory.
+ * </pre>
+ */ + +int +ilu_dsnode_dfs( + const int jcol, /* in - start of the supernode */ + const int kcol, /* in - end of the supernode */ + const int *asub, /* in */ + const int *xa_begin, /* in */ + const int *xa_end, /* in */ + int *marker, /* modified */ + GlobalLU_t *Glu /* modified */ + ) +{ + + register int i, k, nextl; + int nsuper, krow, kmark, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + int nzlmax; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + nsuper = ++supno[jcol]; /* Next available supernode number */ + nextl = xlsub[jcol]; + + for (i = jcol; i <= kcol; i++) + { + /* For each nonzero in A[*,i] */ + for (k = xa_begin[i]; k < xa_end[i]; k++) + { + krow = asub[k]; + kmark = marker[krow]; + if ( kmark != kcol ) + { /* First time visit krow */ + marker[krow] = kcol; + lsub[nextl++] = krow; + if ( nextl >= nzlmax ) + { + if ( (mem_error = dLUMemXpand(jcol, nextl, LSUB, &nzlmax, + Glu)) != 0) + return (mem_error); + lsub = Glu->lsub; + } + } + } + supno[i] = nsuper; + } + + /* Supernode > 1 */ + if ( jcol < kcol ) + for (i = jcol+1; i <= kcol; i++) xlsub[i] = nextl; + + xsup[nsuper+1] = kcol + 1; + supno[kcol+1] = nsuper; + xlsub[kcol+1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_heap_relax_snode.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_heap_relax_snode.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_heap_relax_snode.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_heap_relax_snode.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,120 @@ +/*! @file ilu_heap_relax_snode.c + * \brief Identify the initial relaxed supernodes + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 1, 2009
+ * </pre>
+ */
+
+#include "slu_ddefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_heap_relax_snode() - Identify the initial relaxed supernodes,
+ *    assuming that the matrix has been reordered according to the postorder
+ *    of the etree.
+ * </pre>
+ */ + +void +ilu_heap_relax_snode ( + const int n, + int *et, /* column elimination tree */ + const int relax_columns, /* max no of columns allowed in a + relaxed snode */ + int *descendants, /* no of descendants of each node + in the etree */ + int *relax_end, /* last column in a supernode + * if j-th column starts a relaxed + * supernode, relax_end[j] represents + * the last column of this supernode */ + int *relax_fsupc /* first column in a supernode + * relax_fsupc[j] represents the first + * column of j-th supernode */ + ) +{ + register int i, j, k, l, f, parent; + register int snode_start; /* beginning of a snode */ + int *et_save, *post, *inv_post, *iwork; + int nsuper_et = 0, nsuper_et_post = 0; + + /* The etree may not be postordered, but is heap ordered. */ + + iwork = (int*) intMalloc(3*n+2); + if ( !iwork ) ABORT("SUPERLU_MALLOC fails for iwork[]"); + inv_post = iwork + n+1; + et_save = inv_post + n+1; + + /* Post order etree */ + post = (int *) TreePostorder(n, et); + for (i = 0; i < n+1; ++i) inv_post[post[i]] = i; + + /* Renumber etree in postorder */ + for (i = 0; i < n; ++i) { + iwork[post[i]] = post[et[i]]; + et_save[i] = et[i]; /* Save the original etree */ + } + for (i = 0; i < n; ++i) et[i] = iwork[i]; + + /* Compute the number of descendants of each node in the etree */ + ifill (relax_end, n, EMPTY); + ifill (relax_fsupc, n, EMPTY); + for (j = 0; j < n; j++) descendants[j] = 0; + for (j = 0; j < n; j++) { + parent = et[j]; + if ( parent != n ) /* not the dummy root */ + descendants[parent] += descendants[j] + 1; + } + + /* Identify the relaxed supernodes by postorder traversal of the etree. */ + for ( f = j = 0; j < n; ) { + parent = et[j]; + snode_start = j; + while ( parent != n && descendants[parent] < relax_columns ) { + j = parent; + parent = et[j]; + } + /* Found a supernode in postordered etree; j is the last column. */ + ++nsuper_et_post; + k = n; + for (i = snode_start; i <= j; ++i) + k = SUPERLU_MIN(k, inv_post[i]); + l = inv_post[j]; + if ( (l - k) == (j - snode_start) ) { + /* It's also a supernode in the original etree */ + relax_end[k] = l; /* Last column is recorded */ + relax_fsupc[f++] = k; + ++nsuper_et; + } else { + for (i = snode_start; i <= j; ++i) { + l = inv_post[i]; + if ( descendants[i] == 0 ) { + relax_end[l] = l; + relax_fsupc[f++] = l; + ++nsuper_et; + } + } + } + j++; + /* Search for a new leaf */ + while ( descendants[j] != 0 && j < n ) j++; + } + +#if ( PRNTlevel>=1 ) + printf(".. heap_snode_relax:\n" + "\tNo of relaxed snodes in postordered etree:\t%d\n" + "\tNo of relaxed snodes in original etree:\t%d\n", + nsuper_et_post, nsuper_et); +#endif + + /* Recover the original etree */ + for (i = 0; i < n; ++i) et[i] = et_save[i]; + + SUPERLU_FREE(post); + SUPERLU_FREE(iwork); +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_relax_snode.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_relax_snode.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_relax_snode.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_relax_snode.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,69 @@ +/*! @file ilu_relax_snode.c + * \brief Identify initial relaxed supernodes + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 1, 2009
+ * </pre>
+ */
+
+#include "slu_ddefs.h"
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_relax_snode() - Identify the initial relaxed supernodes, assuming
+ *    that the matrix has been reordered according to the postorder of the
+ *    etree.
+ * </pre>
+ */ +void +ilu_relax_snode ( + const int n, + int *et, /* column elimination tree */ + const int relax_columns, /* max no of columns allowed in a + relaxed snode */ + int *descendants, /* no of descendants of each node + in the etree */ + int *relax_end, /* last column in a supernode + * if j-th column starts a relaxed + * supernode, relax_end[j] represents + * the last column of this supernode */ + int *relax_fsupc /* first column in a supernode + * relax_fsupc[j] represents the first + * column of j-th supernode */ + ) +{ + + register int j, f, parent; + register int snode_start; /* beginning of a snode */ + + ifill (relax_end, n, EMPTY); + ifill (relax_fsupc, n, EMPTY); + for (j = 0; j < n; j++) descendants[j] = 0; + + /* Compute the number of descendants of each node in the etree */ + for (j = 0; j < n; j++) { + parent = et[j]; + if ( parent != n ) /* not the dummy root */ + descendants[parent] += descendants[j] + 1; + } + + /* Identify the relaxed supernodes by postorder traversal of the etree. */ + for (j = f = 0; j < n; ) { + parent = et[j]; + snode_start = j; + while ( parent != n && descendants[parent] < relax_columns ) { + j = parent; + parent = et[j]; + } + /* Found a supernode with j being the last column. */ + relax_end[snode_start] = j; /* Last column is recorded */ + j++; + relax_fsupc[f++] = snode_start; + /* Search for a new leaf */ + while ( descendants[j] != 0 && j < n ) j++; + } +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scolumn_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scolumn_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scolumn_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,258 @@ + +/*! @file ilu_scolumn_dfs.c + * \brief Performs a symbolic factorization + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+*/
+
+#include "slu_sdefs.h"
+
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *   ILU_SCOLUMN_DFS performs a symbolic factorization on column jcol, and
+ *   decide the supernode boundary.
+ *
+ *   This routine does not use numeric values, but only use the RHS
+ *   row indices to start the dfs.
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives. The routine returns a list of such supernodal
+ *   representatives in topological order of the dfs that generates them.
+ *   The location of the first nonzero in each such supernodal segment
+ *   (supernodal entry location) is also returned.
+ *
+ * Local parameters
+ * ================
+ *   nseg: no of segments in current U[*,j]
+ *   jsuper: jsuper=EMPTY if column j does not belong to the same
+ *	supernode as j-1. Otherwise, jsuper=nsuper.
+ *
+ *   marker2: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ *
+ * Return value
+ * ============
+ *     0  success;
+ *   > 0  number of bytes allocated when run out of space.
+ * </pre>
+ */ +int +ilu_scolumn_dfs( + const int m, /* in - number of rows in the matrix */ + const int jcol, /* in */ + int *perm_r, /* in */ + int *nseg, /* modified - with new segments appended */ + int *lsub_col, /* in - defines the RHS vector to start the + dfs */ + int *segrep, /* modified - with new segments appended */ + int *repfnz, /* modified */ + int *marker, /* modified */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + + int jcolp1, jcolm1, jsuper, nsuper, nextl; + int k, krep, krow, kmark, kperm; + int *marker2; /* Used for small panel LU */ + int fsupc; /* First column of a snode */ + int myfnz; /* First nonz column of a U-segment */ + int chperm, chmark, chrep, kchild; + int xdfs, maxdfs, kpar, oldrep; + int jptr, jm1ptr; + int ito, ifrom; /* Used to compress row subscripts */ + int mem_error; + int *xsup, *supno, *lsub, *xlsub; + int nzlmax; + static int first = 1, maxsuper; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + if ( first ) { + maxsuper = sp_ienv(3); + first = 0; + } + jcolp1 = jcol + 1; + jcolm1 = jcol - 1; + nsuper = supno[jcol]; + jsuper = nsuper; + nextl = xlsub[jcol]; + marker2 = &marker[2*m]; + + + /* For each nonzero in A[*,jcol] do dfs */ + for (k = 0; lsub_col[k] != EMPTY; k++) { + + krow = lsub_col[k]; + lsub_col[k] = EMPTY; + kmark = marker2[krow]; + + /* krow was visited before, go to the next nonzero */ + if ( kmark == jcol ) continue; + + /* For each unmarked nbr krow of jcol + * krow is in L: place it in structure of L[*,jcol] + */ + marker2[krow] = jcol; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + lsub[nextl++] = krow; /* krow is indexed into A */ + if ( nextl >= nzlmax ) { + if ((mem_error = sLUMemXpand(jcol, nextl, LSUB, &nzlmax, Glu))) + return (mem_error); + lsub = Glu->lsub; + } + if ( kmark != jcolm1 ) jsuper = EMPTY;/* Row index subset testing */ + } else { + /* krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz[krep]; + + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > kperm ) repfnz[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker2[kchild]; + + if ( chmark != jcol ) { /* Not reached yet */ + marker2[kchild] = jcol; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,k] */ + if ( chperm == EMPTY ) { + lsub[nextl++] = kchild; + if ( nextl >= nzlmax ) { + if ( (mem_error = sLUMemXpand(jcol,nextl, + LSUB,&nzlmax,Glu)) ) + return (mem_error); + lsub = Glu->lsub; + } + if ( chmark != jcolm1 ) jsuper = EMPTY; + } else { + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz[chrep]; + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz[chrep] = chperm; + } else { + /* Continue dfs at super-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L^t) */ + parent[krep] = oldrep; + repfnz[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + } /* else */ + + } /* else */ + + } /* if */ + + } /* while */ + + /* krow has no more unexplored nbrs; + * place supernode-rep krep in postorder DFS. + * backtrack dfs to its parent + */ + segrep[*nseg] = krep; + ++(*nseg); + kpar = parent[krep]; /* Pop from stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + + } while ( kpar != EMPTY ); /* Until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonzero ... */ + + /* Check to see if j belongs in the same supernode as j-1 */ + if ( jcol == 0 ) { /* Do nothing for column 0 */ + nsuper = supno[0] = 0; + } else { + fsupc = xsup[nsuper]; + jptr = xlsub[jcol]; /* Not compressed yet */ + jm1ptr = xlsub[jcolm1]; + + if ( (nextl-jptr != jptr-jm1ptr-1) ) jsuper = EMPTY; + + /* Always start a new supernode for a singular column */ + if ( nextl == jptr ) jsuper = EMPTY; + + /* Make sure the number of columns in a supernode doesn't + exceed threshold. */ + if ( jcol - fsupc >= maxsuper ) jsuper = EMPTY; + + /* If jcol starts a new supernode, reclaim storage space in + * lsub from the previous supernode. Note we only store + * the subscript set of the first columns of the supernode. + */ + if ( jsuper == EMPTY ) { /* starts a new supernode */ + if ( (fsupc < jcolm1) ) { /* >= 2 columns in nsuper */ +#ifdef CHK_COMPRESS + printf(" Compress lsub[] at super %d-%d\n", fsupc, jcolm1); +#endif + ito = xlsub[fsupc+1]; + xlsub[jcolm1] = ito; + xlsub[jcol] = ito; + for (ifrom = jptr; ifrom < nextl; ++ifrom, ++ito) + lsub[ito] = lsub[ifrom]; + nextl = ito; + } + nsuper++; + supno[jcol] = nsuper; + } /* if a new supernode */ + + } /* else: jcol > 0 */ + + /* Tidy up the pointers before exit */ + xsup[nsuper+1] = jcolp1; + supno[jcolp1] = nsuper; + xlsub[jcolp1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scopy_to_ucol.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scopy_to_ucol.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_scopy_to_ucol.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,199 @@ + +/*! @file ilu_scopy_to_ucol.c + * \brief Copy a computed column of U to the compressed data structure + * and drop some small entries + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */ + +#include "slu_sdefs.h" + +#ifdef DEBUG +int num_drop_U; +#endif + +static float *A; /* used in _compare_ only */ +static int _compare_(const void *a, const void *b) +{ + register int *x = (int *)a, *y = (int *)b; + register double xx = fabs(A[*x]), yy = fabs(A[*y]); + if (xx > yy) return -1; + else if (xx < yy) return 1; + else return 0; +} + + +int +ilu_scopy_to_ucol( + int jcol, /* in */ + int nseg, /* in */ + int *segrep, /* in */ + int *repfnz, /* in */ + int *perm_r, /* in */ + float *dense, /* modified - reset to zero on return */ + int drop_rule,/* in */ + milu_t milu, /* in */ + double drop_tol, /* in */ + int quota, /* maximum nonzero entries allowed */ + float *sum, /* out - the sum of dropped entries */ + int *nnzUj, /* in - out */ + GlobalLU_t *Glu, /* modified */ + int *work /* working space with minimum size n, + * used by the second dropping rule */ + ) +{ +/* + * Gather from SPA dense[*] to global ucol[*]. + */ + int ksub, krep, ksupno; + int i, k, kfnz, segsze; + int fsupc, isub, irow; + int jsupno, nextu; + int new_next, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + float *ucol; + int *usub, *xusub; + int nzumax; + int m; /* number of entries in the nonzero U-segments */ + register float d_max = 0.0, d_min = 1.0 / dlamch_("Safe minimum"); + register double tmp; + float zero = 0.0; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + ucol = Glu->ucol; + usub = Glu->usub; + xusub = Glu->xusub; + nzumax = Glu->nzumax; + + *sum = zero; + if (drop_rule == NODROP) { + drop_tol = -1.0, quota = Glu->n; + } + + jsupno = supno[jcol]; + nextu = xusub[jcol]; + k = nseg - 1; + for (ksub = 0; ksub < nseg; ksub++) { + krep = segrep[k--]; + ksupno = supno[krep]; + + if ( ksupno != jsupno ) { /* Should go into ucol[] */ + kfnz = repfnz[krep]; + if ( kfnz != EMPTY ) { /* Nonzero U-segment */ + + fsupc = xsup[ksupno]; + isub = xlsub[fsupc] + kfnz - fsupc; + segsze = krep - kfnz + 1; + + new_next = nextu + segsze; + while ( new_next > nzumax ) { + if ((mem_error = sLUMemXpand(jcol, nextu, UCOL, &nzumax, + Glu)) != 0) + return (mem_error); + ucol = Glu->ucol; + if ((mem_error = sLUMemXpand(jcol, nextu, USUB, &nzumax, + Glu)) != 0) + return (mem_error); + usub = Glu->usub; + lsub = Glu->lsub; + } + + for (i = 0; i < segsze; i++) { + irow = lsub[isub++]; + tmp = fabs(dense[irow]); + + /* first dropping rule */ + if (quota > 0 && tmp >= drop_tol) { + if (tmp > d_max) d_max = tmp; + if (tmp < d_min) d_min = tmp; + usub[nextu] = perm_r[irow]; + ucol[nextu] = dense[irow]; + nextu++; + } else { + switch (milu) { + case SMILU_1: + case SMILU_2: + *sum += dense[irow]; + break; + case SMILU_3: + /* *sum += fabs(dense[irow]);*/ + *sum += tmp; + break; + case SILU: + default: + break; + } +#ifdef DEBUG + num_drop_U++; +#endif + } + dense[irow] = zero; + } + + } + + } + + } /* for each segment... 
*/ + + xusub[jcol + 1] = nextu; /* Close U[*,jcol] */ + m = xusub[jcol + 1] - xusub[jcol]; + + /* second dropping rule */ + if (drop_rule & DROP_SECONDARY && m > quota) { + register double tol = d_max; + register int m0 = xusub[jcol] + m - 1; + + if (quota > 0) { + if (drop_rule & DROP_INTERP) { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / m); + } else { + A = &ucol[xusub[jcol]]; + for (i = 0; i < m; i++) work[i] = i; + qsort(work, m, sizeof(int), _compare_); + tol = fabs(usub[xusub[jcol] + work[quota]]); + } + } + for (i = xusub[jcol]; i <= m0; ) { + if (fabs(ucol[i]) <= tol) { + switch (milu) { + case SMILU_1: + case SMILU_2: + *sum += ucol[i]; + break; + case SMILU_3: + *sum += fabs(ucol[i]); + break; + case SILU: + default: + break; + } + ucol[i] = ucol[m0]; + usub[i] = usub[m0]; + m0--; + m--; +#ifdef DEBUG + num_drop_U++; +#endif + xusub[jcol + 1]--; + continue; + } + i++; + } + } + + if (milu == SMILU_2) *sum = fabs(*sum); + + *nnzUj += m; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_sdrop_row.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_sdrop_row.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_sdrop_row.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_sdrop_row.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,307 @@ + +/*! @file ilu_sdrop_row.c + * \brief Drop small rows from L + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * <\pre>
+ */
+
+#include <math.h>
+#include <stdlib.h>
+#include "slu_sdefs.h"
+
+extern void sswap_(int *, float [], int *, float [], int *);
+extern void saxpy_(int *, float *, float [], int *, float [], int *);
+
+static float *A;  /* used in _compare_ only */
+static int _compare_(const void *a, const void *b)
+{
+    register int *x = (int *)a, *y = (int *)b;
+    if (A[*x] - A[*y] > 0.0) return -1;
+    else if (A[*x] - A[*y] < 0.0) return 1;
+    else return 0;
+}
+
+/*! \brief
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_sdrop_row() - Drop some small rows from the previous
+ *    supernode (L-part only).
+ * </pre>
+ */ +int ilu_sdrop_row( + superlu_options_t *options, /* options */ + int first, /* index of the first column in the supernode */ + int last, /* index of the last column in the supernode */ + double drop_tol, /* dropping parameter */ + int quota, /* maximum nonzero entries allowed */ + int *nnzLj, /* in/out number of nonzeros in L(:, 1:last) */ + double *fill_tol, /* in/out - on exit, fill_tol=-num_zero_pivots, + * does not change if options->ILU_MILU != SMILU1 */ + GlobalLU_t *Glu, /* modified */ + float swork[], /* working space with minimum size last-first+1 */ + int iwork[], /* working space with minimum size m - n, + * used by the second dropping rule */ + int lastc /* if lastc == 0, there is nothing after the + * working supernode [first:last]; + * if lastc == 1, there is one more column after + * the working supernode. */ ) +{ + register int i, j, k, m1; + register int nzlc; /* number of nonzeros in column last+1 */ + register int xlusup_first, xlsub_first; + int m, n; /* m x n is the size of the supernode */ + int r = 0; /* number of dropped rows */ + register float *temp; + register float *lusup = Glu->lusup; + register int *lsub = Glu->lsub; + register int *xlsub = Glu->xlsub; + register int *xlusup = Glu->xlusup; + register float d_max = 0.0, d_min = 1.0; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + norm_t nrm = options->ILU_Norm; + float zero = 0.0; + float one = 1.0; + float none = -1.0; + int inc_diag; /* inc_diag = m + 1 */ + int nzp = 0; /* number of zero pivots */ + + xlusup_first = xlusup[first]; + xlsub_first = xlsub[first]; + m = xlusup[first + 1] - xlusup_first; + n = last - first + 1; + m1 = m - 1; + inc_diag = m + 1; + nzlc = lastc ? (xlusup[last + 2] - xlusup[last + 1]) : 0; + temp = swork - n; + + /* Quick return if nothing to do. 
*/ + if (m == 0 || m == n || drop_rule == NODROP) + { + *nnzLj += m * n; + return 0; + } + + /* basic dropping: ILU(tau) */ + for (i = n; i <= m1; ) + { + /* the average abs value of ith row */ + switch (nrm) + { + case ONE_NORM: + temp[i] = sasum_(&n, &lusup[xlusup_first + i], &m) / (double)n; + break; + case TWO_NORM: + temp[i] = snrm2_(&n, &lusup[xlusup_first + i], &m) + / sqrt((double)n); + break; + case INF_NORM: + default: + k = isamax_(&n, &lusup[xlusup_first + i], &m) - 1; + temp[i] = fabs(lusup[xlusup_first + i + m * k]); + break; + } + + /* drop small entries due to drop_tol */ + if (drop_rule & DROP_BASIC && temp[i] < drop_tol) + { + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + saxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m] += + fabs(lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + scopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + sswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m] = + fabs(lusup[xlusup_first + m1 + j * m]); + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + continue; + } /* if dropping */ + else + { + if (temp[i] > d_max) d_max = temp[i]; + if (temp[i] < d_min) d_min = temp[i]; + } + i++; + } /* for */ + + /* Secondary dropping: drop more rows according to the quota. */ + quota = ceil((double)quota / (double)n); + if (drop_rule & DROP_SECONDARY && m - r > quota) + { + register double tol = d_max; + + /* Calculate the second dropping tolerance */ + if (quota > n) + { + if (drop_rule & DROP_INTERP) /* by interpolation */ + { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / (m - n - r)); + } + else /* by quick sort */ + { + register int *itemp = iwork - n; + A = temp; + for (i = n; i <= m1; i++) itemp[i] = i; + qsort(iwork, m1 - n + 1, sizeof(int), _compare_); + tol = temp[iwork[quota]]; + } + } + + for (i = n; i <= m1; ) + { + if (temp[i] <= tol) + { + register int j; + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + saxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m] += + fabs(lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + scopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + sswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m] = + fabs(lusup[xlusup_first + m1 + j * m]); + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + temp[i] = temp[m1]; + + continue; + } + i++; + + } /* for */ + + } /* if secondary dropping */ + + for (i = n; i < m; i++) temp[i] = 0.0; + + if (r == 0) + { + *nnzLj += m * n; + return 0; + } + + /* add dropped entries to the diagnal */ + if (milu != SILU) + { + register int j; + 
float t; + for (j = 0; j < n; j++) + { + t = lusup[xlusup_first + (m - 1) + j * m] * MILU_ALPHA; + switch (milu) + { + case SMILU_1: + if (t != none) { + lusup[xlusup_first + j * inc_diag] *= (one + t); + } + else + { + lusup[xlusup_first + j * inc_diag] *= *fill_tol; +#ifdef DEBUG + printf("[1] ZERO PIVOT: FILL col %d.\n", first + j); + fflush(stdout); +#endif + nzp++; + } + break; + case SMILU_2: + lusup[xlusup_first + j * inc_diag] *= (1.0 + fabs(t)); + break; + case SMILU_3: + lusup[xlusup_first + j * inc_diag] *= (one + t); + break; + case SILU: + default: + break; + } + } + if (nzp > 0) *fill_tol = -nzp; + } + + /* Remove dropped entries from the memory and fix the pointers. */ + m1 = m - r; + for (j = 1; j < n; j++) + { + register int tmp1, tmp2; + tmp1 = xlusup_first + j * m1; + tmp2 = xlusup_first + j * m; + for (i = 0; i < m1; i++) + lusup[i + tmp1] = lusup[i + tmp2]; + } + for (i = 0; i < nzlc; i++) + lusup[xlusup_first + i + n * m1] = lusup[xlusup_first + i + n * m]; + for (i = 0; i < nzlc; i++) + lsub[xlsub[last + 1] - r + i] = lsub[xlsub[last + 1] + i]; + for (i = first + 1; i <= last + 1; i++) + { + xlusup[i] -= r * (i - first); + xlsub[i] -= r; + } + if (lastc) + { + xlusup[last + 2] -= r * n; + xlsub[last + 2] -= r; + } + + *nnzLj += (m - r) * n; + return r; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spanel_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,248 @@ + +/*! @file ilu_spanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols and + * record the entries with maximum absolute value in each column + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_sdefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *
+ *   Performs a symbolic factorization on a panel of columns [jcol, jcol+w).
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives.
+ *
+ *   The routine returns one list of the supernodal representatives
+ *   in topological order of the dfs that generates them. This list is
+ *   a superset of the topological order of each individual column within
+ *   the panel.
+ *   The location of the first nonzero in each supernodal segment
+ *   (supernodal entry location) is also returned. Each column has a
+ *   separate list for this purpose.
+ *
+ *   Two marker arrays are used for dfs:
+ *     marker[i] == jj, if i was visited during dfs of current column jj;
+ *     marker1[i] >= jcol, if i was visited by earlier columns in this panel;
+ *
+ *   marker: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ * </pre>
+ */ +void +ilu_spanel_dfs( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + float *dense, /* out */ + float *amax, /* out - max. abs. value of each column in panel */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ +) +{ + + NCPformat *Astore; + float *a; + int *asub; + int *xa_begin, *xa_end; + int krep, chperm, chmark, chrep, oldrep, kchild, myfnz; + int k, krow, kmark, kperm; + int xdfs, maxdfs, kpar; + int jj; /* index through each column in the panel */ + int *marker1; /* marker1[jj] >= jcol if vertex jj was visited + by a previous column within this panel. */ + int *repfnz_col; /* start of each column in the panel */ + float *dense_col; /* start of each column in the panel */ + int nextl_col; /* next available position in panel_lsub[*,jj] */ + int *xsup, *supno; + int *lsub, *xlsub; + float *amax_col; + register double tmp; + + /* Initialize pointers */ + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + marker1 = marker + m; + repfnz_col = repfnz; + dense_col = dense; + amax_col = amax; + *nseg = 0; + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + + /* For each column in the panel */ + for (jj = jcol; jj < jcol + w; jj++) { + nextl_col = (jj - jcol) * m; + +#ifdef CHK_DFS + printf("\npanel col %d: ", jj); +#endif + + *amax_col = 0.0; + /* For each nonz in A[*,jj] do dfs */ + for (k = xa_begin[jj]; k < xa_end[jj]; k++) { + krow = asub[k]; + tmp = fabs(a[k]); + if (tmp > *amax_col) *amax_col = tmp; + dense_col[krow] = a[k]; + kmark = marker[krow]; + if ( kmark == jj ) + continue; /* krow visited before, go to the next nonzero */ + + /* For each unmarked nbr krow of jj + * krow is in L: place it in structure of L[*,jj] + */ + marker[krow] = jj; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + panel_lsub[nextl_col++] = krow; /* krow is indexed into A */ + } + /* + * krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + else { + + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz_col[krep]; + +#ifdef CHK_DFS + printf("krep %d, myfnz %d, perm_r[%d] %d\n", krep, myfnz, krow, kperm); +#endif + if ( myfnz != EMPTY ) { /* Representative visited before */ + if ( myfnz > kperm ) repfnz_col[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz_col[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker[kchild]; + + if ( chmark != jj ) { /* Not reached yet */ + marker[kchild] = jj; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,j] */ + if ( chperm == EMPTY ) { + panel_lsub[nextl_col++] = kchild; + } + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + else { + + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz_col[chrep]; +#ifdef CHK_DFS + printf("chrep %d,myfnz %d,perm_r[%d] %d\n",chrep,myfnz,kchild,chperm); +#endif + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz_col[chrep] = chperm; + } + else { + /* Cont. dfs at snode-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L) */ + parent[krep] = oldrep; + repfnz_col[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } /* else */ + + } /* else */ + + } /* if... */ + + } /* while xdfs < maxdfs */ + + /* krow has no more unexplored nbrs: + * Place snode-rep krep in postorder DFS, if this + * segment is seen for the first time. (Note that + * "repfnz[krep]" may change later.) + * Backtrack dfs to its parent. + */ + if ( marker1[krep] < jcol ) { + segrep[*nseg] = krep; + ++(*nseg); + marker1[krep] = jj; + } + + kpar = parent[krep]; /* Pop stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" pop stack: krep %d,xdfs %d,maxdfs %d: ", krep,xdfs,maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } while ( kpar != EMPTY ); /* do-while - until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonz in A[*,jj] */ + + repfnz_col += m; /* Move to next column */ + dense_col += m; + amax_col++; + + } /* for jj ... */ + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spivotL.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_spivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,274 @@ + +/*! @file ilu_spivotL.c + * \brief Performs numerical pivoting + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */ + + +#include +#include +#include "slu_sdefs.h" + +#ifndef SGN +#define SGN(x) ((x)>=0?1:-1) +#endif + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *   Performs the numerical pivoting on the current column of L,
+ *   and the CDIV operation.
+ *
+ *   Pivot policy:
+ *   (1) Compute thresh = u * max_(i>=j) abs(A_ij);
+ *   (2) IF user specifies pivot row k and abs(A_kj) >= thresh THEN
+ *	     pivot row = k;
+ *	 ELSE IF abs(A_jj) >= thresh THEN
+ *	     pivot row = j;
+ *	 ELSE
+ *	     pivot row = m;
+ *
+ *   Note: If you absolutely want to use a given pivot order, then set u=0.0.
+ *
+ *   Return value: 0	  success;
+ *		   i > 0  U(i,i) is exactly zero.
+ * </pre>
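
A minimal self-contained sketch of this policy (hypothetical helper, plain SILU case only; the MILU drop_sum adjustment, the zero-pivot fallback and the relaxed-supernode masking handled by the real routine are omitted; assumes <math.h> for fabs()):

    /* Returns the index, within rows [nsupc, nsupr) of the supernode, of the
     * entry chosen as pivot under threshold u.  Hypothetical illustration.  */
    static int choose_pivot(int nsupc, int nsupr, const float *col,
                            const int *rows, double u,
                            int usepr, int pivrow, int diagind)
    {
        int    isub, pivptr = nsupc, oldp = -1, diag = -1;
        double pivmax = -1.0, thresh;

        for (isub = nsupc; isub < nsupr; ++isub) {
            double r = fabs(col[isub]);
            if (r > pivmax) { pivmax = r; pivptr = isub; }        /* largest  */
            if (usepr && rows[isub] == pivrow) oldp = isub;       /* user row */
            if (rows[isub] == diagind)         diag = isub;       /* diagonal */
        }
        thresh = u * pivmax;                                      /* step (1) */
        if (oldp >= 0 && fabs(col[oldp]) >= thresh) return oldp;  /* step (2) */
        if (diag >= 0 && fabs(col[diag]) >= thresh) return diag;
        return pivptr;                            /* fall back to the maximum */
    }
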
+ */ + +int +ilu_spivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by + * perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int diagind, /* diagonal of Pc*A*Pc' */ + int *swap, /* in/out record the row permutation */ + int *iswap, /* in/out inverse of swap, it is the same as + perm_r after the factorization */ + int *marker, /* in */ + int *pivrow, /* in/out, as an input if *usepr!=0 */ + double fill_tol, /* in - fill tolerance of current column + * used for a singular column */ + milu_t milu, /* in */ + float drop_sum, /* in - computed in ilu_scopy_to_ucol() + (MILU only) */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + + int n; /* number of columns */ + int fsupc; /* first column in the supernode */ + int nsupc; /* no of columns in the supernode */ + int nsupr; /* no of rows in the supernode */ + int lptr; /* points to the starting subscript of the supernode */ + register int pivptr; + int old_pivptr, diag, ptr0; + register float pivmax, rtemp; + float thresh; + float temp; + float *lu_sup_ptr; + float *lu_col_ptr; + int *lsub_ptr; + register int isub, icol, k, itemp; + int *lsub, *xlsub; + float *lusup; + int *xlusup; + flops_t *ops = stat->ops; + int info; + + /* Initialize pointers */ + n = Glu->n; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + lusup = Glu->lusup; + xlusup = Glu->xlusup; + fsupc = (Glu->xsup)[(Glu->supno)[jcol]]; + nsupc = jcol - fsupc; /* excluding jcol; nsupc >= 0 */ + lptr = xlsub[fsupc]; + nsupr = xlsub[fsupc+1] - lptr; + lu_sup_ptr = &lusup[xlusup[fsupc]]; /* start of the current supernode */ + lu_col_ptr = &lusup[xlusup[jcol]]; /* start of jcol in the supernode */ + lsub_ptr = &lsub[lptr]; /* start of row indices of the supernode */ + + /* Determine the largest abs numerical value for partial pivoting; + Also search for user-specified pivot, and diagonal element. */ + pivmax = -1.0; + pivptr = nsupc; + diag = EMPTY; + old_pivptr = nsupc; + ptr0 = EMPTY; + for (isub = nsupc; isub < nsupr; ++isub) { + if (marker[lsub_ptr[isub]] > jcol) + continue; /* do not overlap with a later relaxed supernode */ + + switch (milu) { + case SMILU_1: + rtemp = fabs(lu_col_ptr[isub] + drop_sum); + break; + case SMILU_2: + case SMILU_3: + /* In this case, drop_sum contains the sum of the abs. 
value */ + rtemp = fabs(lu_col_ptr[isub]); + break; + case SILU: + default: + rtemp = fabs(lu_col_ptr[isub]); + break; + } + if (rtemp > pivmax) { pivmax = rtemp; pivptr = isub; } + if (*usepr && lsub_ptr[isub] == *pivrow) old_pivptr = isub; + if (lsub_ptr[isub] == diagind) diag = isub; + if (ptr0 == EMPTY) ptr0 = isub; + } + + if (milu == SMILU_2 || milu == SMILU_3) pivmax += drop_sum; + + /* Test for singularity */ + if (pivmax < 0.0) { +#if SCIPY_SPECIFIC_FIX + ABORT("[0]: matrix is singular"); +#else + fprintf(stderr, "[0]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + if ( pivmax == 0.0 ) { + if (diag != EMPTY) + *pivrow = lsub_ptr[pivptr = diag]; + else if (ptr0 != EMPTY) + *pivrow = lsub_ptr[pivptr = ptr0]; + else { + /* look for the first row which does not + belong to any later supernodes */ + for (icol = jcol; icol < n; icol++) + if (marker[swap[icol]] <= jcol) break; + if (icol >= n) { +#if SCIPY_SPECIFIC_FIX + ABORT("[1]: matrix is singular"); +#else + fprintf(stderr, "[1]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + + *pivrow = swap[icol]; + + /* pick up the pivot row */ + for (isub = nsupc; isub < nsupr; ++isub) + if ( lsub_ptr[isub] == *pivrow ) { pivptr = isub; break; } + } + pivmax = fill_tol; + lu_col_ptr[pivptr] = pivmax; + *usepr = 0; +#ifdef DEBUG + printf("[0] ZERO PIVOT: FILL (%d, %d).\n", *pivrow, jcol); + fflush(stdout); +#endif + info =jcol + 1; + } /* if (*pivrow == 0.0) */ + else { + thresh = u * pivmax; + + /* Choose appropriate pivotal element by our policy. */ + if ( *usepr ) { + switch (milu) { + case SMILU_1: + rtemp = fabs(lu_col_ptr[old_pivptr] + drop_sum); + break; + case SMILU_2: + case SMILU_3: + rtemp = fabs(lu_col_ptr[old_pivptr]) + drop_sum; + break; + case SILU: + default: + rtemp = fabs(lu_col_ptr[old_pivptr]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = old_pivptr; + else *usepr = 0; + } + if ( *usepr == 0 ) { + /* Use diagonal pivot? */ + if ( diag >= 0 ) { /* diagonal exists */ + switch (milu) { + case SMILU_1: + rtemp = fabs(lu_col_ptr[diag] + drop_sum); + break; + case SMILU_2: + case SMILU_3: + rtemp = fabs(lu_col_ptr[diag]) + drop_sum; + break; + case SILU: + default: + rtemp = fabs(lu_col_ptr[diag]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = diag; + } + *pivrow = lsub_ptr[pivptr]; + } + info = 0; + + /* Reset the diagonal */ + switch (milu) { + case SMILU_1: + lu_col_ptr[pivptr] += drop_sum; + break; + case SMILU_2: + case SMILU_3: + lu_col_ptr[pivptr] += SGN(lu_col_ptr[pivptr]) * drop_sum; + break; + case SILU: + default: + break; + } + + } /* else */ + + /* Record pivot row */ + perm_r[*pivrow] = jcol; + if (jcol < n - 1) { + register int t1, t2, t; + t1 = iswap[*pivrow]; t2 = jcol; + if (t1 != t2) { + t = swap[t1]; swap[t1] = swap[t2]; swap[t2] = t; + t1 = swap[t1]; t2 = t; + t = iswap[t1]; iswap[t1] = iswap[t2]; iswap[t2] = t; + } + } /* if (jcol < n - 1) */ + + /* Interchange row subscripts */ + if ( pivptr != nsupc ) { + itemp = lsub_ptr[pivptr]; + lsub_ptr[pivptr] = lsub_ptr[nsupc]; + lsub_ptr[nsupc] = itemp; + + /* Interchange numerical values as well, for the whole snode, such + * that L is indexed the same way as A. 
+ */ + for (icol = 0; icol <= nsupc; icol++) { + itemp = pivptr + icol * nsupr; + temp = lu_sup_ptr[itemp]; + lu_sup_ptr[itemp] = lu_sup_ptr[nsupc + icol*nsupr]; + lu_sup_ptr[nsupc + icol*nsupr] = temp; + } + } /* if */ + + /* cdiv operation */ + ops[FACT] += nsupr - nsupc; + temp = 1.0 / lu_col_ptr[nsupc]; + for (k = nsupc+1; k < nsupr; k++) lu_col_ptr[k] *= temp; + + return info; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ssnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ssnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ssnode_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_ssnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,90 @@ + +/*! @file ilu_ssnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_sdefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_ssnode_dfs() - Determine the union of the row structures of those
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore,
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when run out of memory.
+ * </pre>
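
Stripped of the memory-expansion calls, the union described above amounts to a single marker sweep over the columns of the supernode; a hypothetical, condensed sketch (kcol doubles as the visit tag, as in the routine below):

    static int snode_union(int jcol, int kcol, const int *asub,
                           const int *xa_begin, const int *xa_end,
                           int *marker, int *lsub, int nextl)
    {
        int i, k, krow;
        for (i = jcol; i <= kcol; i++)                /* columns of the snode */
            for (k = xa_begin[i]; k < xa_end[i]; k++) {
                krow = asub[k];
                if (marker[krow] != kcol) {           /* first visit of krow  */
                    marker[krow] = kcol;
                    lsub[nextl++] = krow;             /* append to the union  */
                }
            }
        return nextl;                                 /* new end of L indices */
    }
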
+ */ + +int +ilu_ssnode_dfs( + const int jcol, /* in - start of the supernode */ + const int kcol, /* in - end of the supernode */ + const int *asub, /* in */ + const int *xa_begin, /* in */ + const int *xa_end, /* in */ + int *marker, /* modified */ + GlobalLU_t *Glu /* modified */ + ) +{ + + register int i, k, nextl; + int nsuper, krow, kmark, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + int nzlmax; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + nsuper = ++supno[jcol]; /* Next available supernode number */ + nextl = xlsub[jcol]; + + for (i = jcol; i <= kcol; i++) + { + /* For each nonzero in A[*,i] */ + for (k = xa_begin[i]; k < xa_end[i]; k++) + { + krow = asub[k]; + kmark = marker[krow]; + if ( kmark != kcol ) + { /* First time visit krow */ + marker[krow] = kcol; + lsub[nextl++] = krow; + if ( nextl >= nzlmax ) + { + if ( (mem_error = sLUMemXpand(jcol, nextl, LSUB, &nzlmax, + Glu)) != 0) + return (mem_error); + lsub = Glu->lsub; + } + } + } + supno[i] = nsuper; + } + + /* Supernode > 1 */ + if ( jcol < kcol ) + for (i = jcol+1; i <= kcol; i++) xlsub[i] = nextl; + + xsup[nsuper+1] = kcol + 1; + supno[kcol+1] = nsuper; + xlsub[kcol+1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcolumn_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcolumn_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcolumn_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,258 @@ + +/*! @file ilu_zcolumn_dfs.c + * \brief Performs a symbolic factorization + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+*/
+#include "slu_zdefs.h"
+
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *   ILU_ZCOLUMN_DFS performs a symbolic factorization on column jcol, and
+ *   decides the supernode boundary.
+ *
+ *   This routine does not use numeric values, but only uses the RHS
+ *   row indices to start the dfs.
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives. The routine returns a list of such supernodal
+ *   representatives in topological order of the dfs that generates them.
+ *   The location of the first nonzero in each such supernodal segment
+ *   (supernodal entry location) is also returned.
+ *
+ * Local parameters
+ * ================
+ *   nseg: no of segments in current U[*,j]
+ *   jsuper: jsuper=EMPTY if column j does not belong to the same
+ *	supernode as j-1. Otherwise, jsuper=nsuper.
+ *
+ *   marker2: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ *
+ * Return value
+ * ============
+ *     0  success;
+ *   > 0  number of bytes allocated when run out of space.
+ * </pre>
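
Concretely, the representative of a pivoted row kperm is the last column of its supernode, and a previously seen U-segment is only extended upward; the routine below does this with (condensed excerpt):

    krep  = xsup[supno[kperm] + 1] - 1;  /* last column of kperm's supernode */
    myfnz = repfnz[krep];                /* first nonzero of this U-segment  */
    if (myfnz != EMPTY) {
        if (myfnz > kperm) repfnz[krep] = kperm;   /* segment starts higher  */
    } else {
        /* representative not explored yet: start the dfs from krep */
    }
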
+ */ +int +ilu_zcolumn_dfs( + const int m, /* in - number of rows in the matrix */ + const int jcol, /* in */ + int *perm_r, /* in */ + int *nseg, /* modified - with new segments appended */ + int *lsub_col, /* in - defines the RHS vector to start the + dfs */ + int *segrep, /* modified - with new segments appended */ + int *repfnz, /* modified */ + int *marker, /* modified */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + + int jcolp1, jcolm1, jsuper, nsuper, nextl; + int k, krep, krow, kmark, kperm; + int *marker2; /* Used for small panel LU */ + int fsupc; /* First column of a snode */ + int myfnz; /* First nonz column of a U-segment */ + int chperm, chmark, chrep, kchild; + int xdfs, maxdfs, kpar, oldrep; + int jptr, jm1ptr; + int ito, ifrom; /* Used to compress row subscripts */ + int mem_error; + int *xsup, *supno, *lsub, *xlsub; + int nzlmax; + static int first = 1, maxsuper; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + if ( first ) { + maxsuper = sp_ienv(3); + first = 0; + } + jcolp1 = jcol + 1; + jcolm1 = jcol - 1; + nsuper = supno[jcol]; + jsuper = nsuper; + nextl = xlsub[jcol]; + marker2 = &marker[2*m]; + + + /* For each nonzero in A[*,jcol] do dfs */ + for (k = 0; lsub_col[k] != EMPTY; k++) { + + krow = lsub_col[k]; + lsub_col[k] = EMPTY; + kmark = marker2[krow]; + + /* krow was visited before, go to the next nonzero */ + if ( kmark == jcol ) continue; + + /* For each unmarked nbr krow of jcol + * krow is in L: place it in structure of L[*,jcol] + */ + marker2[krow] = jcol; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + lsub[nextl++] = krow; /* krow is indexed into A */ + if ( nextl >= nzlmax ) { + if ((mem_error = zLUMemXpand(jcol, nextl, LSUB, &nzlmax, Glu))) + return (mem_error); + lsub = Glu->lsub; + } + if ( kmark != jcolm1 ) jsuper = EMPTY;/* Row index subset testing */ + } else { + /* krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz[krep]; + + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > kperm ) repfnz[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker2[kchild]; + + if ( chmark != jcol ) { /* Not reached yet */ + marker2[kchild] = jcol; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,k] */ + if ( chperm == EMPTY ) { + lsub[nextl++] = kchild; + if ( nextl >= nzlmax ) { + if ( (mem_error = zLUMemXpand(jcol,nextl, + LSUB,&nzlmax,Glu)) ) + return (mem_error); + lsub = Glu->lsub; + } + if ( chmark != jcolm1 ) jsuper = EMPTY; + } else { + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz[chrep]; + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz[chrep] = chperm; + } else { + /* Continue dfs at super-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L^t) */ + parent[krep] = oldrep; + repfnz[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + } /* else */ + + } /* else */ + + } /* if */ + + } /* while */ + + /* krow has no more unexplored nbrs; + * place supernode-rep krep in postorder DFS. + * backtrack dfs to its parent + */ + segrep[*nseg] = krep; + ++(*nseg); + kpar = parent[krep]; /* Pop from stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + + } while ( kpar != EMPTY ); /* Until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonzero ... */ + + /* Check to see if j belongs in the same supernode as j-1 */ + if ( jcol == 0 ) { /* Do nothing for column 0 */ + nsuper = supno[0] = 0; + } else { + fsupc = xsup[nsuper]; + jptr = xlsub[jcol]; /* Not compressed yet */ + jm1ptr = xlsub[jcolm1]; + + if ( (nextl-jptr != jptr-jm1ptr-1) ) jsuper = EMPTY; + + /* Always start a new supernode for a singular column */ + if ( nextl == jptr ) jsuper = EMPTY; + + /* Make sure the number of columns in a supernode doesn't + exceed threshold. */ + if ( jcol - fsupc >= maxsuper ) jsuper = EMPTY; + + /* If jcol starts a new supernode, reclaim storage space in + * lsub from the previous supernode. Note we only store + * the subscript set of the first columns of the supernode. + */ + if ( jsuper == EMPTY ) { /* starts a new supernode */ + if ( (fsupc < jcolm1) ) { /* >= 2 columns in nsuper */ +#ifdef CHK_COMPRESS + printf(" Compress lsub[] at super %d-%d\n", fsupc, jcolm1); +#endif + ito = xlsub[fsupc+1]; + xlsub[jcolm1] = ito; + xlsub[jcol] = ito; + for (ifrom = jptr; ifrom < nextl; ++ifrom, ++ito) + lsub[ito] = lsub[ifrom]; + nextl = ito; + } + nsuper++; + supno[jcol] = nsuper; + } /* if a new supernode */ + + } /* else: jcol > 0 */ + + /* Tidy up the pointers before exit */ + xsup[nsuper+1] = jcolp1; + supno[jcolp1] = nsuper; + xlsub[jcolp1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcopy_to_ucol.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcopy_to_ucol.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zcopy_to_ucol.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,202 @@ + +/*! @file ilu_zcopy_to_ucol.c + * \brief Copy a computed column of U to the compressed data structure + * and drop some small entries + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */ + +#include "slu_zdefs.h" + +#ifdef DEBUG +int num_drop_U; +#endif + +static doublecomplex *A; /* used in _compare_ only */ +static int _compare_(const void *a, const void *b) +{ + register int *x = (int *)a, *y = (int *)b; + register double xx = z_abs1(&A[*x]), yy = z_abs1(&A[*y]); + if (xx > yy) return -1; + else if (xx < yy) return 1; + else return 0; +} + + +int +ilu_zcopy_to_ucol( + int jcol, /* in */ + int nseg, /* in */ + int *segrep, /* in */ + int *repfnz, /* in */ + int *perm_r, /* in */ + doublecomplex *dense, /* modified - reset to zero on return */ + int drop_rule,/* in */ + milu_t milu, /* in */ + double drop_tol, /* in */ + int quota, /* maximum nonzero entries allowed */ + doublecomplex *sum, /* out - the sum of dropped entries */ + int *nnzUj, /* in - out */ + GlobalLU_t *Glu, /* modified */ + int *work /* working space with minimum size n, + * used by the second dropping rule */ + ) +{ +/* + * Gather from SPA dense[*] to global ucol[*]. + */ + int ksub, krep, ksupno; + int i, k, kfnz, segsze; + int fsupc, isub, irow; + int jsupno, nextu; + int new_next, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + doublecomplex *ucol; + int *usub, *xusub; + int nzumax; + int m; /* number of entries in the nonzero U-segments */ + register double d_max = 0.0, d_min = 1.0 / dlamch_("Safe minimum"); + register double tmp; + doublecomplex zero = {0.0, 0.0}; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + ucol = Glu->ucol; + usub = Glu->usub; + xusub = Glu->xusub; + nzumax = Glu->nzumax; + + *sum = zero; + if (drop_rule == NODROP) { + drop_tol = -1.0, quota = Glu->n; + } + + jsupno = supno[jcol]; + nextu = xusub[jcol]; + k = nseg - 1; + for (ksub = 0; ksub < nseg; ksub++) { + krep = segrep[k--]; + ksupno = supno[krep]; + + if ( ksupno != jsupno ) { /* Should go into ucol[] */ + kfnz = repfnz[krep]; + if ( kfnz != EMPTY ) { /* Nonzero U-segment */ + + fsupc = xsup[ksupno]; + isub = xlsub[fsupc] + kfnz - fsupc; + segsze = krep - kfnz + 1; + + new_next = nextu + segsze; + while ( new_next > nzumax ) { + if ((mem_error = zLUMemXpand(jcol, nextu, UCOL, &nzumax, + Glu)) != 0) + return (mem_error); + ucol = Glu->ucol; + if ((mem_error = zLUMemXpand(jcol, nextu, USUB, &nzumax, + Glu)) != 0) + return (mem_error); + usub = Glu->usub; + lsub = Glu->lsub; + } + + for (i = 0; i < segsze; i++) { + irow = lsub[isub++]; + tmp = z_abs1(&dense[irow]); + + /* first dropping rule */ + if (quota > 0 && tmp >= drop_tol) { + if (tmp > d_max) d_max = tmp; + if (tmp < d_min) d_min = tmp; + usub[nextu] = perm_r[irow]; + ucol[nextu] = dense[irow]; + nextu++; + } else { + switch (milu) { + case SMILU_1: + case SMILU_2: + z_add(sum, sum, &dense[irow]); + break; + case SMILU_3: + /* *sum += fabs(dense[irow]);*/ + sum->r += tmp; + break; + case SILU: + default: + break; + } +#ifdef DEBUG + num_drop_U++; +#endif + } + dense[irow] = zero; + } + + } + + } + + } /* for each segment... 
*/ + + xusub[jcol + 1] = nextu; /* Close U[*,jcol] */ + m = xusub[jcol + 1] - xusub[jcol]; + + /* second dropping rule */ + if (drop_rule & DROP_SECONDARY && m > quota) { + register double tol = d_max; + register int m0 = xusub[jcol] + m - 1; + + if (quota > 0) { + if (drop_rule & DROP_INTERP) { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / m); + } else { + A = &ucol[xusub[jcol]]; + for (i = 0; i < m; i++) work[i] = i; + qsort(work, m, sizeof(int), _compare_); + tol = fabs(usub[xusub[jcol] + work[quota]]); + } + } + for (i = xusub[jcol]; i <= m0; ) { + if (z_abs1(&ucol[i]) <= tol) { + switch (milu) { + case SMILU_1: + case SMILU_2: + z_add(sum, sum, &ucol[i]); + break; + case SMILU_3: + sum->r += tmp; + break; + case SILU: + default: + break; + } + ucol[i] = ucol[m0]; + usub[i] = usub[m0]; + m0--; + m--; +#ifdef DEBUG + num_drop_U++; +#endif + xusub[jcol + 1]--; + continue; + } + i++; + } + } + + if (milu == SMILU_2) { + sum->r = z_abs1(sum); sum->i = 0.0; + } + if (milu == SMILU_3) sum->i = 0.0; + + *nnzUj += m; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zdrop_row.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zdrop_row.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zdrop_row.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zdrop_row.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,321 @@ + +/*! @file ilu_zdrop_row.c + * \brief Drop small rows from L + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * <\pre>
+ */
+
+#include <math.h>
+#include <stdlib.h>
+#include "slu_zdefs.h"
+
+extern void zswap_(int *, doublecomplex [], int *, doublecomplex [], int *);
+extern void zaxpy_(int *, doublecomplex *, doublecomplex [], int *, doublecomplex [], int *);
+
+static double *A;  /* used in _compare_ only */
+static int _compare_(const void *a, const void *b)
+{
+    register int *x = (int *)a, *y = (int *)b;
+    if (A[*x] - A[*y] > 0.0) return -1;
+    else if (A[*x] - A[*y] < 0.0) return 1;
+    else return 0;
+}
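
This comparator orders an index array by decreasing value of the entries that the file-scope pointer A refers to (here, per-row norms). The secondary dropping rule later in this file uses it roughly as follows (condensed excerpt of the routine body):

    register int *itemp = iwork - n;         /* so itemp[n..m1] is addressable */
    A = temp;                                /* row norms read by _compare_    */
    for (i = n; i <= m1; i++) itemp[i] = i;  /* candidate row indices          */
    qsort(iwork, m1 - n + 1, sizeof(int), _compare_);
    tol = temp[iwork[quota]];                /* quota-th largest norm = cut-off */
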
+
+/*! \brief
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_zdrop_row() - Drop some small rows from the previous 
+ *    supernode (L-part only).
+ * </pre>
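
Rows of the working supernode are judged by a per-row norm selected through options->ILU_Norm and dropped when it falls below drop_tol; condensed from the body below (MILU accumulation and row swapping omitted):

    switch (nrm) {
    case ONE_NORM:
        temp[i] = dzasum_(&n, &lusup[xlusup_first + i], &m) / (double)n;
        break;
    case TWO_NORM:
        temp[i] = dznrm2_(&n, &lusup[xlusup_first + i], &m) / sqrt((double)n);
        break;
    case INF_NORM:
    default:
        k = izamax_(&n, &lusup[xlusup_first + i], &m) - 1;
        temp[i] = z_abs1(&lusup[xlusup_first + i + m * k]);
        break;
    }
    if ((drop_rule & DROP_BASIC) && temp[i] < drop_tol) {
        /* row i is dropped: folded into the last row of the supernode, r++ */
    }
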
+ */ +int ilu_zdrop_row( + superlu_options_t *options, /* options */ + int first, /* index of the first column in the supernode */ + int last, /* index of the last column in the supernode */ + double drop_tol, /* dropping parameter */ + int quota, /* maximum nonzero entries allowed */ + int *nnzLj, /* in/out number of nonzeros in L(:, 1:last) */ + double *fill_tol, /* in/out - on exit, fill_tol=-num_zero_pivots, + * does not change if options->ILU_MILU != SMILU1 */ + GlobalLU_t *Glu, /* modified */ + double dwork[], /* working space with minimum size last-first+1 */ + int iwork[], /* working space with minimum size m - n, + * used by the second dropping rule */ + int lastc /* if lastc == 0, there is nothing after the + * working supernode [first:last]; + * if lastc == 1, there is one more column after + * the working supernode. */ ) +{ + register int i, j, k, m1; + register int nzlc; /* number of nonzeros in column last+1 */ + register int xlusup_first, xlsub_first; + int m, n; /* m x n is the size of the supernode */ + int r = 0; /* number of dropped rows */ + register double *temp; + register doublecomplex *lusup = Glu->lusup; + register int *lsub = Glu->lsub; + register int *xlsub = Glu->xlsub; + register int *xlusup = Glu->xlusup; + register double d_max = 0.0, d_min = 1.0; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + norm_t nrm = options->ILU_Norm; + doublecomplex zero = {0.0, 0.0}; + doublecomplex one = {1.0, 0.0}; + doublecomplex none = {-1.0, 0.0}; + int inc_diag; /* inc_diag = m + 1 */ + int nzp = 0; /* number of zero pivots */ + + xlusup_first = xlusup[first]; + xlsub_first = xlsub[first]; + m = xlusup[first + 1] - xlusup_first; + n = last - first + 1; + m1 = m - 1; + inc_diag = m + 1; + nzlc = lastc ? (xlusup[last + 2] - xlusup[last + 1]) : 0; + temp = dwork - n; + + /* Quick return if nothing to do. 
*/ + if (m == 0 || m == n || drop_rule == NODROP) + { + *nnzLj += m * n; + return 0; + } + + /* basic dropping: ILU(tau) */ + for (i = n; i <= m1; ) + { + /* the average abs value of ith row */ + switch (nrm) + { + case ONE_NORM: + temp[i] = dzasum_(&n, &lusup[xlusup_first + i], &m) / (double)n; + break; + case TWO_NORM: + temp[i] = dznrm2_(&n, &lusup[xlusup_first + i], &m) + / sqrt((double)n); + break; + case INF_NORM: + default: + k = izamax_(&n, &lusup[xlusup_first + i], &m) - 1; + temp[i] = z_abs1(&lusup[xlusup_first + i + m * k]); + break; + } + + /* drop small entries due to drop_tol */ + if (drop_rule & DROP_BASIC && temp[i] < drop_tol) + { + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + zaxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m].r += + z_abs1(&lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + zcopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + zswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m].r = + z_abs1(&lusup[xlusup_first + m1 + j * m]); + lusup[xlusup_first + m1 + j * m].i = 0.0; + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + continue; + } /* if dropping */ + else + { + if (temp[i] > d_max) d_max = temp[i]; + if (temp[i] < d_min) d_min = temp[i]; + } + i++; + } /* for */ + + /* Secondary dropping: drop more rows according to the quota. 
*/ + quota = ceil((double)quota / (double)n); + if (drop_rule & DROP_SECONDARY && m - r > quota) + { + register double tol = d_max; + + /* Calculate the second dropping tolerance */ + if (quota > n) + { + if (drop_rule & DROP_INTERP) /* by interpolation */ + { + d_max = 1.0 / d_max; d_min = 1.0 / d_min; + tol = 1.0 / (d_max + (d_min - d_max) * quota / (m - n - r)); + } + else /* by quick sort */ + { + register int *itemp = iwork - n; + A = temp; + for (i = n; i <= m1; i++) itemp[i] = i; + qsort(iwork, m1 - n + 1, sizeof(int), _compare_); + tol = temp[iwork[quota]]; + } + } + + for (i = n; i <= m1; ) + { + if (temp[i] <= tol) + { + register int j; + r++; + /* drop the current row and move the last undropped row here */ + if (r > 1) /* add to last row */ + { + /* accumulate the sum (for MILU) */ + switch (milu) + { + case SMILU_1: + case SMILU_2: + zaxpy_(&n, &one, &lusup[xlusup_first + i], &m, + &lusup[xlusup_first + m - 1], &m); + break; + case SMILU_3: + for (j = 0; j < n; j++) + lusup[xlusup_first + (m - 1) + j * m].r += + z_abs1(&lusup[xlusup_first + i + j * m]); + break; + case SILU: + default: + break; + } + zcopy_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + } /* if (r > 1) */ + else /* move to last row */ + { + zswap_(&n, &lusup[xlusup_first + m1], &m, + &lusup[xlusup_first + i], &m); + if (milu == SMILU_3) + for (j = 0; j < n; j++) { + lusup[xlusup_first + m1 + j * m].r = + z_abs1(&lusup[xlusup_first + m1 + j * m]); + lusup[xlusup_first + m1 + j * m].i = 0.0; + } + } + lsub[xlsub_first + i] = lsub[xlsub_first + m1]; + m1--; + temp[i] = temp[m1]; + + continue; + } + i++; + + } /* for */ + + } /* if secondary dropping */ + + for (i = n; i < m; i++) temp[i] = 0.0; + + if (r == 0) + { + *nnzLj += m * n; + return 0; + } + + /* add dropped entries to the diagnal */ + if (milu != SILU) + { + register int j; + doublecomplex t; + for (j = 0; j < n; j++) + { + zd_mult(&t, &lusup[xlusup_first + (m - 1) + j * m], + MILU_ALPHA); + switch (milu) + { + case SMILU_1: + if ( !(z_eq(&t, &none)) ) { + z_add(&t, &t, &one); + zz_mult(&lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + &t); + } + else + { + zd_mult( + &lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + *fill_tol); +#ifdef DEBUG + printf("[1] ZERO PIVOT: FILL col %d.\n", first + j); + fflush(stdout); +#endif + nzp++; + } + break; + case SMILU_2: + zd_mult(&lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + 1.0 + z_abs1(&t)); + break; + case SMILU_3: + z_add(&t, &t, &one); + zz_mult(&lusup[xlusup_first + j * inc_diag], + &lusup[xlusup_first + j * inc_diag], + &t); + break; + case SILU: + default: + break; + } + } + if (nzp > 0) *fill_tol = -nzp; + } + + /* Remove dropped entries from the memory and fix the pointers. 
*/ + m1 = m - r; + for (j = 1; j < n; j++) + { + register int tmp1, tmp2; + tmp1 = xlusup_first + j * m1; + tmp2 = xlusup_first + j * m; + for (i = 0; i < m1; i++) + lusup[i + tmp1] = lusup[i + tmp2]; + } + for (i = 0; i < nzlc; i++) + lusup[xlusup_first + i + n * m1] = lusup[xlusup_first + i + n * m]; + for (i = 0; i < nzlc; i++) + lsub[xlsub[last + 1] - r + i] = lsub[xlsub[last + 1] + i]; + for (i = first + 1; i <= last + 1; i++) + { + xlusup[i] -= r * (i - first); + xlsub[i] -= r; + } + if (lastc) + { + xlusup[last + 2] -= r * n; + xlsub[last + 2] -= r; + } + + *nnzLj += (m - r) * n; + return r; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpanel_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,248 @@ + +/*! @file ilu_zpanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols and + * record the entries with maximum absolute value in each column + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_zdefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *
+ *   Performs a symbolic factorization on a panel of columns [jcol, jcol+w).
+ *
+ *   A supernode representative is the last column of a supernode.
+ *   The nonzeros in U[*,j] are segments that end at supernodal
+ *   representatives.
+ *
+ *   The routine returns one list of the supernodal representatives
+ *   in topological order of the dfs that generates them. This list is
+ *   a superset of the topological order of each individual column within
+ *   the panel.
+ *   The location of the first nonzero in each supernodal segment
+ *   (supernodal entry location) is also returned. Each column has a
+ *   separate list for this purpose.
+ *
+ *   Two marker arrays are used for dfs:
+ *     marker[i] == jj, if i was visited during dfs of current column jj;
+ *     marker1[i] >= jcol, if i was visited by earlier columns in this panel;
+ *
+ *   marker: A-row --> A-row/col (0/1)
+ *   repfnz: SuperA-col --> PA-row
+ *   parent: SuperA-col --> SuperA-col
+ *   xplore: SuperA-col --> index to L-structure
+ * </pre>
+ */ +void +ilu_zpanel_dfs( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + doublecomplex *dense, /* out */ + double *amax, /* out - max. abs. value of each column in panel */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ +) +{ + + NCPformat *Astore; + doublecomplex *a; + int *asub; + int *xa_begin, *xa_end; + int krep, chperm, chmark, chrep, oldrep, kchild, myfnz; + int k, krow, kmark, kperm; + int xdfs, maxdfs, kpar; + int jj; /* index through each column in the panel */ + int *marker1; /* marker1[jj] >= jcol if vertex jj was visited + by a previous column within this panel. */ + int *repfnz_col; /* start of each column in the panel */ + doublecomplex *dense_col; /* start of each column in the panel */ + int nextl_col; /* next available position in panel_lsub[*,jj] */ + int *xsup, *supno; + int *lsub, *xlsub; + double *amax_col; + register double tmp; + + /* Initialize pointers */ + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + marker1 = marker + m; + repfnz_col = repfnz; + dense_col = dense; + amax_col = amax; + *nseg = 0; + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + + /* For each column in the panel */ + for (jj = jcol; jj < jcol + w; jj++) { + nextl_col = (jj - jcol) * m; + +#ifdef CHK_DFS + printf("\npanel col %d: ", jj); +#endif + + *amax_col = 0.0; + /* For each nonz in A[*,jj] do dfs */ + for (k = xa_begin[jj]; k < xa_end[jj]; k++) { + krow = asub[k]; + tmp = z_abs1(&a[k]); + if (tmp > *amax_col) *amax_col = tmp; + dense_col[krow] = a[k]; + kmark = marker[krow]; + if ( kmark == jj ) + continue; /* krow visited before, go to the next nonzero */ + + /* For each unmarked nbr krow of jj + * krow is in L: place it in structure of L[*,jj] + */ + marker[krow] = jj; + kperm = perm_r[krow]; + + if ( kperm == EMPTY ) { + panel_lsub[nextl_col++] = krow; /* krow is indexed into A */ + } + /* + * krow is in U: if its supernode-rep krep + * has been explored, update repfnz[*] + */ + else { + + krep = xsup[supno[kperm]+1] - 1; + myfnz = repfnz_col[krep]; + +#ifdef CHK_DFS + printf("krep %d, myfnz %d, perm_r[%d] %d\n", krep, myfnz, krow, kperm); +#endif + if ( myfnz != EMPTY ) { /* Representative visited before */ + if ( myfnz > kperm ) repfnz_col[krep] = kperm; + /* continue; */ + } + else { + /* Otherwise, perform dfs starting at krep */ + oldrep = EMPTY; + parent[krep] = oldrep; + repfnz_col[krep] = kperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + do { + /* + * For each unmarked kchild of krep + */ + while ( xdfs < maxdfs ) { + + kchild = lsub[xdfs]; + xdfs++; + chmark = marker[kchild]; + + if ( chmark != jj ) { /* Not reached yet */ + marker[kchild] = jj; + chperm = perm_r[kchild]; + + /* Case kchild is in L: place it in L[*,j] */ + if ( chperm == EMPTY ) { + panel_lsub[nextl_col++] = kchild; + } + /* Case kchild is in U: + * chrep = its supernode-rep. 
If its rep has + * been explored, update its repfnz[*] + */ + else { + + chrep = xsup[supno[chperm]+1] - 1; + myfnz = repfnz_col[chrep]; +#ifdef CHK_DFS + printf("chrep %d,myfnz %d,perm_r[%d] %d\n",chrep,myfnz,kchild,chperm); +#endif + if ( myfnz != EMPTY ) { /* Visited before */ + if ( myfnz > chperm ) + repfnz_col[chrep] = chperm; + } + else { + /* Cont. dfs at snode-rep of kchild */ + xplore[krep] = xdfs; + oldrep = krep; + krep = chrep; /* Go deeper down G(L) */ + parent[krep] = oldrep; + repfnz_col[krep] = chperm; + xdfs = xlsub[xsup[supno[krep]]]; + maxdfs = xlsub[krep + 1]; +#ifdef CHK_DFS + printf(" xdfs %d, maxdfs %d: ", xdfs, maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } /* else */ + + } /* else */ + + } /* if... */ + + } /* while xdfs < maxdfs */ + + /* krow has no more unexplored nbrs: + * Place snode-rep krep in postorder DFS, if this + * segment is seen for the first time. (Note that + * "repfnz[krep]" may change later.) + * Backtrack dfs to its parent. + */ + if ( marker1[krep] < jcol ) { + segrep[*nseg] = krep; + ++(*nseg); + marker1[krep] = jj; + } + + kpar = parent[krep]; /* Pop stack, mimic recursion */ + if ( kpar == EMPTY ) break; /* dfs done */ + krep = kpar; + xdfs = xplore[krep]; + maxdfs = xlsub[krep + 1]; + +#ifdef CHK_DFS + printf(" pop stack: krep %d,xdfs %d,maxdfs %d: ", krep,xdfs,maxdfs); + for (i = xdfs; i < maxdfs; i++) printf(" %d", lsub[i]); + printf("\n"); +#endif + } while ( kpar != EMPTY ); /* do-while - until empty stack */ + + } /* else */ + + } /* else */ + + } /* for each nonz in A[*,jj] */ + + repfnz_col += m; /* Move to next column */ + dense_col += m; + amax_col++; + + } /* for jj ... */ + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpivotL.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zpivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,282 @@ + +/*! @file ilu_zpivotL.c + * \brief Performs numerical pivoting + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */ + + +#include +#include +#include "slu_zdefs.h" + +#ifndef SGN +#define SGN(x) ((x)>=0?1:-1) +#endif + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *   Performs the numerical pivoting on the current column of L,
+ *   and the CDIV operation.
+ *
+ *   Pivot policy:
+ *   (1) Compute thresh = u * max_(i>=j) abs(A_ij);
+ *   (2) IF user specifies pivot row k and abs(A_kj) >= thresh THEN
+ *	     pivot row = k;
+ *	 ELSE IF abs(A_jj) >= thresh THEN
+ *	     pivot row = j;
+ *	 ELSE
+ *	     pivot row = m;
+ *
+ *   Note: If you absolutely want to use a given pivot order, then set u=0.0.
+ *
+ *   Return value: 0	  success;
+ *		   i > 0  U(i,i) is exactly zero.
+ * </pre>
+ */ + +int +ilu_zpivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by + * perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int diagind, /* diagonal of Pc*A*Pc' */ + int *swap, /* in/out record the row permutation */ + int *iswap, /* in/out inverse of swap, it is the same as + perm_r after the factorization */ + int *marker, /* in */ + int *pivrow, /* in/out, as an input if *usepr!=0 */ + double fill_tol, /* in - fill tolerance of current column + * used for a singular column */ + milu_t milu, /* in */ + doublecomplex drop_sum, /* in - computed in ilu_zcopy_to_ucol() + (MILU only) */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + + int n; /* number of columns */ + int fsupc; /* first column in the supernode */ + int nsupc; /* no of columns in the supernode */ + int nsupr; /* no of rows in the supernode */ + int lptr; /* points to the starting subscript of the supernode */ + register int pivptr; + int old_pivptr, diag, ptr0; + register double pivmax, rtemp; + double thresh; + doublecomplex temp; + doublecomplex *lu_sup_ptr; + doublecomplex *lu_col_ptr; + int *lsub_ptr; + register int isub, icol, k, itemp; + int *lsub, *xlsub; + doublecomplex *lusup; + int *xlusup; + flops_t *ops = stat->ops; + int info; + doublecomplex one = {1.0, 0.0}; + + /* Initialize pointers */ + n = Glu->n; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + lusup = Glu->lusup; + xlusup = Glu->xlusup; + fsupc = (Glu->xsup)[(Glu->supno)[jcol]]; + nsupc = jcol - fsupc; /* excluding jcol; nsupc >= 0 */ + lptr = xlsub[fsupc]; + nsupr = xlsub[fsupc+1] - lptr; + lu_sup_ptr = &lusup[xlusup[fsupc]]; /* start of the current supernode */ + lu_col_ptr = &lusup[xlusup[jcol]]; /* start of jcol in the supernode */ + lsub_ptr = &lsub[lptr]; /* start of row indices of the supernode */ + + /* Determine the largest abs numerical value for partial pivoting; + Also search for user-specified pivot, and diagonal element. */ + pivmax = -1.0; + pivptr = nsupc; + diag = EMPTY; + old_pivptr = nsupc; + ptr0 = EMPTY; + for (isub = nsupc; isub < nsupr; ++isub) { + if (marker[lsub_ptr[isub]] > jcol) + continue; /* do not overlap with a later relaxed supernode */ + + switch (milu) { + case SMILU_1: + z_add(&temp, &lu_col_ptr[isub], &drop_sum); + rtemp = z_abs1(&temp); + break; + case SMILU_2: + case SMILU_3: + /* In this case, drop_sum contains the sum of the abs. 
value */ + rtemp = z_abs1(&lu_col_ptr[isub]); + break; + case SILU: + default: + rtemp = z_abs1(&lu_col_ptr[isub]); + break; + } + if (rtemp > pivmax) { pivmax = rtemp; pivptr = isub; } + if (*usepr && lsub_ptr[isub] == *pivrow) old_pivptr = isub; + if (lsub_ptr[isub] == diagind) diag = isub; + if (ptr0 == EMPTY) ptr0 = isub; + } + + if (milu == SMILU_2 || milu == SMILU_3) pivmax += drop_sum.r; + + /* Test for singularity */ + if (pivmax < 0.0) { +#if SCIPY_SPECIFIC_FIX + ABORT("[0]: matrix is singular"); +#else + fprintf(stderr, "[0]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + if ( pivmax == 0.0 ) { + if (diag != EMPTY) + *pivrow = lsub_ptr[pivptr = diag]; + else if (ptr0 != EMPTY) + *pivrow = lsub_ptr[pivptr = ptr0]; + else { + /* look for the first row which does not + belong to any later supernodes */ + for (icol = jcol; icol < n; icol++) + if (marker[swap[icol]] <= jcol) break; + if (icol >= n) { +#if SCIPY_SPECIFIC_FIX + ABORT("[1]: matrix is singular"); +#else + fprintf(stderr, "[1]: jcol=%d, SINGULAR!!!\n", jcol); + fflush(stderr); + exit(1); +#endif + } + + *pivrow = swap[icol]; + + /* pick up the pivot row */ + for (isub = nsupc; isub < nsupr; ++isub) + if ( lsub_ptr[isub] == *pivrow ) { pivptr = isub; break; } + } + pivmax = fill_tol; + lu_col_ptr[pivptr].r = pivmax; + lu_col_ptr[pivptr].i = 0.0; + *usepr = 0; +#ifdef DEBUG + printf("[0] ZERO PIVOT: FILL (%d, %d).\n", *pivrow, jcol); + fflush(stdout); +#endif + info =jcol + 1; + } /* if (*pivrow == 0.0) */ + else { + thresh = u * pivmax; + + /* Choose appropriate pivotal element by our policy. */ + if ( *usepr ) { + switch (milu) { + case SMILU_1: + z_add(&temp, &lu_col_ptr[old_pivptr], &drop_sum); + rtemp = z_abs1(&temp); + break; + case SMILU_2: + case SMILU_3: + rtemp = z_abs1(&lu_col_ptr[old_pivptr]) + drop_sum.r; + break; + case SILU: + default: + rtemp = z_abs1(&lu_col_ptr[old_pivptr]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = old_pivptr; + else *usepr = 0; + } + if ( *usepr == 0 ) { + /* Use diagonal pivot? */ + if ( diag >= 0 ) { /* diagonal exists */ + switch (milu) { + case SMILU_1: + z_add(&temp, &lu_col_ptr[diag], &drop_sum); + rtemp = z_abs1(&temp); + break; + case SMILU_2: + case SMILU_3: + rtemp = z_abs1(&lu_col_ptr[diag]) + drop_sum.r; + break; + case SILU: + default: + rtemp = z_abs1(&lu_col_ptr[diag]); + break; + } + if ( rtemp != 0.0 && rtemp >= thresh ) pivptr = diag; + } + *pivrow = lsub_ptr[pivptr]; + } + info = 0; + + /* Reset the diagonal */ + switch (milu) { + case SMILU_1: + z_add(&lu_col_ptr[pivptr], &lu_col_ptr[pivptr], &drop_sum); + break; + case SMILU_2: + case SMILU_3: + temp = z_sgn(&lu_col_ptr[pivptr]); + zz_mult(&temp, &temp, &drop_sum); + z_add(&lu_col_ptr[pivptr], &lu_col_ptr[pivptr], &drop_sum); + break; + case SILU: + default: + break; + } + + } /* else */ + + /* Record pivot row */ + perm_r[*pivrow] = jcol; + if (jcol < n - 1) { + register int t1, t2, t; + t1 = iswap[*pivrow]; t2 = jcol; + if (t1 != t2) { + t = swap[t1]; swap[t1] = swap[t2]; swap[t2] = t; + t1 = swap[t1]; t2 = t; + t = iswap[t1]; iswap[t1] = iswap[t2]; iswap[t2] = t; + } + } /* if (jcol < n - 1) */ + + /* Interchange row subscripts */ + if ( pivptr != nsupc ) { + itemp = lsub_ptr[pivptr]; + lsub_ptr[pivptr] = lsub_ptr[nsupc]; + lsub_ptr[nsupc] = itemp; + + /* Interchange numerical values as well, for the whole snode, such + * that L is indexed the same way as A. 
+ */ + for (icol = 0; icol <= nsupc; icol++) { + itemp = pivptr + icol * nsupr; + temp = lu_sup_ptr[itemp]; + lu_sup_ptr[itemp] = lu_sup_ptr[nsupc + icol*nsupr]; + lu_sup_ptr[nsupc + icol*nsupr] = temp; + } + } /* if */ + + /* cdiv operation */ + ops[FACT] += 10 * (nsupr - nsupc); + z_div(&temp, &one, &lu_col_ptr[nsupc]); + for (k = nsupc+1; k < nsupr; k++) + zz_mult(&lu_col_ptr[k], &lu_col_ptr[k], &temp); + + return info; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zsnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zsnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zsnode_dfs.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ilu_zsnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,90 @@ + +/*! @file ilu_zsnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * </pre>
+ */
+
+#include "slu_zdefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    ilu_zsnode_dfs() - Determine the union of the row structures of those
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore,
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when run out of memory.
+ * </pre>
+ */ + +int +ilu_zsnode_dfs( + const int jcol, /* in - start of the supernode */ + const int kcol, /* in - end of the supernode */ + const int *asub, /* in */ + const int *xa_begin, /* in */ + const int *xa_end, /* in */ + int *marker, /* modified */ + GlobalLU_t *Glu /* modified */ + ) +{ + + register int i, k, nextl; + int nsuper, krow, kmark, mem_error; + int *xsup, *supno; + int *lsub, *xlsub; + int nzlmax; + + xsup = Glu->xsup; + supno = Glu->supno; + lsub = Glu->lsub; + xlsub = Glu->xlsub; + nzlmax = Glu->nzlmax; + + nsuper = ++supno[jcol]; /* Next available supernode number */ + nextl = xlsub[jcol]; + + for (i = jcol; i <= kcol; i++) + { + /* For each nonzero in A[*,i] */ + for (k = xa_begin[i]; k < xa_end[i]; k++) + { + krow = asub[k]; + kmark = marker[krow]; + if ( kmark != kcol ) + { /* First time visit krow */ + marker[krow] = kcol; + lsub[nextl++] = krow; + if ( nextl >= nzlmax ) + { + if ( (mem_error = zLUMemXpand(jcol, nextl, LSUB, &nzlmax, + Glu)) != 0) + return (mem_error); + lsub = Glu->lsub; + } + } + } + supno[i] = nsuper; + } + + /* Supernode > 1 */ + if ( jcol < kcol ) + for (i = jcol+1; i <= kcol; i++) xlsub[i] = nextl; + + xsup[nsuper+1] = kcol + 1; + supno[kcol+1] = nsuper; + xlsub[kcol+1] = nextl; + + return 0; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/izmax1.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/izmax1.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/izmax1.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/izmax1.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,14 +1,20 @@ -#include "dcomplex.h" - -int -izmax1_(int *n, doublecomplex *cx, int *incx) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 +/*! @file izmax1.c + * \brief Finds the index of the element whose real part has maximum absolute value + * + *
+ *     -- LAPACK auxiliary routine (version 2.0) --   
+ *     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
+ *     Courant Institute, Argonne National Lab, and Rice University   
+ *     October 31, 1992   
+ * </pre>
+ */
+#include <math.h>
+#include "slu_dcomplex.h"
+#include "slu_Cnames.h"
+/*! \brief
+ <pre>
     Purpose   
     =======   
 
@@ -33,8 +39,14 @@
             The spacing between successive values of CX.  INCX >= 1.   
 
    ===================================================================== 
+</pre>
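
The hunks below replace abs() with fabs() on double values: abs() is the integer routine, so a real argument would be truncated before the comparison. A tiny, hypothetical illustration of the difference:

    #include <math.h>    /* fabs */
    #include <stdlib.h>  /* abs  */

    static double magnitude_demo(double d)   /* e.g. d = -0.75            */
    {
        int    truncated = abs((int) d);     /* integer abs: 0            */
        double correct   = fabs(d);          /* floating-point abs: 0.75  */
        return correct - (double) truncated; /* 0.75 for d = -0.75        */
    }
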
*/ +int +izmax1_(int *n, doublecomplex *cx, int *incx) +{ + + /* System generated locals */ int ret_val, i__1, i__2; double d__1; @@ -60,17 +72,17 @@ /* CODE FOR INCREMENT NOT EQUAL TO 1 */ ix = 1; - smax = (d__1 = CX(1).r, abs(d__1)); + smax = (d__1 = CX(1).r, fabs(d__1)); ix += *incx; i__1 = *n; for (i = 2; i <= *n; ++i) { i__2 = ix; - if ((d__1 = CX(ix).r, abs(d__1)) <= smax) { + if ((d__1 = CX(ix).r, fabs(d__1)) <= smax) { goto L10; } ret_val = i; i__2 = ix; - smax = (d__1 = CX(ix).r, abs(d__1)); + smax = (d__1 = CX(ix).r, fabs(d__1)); L10: ix += *incx; /* L20: */ @@ -80,16 +92,16 @@ /* CODE FOR INCREMENT EQUAL TO 1 */ L30: - smax = (d__1 = CX(1).r, abs(d__1)); + smax = (d__1 = CX(1).r, fabs(d__1)); i__1 = *n; for (i = 2; i <= *n; ++i) { i__2 = i; - if ((d__1 = CX(i).r, abs(d__1)) <= smax) { + if ((d__1 = CX(i).r, fabs(d__1)) <= smax) { goto L40; } ret_val = i; i__2 = i; - smax = (d__1 = CX(i).r, abs(d__1)); + smax = (d__1 = CX(i).r, fabs(d__1)); L40: ; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/lsame.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/lsame.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/lsame.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/lsame.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,10 +1,18 @@ -int lsame_(char *ca, char *cb) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - September 30, 1994 +/*! @file lsame.c + * \brief Check if CA is the same letter as CB regardless of case. + * + *
+ * -- LAPACK auxiliary routine (version 2.0) --   
+ *      Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
+ *      Courant Institute, Argonne National Lab, and Rice University   
+ *      September 30, 1994   
+ * </pre>
+ */
+#include "slu_Cnames.h"
+/*! \brief
+
+ <pre>
     Purpose   
     =======   
 
@@ -18,8 +26,13 @@
             CA and CB specify the single characters to be compared.   
 
    ===================================================================== 
+</pre>
*/ +int lsame_(char *ca, char *cb) +{ + + /* System generated locals */ int ret_val; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/mark_relax.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/mark_relax.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/mark_relax.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/mark_relax.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,47 @@ +/*! @file mark_relax.c + * \brief Record the rows pivoted by the relaxed supernodes. + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 1, 2009
+ * <\pre>
+ */
+#include "slu_ddefs.h"
+
+/*! \brief
+ *
+ * <pre>
+ * Purpose
+ * =======
+ *    mark_relax() - record the rows used by the relaxed supernodes.
+ * </pre>
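
The marker written here is what the pivoting routines consult to stay clear of rows already claimed by a relaxed supernode; for example, ilu_spivotL()/ilu_zpivotL() above skip such rows with a test of this form (excerpt):

    /* rows tagged by mark_relax() with a column index beyond jcol belong to a
       later relaxed supernode and are excluded from the pivot search */
    if (marker[lsub_ptr[isub]] > jcol)
        continue;
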
+ */ +int mark_relax( + int n, /* order of the matrix A */ + int *relax_end, /* last column in a relaxed supernode. + * if j-th column starts a relaxed supernode, + * relax_end[j] represents the last column of + * this supernode. */ + int *relax_fsupc, /* first column in a relaxed supernode. + * relax_fsupc[j] represents the first column of + * j-th supernode. */ + int *xa_begin, /* Astore->colbeg */ + int *xa_end, /* Astore->colend */ + int *asub, /* row index of A */ + int *marker /* marker[j] is the maximum column index if j-th + * row belongs to a relaxed supernode. */ ) +{ + register int jcol, kcol; + register int i, j, k; + + for (i = 0; i < n && relax_fsupc[i] != EMPTY; i++) + { + jcol = relax_fsupc[i]; /* first column */ + kcol = relax_end[jcol]; /* last column */ + for (j = jcol; j <= kcol; j++) + for (k = xa_begin[j]; k < xa_end[j]; k++) + marker[asub[k]] = jcol; + } + return i; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/memory.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/memory.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/memory.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/memory.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,14 +1,17 @@ -/* +/*! @file memory.c + * \brief Precision-independent memory-related routines + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * </pre>
*/ /** Precision-independent memory-related routines. (Shared by [sdcz]memory.c) **/ -#include "dsp_defs.h" +#include "slu_ddefs.h" #if ( DEBUGlevel>=1 ) /* Debug malloc/free. */ @@ -16,6 +19,7 @@ #define PAD_FACTOR 2 #define DWORD (sizeof(double)) /* Be sure it's no smaller than double. */ +/* size_t is usually defined as 'unsigned long' */ void *superlu_malloc(size_t size) { @@ -23,7 +27,7 @@ buf = (char *) malloc(size + DWORD); if ( !buf ) { - printf("superlu_malloc fails: malloc_total %.0f MB, size %d\n", + printf("superlu_malloc fails: malloc_total %.0f MB, size %ld\n", superlu_malloc_total*1e-6, size); ABORT("superlu_malloc: out of memory"); } @@ -85,8 +89,7 @@ #endif -/* - * Set up pointers for integer working arrays. +/*! \brief Set up pointers for integer working arrays. */ void SetIWork(int m, int n, int panel_size, int *iworkptr, int **segrep, diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/relax_snode.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/relax_snode.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/relax_snode.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/relax_snode.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,35 @@ -/* +/*! @file relax_snode.c + * \brief Identify initial relaxed supernodes + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * </pre>
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ - -#include "dsp_defs.h" +#include "slu_ddefs.h" +/*! \brief + * + *
+ * Purpose
+ * =======
+ *    relax_snode() - Identify the initial relaxed supernodes, assuming that 
+ *    the matrix has been reordered according to the postorder of the etree.
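/*
 * Editor's sketch (not part of the upstream patch): walking the relaxed
 * supernodes the same way mark_relax() earlier in this patch does.  It assumes
 * the convention documented there: relax_fsupc[i] is the first column of the
 * i-th relaxed supernode (EMPTY-terminated), and relax_end[first] is its last
 * column.  print_relaxed_snodes() is an illustrative name, not a SuperLU API.
 */
#include <stdio.h>

#ifndef EMPTY
#define EMPTY (-1)   /* same sentinel SuperLU defines in slu_util.h */
#endif

static void print_relaxed_snodes(int n, const int *relax_fsupc, const int *relax_end)
{
    int i, first, last;
    for (i = 0; i < n && relax_fsupc[i] != EMPTY; i++) {
        first = relax_fsupc[i];      /* first column of the i-th supernode */
        last  = relax_end[first];    /* last column of that supernode      */
        printf("relaxed supernode %d: columns %d..%d\n", i, first, last);
    }
}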
+ * 
+ */ void relax_snode ( const int n, @@ -31,13 +41,7 @@ int *relax_end /* last column in a supernode */ ) { -/* - * Purpose - * ======= - * relax_snode() - Identify the initial relaxed supernodes, assuming that - * the matrix has been reordered according to the postorder of the etree. - * - */ + register int j, parent; register int snode_start; /* beginning of a snode */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scipy_slu_config.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scipy_slu_config.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scipy_slu_config.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scipy_slu_config.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,36 @@ +#ifndef SCIPY_SLU_CONFIG_H +#define SCIPY_SLU_CONFIG_H + +#include + +/* + * Support routines + */ +void superlu_python_module_abort(char *msg); +void *superlu_python_module_malloc(size_t size); +void superlu_python_module_free(void *ptr); + +#define USER_ABORT superlu_python_module_abort +#define USER_MALLOC superlu_python_module_malloc +#define USER_FREE superlu_python_module_free + +#define SCIPY_SPECIFIC_FIX 1 + +/* + * Fortran configuration + */ +#if defined(NO_APPEND_FORTRAN) +#if defined(UPPERCASE_FORTRAN) +#define UpCase 1 +#else +#define NoChange 1 +#endif +#else +#if defined(UPPERCASE_FORTRAN) +#error Uppercase and trailing slash in Fortran names not supported +#else +#define Add_ 1 +#endif +#endif + +#endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,29 @@ -/* +/*! @file scolumn_bmod.c + * \brief performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ *  Permission is hereby granted to use or copy this program for any
+ *  purpose, provided the above notices are retained on all copies.
+ *  Permission to modify the code and to distribute modified code is
+ *  granted, provided the above notices are retained, and a notice that
+ *  the code was modified is included with the above copyright notice.
+ * 
*/ #include #include -#include "ssp_defs.h" +#include "slu_sdefs.h" /* * Function prototypes @@ -32,8 +34,17 @@ -/* Return value: 0 - successful return +/*! \brief + * + *
+ * Purpose:
+ * ========
+ * Performs numeric block updates (sup-col) in topological order.
+ * It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ * Special processing on the supernodal portion of L\U[*,j]
+ * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
*/ int scolumn_bmod ( @@ -48,14 +59,7 @@ SuperLUStat_t *stat /* output */ ) { -/* - * Purpose: - * ======== - * Performs numeric block updates (sup-col) in topological order. - * It features: col-col, 2cols-col, 3cols-col, and sup-col updates. - * Special processing on the supernodal portion of L\U[*,j] - * - */ + #ifdef _CRAY _fcd ftcs1 = _cptofcd("L", strlen("L")), ftcs2 = _cptofcd("N", strlen("N")), diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scolumn_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,50 +1,38 @@ - -/* +/*! @file scolumn_dfs.c + * \brief Performs a symbolic factorization + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -#include "ssp_defs.h" +#include "slu_sdefs.h" -/* What type of supernodes we want */ +/*! \brief What type of supernodes we want */ #define T2_SUPER -int -scolumn_dfs( - const int m, /* in - number of rows in the matrix */ - const int jcol, /* in */ - int *perm_r, /* in */ - int *nseg, /* modified - with new segments appended */ - int *lsub_col, /* in - defines the RHS vector to start the dfs */ - int *segrep, /* modified - with new segments appended */ - int *repfnz, /* modified */ - int *xprune, /* modified */ - int *marker, /* modified */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* + +/*! \brief + * + *
  * Purpose
  * =======
- *   "column_dfs" performs a symbolic factorization on column jcol, and
+ *   SCOLUMN_DFS performs a symbolic factorization on column jcol, and
  *   decide the supernode boundary.
  *
  *   This routine does not use numeric values, but only use the RHS 
@@ -72,8 +60,25 @@
  * ============
  *     0  success;
  *   > 0  number of bytes allocated when run out of space.
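/*
 * Editor's sketch (not part of the upstream patch): the return convention
 * shared by scolumn_dfs()/scolumn_bmod() above -- zero on success, otherwise
 * the number of bytes that had been allocated when memory ran out.  The helper
 * name and the "+ ncol" bookkeeping (mirroring how the *gsitrf drivers report
 * the failure through *info) are illustrative only.
 */
static int check_lu_expand(int retval, int ncol, int *info)
{
    if (retval != 0) {            /* the column routine ran out of memory  */
        *info = retval + ncol;    /* report "bytes allocated plus A->ncol" */
        return 1;
    }
    return 0;                     /* success */
}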
- *
+ * 
*/ +int +scolumn_dfs( + const int m, /* in - number of rows in the matrix */ + const int jcol, /* in */ + int *perm_r, /* in */ + int *nseg, /* modified - with new segments appended */ + int *lsub_col, /* in - defines the RHS vector to start the dfs */ + int *segrep, /* modified - with new segments appended */ + int *repfnz, /* modified */ + int *xprune, /* modified */ + int *marker, /* modified */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + int jcolp1, jcolm1, jsuper, nsuper, nextl; int k, krep, krow, kmark, kperm; int *marker2; /* Used for small panel LU */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,20 +1,24 @@ -/* +/*! @file scomplex.c + * \brief Common arithmetic for complex type + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
- */
-/*
  * This file defines common arithmetic operations for complex type.
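/*
 * Editor's sketch (not part of the upstream patch): exercising the
 * single-precision complex helpers defined in this file.  It assumes the new
 * c_sqrt() added below is declared in slu_scomplex.h next to the existing
 * c_div()/slu_c_abs() prototypes.
 */
#include <stdio.h>
#include "slu_scomplex.h"

int main(void)
{
    complex a = {3.0f, 4.0f};    /* 3 + 4i, |a| = 5      */
    complex b = {0.0f, 2.0f};    /* 2i                   */
    complex q, r;

    c_div(&q, &a, &b);           /* q = a / b = 2 - 1.5i */
    r = c_sqrt(&a);              /* sqrt(3 + 4i) = 2 + i */
    printf("a/b = (%g, %g)  |a| = %g  sqrt(a) = (%g, %g)\n",
           q.r, q.i, slu_c_abs(&a), r.r, r.i);
    return 0;
}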
+ * 
*/ + #include +#include #include -#include "scomplex.h" +#include "slu_scomplex.h" -/* Complex Division c = a/b */ +/*! \brief Complex Division c = a/b */ void c_div(complex *c, complex *a, complex *b) { float ratio, den; @@ -26,8 +30,8 @@ abi = - abi; if( abr <= abi ) { if (abi == 0) { - fprintf(stderr, "z_div.c: division by zero"); - exit (-1); + fprintf(stderr, "z_div.c: division by zero\n"); + exit(-1); } ratio = b->r / b->i ; den = b->i * (1 + ratio*ratio); @@ -44,7 +48,7 @@ } -/* Returns sqrt(z.r^2 + z.i^2) */ +/*! \brief Returns sqrt(z.r^2 + z.i^2) */ double slu_c_abs(complex *z) { float temp; @@ -66,8 +70,7 @@ } -/* Approximates the abs */ -/* Returns abs(z.r) + abs(z.i) */ +/*! \brief Approximates the abs. Returns abs(z.r) + abs(z.i) */ double slu_c_abs1(complex *z) { float real = z->r; @@ -79,7 +82,7 @@ return (real + imag); } -/* Return the exponentiation */ +/*! \brief Return the exponentiation */ void c_exp(complex *r, complex *z) { float expx; @@ -89,17 +92,56 @@ r->i = expx * sin(z->i); } -/* Return the complex conjugate */ +/*! \brief Return the complex conjugate */ void r_cnjg(complex *r, complex *z) { r->r = z->r; r->i = -z->i; } -/* Return the imaginary part */ +/*! \brief Return the imaginary part */ double r_imag(complex *z) { return (z->i); } +/*! \brief SIGN functions for complex number. Returns z/abs(z) */ +complex c_sgn(complex *z) +{ + register float t = slu_c_abs(z); + register complex retval; + + if (t == 0.0) { + retval.r = 1.0, retval.i = 0.0; + } else { + retval.r = z->r / t, retval.i = z->i / t; + } + + return retval; +} + +/*! \brief Square-root of a complex number. */ +complex c_sqrt(complex *z) +{ + complex retval; + register float cr, ci, real, imag; + + real = z->r; + imag = z->i; + + if ( imag == 0.0 ) { + retval.r = sqrt(real); + retval.i = 0.0; + } else { + ci = (sqrt(real*real + imag*imag) - real) / 2.0; + ci = sqrt(ci); + cr = imag / (2.0 * ci); + retval.r = cr; + retval.i = ci; + } + + return retval; +} + + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,73 +0,0 @@ - - -/* - * -- SuperLU routine (version 2.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. 
- * November 15, 1997 - * - */ -#ifndef __SUPERLU_SCOMPLEX /* allow multiple inclusions */ -#define __SUPERLU_SCOMPLEX - -/* - * This header file is to be included in source files c*.c - */ -#ifndef SCOMPLEX_INCLUDE -#define SCOMPLEX_INCLUDE - -typedef struct { float r, i; } complex; - - -/* Macro definitions */ - -/* Complex Addition c = a + b */ -#define c_add(c, a, b) { (c)->r = (a)->r + (b)->r; \ - (c)->i = (a)->i + (b)->i; } - -/* Complex Subtraction c = a - b */ -#define c_sub(c, a, b) { (c)->r = (a)->r - (b)->r; \ - (c)->i = (a)->i - (b)->i; } - -/* Complex-Double Multiplication */ -#define cs_mult(c, a, b) { (c)->r = (a)->r * (b); \ - (c)->i = (a)->i * (b); } - -/* Complex-Complex Multiplication */ -#define cc_mult(c, a, b) { \ - float cr, ci; \ - cr = (a)->r * (b)->r - (a)->i * (b)->i; \ - ci = (a)->i * (b)->r + (a)->r * (b)->i; \ - (c)->r = cr; \ - (c)->i = ci; \ - } - -#define cc_conj(a, b) { \ - (a)->r = (b)->r; \ - (a)->i = -((b)->i); \ - } - -/* Complex equality testing */ -#define c_eq(a, b) ( (a)->r == (b)->r && (a)->i == (b)->i ) - - -#ifdef __cplusplus -extern "C" { -#endif - -/* Prototypes for functions in scomplex.c */ -void c_div(complex *, complex *, complex *); -double slu_c_abs(complex *); /* exact */ -double slu_c_abs1(complex *); /* approximate */ -void c_exp(complex *, complex *); -void r_cnjg(complex *, complex *); -double r_imag(complex *); - - -#ifdef __cplusplus - } -#endif - -#endif - -#endif /* __SUPERLU_SCOMPLEX */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scopy_to_ucol.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scopy_to_ucol.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scopy_to_ucol.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,26 @@ - -/* +/*! @file scopy_to_ucol.c + * \brief Copy a computed column of U to the compressed data structure + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "ssp_defs.h" -#include "util.h" +#include "slu_sdefs.h" int scopy_to_ucol( @@ -47,7 +46,6 @@ float *ucol; int *usub, *xusub; int nzumax; - float zero = 0.0; xsup = Glu->xsup; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scsum1.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scsum1.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scsum1.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/scsum1.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,13 +1,19 @@ -#include "scomplex.h" - -double scsum1_(int *n, complex *cx, int *incx) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 +/*! @file scsum1.c + * \brief Takes sum of the absolute values of a complex vector and returns a single precision result + * + *
+ *     -- LAPACK auxiliary routine (version 2.0) --   
+ *     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
+ *     Courant Institute, Argonne National Lab, and Rice University   
+ *     October 31, 1992   
+ * 
+ */ +#include "slu_scomplex.h" +#include "slu_Cnames.h" +/*! \brief +
     Purpose   
     =======   
 
@@ -32,12 +38,10 @@
             The spacing between successive values of CX.  INCX > 0.   
 
     ===================================================================== 
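/*
 * Editor's sketch (not part of the upstream patch): calling the Fortran-style
 * scsum1_() documented above.  Arguments are passed by pointer and INCX is the
 * stride; the expected value assumes the routine sums the true complex moduli
 * (|1-2i| + |3i| + |-4| = sqrt(5) + 3 + 4), as the LAPACK original does.
 */
#include <stdio.h>
#include "slu_scomplex.h"

extern double scsum1_(int *n, complex *cx, int *incx);

int main(void)
{
    complex cx[3] = { {1.0f, -2.0f}, {0.0f, 3.0f}, {-4.0f, 0.0f} };
    int n = 3, incx = 1;

    printf("scsum1 = %g\n", scsum1_(&n, cx, &incx));   /* about 9.2361 */
    return 0;
}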
-  
-
-
-    
-   Parameter adjustments   
-       Function Body */
+
+*/ +double scsum1_(int *n, complex *cx, int *incx) +{ /* System generated locals */ int i__1, i__2; float ret_val; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sdiagonal.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sdiagonal.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sdiagonal.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sdiagonal.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,129 @@ + +/*! @file sdiagonal.c + * \brief Auxiliary routines to work with diagonal elements + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
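/*
 * Editor's sketch (not part of the upstream patch): using the sfill_diag()
 * helper introduced by this new file on a tiny 2x2 CSC matrix whose (1,1)
 * diagonal entry is missing.  The arrays are allocated with SuperLU's
 * floatMalloc()/intMalloc() because sfill_diag() may reallocate them and
 * SUPERLU_FREE() the originals; the prototype is assumed to be visible via
 * slu_sdefs.h.
 */
#include <stdio.h>
#include "slu_sdefs.h"

int main(void)
{
    int n = 2, nnz = 2;
    float *val  = floatMalloc(nnz);
    int *rowind = intMalloc(nnz);
    int *colptr = intMalloc(n + 1);
    NCformat A;

    val[0] = 1.0f; rowind[0] = 0;     /* column 0: a(0,0) = 1            */
    val[1] = 2.0f; rowind[1] = 0;     /* column 1: a(0,1) = 2, no a(1,1) */
    colptr[0] = 0; colptr[1] = 1; colptr[2] = 2;
    A.nnz = nnz; A.nzval = val; A.rowind = rowind; A.colptr = colptr;

    printf("explicit zero diagonals added: %d\n", sfill_diag(n, &A));  /* 1 */
    return 0;
}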
+ * 
+ */ + +#include "slu_sdefs.h" + +int sfill_diag(int n, NCformat *Astore) +/* fill explicit zeros on the diagonal entries, so that the matrix is not + structurally singular. */ +{ + float *nzval = (float *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + float *nzval_new; + float zero = 0.0; + int *rowind_new; + int i, j, diag; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = floatMalloc(nnz + fill); + rowind_new = intMalloc(nnz + fill); + fill = 0; + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + nzval_new[j + fill] = nzval[j]; + } + if (diag < 0) + { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill] = zero; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + Astore->nnz += fill; + return fill; +} + +int sdominate(int n, NCformat *Astore) +/* make the matrix diagonally dominant */ +{ + float *nzval = (float *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + float *nzval_new; + int *rowind_new; + int i, j, diag; + double s; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = floatMalloc(nnz + fill); + rowind_new = intMalloc(nnz+ fill); + fill = 0; + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + s += fabs(nzval_new[j + fill] = nzval[j]); + } + if (diag >= 0) { + nzval_new[diag+fill] = s * 3.0; + } else { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill] = s * 3.0; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + else + { + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + { + if (rowind[j] == i) diag = j; + s += fabs(nzval[j]); + } + nzval[diag] = s * 3.0; + } + } + Astore->nnz += fill; + return fill; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgscon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgscon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgscon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgscon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,69 +1,80 @@ -/* +/*! @file sgscon.c + * \brief Estimates reciprocal of the condition number of a general matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routines SGECON.
+ * 
*/ + /* * File name: sgscon.c * History: Modified from lapack routines SGECON. */ #include -#include "ssp_defs.h" +#include "slu_sdefs.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   SGSCON estimates the reciprocal of the condition number of a general 
+ *   real matrix A, in either the 1-norm or the infinity-norm, using   
+ *   the LU factorization computed by SGETRF.
+ *
+ *   An estimate is obtained for norm(inv(A)), and the reciprocal of the   
+ *   condition number is computed as   
+ *      RCOND = 1 / ( norm(A) * norm(inv(A)) ).   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ * 
+ *   Arguments   
+ *   =========   
+ *
+ *    NORM    (input) char*
+ *            Specifies whether the 1-norm condition number or the   
+ *            infinity-norm condition number is required:   
+ *            = '1' or 'O':  1-norm;   
+ *            = 'I':         Infinity-norm.
+ *	    
+ *    L       (input) SuperMatrix*
+ *            The factor L from the factorization Pr*A*Pc=L*U as computed by
+ *            sgstrf(). Use compressed row subscripts storage for supernodes,
+ *            i.e., L has types: Stype = SLU_SC, Dtype = SLU_S, Mtype = SLU_TRLU.
+ * 
+ *    U       (input) SuperMatrix*
+ *            The factor U from the factorization Pr*A*Pc=L*U as computed by
+ *            sgstrf(). Use column-wise storage scheme, i.e., U has types:
+ *            Stype = SLU_NC, Dtype = SLU_S, Mtype = SLU_TRU.
+ *	    
+ *    ANORM   (input) float
+ *            If NORM = '1' or 'O', the 1-norm of the original matrix A.   
+ *            If NORM = 'I', the infinity-norm of the original matrix A.
+ *	    
+ *    RCOND   (output) float*
+ *           The reciprocal of the condition number of the matrix A,   
+ *           computed as RCOND = 1/(norm(A) * norm(inv(A))).
+ *	    
+ *    INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *
+ *    ===================================================================== 
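/*
 * Editor's sketch (not part of the upstream patch): a typical call of the
 * interface documented above.  L, U and stat are assumed to come from an
 * earlier sgstrf() factorization of A (not shown), and slangs() is assumed to
 * be declared in slu_sdefs.h, as in the sgsisx() driver later in this patch.
 */
#include "slu_sdefs.h"

float estimate_rcond_1norm(SuperMatrix *A, SuperMatrix *L, SuperMatrix *U,
                           SuperLUStat_t *stat)
{
    char  norm[] = "1";              /* '1' or 'O' = 1-norm, 'I' = infinity-norm */
    float anorm  = slangs(norm, A);  /* norm of the original matrix A            */
    float rcond;
    int   info;

    sgscon(norm, L, U, anorm, &rcond, stat, &info);
    return info == 0 ? rcond : -1.0f;   /* a negative return flags an argument error */
}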
+ * 
+ */ void sgscon(char *norm, SuperMatrix *L, SuperMatrix *U, float anorm, float *rcond, SuperLUStat_t *stat, int *info) { -/* - Purpose - ======= - - SGSCON estimates the reciprocal of the condition number of a general - real matrix A, in either the 1-norm or the infinity-norm, using - the LU factorization computed by SGETRF. - - An estimate is obtained for norm(inv(A)), and the reciprocal of the - condition number is computed as - RCOND = 1 / ( norm(A) * norm(inv(A)) ). - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - NORM (input) char* - Specifies whether the 1-norm condition number or the - infinity-norm condition number is required: - = '1' or 'O': 1-norm; - = 'I': Infinity-norm. - - L (input) SuperMatrix* - The factor L from the factorization Pr*A*Pc=L*U as computed by - sgstrf(). Use compressed row subscripts storage for supernodes, - i.e., L has types: Stype = SLU_SC, Dtype = SLU_S, Mtype = SLU_TRLU. - - U (input) SuperMatrix* - The factor U from the factorization Pr*A*Pc=L*U as computed by - sgstrf(). Use column-wise storage scheme, i.e., U has types: - Stype = SLU_NC, Dtype = SLU_S, Mtype = TRU. - - ANORM (input) float - If NORM = '1' or 'O', the 1-norm of the original matrix A. - If NORM = 'I', the infinity-norm of the original matrix A. - - RCOND (output) float* - The reciprocal of the condition number of the matrix A, - computed as RCOND = 1/(norm(A) * norm(inv(A))). - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - ===================================================================== -*/ /* Local variables */ int kase, kase1, onenrm, i; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsequ.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsequ.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsequ.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsequ.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,81 +1,90 @@ - -/* +/*! @file sgsequ.c + * \brief Computes row and column scalings + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from LAPACK routine SGEEQU
+ * 
*/ /* * File name: sgsequ.c * History: Modified from LAPACK routine SGEEQU */ #include -#include "ssp_defs.h" -#include "util.h" +#include "slu_sdefs.h" + + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ *
+ *   SGSEQU computes row and column scalings intended to equilibrate an   
+ *   M-by-N sparse matrix A and reduce its condition number. R returns the row
+ *   scale factors and C the column scale factors, chosen to try to make   
+ *   the largest element in each row and column of the matrix B with   
+ *   elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.   
+ *
+ *   R(i) and C(j) are restricted to be between SMLNUM = smallest safe   
+ *   number and BIGNUM = largest safe number.  Use of these scaling   
+ *   factors is not guaranteed to reduce the condition number of A but   
+ *   works well in practice.   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input) SuperMatrix*
+ *           The matrix of dimension (A->nrow, A->ncol) whose equilibration
+ *           factors are to be computed. The type of A can be:
+ *           Stype = SLU_NC; Dtype = SLU_S; Mtype = SLU_GE.
+ *	    
+ *   R       (output) float*, size A->nrow
+ *           If INFO = 0 or INFO > M, R contains the row scale factors   
+ *           for A.
+ *	    
+ *   C       (output) float*, size A->ncol
+ *           If INFO = 0,  C contains the column scale factors for A.
+ *	    
+ *   ROWCND  (output) float*
+ *           If INFO = 0 or INFO > M, ROWCND contains the ratio of the   
+ *           smallest R(i) to the largest R(i).  If ROWCND >= 0.1 and   
+ *           AMAX is neither too large nor too small, it is not worth   
+ *           scaling by R.
+ *	    
+ *   COLCND  (output) float*
+ *           If INFO = 0, COLCND contains the ratio of the smallest   
+ *           C(i) to the largest C(i).  If COLCND >= 0.1, it is not   
+ *           worth scaling by C.
+ *	    
+ *   AMAX    (output) float*
+ *           Absolute value of largest matrix element.  If AMAX is very   
+ *           close to overflow or very close to underflow, the matrix   
+ *           should be scaled.
+ *	    
+ *   INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *           > 0:  if INFO = i,  and i is   
+ *                 <= A->nrow:  the i-th row of A is exactly zero   
+ *                 >  A->ncol:  the (i-M)-th column of A is exactly zero   
+ *
+ *   ===================================================================== 
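/*
 * Editor's sketch (not part of the upstream patch): computing the scalings
 * documented above and applying them with slaqgs(), the same pairing the
 * sgsisx() driver later in this patch uses when options->Equil = YES.
 * R and C must hold A->nrow and A->ncol floats respectively.
 */
#include "slu_sdefs.h"

void equilibrate(SuperMatrix *A, float *R, float *C, char *equed)
{
    float rowcnd, colcnd, amax;
    int   info;

    sgsequ(A, R, C, &rowcnd, &colcnd, &amax, &info);
    if (info == 0)
        slaqgs(A, R, C, rowcnd, colcnd, amax, equed);  /* sets *equed to 'N', 'R', 'C' or 'B' */
    else
        *equed = 'N';   /* a row or column of A was exactly zero; do not scale */
}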
+ * 
+ */ void sgsequ(SuperMatrix *A, float *r, float *c, float *rowcnd, float *colcnd, float *amax, int *info) { -/* - Purpose - ======= - - SGSEQU computes row and column scalings intended to equilibrate an - M-by-N sparse matrix A and reduce its condition number. R returns the row - scale factors and C the column scale factors, chosen to try to make - the largest element in each row and column of the matrix B with - elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1. - - R(i) and C(j) are restricted to be between SMLNUM = smallest safe - number and BIGNUM = largest safe number. Use of these scaling - factors is not guaranteed to reduce the condition number of A but - works well in practice. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input) SuperMatrix* - The matrix of dimension (A->nrow, A->ncol) whose equilibration - factors are to be computed. The type of A can be: - Stype = SLU_NC; Dtype = SLU_S; Mtype = SLU_GE. - - R (output) float*, size A->nrow - If INFO = 0 or INFO > M, R contains the row scale factors - for A. - - C (output) float*, size A->ncol - If INFO = 0, C contains the column scale factors for A. - - ROWCND (output) float* - If INFO = 0 or INFO > M, ROWCND contains the ratio of the - smallest R(i) to the largest R(i). If ROWCND >= 0.1 and - AMAX is neither too large nor too small, it is not worth - scaling by R. - - COLCND (output) float* - If INFO = 0, COLCND contains the ratio of the smallest - C(i) to the largest C(i). If COLCND >= 0.1, it is not - worth scaling by C. - - AMAX (output) float* - Absolute value of largest matrix element. If AMAX is very - close to overflow or very close to underflow, the matrix - should be scaled. - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, and i is - <= A->nrow: the i-th row of A is exactly zero - > A->ncol: the (i-M)-th column of A is exactly zero - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsisx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsisx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsisx.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsisx.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,693 @@ + +/*! @file sgsisx.c + * \brief Gives the approximate solutions of linear equations A*X=B or A'*X=B + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ +#include "slu_sdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * SGSISX gives the approximate solutions of linear equations A*X=B or A'*X=B,
+ * using the ILU factorization from sgsitrf(). An estimation of
+ * the condition number is provided. It performs the following steps:
+ *
+ *   1. If A is stored column-wise (A->Stype = SLU_NC):
+ *  
+ *	1.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A is
+ *	     overwritten by diag(R)*A*diag(C) and B by diag(R)*B
+ *	     (if options->Trans=NOTRANS) or diag(C)*B (if options->Trans
+ *	     = TRANS or CONJ).
+ *
+ *	1.2. Permute columns of A, forming A*Pc, where Pc is a permutation
+ *	     matrix that usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	1.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the matrix A (after equilibration if options->Equil = YES)
+ *	     as Pr*A*Pc = L*U, with Pr determined by partial pivoting.
+ *
+ *	1.4. Compute the reciprocal pivot growth factor.
+ *
+ *	1.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n),
+ *	     and info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	1.6. The system of equations is solved for X using the factored form
+ *	     of A.
+ *
+ *	1.7. options->IterRefine is not used
+ *
+ *	1.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *	1.9. options for ILU only
+ *	     1) If options->RowPerm = LargeDiag, MC64 is used to scale and
+ *		permute the matrix to an I-matrix, that is Pr*Dr*A*Dc has
+ *		entries of modulus 1 on the diagonal and off-diagonal entries
+ *		of modulus at most 1. If MC64 fails, dgsequ() is used to
+ *		equilibrate the system.
+ *	     2) options->ILU_DropTol = tau is the threshold for dropping.
+ *		For L, it is used directly (for the whole row in a supernode);
+ *		For U, ||A(:,i)||_oo * tau is used as the threshold
+ *	        for the	i-th column.
+ *		If a secondary dropping rule is required, tau will
+ *	        also be used to compute the second threshold.
+ *	     3) options->ILU_FillFactor = gamma, used as the initial guess
+ *		of memory growth.
+ *		If a secondary dropping rule is required, it will also
+ *              be used as an upper bound of the memory.
+ *	     4) options->ILU_DropRule specifies the dropping rule.
+ *		Option		Explanation
+ *		======		===========
+ *		DROP_BASIC:	Basic dropping rule, supernodal based ILU.
+ *		DROP_PROWS:	Supernodal based ILUTP, p = gamma * nnz(A) / n.
+ *		DROP_COLUMN:	Variation of ILUTP, for j-th column,
+ *				p = gamma * nnz(A(:,j)).
+ *		DROP_AREA:	Variation of ILUTP, for j-th column, use
+ *				nnz(F(:,1:j)) / nnz(A(:,1:j)) to control the
+ *				memory.
+ *		DROP_DYNAMIC:	Modify the threshold tau during the
+ *				factorization.
+ *				If nnz(L(:,1:j)) / nnz(A(:,1:j)) > gamma
+ *				    tau_L(j) := MIN(1, tau_L(j-1) * 2);
+ *				Otherwise
+ *				    tau_L(j) := tau_L(j-1) / 2;
+ *				tau_U(j) uses the similar rule.
+ *				NOTE: the thresholds used by L and U are
+ *				independent.
+ *		DROP_INTERP:	Compute the second dropping threshold by
+ *				interpolation instead of sorting (default).
+ *				In this case, the actual fill ratio is not
+ *				guaranteed smaller than gamma.
+ *		DROP_PROWS, DROP_COLUMN and DROP_AREA are mutually exclusive.
+ *		( The default option is DROP_BASIC | DROP_AREA. )
+ *	     5) options->ILU_Norm is the criterion for computing the average
+ *		value of a row in L.
+ *		options->ILU_Norm	average(x[1:n])
+ *		=================	===============
+ *		ONE_NORM		||x||_1 / n
+ *		TWO_NORM		||x||_2 / sqrt(n)
+ *		INF_NORM		max{|x[i]|}
+ *	     6) options->ILU_MILU specifies the type of MILU's variation.
+ *		= SILU (default): do not perform MILU;
+ *		= SMILU_1 (not recommended):
+ *		    U(i,i) := U(i,i) + sum(dropped entries);
+ *		= SMILU_2:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(dropped entries);
+ *		= SMILU_3:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(|dropped entries|);
+ *		NOTE: Even SMILU_1 does not preserve the column sum because of
+ *		late dropping.
+ *	     7) options->ILU_FillTol is used as the perturbation when
+ *		encountering zero pivots. If some U(i,i) = 0, so that U is
+ *		exactly singular, then
+ *		   U(i,i) := ||A(:,i)|| * options->ILU_FillTol ** (1 - i / n).
+ *
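/*
 * Editor's sketch (not part of the upstream patch): filling in the ILU
 * controls listed under 1.9 above.  The fields and enum values are the ones
 * named in that list; ilu_set_default_options() is assumed to be the SuperLU
 * 4.0 helper that seeds sensible defaults.
 */
#include "slu_sdefs.h"

void setup_ilu_options(superlu_options_t *options)
{
    ilu_set_default_options(options);     /* DROP_BASIC | DROP_AREA, SILU, ... */
    options->ILU_DropTol    = 1e-4;       /* tau: dropping threshold           */
    options->ILU_FillFactor = 10.0;       /* gamma: expected memory growth     */
    options->ILU_DropRule   = DROP_BASIC | DROP_AREA;
    options->ILU_Norm       = INF_NORM;   /* row-average criterion for L       */
    options->ILU_MILU       = SILU;       /* plain ILU, no modified variant    */
    options->ILU_FillTol    = 1e-2;       /* perturbation used for zero pivots */
}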
+ *   2. If A is stored row-wise (A->Stype = SLU_NR), apply the above algorithm
+ *	to the transpose of A:
+ *
+ *	2.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A' is
+ *	     overwritten by diag(R)*A'*diag(C) and B by diag(R)*B
+ *	     (if trans='N') or diag(C)*B (if trans = 'T' or 'C').
+ *
+ *	2.2. Permute columns of transpose(A) (rows of A),
+ *	     forming transpose(A)*Pc, where Pc is a permutation matrix that
+ *	     usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	2.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the transpose(A) (after equilibration if
+ *	     options->Equil = YES) as Pr*transpose(A)*Pc = L*U with the
+ *	     permutation Pr determined by partial pivoting.
+ *
+ *	2.4. Compute the reciprocal pivot growth factor.
+ *
+ *	2.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		 U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n).
+ *	     And info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	2.6. The system of equations is solved for X using the factored form
+ *	     of transpose(A).
+ *
+ *	2.7. options->IterRefine is not used.
+ *
+ *	2.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the LU decomposition will be performed and how the
+ *	   system will be solved.
+ *
+ * A	   (input/output) SuperMatrix*
+ *	   Matrix A in A*X=B, of dimension (A->nrow, A->ncol). The number
+ *	   of the linear equations is A->nrow. Currently, the type of A can be:
+ *	   Stype = SLU_NC or SLU_NR, Dtype = SLU_S, Mtype = SLU_GE.
+ *	   In the future, more general A may be handled.
+ *
+ *	   On entry, if options->Fact = FACTORED and equed is not 'N',
+ *	   then A must have been equilibrated by the scaling factors in
+ *	   R and/or C.
+ *	   On exit, A is not modified if options->Equil = NO, or if
+ *	   options->Equil = YES but equed = 'N' on exit.
+ *	   Otherwise, if options->Equil = YES and equed is not 'N',
+ *	   A is scaled as follows:
+ *	   If A->Stype = SLU_NC:
+ *	     equed = 'R':  A := diag(R) * A
+ *	     equed = 'C':  A := A * diag(C)
+ *	     equed = 'B':  A := diag(R) * A * diag(C).
+ *	   If A->Stype = SLU_NR:
+ *	     equed = 'R':  transpose(A) := diag(R) * transpose(A)
+ *	     equed = 'C':  transpose(A) := transpose(A) * diag(C)
+ *	     equed = 'B':  transpose(A) := diag(R) * transpose(A) * diag(C).
+ *
+ * perm_c  (input/output) int*
+ *	   If A->Stype = SLU_NC, Column permutation vector of size A->ncol,
+ *	   which defines the permutation matrix Pc; perm_c[i] = j means
+ *	   column i of A is in position j in A*Pc.
+ *	   On exit, perm_c may be overwritten by the product of the input
+ *	   perm_c and a permutation that postorders the elimination tree
+ *	   of Pc'*A'*A*Pc; perm_c is not changed if the elimination tree
+ *	   is already in postorder.
+ *
+ *	   If A->Stype = SLU_NR, column permutation vector of size A->nrow,
+ *	   which describes permutation of columns of transpose(A) 
+ *	   (rows of A) as described above.
+ *
+ * perm_r  (input/output) int*
+ *	   If A->Stype = SLU_NC, row permutation vector of size A->nrow, 
+ *	   which defines the permutation matrix Pr, and is determined
+ *	   by partial pivoting.  perm_r[i] = j means row i of A is in 
+ *	   position j in Pr*A.
+ *
+ *	   If A->Stype = SLU_NR, permutation vector of size A->ncol, which
+ *	   determines permutation of rows of transpose(A)
+ *	   (columns of A) as described above.
+ *
+ *	   If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	   will try to use the input perm_r, unless a certain threshold
+ *	   criterion is violated. In that case, perm_r is overwritten by a
+ *	   new permutation determined by partial pivoting or diagonal
+ *	   threshold pivoting.
+ *	   Otherwise, perm_r is output argument.
+ *
+ * etree   (input/output) int*,  dimension (A->ncol)
+ *	   Elimination tree of Pc'*A'*A*Pc.
+ *	   If options->Fact != FACTORED and options->Fact != DOFACT,
+ *	   etree is an input argument, otherwise it is an output argument.
+ *	   Note: etree is a vector of parent pointers for a forest whose
+ *	   vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *
+ * equed   (input/output) char*
+ *	   Specifies the form of equilibration that was done.
+ *	   = 'N': No equilibration.
+ *	   = 'R': Row equilibration, i.e., A was premultiplied by diag(R).
+ *	   = 'C': Column equilibration, i.e., A was postmultiplied by diag(C).
+ *	   = 'B': Both row and column equilibration, i.e., A was replaced 
+ *		  by diag(R)*A*diag(C).
+ *	   If options->Fact = FACTORED, equed is an input argument,
+ *	   otherwise it is an output argument.
+ *
+ * R	   (input/output) float*, dimension (A->nrow)
+ *	   The row scale factors for A or transpose(A).
+ *	   If equed = 'R' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the left by diag(R).
+ *	   If equed = 'N' or 'C', R is not accessed.
+ *	   If options->Fact = FACTORED, R is an input argument,
+ *	       otherwise, R is output.
+ *	   If options->Fact = FACTORED and equed = 'R' or 'B', each element
+ *	       of R must be positive.
+ *
+ * C	   (input/output) float*, dimension (A->ncol)
+ *	   The column scale factors for A or transpose(A).
+ *	   If equed = 'C' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the right by diag(C).
+ *	   If equed = 'N' or 'R', C is not accessed.
+ *	   If options->Fact = FACTORED, C is an input argument,
+ *	       otherwise, C is output.
+ *	   If options->Fact = FACTORED and equed = 'C' or 'B', each element
+ *	       of C must be positive.
+ *
+ * L	   (output) SuperMatrix*
+ *	   The factor L from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses compressed row subscripts storage for supernodes, i.e.,
+ *	   L has types: Stype = SLU_SC, Dtype = SLU_S, Mtype = SLU_TRLU.
+ *
+ * U	   (output) SuperMatrix*
+ *	   The factor U from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses column-wise storage scheme, i.e., U has types:
+ *	   Stype = SLU_NC, Dtype = SLU_S, Mtype = SLU_TRU.
+ *
+ * work    (workspace/output) void*, size (lwork) (in bytes)
+ *	   User supplied workspace, should be large enough
+ *	   to hold data structures for factors L and U.
+ *	   On exit, if fact is not 'F', L and U point to this array.
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 mem_usage->total_needed; no other side effects.
+ *
+ *	   See argument 'mem_usage' for memory usage statistics.
+ *
+ * B	   (input/output) SuperMatrix*
+ *	   B has types: Stype = SLU_DN, Dtype = SLU_S, Mtype = SLU_GE.
+ *	   On entry, the right hand side matrix.
+ *	   If B->ncol = 0, only LU decomposition is performed, the triangular
+ *			   solve is skipped.
+ *	   On exit,
+ *	      if equed = 'N', B is not modified; otherwise
+ *	      if A->Stype = SLU_NC:
+ *		 if options->Trans = NOTRANS and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *	      if A->Stype = SLU_NR:
+ *		 if options->Trans = NOTRANS and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B.
+ *
+ * X	   (output) SuperMatrix*
+ *	   X has types: Stype = SLU_DN, Dtype = SLU_S, Mtype = SLU_GE.
+ *	   If info = 0 or info = A->ncol+1, X contains the solution matrix
+ *	   to the original system of equations. Note that A and B are modified
+ *	   on exit if equed is not 'N', and the solution to the equilibrated
+ *	   system is inv(diag(C))*X if options->Trans = NOTRANS and
+ *	   equed = 'C' or 'B', or inv(diag(R))*X if options->Trans = 'T' or 'C'
+ *	   and equed = 'R' or 'B'.
+ *
+ * recip_pivot_growth (output) float*
+ *	   The reciprocal pivot growth factor max_j( norm(A_j)/norm(U_j) ).
+ *	   The infinity norm is used. If recip_pivot_growth is much less
+ *	   than 1, the stability of the LU factorization could be poor.
+ *
+ * rcond   (output) float*
+ *	   The estimate of the reciprocal condition number of the matrix A
+ *	   after equilibration (if done). If rcond is less than the machine
+ *	   precision (in particular, if rcond = 0), the matrix is singular
+ *	   to working precision. This condition is indicated by a return
+ *	   code of info > 0.
+ *
+ * mem_usage (output) mem_usage_t*
+ *	   Record the memory usage statistics, consisting of following fields:
+ *	   - for_lu (float)
+ *	     The amount of space used in bytes for L\U data structures.
+ *	   - total_needed (float)
+ *	     The amount of space needed in bytes to perform factorization.
+ *	   - expansions (int)
+ *	     The number of memory expansions during the LU factorization.
+ *
+ * stat   (output) SuperLUStat_t*
+ *	  Record the statistics on runtime and floating-point operation count.
+ *	  See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info    (output) int*
+ *	   = 0: successful exit
+ *	   < 0: if info = -i, the i-th argument had an illegal value
+ *	   > 0: if info = i, and i is
+ *		<= A->ncol: number of zero pivots. They are replaced by small
+ *		      entries due to options->ILU_FillTol.
+ *		= A->ncol+1: U is nonsingular, but RCOND is less than machine
+ *		      precision, meaning that the matrix is singular to
+ *		      working precision. Nevertheless, the solution and
+ *		      error bounds are computed because there are a number
+ *		      of situations where the computed solution can be more
+ *		      accurate than the value of RCOND would suggest.
+ *		> A->ncol+1: number of bytes allocated when memory allocation
+ *		      failure occurred, plus A->ncol.
+ * 
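/*
 * Editor's sketch (not part of the upstream patch): a minimal call of the
 * driver documented above.  A (SLU_NC, SLU_S, SLU_GE) and the dense B and X
 * (SLU_DN) are assumed to exist already; ilu_set_default_options(), StatInit()
 * and StatFree() are the usual SuperLU helpers, and cleanup of L, U and the
 * permutation arrays is omitted for brevity.
 */
#include <stdio.h>
#include "slu_sdefs.h"

void ilu_solve(SuperMatrix *A, SuperMatrix *B, SuperMatrix *X, int n)
{
    superlu_options_t options;
    SuperLUStat_t stat;
    SuperMatrix L, U;
    mem_usage_t mem_usage;
    char  equed[1];
    int  *perm_c = intMalloc(n), *perm_r = intMalloc(n), *etree = intMalloc(n);
    float *R = floatMalloc(n), *C = floatMalloc(n);
    float recip_pivot_growth, rcond;
    int   info;

    ilu_set_default_options(&options);
    options.ConditionNumber = YES;        /* ask for the rcond estimate */
    StatInit(&stat);

    /* work = NULL, lwork = 0: let SuperLU allocate the factor storage itself */
    sgsisx(&options, A, perm_c, perm_r, etree, equed, R, C, &L, &U,
           NULL, 0, B, X, &recip_pivot_growth, &rcond, &mem_usage, &stat, &info);

    printf("sgsisx: info = %d, rcond = %g\n", info, rcond);
    StatFree(&stat);
}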
+ */ + +void +sgsisx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, float *R, float *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, + float *recip_pivot_growth, float *rcond, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info) +{ + + DNformat *Bstore, *Xstore; + float *Bmat, *Xmat; + int ldb, ldx, nrhs; + SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ + SuperMatrix AC; /* Matrix postmultiplied by Pc */ + int colequ, equil, nofact, notran, rowequ, permc_spec, mc64; + trans_t trant; + char norm[1]; + int i, j, info1; + float amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; + int relax, panel_size; + float diag_pivot_thresh; + double t0; /* temporary time */ + double *utime; + + int *perm = NULL; + + /* External functions */ + extern float slangs(char *, SuperMatrix *); + + Bstore = B->Store; + Xstore = X->Store; + Bmat = Bstore->nzval; + Xmat = Xstore->nzval; + ldb = Bstore->lda; + ldx = Xstore->lda; + nrhs = B->ncol; + + *info = 0; + nofact = (options->Fact != FACTORED); + equil = (options->Equil == YES); + notran = (options->Trans == NOTRANS); + mc64 = (options->RowPerm == LargeDiag); + if ( nofact ) { + *(unsigned char *)equed = 'N'; + rowequ = FALSE; + colequ = FALSE; + } else { + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + smlnum = slamch_("Safe minimum"); + bignum = 1. / smlnum; + } + + /* Test the input parameters */ + if (!nofact && options->Fact != DOFACT && options->Fact != SamePattern && + options->Fact != SamePattern_SameRowPerm && + !notran && options->Trans != TRANS && options->Trans != CONJ && + !equil && options->Equil != NO) + *info = -1; + else if ( A->nrow != A->ncol || A->nrow < 0 || + (A->Stype != SLU_NC && A->Stype != SLU_NR) || + A->Dtype != SLU_S || A->Mtype != SLU_GE ) + *info = -2; + else if (options->Fact == FACTORED && + !(rowequ || colequ || lsame_(equed, "N"))) + *info = -6; + else { + if (rowequ) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, R[j]); + rcmax = SUPERLU_MAX(rcmax, R[j]); + } + if (rcmin <= 0.) *info = -7; + else if ( A->nrow > 0) + rowcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else rowcnd = 1.; + } + if (colequ && *info == 0) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, C[j]); + rcmax = SUPERLU_MAX(rcmax, C[j]); + } + if (rcmin <= 0.) *info = -8; + else if (A->nrow > 0) + colcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else colcnd = 1.; + } + if (*info == 0) { + if ( lwork < -1 ) *info = -12; + else if ( B->ncol < 0 || Bstore->lda < SUPERLU_MAX(0, A->nrow) || + B->Stype != SLU_DN || B->Dtype != SLU_S || + B->Mtype != SLU_GE ) + *info = -13; + else if ( X->ncol < 0 || Xstore->lda < SUPERLU_MAX(0, A->nrow) || + (B->ncol != 0 && B->ncol != X->ncol) || + X->Stype != SLU_DN || + X->Dtype != SLU_S || X->Mtype != SLU_GE ) + *info = -14; + } + } + if (*info != 0) { + i = -(*info); + xerbla_("sgsisx", &i); + return; + } + + /* Initialization for factor parameters */ + panel_size = sp_ienv(1); + relax = sp_ienv(2); + diag_pivot_thresh = options->DiagPivotThresh; + + utime = stat->utime; + + /* Convert A to SLU_NC format when necessary. 
*/ + if ( A->Stype == SLU_NR ) { + NRformat *Astore = A->Store; + AA = (SuperMatrix *) SUPERLU_MALLOC( sizeof(SuperMatrix) ); + sCreate_CompCol_Matrix(AA, A->ncol, A->nrow, Astore->nnz, + Astore->nzval, Astore->colind, Astore->rowptr, + SLU_NC, A->Dtype, A->Mtype); + if ( notran ) { /* Reverse the transpose argument. */ + trant = TRANS; + notran = 0; + } else { + trant = NOTRANS; + notran = 1; + } + } else { /* A->Stype == SLU_NC */ + trant = options->Trans; + AA = A; + } + + if ( nofact ) { + register int i, j; + NCformat *Astore = AA->Store; + int nnz = Astore->nnz; + int *colptr = Astore->colptr; + int *rowind = Astore->rowind; + float *nzval = (float *)Astore->nzval; + int n = AA->nrow; + + if ( mc64 ) { + *equed = 'B'; + rowequ = colequ = 1; + t0 = SuperLU_timer_(); + if ((perm = intMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for perm[]"); + + info1 = sldperm(5, n, nnz, colptr, rowind, nzval, perm, R, C); + + if (info1 > 0) { /* MC64 fails, call sgsequ() later */ + mc64 = 0; + SUPERLU_FREE(perm); + perm = NULL; + } else { + for (i = 0; i < n; i++) { + R[i] = exp(R[i]); + C[i] = exp(C[i]); + } + /* permute and scale the matrix */ + for (j = 0; j < n; j++) { + for (i = colptr[j]; i < colptr[j + 1]; i++) { + nzval[i] *= R[rowind[i]] * C[j]; + rowind[i] = perm[rowind[i]]; + } + } + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + if ( !mc64 & equil ) { + t0 = SuperLU_timer_(); + /* Compute row and column scalings to equilibrate the matrix A. */ + sgsequ(AA, R, C, &rowcnd, &colcnd, &amax, &info1); + + if ( info1 == 0 ) { + /* Equilibrate matrix A. */ + slaqgs(AA, R, C, rowcnd, colcnd, amax, equed); + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + } + + if ( nrhs > 0 ) { + /* Scale the right hand side if equilibration was performed. */ + if ( notran ) { + if ( rowequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Bmat[i + j*ldb] *= R[i]; + } + } + } else if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Bmat[i + j*ldb] *= C[i]; + } + } + } + + if ( nofact ) { + + t0 = SuperLU_timer_(); + /* + * Gnet column permutation vector perm_c[], according to permc_spec: + * permc_spec = NATURAL: natural ordering + * permc_spec = MMD_AT_PLUS_A: minimum degree on structure of A'+A + * permc_spec = MMD_ATA: minimum degree on structure of A'*A + * permc_spec = COLAMD: approximate minimum degree column ordering + * permc_spec = MY_PERMC: the ordering already supplied in perm_c[] + */ + permc_spec = options->ColPerm; + if ( permc_spec != MY_PERMC && options->Fact == DOFACT ) + get_perm_c(permc_spec, AA, perm_c); + utime[COLPERM] = SuperLU_timer_() - t0; + + t0 = SuperLU_timer_(); + sp_preorder(options, AA, perm_c, etree, &AC); + utime[ETREE] = SuperLU_timer_() - t0; + + /* Compute the LU factorization of A*Pc. */ + t0 = SuperLU_timer_(); + sgsitrf(options, &AC, relax, panel_size, etree, work, lwork, + perm_c, perm_r, L, U, stat, info); + utime[FACT] = SuperLU_timer_() - t0; + + if ( lwork == -1 ) { + mem_usage->total_needed = *info - A->ncol; + return; + } + } + + if ( options->PivotGrowth ) { + if ( *info > 0 ) return; + + /* Compute the reciprocal pivot growth factor *recip_pivot_growth. */ + *recip_pivot_growth = sPivotGrowth(A->ncol, AA, perm_c, L, U); + } + + if ( options->ConditionNumber ) { + /* Estimate the reciprocal of the condition number of A. 
*/ + t0 = SuperLU_timer_(); + if ( notran ) { + *(unsigned char *)norm = '1'; + } else { + *(unsigned char *)norm = 'I'; + } + anorm = slangs(norm, AA); + sgscon(norm, L, U, anorm, rcond, stat, &info1); + utime[RCOND] = SuperLU_timer_() - t0; + } + + if ( nrhs > 0 ) { + /* Compute the solution matrix X. */ + for (j = 0; j < nrhs; j++) /* Save a copy of the right hand sides */ + for (i = 0; i < B->nrow; i++) + Xmat[i + j*ldx] = Bmat[i + j*ldb]; + + t0 = SuperLU_timer_(); + sgstrs (trant, L, U, perm_c, perm_r, X, stat, &info1); + utime[SOLVE] = SuperLU_timer_() - t0; + + /* Transform the solution matrix X to a solution of the original + system. */ + if ( notran ) { + if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Xmat[i + j*ldx] *= C[i]; + } + } + } else { + if ( rowequ ) { + if (perm) { + float *tmp; + int n = A->nrow; + + if ((tmp = floatMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for tmp[]"); + for (j = 0; j < nrhs; j++) { + for (i = 0; i < n; i++) + tmp[i] = Xmat[i + j * ldx]; /*dcopy*/ + for (i = 0; i < n; i++) + Xmat[i + j * ldx] = R[i] * tmp[perm[i]]; + } + SUPERLU_FREE(tmp); + } else { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + Xmat[i + j*ldx] *= R[i]; + } + } + } + } + } /* end if nrhs > 0 */ + + if ( options->ConditionNumber ) { + /* Set INFO = A->ncol+1 if the matrix is singular to working precision. */ + if ( *rcond < slamch_("E") && *info == 0) *info = A->ncol + 1; + } + + if (perm) SUPERLU_FREE(perm); + + if ( nofact ) { + ilu_sQuerySpace(L, U, mem_usage); + Destroy_CompCol_Permuted(&AC); + } + if ( A->Stype == SLU_NR ) { + Destroy_SuperMatrix_Store(AA); + SUPERLU_FREE(AA); + } + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsitrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsitrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsitrf.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsitrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,625 @@ + +/*! @file sgsitf.c + * \brief Computes an ILU factorization of a general sparse matrix + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ + +#include "slu_sdefs.h" + +#ifdef DEBUG +int num_drop_L; +#endif + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * SGSITRF computes an ILU factorization of a general sparse m-by-n
+ * matrix A using partial pivoting with row interchanges.
+ * The factorization has the form
+ *     Pr * A = L * U
+ * where Pr is a row permutation matrix, L is lower triangular with unit
+ * diagonal elements (lower trapezoidal if A->nrow > A->ncol), and U is upper
+ * triangular (upper trapezoidal if A->nrow < A->ncol).
+ *
+ * See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the ILU decomposition will be performed.
+ *
+ * A	    (input) SuperMatrix*
+ *	    Original matrix A, permuted by columns, of dimension
+ *	    (A->nrow, A->ncol). The type of A can be:
+ *	    Stype = SLU_NCP; Dtype = SLU_S; Mtype = SLU_GE.
+ *
+ * relax    (input) int
+ *	    To control degree of relaxing supernodes. If the number
+ *	    of nodes (columns) in a subtree of the elimination tree is less
+ *	    than relax, this subtree is considered as one supernode,
+ *	    regardless of the row structures of those columns.
+ *
+ * panel_size (input) int
+ *	    A panel consists of at most panel_size consecutive columns.
+ *
+ * etree    (input) int*, dimension (A->ncol)
+ *	    Elimination tree of A'*A.
+ *	    Note: etree is a vector of parent pointers for a forest whose
+ *	    vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *	    On input, the columns of A should be permuted so that the
+ *	    etree is in a certain postorder.
+ *
+ * work     (input/output) void*, size (lwork) (in bytes)
+ *	    User-supplied work space and space for the output data structures.
+ *	    Not referenced if lwork = 0;
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 *info; no other side effects.
+ *
+ * perm_c   (input) int*, dimension (A->ncol)
+ *	    Column permutation vector, which defines the
+ *	    permutation matrix Pc; perm_c[i] = j means column i of A is
+ *	    in position j in A*Pc.
+ *	    When searching for diagonal, perm_c[*] is applied to the
+ *	    row subscripts of A, so that diagonal threshold pivoting
+ *	    can find the diagonal of A, rather than that of A*Pc.
+ *
+ * perm_r   (input/output) int*, dimension (A->nrow)
+ *	    Row permutation vector which defines the permutation matrix Pr,
+ *	    perm_r[i] = j means row i of A is in position j in Pr*A.
+ *	    If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	       will try to use the input perm_r, unless a certain threshold
+ *	       criterion is violated. In that case, perm_r is overwritten by
+ *	       a new permutation determined by partial pivoting or diagonal
+ *	       threshold pivoting.
+ *	    Otherwise, perm_r is an output argument;
+ *
+ * L	    (output) SuperMatrix*
+ *	    The factor L from the factorization Pr*A=L*U; use compressed row
+ *	    subscripts storage for supernodes, i.e., L has type:
+ *	    Stype = SLU_SC, Dtype = SLU_S, Mtype = SLU_TRLU.
+ *
+ * U	    (output) SuperMatrix*
+ *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
+ *	    storage scheme, i.e., U has types: Stype = SLU_NC,
+ *	    Dtype = SLU_S, Mtype = SLU_TRU.
+ *
+ * stat     (output) SuperLUStat_t*
+ *	    Record the statistics on runtime and floating-point operation count.
+ *	    See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info     (output) int*
+ *	    = 0: successful exit
+ *	    < 0: if info = -i, the i-th argument had an illegal value
+ *	    > 0: if info = i, and i is
+ *	       <= A->ncol: number of zero pivots. They are replaced by small
+ *		  entries according to options->ILU_FillTol.
+ *	       > A->ncol: number of bytes allocated when memory allocation
+ *		  failure occurred, plus A->ncol. If lwork = -1, it is
+ *		  the estimated amount of space needed, plus A->ncol.
+ *
+ * ======================================================================
+ *
+ * Local Working Arrays:
+ * ======================
+ *   m = number of rows in the matrix
+ *   n = number of columns in the matrix
+ *
+ *   marker[0:3*m-1]: marker[i] = j means that node i has been
+ *	reached when working on column j.
+ *	Storage: relative to original row subscripts
+ *	NOTE: There are 4 of them:
+ *	      marker/marker1 are used for panel dfs, see (ilu_)dpanel_dfs.c;
+ *	      marker2 is used for inner-factorization, see (ilu)_dcolumn_dfs.c;
+ *	      marker_relax (has its own space) is used for relaxed supernodes.
+ *
+ *   parent[0:m-1]: parent vector used during dfs
+ *	Storage: relative to new row subscripts
+ *
+ *   xplore[0:m-1]: xplore[i] gives the location of the next (dfs)
+ *	unexplored neighbor of i in lsub[*]
+ *
+ *   segrep[0:nseg-1]: contains the list of supernodal representatives
+ *	in topological order of the dfs. A supernode representative is the
+ *	last column of a supernode.
+ *	The maximum size of segrep[] is n.
+ *
+ *   repfnz[0:W*m-1]: for a nonzero segment U[*,j] that ends at a
+ *	supernodal representative r, repfnz[r] is the location of the first
+ *	nonzero in this segment.  It is also used during the dfs: repfnz[r]>0
+ *	indicates the supernode r has been explored.
+ *	NOTE: There are W of them, each used for one column of a panel.
+ *
+ *   panel_lsub[0:W*m-1]: temporary for the nonzero row indices below
+ *	the panel diagonal. These are filled in during dpanel_dfs(), and are
+ *	used later in the inner LU factorization within the panel.
+ *	panel_lsub[]/dense[] pair forms the SPA data structure.
+ *	NOTE: There are W of them.
+ *
+ *   dense[0:W*m-1]: sparse accumulating (SPA) vector for intermediate values;
+ *		   NOTE: there are W of them.
+ *
+ *   tempv[0:*]: real temporary used for dense numeric kernels;
+ *	The size of this array is defined by NUM_TEMPV() in slu_util.h.
+ *	It is also used by the dropping routine ilu_ddrop_row().
+ * 
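The drop tolerance (tau), fill factor (gamma), drop rule and MILU variant referenced throughout this description are all read from the options structure. The sketch below shows how a caller might set them before reaching sgsitrf() through the incomplete-factorization driver; it is illustrative only and assumes the ilu_set_default_options() helper and the ILU_* option fields declared in the slu_util.h shipped with this SuperLU 4.0 import.

#include "slu_sdefs.h"

/* Illustrative sketch: configure the ILU controls documented above.
 * Assumes ilu_set_default_options() and the ILU_* fields from the
 * slu_util.h of this SuperLU 4.0 snapshot. */
void example_ilu_options(superlu_options_t *options)
{
    ilu_set_default_options(options);                  /* ILU defaults        */
    options->ILU_DropTol    = 1e-4;                    /* tau: drop tolerance */
    options->ILU_FillFactor = 10.0;                    /* gamma: fill bound   */
    options->ILU_DropRule   = DROP_BASIC | DROP_AREA;  /* combined drop rule  */
    options->ILU_MILU       = SILU;                    /* plain ILU, no MILU  */
}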
+ */ + +void +sgsitrf(superlu_options_t *options, SuperMatrix *A, int relax, int panel_size, + int *etree, void *work, int lwork, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperLUStat_t *stat, int *info) +{ + /* Local working arrays */ + NCPformat *Astore; + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ + int *iperm_c; /* inverse of perm_c */ + int *swap, *iswap; /* swap is used to store the row permutation + during the factorization. Initially, it is set + to iperm_c (row indeces of Pc*A*Pc'). + iswap is the inverse of swap. After the + factorization, it is equal to perm_r. */ + int *iwork; + float *swork; + int *segrep, *repfnz, *parent, *xplore; + int *panel_lsub; /* dense[]/panel_lsub[] pair forms a w-wide SPA */ + int *marker, *marker_relax; + float *dense, *tempv; + int *relax_end, *relax_fsupc; + float *a; + int *asub; + int *xa_begin, *xa_end; + int *xsup, *supno; + int *xlsub, *xlusup, *xusub; + int nzlumax; + float *amax; + float drop_sum; + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + int *iwork2; /* used by the second dropping rule */ + + /* Local scalars */ + fact_t fact = options->Fact; + double diag_pivot_thresh = options->DiagPivotThresh; + double drop_tol = options->ILU_DropTol; /* tau */ + double fill_ini = options->ILU_FillTol; /* tau^hat */ + double gamma = options->ILU_FillFactor; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + double fill_tol; + int pivrow; /* pivotal row number in the original matrix A */ + int nseg1; /* no of segments in U-column above panel row jcol */ + int nseg; /* no of segments in each U-column */ + register int jcol; + register int kcol; /* end column of a relaxed snode */ + register int icol; + register int i, k, jj, new_next, iinfo; + int m, n, min_mn, jsupno, fsupc, nextlu, nextu; + int w_def; /* upper bound on panel width */ + int usepr, iperm_r_allocated = 0; + int nnzL, nnzU; + int *panel_histo = stat->panel_histo; + flops_t *ops = stat->ops; + + int last_drop;/* the last column which the dropping rules applied */ + int quota; + int nnzAj; /* number of nonzeros in A(:,1:j) */ + int nnzLj, nnzUj; + double tol_L = drop_tol, tol_U = drop_tol; + float zero = 0.0; + + /* Executable */ + iinfo = 0; + m = A->nrow; + n = A->ncol; + min_mn = SUPERLU_MIN(m, n); + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + + /* Allocate storage common to the factor routines */ + *info = sLUMemInit(fact, work, lwork, m, n, Astore->nnz, panel_size, + gamma, L, U, &Glu, &iwork, &swork); + if ( *info ) return; + + xsup = Glu.xsup; + supno = Glu.supno; + xlsub = Glu.xlsub; + xlusup = Glu.xlusup; + xusub = Glu.xusub; + + SetIWork(m, n, panel_size, iwork, &segrep, &parent, &xplore, + &repfnz, &panel_lsub, &marker_relax, &marker); + sSetRWork(m, panel_size, swork, &dense, &tempv); + + usepr = (fact == SamePattern_SameRowPerm); + if ( usepr ) { + /* Compute the inverse of perm_r */ + iperm_r = (int *) intMalloc(m); + for (k = 0; k < m; ++k) iperm_r[perm_r[k]] = k; + iperm_r_allocated = 1; + } + + iperm_c = (int *) intMalloc(n); + for (k = 0; k < n; ++k) iperm_c[perm_c[k]] = k; + swap = (int *)intMalloc(n); + for (k = 0; k < n; k++) swap[k] = iperm_c[k]; + iswap = (int *)intMalloc(n); + for (k = 0; k < n; k++) iswap[k] = perm_c[k]; + amax = (float *) floatMalloc(panel_size); + if (drop_rule & DROP_SECONDARY) + iwork2 = (int *)intMalloc(n); + else + iwork2 = NULL; + + 
nnzAj = 0; + nnzLj = 0; + nnzUj = 0; + last_drop = SUPERLU_MAX(min_mn - 2 * sp_ienv(3), (int)(min_mn * 0.95)); + + /* Identify relaxed snodes */ + relax_end = (int *) intMalloc(n); + relax_fsupc = (int *) intMalloc(n); + if ( options->SymmetricMode == YES ) + ilu_heap_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + else + ilu_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + + ifill (perm_r, m, EMPTY); + ifill (marker, m * NO_MARKER, EMPTY); + supno[0] = -1; + xsup[0] = xlsub[0] = xusub[0] = xlusup[0] = 0; + w_def = panel_size; + + /* Mark the rows used by relaxed supernodes */ + ifill (marker_relax, m, EMPTY); + i = mark_relax(m, relax_end, relax_fsupc, xa_begin, xa_end, + asub, marker_relax); +#if ( PRNTlevel >= 1) + printf("%d relaxed supernodes.\n", i); +#endif + + /* + * Work on one "panel" at a time. A panel is one of the following: + * (a) a relaxed supernode at the bottom of the etree, or + * (b) panel_size contiguous columns, defined by the user + */ + for (jcol = 0; jcol < min_mn; ) { + + if ( relax_end[jcol] != EMPTY ) { /* start of a relaxed snode */ + kcol = relax_end[jcol]; /* end of the relaxed snode */ + panel_histo[kcol-jcol+1]++; + + /* Drop small rows in the previous supernode. */ + if (jcol > 0 && jcol < last_drop) { + int first = xsup[supno[jcol - 1]]; + int last = jcol - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / min_mn); + + /* Drop small rows */ + i = ilu_sdrop_row(options, first, last, tol_L, quota, &nnzLj, + &fill_tol, &Glu, tempv, iwork2, 0); + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } + + /* -------------------------------------- + * Factorize the relaxed supernode(jcol:kcol) + * -------------------------------------- */ + /* Determine the union of the row structure of the snode */ + if ( (*info = ilu_ssnode_dfs(jcol, kcol, asub, xa_begin, xa_end, + marker, &Glu)) != 0 ) + return; + + nextu = xusub[jcol]; + nextlu = xlusup[jcol]; + jsupno = supno[jcol]; + fsupc = xsup[jsupno]; + new_next = nextlu + (xlsub[fsupc+1]-xlsub[fsupc])*(kcol-jcol+1); + nzlumax = Glu.nzlumax; + while ( new_next > nzlumax ) { + if ((*info = sLUMemXpand(jcol, nextlu, LUSUP, &nzlumax, &Glu))) + return; + } + + for (icol = jcol; icol <= kcol; icol++) { + xusub[icol+1] = nextu; + + amax[0] = 0.0; + /* Scatter into SPA dense[*] */ + for (k = xa_begin[icol]; k < xa_end[icol]; k++) { + register float tmp = fabs(a[k]); + if (tmp > amax[0]) amax[0] = tmp; + dense[asub[k]] = a[k]; + } + nnzAj += xa_end[icol] - xa_begin[icol]; + if (amax[0] == 0.0) { + amax[0] = fill_ini; +#if ( PRNTlevel >= 1) + printf("Column %d is entirely zero!\n", icol); + fflush(stdout); +#endif + } + + /* Numeric update within the snode */ + ssnode_bmod(icol, jsupno, fsupc, dense, tempv, &Glu, stat); + + if (usepr) pivrow = iperm_r[icol]; + fill_tol = 
pow(fill_ini, 1.0 - (double)icol / (double)min_mn); + if ( (*info = ilu_spivotL(icol, diag_pivot_thresh, &usepr, + perm_r, iperm_c[icol], swap, iswap, + marker_relax, &pivrow, + amax[0] * fill_tol, milu, zero, + &Glu, stat)) ) { + iinfo++; + marker[pivrow] = kcol; + } + + } + + jcol = kcol + 1; + + } else { /* Work on one panel of panel_size columns */ + + /* Adjust panel_size so that a panel won't overlap with the next + * relaxed snode. + */ + panel_size = w_def; + for (k = jcol + 1; k < SUPERLU_MIN(jcol+panel_size, min_mn); k++) + if ( relax_end[k] != EMPTY ) { + panel_size = k - jcol; + break; + } + if ( k == min_mn ) panel_size = min_mn - jcol; + panel_histo[panel_size]++; + + /* symbolic factor on a panel of columns */ + ilu_spanel_dfs(m, panel_size, jcol, A, perm_r, &nseg1, + dense, amax, panel_lsub, segrep, repfnz, + marker, parent, xplore, &Glu); + + /* numeric sup-panel updates in topological order */ + spanel_bmod(m, panel_size, jcol, nseg1, dense, + tempv, segrep, repfnz, &Glu, stat); + + /* Sparse LU within the panel, and below panel diagonal */ + for (jj = jcol; jj < jcol + panel_size; jj++) { + + k = (jj - jcol) * m; /* column index for w-wide arrays */ + + nseg = nseg1; /* Begin after all the panel segments */ + + nnzAj += xa_end[jj] - xa_begin[jj]; + + if ((*info = ilu_scolumn_dfs(m, jj, perm_r, &nseg, + &panel_lsub[k], segrep, &repfnz[k], + marker, parent, xplore, &Glu))) + return; + + /* Numeric updates */ + if ((*info = scolumn_bmod(jj, (nseg - nseg1), &dense[k], + tempv, &segrep[nseg1], &repfnz[k], + jcol, &Glu, stat)) != 0) return; + + /* Make a fill-in position if the column is entirely zero */ + if (xlsub[jj + 1] == xlsub[jj]) { + register int i, row; + int nextl; + int nzlmax = Glu.nzlmax; + int *lsub = Glu.lsub; + int *marker2 = marker + 2 * m; + + /* Allocate memory */ + nextl = xlsub[jj] + 1; + if (nextl >= nzlmax) { + int error = sLUMemXpand(jj, nextl, LSUB, &nzlmax, &Glu); + if (error) { *info = error; return; } + lsub = Glu.lsub; + } + xlsub[jj + 1]++; + assert(xlusup[jj]==xlusup[jj+1]); + xlusup[jj + 1]++; + Glu.lusup[xlusup[jj]] = zero; + + /* Choose a row index (pivrow) for fill-in */ + for (i = jj; i < n; i++) + if (marker_relax[swap[i]] <= jj) break; + row = swap[i]; + marker2[row] = jj; + lsub[xlsub[jj]] = row; +#ifdef DEBUG + printf("Fill col %d.\n", jj); + fflush(stdout); +#endif + } + + /* Computer the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * jj / m; + else if (drop_rule & DROP_COLUMN) + quota = gamma * (xa_end[jj] - xa_begin[jj]) * + (jj + 1) / m; + else if (drop_rule & DROP_AREA) + quota = gamma * 0.9 * nnzAj * 0.5 - nnzUj; + else + quota = m; + + /* Copy the U-segments to ucol[*] and drop small entries */ + if ((*info = ilu_scopy_to_ucol(jj, nseg, segrep, &repfnz[k], + perm_r, &dense[k], drop_rule, + milu, amax[jj - jcol] * tol_U, + quota, &drop_sum, &nnzUj, &Glu, + iwork2)) != 0) + return; + + /* Reset the dropping threshold if required */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * 0.9 * nnzAj * 0.5 < nnzLj) + tol_U = SUPERLU_MIN(1.0, tol_U * 2.0); + else + tol_U = SUPERLU_MAX(drop_tol, tol_U * 0.5); + } + + drop_sum *= MILU_ALPHA; + if (usepr) pivrow = iperm_r[jj]; + fill_tol = pow(fill_ini, 1.0 - (double)jj / (double)min_mn); + if ( (*info = ilu_spivotL(jj, diag_pivot_thresh, &usepr, perm_r, + iperm_c[jj], swap, iswap, + marker_relax, &pivrow, + amax[jj - jcol] * fill_tol, milu, + drop_sum, &Glu, stat)) ) { + iinfo++; + marker[m + pivrow] = jj; + marker[2 * m + pivrow] = jj; + } + + /* Reset repfnz[] for 
this column */ + resetrep_col (nseg, segrep, &repfnz[k]); + + /* Start a new supernode, drop the previous one */ + if (jj > 0 && supno[jj] > supno[jj - 1] && jj < last_drop) { + int first = xsup[supno[jj - 1]]; + int last = jj - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) + / m) - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / + (double)min_mn); + + /* Drop small rows */ + i = ilu_sdrop_row(options, first, last, tol_L, quota, + &nnzLj, &fill_tol, &Glu, tempv, iwork2, + 1); + + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } /* if start a new supernode */ + + } /* for */ + + jcol += panel_size; /* Move to the next panel */ + + } /* else */ + + } /* for */ + + *info = iinfo; + + if ( m > n ) { + k = 0; + for (i = 0; i < m; ++i) + if ( perm_r[i] == EMPTY ) { + perm_r[i] = n + k; + ++k; + } + } + + ilu_countnz(min_mn, &nnzL, &nnzU, &Glu); + fixupL(min_mn, perm_r, &Glu); + + sLUWorkFree(iwork, swork, &Glu); /* Free work space and compress storage */ + + if ( fact == SamePattern_SameRowPerm ) { + /* L and U structures may have changed due to possibly different + pivoting, even though the storage is available. + There could also be memory expansions, so the array locations + may have changed, */ + ((SCformat *)L->Store)->nnz = nnzL; + ((SCformat *)L->Store)->nsuper = Glu.supno[n]; + ((SCformat *)L->Store)->nzval = Glu.lusup; + ((SCformat *)L->Store)->nzval_colptr = Glu.xlusup; + ((SCformat *)L->Store)->rowind = Glu.lsub; + ((SCformat *)L->Store)->rowind_colptr = Glu.xlsub; + ((NCformat *)U->Store)->nnz = nnzU; + ((NCformat *)U->Store)->nzval = Glu.ucol; + ((NCformat *)U->Store)->rowind = Glu.usub; + ((NCformat *)U->Store)->colptr = Glu.xusub; + } else { + sCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, + Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, + Glu.xsup, SLU_SC, SLU_S, SLU_TRLU); + sCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, + Glu.usub, Glu.xusub, SLU_NC, SLU_S, SLU_TRU); + } + + ops[FACT] += ops[TRSV] + ops[GEMV]; + + if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); + SUPERLU_FREE (iperm_c); + SUPERLU_FREE (relax_end); + SUPERLU_FREE (swap); + SUPERLU_FREE (iswap); + SUPERLU_FREE (relax_fsupc); + SUPERLU_FREE (amax); + if ( iwork2 ) SUPERLU_FREE (iwork2); + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsrfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsrfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsrfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgsrfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,26 @@ -/* +/*! @file sgsrfs.c + * \brief Improves computed solution to a system of inear equations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routine SGERFS
+ * 
*/ /* * File name: sgsrfs.c * History: Modified from lapack routine SGERFS */ #include -#include "ssp_defs.h" +#include "slu_sdefs.h" -void -sgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, char *equed, float *R, float *C, - SuperMatrix *B, SuperMatrix *X, float *ferr, float *berr, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  *   Purpose   
  *   =======   
  *
@@ -123,7 +124,15 @@
  *
  *    ITMAX is the maximum number of steps of iterative refinement.   
  *
- */  
+ * 
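The refinement loop in this routine runs for at most ITMAX steps and stops once the componentwise backward error is as small as roundoff allows. The sketch below isolates that test, using the same work[] (residual) and rwork[] (|A|*|x| + |b|) vectors and the safe1/safe2 underflow guards discussed in the patched comments further down; it is an illustration, not a copy of the routine.

#include <math.h>

/* Illustrative sketch of the componentwise backward error used as the
 * stopping test in sgsrfs(): work[] holds the residual r = b - A*x and
 * rwork[] holds (|A|*|x| + |b|)_i. */
static float backward_error(int n, const float *work, const float *rwork,
                            float safe1, float safe2)
{
    float s = 0.f;
    int i;
    for (i = 0; i < n; ++i) {
        if (rwork[i] > safe2)
            s = fmaxf(s, fabsf(work[i]) / rwork[i]);
        else if (rwork[i] != 0.f)
            /* safe1 guards against spuriously zero residuals (underflow) */
            s = fmaxf(s, (safe1 + fabsf(work[i])) / rwork[i]);
        /* rwork[i] == 0 implies the true residual is exactly zero */
    }
    return s;
}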
+ */ +void +sgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, char *equed, float *R, float *C, + SuperMatrix *B, SuperMatrix *X, float *ferr, float *berr, + SuperLUStat_t *stat, int *info) +{ + #define ITMAX 5 @@ -224,6 +233,8 @@ nz = A->ncol + 1; eps = slamch_("Epsilon"); safmin = slamch_("Safe minimum"); + /* Set SAFE1 essentially to be the underflow threshold times the + number of additions in each row. */ safe1 = nz * safmin; safe2 = safe1 / eps; @@ -274,7 +285,7 @@ where abs(Z) is the componentwise absolute value of the matrix or vector Z. If the i-th component of the denominator is less than SAFE2, then SAFE1 is added to the i-th component of the - numerator and denominator before dividing. */ + numerator before dividing. */ for (i = 0; i < A->nrow; ++i) rwork[i] = fabs( Bptr[i] ); @@ -297,11 +308,15 @@ } s = 0.; for (i = 0; i < A->nrow; ++i) { - if (rwork[i] > safe2) + if (rwork[i] > safe2) { s = SUPERLU_MAX( s, fabs(work[i]) / rwork[i] ); - else - s = SUPERLU_MAX( s, (fabs(work[i]) + safe1) / - (rwork[i] + safe1) ); + } else if ( rwork[i] != 0.0 ) { + /* Adding SAFE1 to the numerator guards against + spuriously zero residuals (underflow). */ + s = SUPERLU_MAX( s, (safe1 + fabs(work[i])) / rwork[i] ); + } + /* If rwork[i] is exactly 0.0, then we know the true + residual also must be exactly 0.0. */ } berr[j] = s; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssv.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssv.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssv.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssv.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,20 +1,19 @@ - -/* +/*! @file sgssv.c + * \brief Solves the system of linear equations A*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "ssp_defs.h" +#include "slu_sdefs.h" -void -sgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, - SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -127,15 +126,21 @@
  *                so the solution could not be computed.
  *             > A->ncol: number of bytes allocated when memory allocation
  *                failure occurred, plus A->ncol.
- *   
+ * 
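For reference, the simple driver described here is typically used as in the sketch below: factor once and overwrite B with the solution. This is only an illustration; A (SLU_NC, single precision) and B (SLU_DN) are assumed to be built already, and cleanup of L and U is omitted.

#include <stdio.h>
#include "slu_sdefs.h"

/* Illustrative sketch of calling the simple driver sgssv(). */
void example_sgssv(SuperMatrix *A, SuperMatrix *B)
{
    superlu_options_t options;
    SuperLUStat_t stat;
    SuperMatrix L, U;
    int info;
    int *perm_c = intMalloc(A->ncol);   /* column permutation */
    int *perm_r = intMalloc(A->nrow);   /* row permutation    */

    set_default_options(&options);
    options.ColPerm = COLAMD;           /* default column ordering */
    StatInit(&stat);

    sgssv(&options, A, perm_c, perm_r, &L, &U, B, &stat, &info);
    if (info != 0)
        printf("sgssv: info = %d\n", info);

    StatFree(&stat);
    SUPERLU_FREE(perm_c);
    SUPERLU_FREE(perm_r);
    /* L and U would be released with Destroy_SuperNode_Matrix() and
     * Destroy_CompCol_Matrix() in real code. */
}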
*/ + +void +sgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, + SuperLUStat_t *stat, int *info ) +{ + DNformat *Bstore; SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ SuperMatrix AC; /* Matrix postmultiplied by Pc */ int lwork = 0, *etree, i; /* Set default values for some parameters */ - float drop_tol = 0.; int panel_size; /* panel size */ int relax; /* no of columns in a relaxed snodes */ int permc_spec; @@ -201,8 +206,8 @@ relax, panel_size, sp_ienv(3), sp_ienv(4));*/ t = SuperLU_timer_(); /* Compute the LU factorization of A. */ - sgstrf(options, &AC, drop_tol, relax, panel_size, - etree, NULL, lwork, perm_c, perm_r, L, U, stat, info); + sgstrf(options, &AC, relax, panel_size, etree, + NULL, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t; t = SuperLU_timer_(); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssvx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssvx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssvx.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgssvx.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,22 +1,19 @@ -/* +/*! @file sgssvx.c + * \brief Solves the system of linear equations A*X=B or A'*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "ssp_defs.h" +#include "slu_sdefs.h" -void -sgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - int *etree, char *equed, float *R, float *C, - SuperMatrix *L, SuperMatrix *U, void *work, int lwork, - SuperMatrix *B, SuperMatrix *X, float *recip_pivot_growth, - float *rcond, float *ferr, float *berr, - mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -314,7 +311,7 @@
  *
  * stat   (output) SuperLUStat_t*
  *        Record the statistics on runtime and floating-point operation count.
- *        See util.h for the definition of 'SuperLUStat_t'.
+ *        See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info    (output) int*
  *         = 0: successful exit   
@@ -332,9 +329,19 @@
  *                    accurate than the value of RCOND would suggest.   
  *              > A->ncol+1: number of bytes allocated when memory allocation
  *                    failure occurred, plus A->ncol.
- *
+ * 
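A typical call to this expert driver, with equilibration and condition estimation enabled, looks like the sketch below. It is illustrative only: A, B and X are assumed to be prepared by the caller (B holds the right-hand sides, X receives the solution), error handling is minimal, and the work arrays are freed elsewhere.

#include <stdio.h>
#include "slu_sdefs.h"

/* Illustrative sketch of driving sgssvx() with scaling and rcond. */
void example_sgssvx(SuperMatrix *A, SuperMatrix *B, SuperMatrix *X)
{
    superlu_options_t options;
    SuperLUStat_t stat;
    SuperMatrix L, U;
    mem_usage_t mem_usage;
    char equed[1] = {'N'};
    int info, n = A->ncol, nrhs = B->ncol;

    int   *perm_c = intMalloc(n);
    int   *perm_r = intMalloc(A->nrow);
    int   *etree  = intMalloc(n);
    float *R    = floatMalloc(A->nrow);
    float *C    = floatMalloc(n);
    float *ferr = floatMalloc(nrhs);
    float *berr = floatMalloc(nrhs);
    float rpg, rcond;

    set_default_options(&options);
    options.Equil = YES;              /* scale A via sgsequ()/slaqgs() */
    options.ConditionNumber = YES;    /* estimate rcond via sgscon()   */
    StatInit(&stat);

    sgssvx(&options, A, perm_c, perm_r, etree, equed, R, C,
           &L, &U, NULL, 0, B, X, &rpg, &rcond, ferr, berr,
           &mem_usage, &stat, &info);

    if (info == 0 || info == n + 1)
        printf("rcond = %e  ferr = %e  berr = %e\n", rcond, ferr[0], berr[0]);

    StatFree(&stat);
    /* perm_c, perm_r, etree, R, C, ferr, berr, L and U are left for the
     * caller to free in this sketch. */
}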
*/ +void +sgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, float *R, float *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, float *recip_pivot_growth, + float *rcond, float *ferr, float *berr, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) +{ + + DNformat *Bstore, *Xstore; float *Bmat, *Xmat; int ldb, ldx, nrhs; @@ -346,13 +353,12 @@ int i, j, info1; float amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; int relax, panel_size; - float diag_pivot_thresh, drop_tol; + float diag_pivot_thresh; double t0; /* temporary time */ double *utime; /* External functions */ extern float slangs(char *, SuperMatrix *); - extern double slamch_(char *); Bstore = B->Store; Xstore = X->Store; @@ -443,7 +449,6 @@ panel_size = sp_ienv(1); relax = sp_ienv(2); diag_pivot_thresh = options->DiagPivotThresh; - drop_tol = 0.0; utime = stat->utime; @@ -523,8 +528,8 @@ /* Compute the LU factorization of A*Pc. */ t0 = SuperLU_timer_(); - sgstrf(options, &AC, drop_tol, relax, panel_size, - etree, work, lwork, perm_c, perm_r, L, U, stat, info); + sgstrf(options, &AC, relax, panel_size, etree, + work, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t0; if ( lwork == -1 ) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrf.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,33 +1,32 @@ -/* +/*! @file sgstrf.c + * \brief Computes an LU factorization of a general sparse matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
+ * 
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "ssp_defs.h" -void -sgstrf (superlu_options_t *options, SuperMatrix *A, float drop_tol, - int relax, int panel_size, int *etree, void *work, int lwork, - int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, - SuperLUStat_t *stat, int *info) -{ -/* +#include "slu_sdefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -53,11 +52,6 @@
  *          (A->nrow, A->ncol). The type of A can be:
  *          Stype = SLU_NCP; Dtype = SLU_S; Mtype = SLU_GE.
  *
- * drop_tol (input) float (NOT IMPLEMENTED)
- *	    Drop tolerance parameter. At step j of the Gaussian elimination,
- *          if abs(A_ij)/(max_i abs(A_ij)) < drop_tol, drop entry A_ij.
- *          0 <= drop_tol <= 1. The default value of drop_tol is 0.
- *
  * relax    (input) int
  *          To control degree of relaxing supernodes. If the number
  *          of nodes (columns) in a subtree of the elimination tree is less
@@ -117,7 +111,7 @@
  *
  * stat     (output) SuperLUStat_t*
  *          Record the statistics on runtime and floating-point operation count.
- *          See util.h for the definition of 'SuperLUStat_t'.
+ *          See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info     (output) int*
  *          = 0: successful exit
@@ -177,13 +171,20 @@
  *	    	   NOTE: there are W of them.
  *
  *   tempv[0:*]: real temporary used for dense numeric kernels;
- *	The size of this array is defined by NUM_TEMPV() in ssp_defs.h.
- *
+ *	The size of this array is defined by NUM_TEMPV() in slu_sdefs.h.
+ * 
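With the unimplemented drop_tol argument removed, the factorization is now called as in the sketch below, mirroring the updated calls in sgssv.c and sgssvx.c elsewhere in this patch; AC must already be the column-permuted matrix produced by sp_preorder().

#include "slu_sdefs.h"

/* Illustrative sketch of the post-patch sgstrf() calling sequence. */
void example_sgstrf(superlu_options_t *options, SuperMatrix *AC,
                    int *etree, int *perm_c, int *perm_r,
                    SuperMatrix *L, SuperMatrix *U,
                    SuperLUStat_t *stat, int *info)
{
    int panel_size = sp_ienv(1);   /* default panel width       */
    int relax      = sp_ienv(2);   /* supernode relaxation size */

    sgstrf(options, AC, relax, panel_size, etree,
           NULL, 0,                /* work = NULL, lwork = 0: internal malloc */
           perm_c, perm_r, L, U, stat, info);
}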
*/ + +void +sgstrf (superlu_options_t *options, SuperMatrix *A, + int relax, int panel_size, int *etree, void *work, int lwork, + int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, + SuperLUStat_t *stat, int *info) +{ /* Local working arrays */ NCPformat *Astore; - int *iperm_r; /* inverse of perm_r; - used when options->Fact == SamePattern_SameRowPerm */ + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ int *iperm_c; /* inverse of perm_c */ int *iwork; float *swork; @@ -199,7 +200,8 @@ int *xsup, *supno; int *xlsub, *xlusup, *xusub; int nzlumax; - static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + float fill_ratio = sp_ienv(6); /* estimated fill ratio */ + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ /* Local scalars */ fact_t fact = options->Fact; @@ -230,7 +232,7 @@ /* Allocate storage common to the factor routines */ *info = sLUMemInit(fact, work, lwork, m, n, Astore->nnz, - panel_size, L, U, &Glu, &iwork, &swork); + panel_size, fill_ratio, L, U, &Glu, &iwork, &swork); if ( *info ) return; xsup = Glu.xsup; @@ -417,7 +419,7 @@ ((NCformat *)U->Store)->rowind = Glu.usub; ((NCformat *)U->Store)->colptr = Glu.xusub; } else { - sCreate_SuperNode_Matrix(L, A->nrow, A->ncol, nnzL, Glu.lusup, + sCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, Glu.xsup, SLU_SC, SLU_S, SLU_TRLU); sCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, @@ -425,6 +427,7 @@ } ops[FACT] += ops[TRSV] + ops[GEMV]; + stat->expansions = --(Glu.num_expansions); if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); SUPERLU_FREE (iperm_c); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sgstrs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,27 @@ -/* +/*! @file sgstrs.c + * \brief Solves a system using LU factorization + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "ssp_defs.h" +#include "slu_sdefs.h" /* @@ -29,13 +31,9 @@ void slsolve(int, int, float*, float*); void smatvec(int, int, int, float*, float*, float*); - -void -sgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, SuperMatrix *B, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -85,8 +83,15 @@
  * info    (output) int*
  * 	   = 0: successful exit
  *	   < 0: if info = -i, the i-th argument had an illegal value
- *
+ * 
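Because the factorization and the solve are separate, additional right-hand sides can be handled by this routine alone once L and U exist. A minimal sketch, assuming B is an SLU_DN matrix that is overwritten with the solution:

#include <stdio.h>
#include "slu_sdefs.h"

/* Illustrative sketch: reuse an existing factorization for new
 * right-hand sides; trans selects A*X=B, A'*X=B or conj(A)*X=B. */
void example_resolve(SuperMatrix *L, SuperMatrix *U,
                     int *perm_c, int *perm_r, SuperMatrix *B)
{
    SuperLUStat_t stat;
    int info;

    StatInit(&stat);
    sgstrs(NOTRANS, L, U, perm_c, perm_r, B, &stat, &info);
    if (info < 0)
        printf("sgstrs: argument %d had an illegal value\n", -info);
    StatFree(&stat);
}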
*/ + +void +sgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, SuperMatrix *B, + SuperLUStat_t *stat, int *info) +{ + #ifdef _CRAY _fcd ftcs1, ftcs2, ftcs3, ftcs4; #endif @@ -288,7 +293,7 @@ stat->ops[SOLVE] = solve_ops; - } else { /* Solve A'*X=B */ + } else { /* Solve A'*X=B or CONJ(A)*X=B */ /* Permute right hand sides to form Pc'*B. */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; @@ -297,7 +302,6 @@ } stat->ops[SOLVE] = 0; - for (k = 0; k < nrhs; ++k) { /* Multiply by inv(U'). */ @@ -307,7 +311,6 @@ sp_strsv("L", "T", "U", L, U, &Bmat[k*ldb], stat, info); } - /* Compute the final solution X := Pr'*X (=inv(Pr)*X) */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slacon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slacon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slacon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slacon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,66 +1,73 @@ - -/* +/*! @file slacon.c + * \brief Estimates the 1-norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ #include -#include "Cnames.h" +#include "slu_Cnames.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   SLACON estimates the 1-norm of a square matrix A.   
+ *   Reverse communication is used for evaluating matrix-vector products. 
+ * 
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   N      (input) INT
+ *          The order of the matrix.  N >= 1.   
+ *
+ *   V      (workspace) FLOAT PRECISION array, dimension (N)   
+ *          On the final return, V = A*W,  where  EST = norm(V)/norm(W)   
+ *          (W is not returned).   
+ *
+ *   X      (input/output) FLOAT PRECISION array, dimension (N)   
+ *          On an intermediate return, X should be overwritten by   
+ *                A * X,   if KASE=1,   
+ *                A' * X,  if KASE=2,
+ *         and SLACON must be re-called with all the other parameters   
+ *          unchanged.   
+ *
+ *   ISGN   (workspace) INT array, dimension (N)
+ *
+ *   EST    (output) FLOAT PRECISION   
+ *          An estimate (a lower bound) for norm(A).   
+ *
+ *   KASE   (input/output) INT
+ *          On the initial call to SLACON, KASE should be 0.   
+ *          On an intermediate return, KASE will be 1 or 2, indicating   
+ *          whether X should be overwritten by A * X  or A' * X.   
+ *          On the final return from SLACON, KASE will again be 0.   
+ *
+ *   Further Details   
+ *   ======= =======   
+ *
+ *   Contributed by Nick Higham, University of Manchester.   
+ *   Originally named CONEST, dated March 16, 1988.   
+ *
+ *   Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of 
+ *   a real or complex matrix, with applications to condition estimation", 
+ *   ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988.   
+ *   ===================================================================== 
+ * 
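The reverse-communication protocol described above is easiest to see as a driver loop: call the routine, apply A or A' to x when asked, and repeat until KASE returns to 0. The sketch below is illustrative; apply_A() and apply_At() are hypothetical caller-supplied helpers (in sgscon() they are sparse triangular solves with L and U).

#include "slu_sdefs.h"

extern int slacon_(int *n, float *v, float *x, int *isgn,
                   float *est, int *kase);
extern void apply_A (int n, float *x);   /* hypothetical: x := A  * x */
extern void apply_At(int n, float *x);   /* hypothetical: x := A' * x */

/* Illustrative reverse-communication driver for slacon_(). */
float example_onenorm_estimate(int n, float *v, float *x, int *isgn)
{
    float est = 0.f;
    int kase = 0;               /* must be 0 on the first call */

    do {
        slacon_(&n, v, x, isgn, &est, &kase);
        if (kase == 1)
            apply_A(n, x);      /* overwrite x with A  * x */
        else if (kase == 2)
            apply_At(n, x);     /* overwrite x with A' * x */
    } while (kase != 0);        /* kase == 0: est is final */

    return est;
}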
+ */ int slacon_(int *n, float *v, float *x, int *isgn, float *est, int *kase) { -/* - Purpose - ======= - - SLACON estimates the 1-norm of a square matrix A. - Reverse communication is used for evaluating matrix-vector products. - - - Arguments - ========= - - N (input) INT - The order of the matrix. N >= 1. - - V (workspace) FLOAT PRECISION array, dimension (N) - On the final return, V = A*W, where EST = norm(V)/norm(W) - (W is not returned). - - X (input/output) FLOAT PRECISION array, dimension (N) - On an intermediate return, X should be overwritten by - A * X, if KASE=1, - A' * X, if KASE=2, - and SLACON must be re-called with all the other parameters - unchanged. - - ISGN (workspace) INT array, dimension (N) - - EST (output) FLOAT PRECISION - An estimate (a lower bound) for norm(A). - - KASE (input/output) INT - On the initial call to SLACON, KASE should be 0. - On an intermediate return, KASE will be 1 or 2, indicating - whether X should be overwritten by A * X or A' * X. - On the final return from SLACON, KASE will again be 0. - - Further Details - ======= ======= - - Contributed by Nick Higham, University of Manchester. - Originally named CONEST, dated March 16, 1988. - - Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of - a real or complex matrix, with applications to condition estimation", - ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988. - ===================================================================== -*/ + /* Table of constant values */ int c__1 = 1; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slamch.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slamch.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slamch.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slamch.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,4 +1,16 @@ +/*! @file slamch.c + * \brief Determines single precision machine parameters and other service routines + * + *
+ *   -- LAPACK auxiliary routine (version 2.0) --   
+ *      Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,   
+ *      Courant Institute, Argonne National Lab, and Rice University   
+ *      October 31, 1992   
+ * 
+ */ #include +#include "slu_Cnames.h" + #define TRUE_ (1) #define FALSE_ (0) #define min(a,b) ((a) <= (b) ? (a) : (b)) @@ -6,15 +18,10 @@ #define abs(x) ((x) >= 0 ? (x) : -(x)) #define dabs(x) (double)abs(x) -double slamch_(char *cmach) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 +/*! \brief - - Purpose +
+ Purpose   
     =======   
 
     SLAMCH determines single precision machine parameters.   
@@ -49,7 +56,10 @@
             rmax  = overflow threshold  - (base**emax)*(1-eps)   
 
    ===================================================================== 
+
*/ +double slamch_(char *cmach) +{ /* >>Start of File<< Initialized data */ static int first = TRUE_; @@ -133,16 +143,11 @@ } /* slamch_ */ -/* Subroutine */ int slamc1_(int *beta, int *t, int *rnd, int - *ieee1) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - +/* Subroutine */ +/*! \brief - Purpose +
+ Purpose   
     =======   
 
     SLAMC1 determines the machine parameters given by BETA, T, RND, and   
@@ -183,7 +188,12 @@
           Comms. of the ACM, 17, 276-277.   
 
    ===================================================================== 
+
*/ + +int slamc1_(int *beta, int *t, int *rnd, int + *ieee1) +{ /* Initialized data */ static int first = TRUE_; /* System generated locals */ @@ -345,15 +355,11 @@ } /* slamc1_ */ -/* Subroutine */ int slamc2_(int *beta, int *t, int *rnd, float * - eps, int *emin, float *rmin, int *emax, float *rmax) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 +/* Subroutine */ +/*! \brief +
     Purpose   
     =======   
 
@@ -409,7 +415,11 @@
     W. Kahan of the University of California at Berkeley.   
 
    ===================================================================== 
+
*/ +int slamc2_(int *beta, int *t, int *rnd, float * + eps, int *emin, float *rmin, int *emax, float *rmax) +{ /* Table of constant values */ static int c__1 = 1; @@ -647,15 +657,9 @@ } /* slamc2_ */ +/*! \brief -double slamc3_(float *a, float *b) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - - +
     Purpose   
     =======   
 
@@ -672,14 +676,21 @@
             The values A and B.   
 
    ===================================================================== 
+
*/ -/* >>Start of File<< - System generated locals */ - float ret_val; - +double slamc3_(float *a, float *b) +{ - ret_val = *a + *b; +/* >>Start of File<< + System generated locals */ + volatile float ret_val; + volatile float x; + volatile float y; + + x = *a; + y = *b; + ret_val = x + y; return ret_val; @@ -688,14 +699,11 @@ } /* slamc3_ */ -/* Subroutine */ int slamc4_(int *emin, float *start, int *base) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 +/* Subroutine */ +/*! \brief +
     Purpose   
     =======   
 
@@ -717,7 +725,11 @@
             The base of the machine.   
 
    ===================================================================== 
+
*/ + +int slamc4_(int *emin, float *start, int *base) +{ /* System generated locals */ int i__1; float r__1; @@ -778,15 +790,10 @@ } /* slamc4_ */ -/* Subroutine */ int slamc5_(int *beta, int *p, int *emin, - int *ieee, int *emax, float *rmax) -{ -/* -- LAPACK auxiliary routine (version 2.0) -- - Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., - Courant Institute, Argonne National Lab, and Rice University - October 31, 1992 - +/* Subroutine */ +/*! \brief +
     Purpose   
     =======   
 
@@ -828,7 +835,13 @@
        First compute LEXP and UEXP, two powers of 2 that bound   
        abs(EMIN). We then assume that EMAX + abs(EMIN) will sum   
        approximately to the bound that is closest to abs(EMIN).   
-       (EMAX is the exponent of the required number RMAX). */
+       (EMAX is the exponent of the required number RMAX). 
+
+*/ + +int slamc5_(int *beta, int *p, int *emin, + int *ieee, int *emax, float *rmax) +{ /* Table of constant values */ static float c_b5 = 0.f; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slangs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slangs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slangs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slangs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,58 +1,65 @@ - -/* +/*! @file slangs.c + * \brief Returns the value of the one norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from lapack routine SLANGE 
+ * 
*/ /* * File name: slangs.c * History: Modified from lapack routine SLANGE */ #include -#include "ssp_defs.h" -#include "util.h" +#include "slu_sdefs.h" + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ *
+ *   SLANGS returns the value of the one norm, or the Frobenius norm, or 
+ *   the infinity norm, or the element of largest absolute value of a 
+ *   real matrix A.   
+ *
+ *   Description   
+ *   ===========   
+ *
+ *   SLANGE returns the value   
+ *
+ *      SLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm'   
+ *               (   
+ *               ( norm1(A),         NORM = '1', 'O' or 'o'   
+ *               (   
+ *               ( normI(A),         NORM = 'I' or 'i'   
+ *               (   
+ *               ( normF(A),         NORM = 'F', 'f', 'E' or 'e'   
+ *
+ *   where  norm1  denotes the  one norm of a matrix (maximum column sum), 
+ *   normI  denotes the  infinity norm  of a matrix  (maximum row sum) and 
+ *   normF  denotes the  Frobenius norm of a matrix (square root of sum of 
+ *   squares).  Note that  max(abs(A(i,j)))  is not a  matrix norm.   
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   NORM    (input) CHARACTER*1   
+ *           Specifies the value to be returned in SLANGE as described above.   
+ *   A       (input) SuperMatrix*
+ *           The M by N sparse matrix A. 
+ *
+ *  =====================================================================
+ * 
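In the drivers above, this routine supplies the norm of A that sgscon() needs to turn its inverse-norm estimate into a reciprocal condition number. A minimal sketch of that pairing, assuming L and U come from a prior factorization:

#include "slu_sdefs.h"

/* Illustrative sketch: slangs() feeding sgscon().  notran selects the
 * 1-norm (for A*x=b) or the infinity norm (for A'*x=b). */
float example_rcond(SuperMatrix *A, SuperMatrix *L, SuperMatrix *U,
                    int notran, SuperLUStat_t *stat)
{
    char  norm[1];
    float anorm, rcond;
    int   info;

    *(unsigned char *)norm = notran ? '1' : 'I';
    anorm = slangs(norm, A);                        /* ||A||           */
    sgscon(norm, L, U, anorm, &rcond, stat, &info); /* estimate 1/cond */
    return rcond;
}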
+ */ float slangs(char *norm, SuperMatrix *A) { -/* - Purpose - ======= - - SLANGS returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - real matrix A. - - Description - =========== - - SLANGE returns the value - - SLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in SLANGE as described above. - A (input) SuperMatrix* - The M by N sparse matrix A. - - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slaqgs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slaqgs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slaqgs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slaqgs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,80 +1,88 @@ - -/* +/*! @file slaqgs.c + * \brief Equlibrates a general sprase matrix + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
+ * Modified from LAPACK routine SLAQGE
+ * 
*/ /* * File name: slaqgs.c * History: Modified from LAPACK routine SLAQGE */ #include -#include "ssp_defs.h" -#include "util.h" +#include "slu_sdefs.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   SLAQGS equilibrates a general sparse M by N matrix A using the row and   
+ *   column scaling factors in the vectors R and C.
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input/output) SuperMatrix*
+ *           On exit, the equilibrated matrix.  See EQUED for the form of 
+ *           the equilibrated matrix. The type of A can be:
+ *	    Stype = NC; Dtype = SLU_S; Mtype = GE.
+ *	    
+ *   R       (input) float*, dimension (A->nrow)
+ *           The row scale factors for A.
+ *	    
+ *   C       (input) float*, dimension (A->ncol)
+ *           The column scale factors for A.
+ *	    
+ *   ROWCND  (input) float
+ *           Ratio of the smallest R(i) to the largest R(i).
+ *	    
+ *   COLCND  (input) float
+ *           Ratio of the smallest C(i) to the largest C(i).
+ *	    
+ *   AMAX    (input) float
+ *           Absolute value of largest matrix entry.
+ *	    
+ *   EQUED   (output) char*
+ *           Specifies the form of equilibration that was done.   
+ *           = 'N':  No equilibration   
+ *           = 'R':  Row equilibration, i.e., A has been premultiplied by  
+ *                   diag(R).   
+ *           = 'C':  Column equilibration, i.e., A has been postmultiplied  
+ *                   by diag(C).   
+ *           = 'B':  Both row and column equilibration, i.e., A has been
+ *                   replaced by diag(R) * A * diag(C).   
+ *
+ *   Internal Parameters   
+ *   ===================   
+ *
+ *   THRESH is a threshold value used to decide if row or column scaling   
+ *   should be done based on the ratio of the row or column scaling   
+ *   factors.  If ROWCND < THRESH, row scaling is done, and if   
+ *   COLCND < THRESH, column scaling is done.   
+ *
+ *   LARGE and SMALL are threshold values used to decide if row scaling   
+ *   should be done based on the absolute size of the largest matrix   
+ *   element.  If AMAX > LARGE or AMAX < SMALL, row scaling is done.   
+ *
+ *   ===================================================================== 
+ * 
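In the drivers, this routine is always paired with sgsequ(), which computes R, C, ROWCND, COLCND and AMAX before they are applied here. A minimal sketch of that pairing, assuming a single-precision SLU_NC matrix:

#include "slu_sdefs.h"

/* Illustrative sketch: compute scale factors with sgsequ(), then let
 * slaqgs() decide (via THRESH/LARGE/SMALL) whether to apply them. */
void example_equilibrate(SuperMatrix *A, float *R, float *C, char *equed)
{
    float rowcnd, colcnd, amax;
    int   info;

    sgsequ(A, R, C, &rowcnd, &colcnd, &amax, &info);
    if (info == 0)
        slaqgs(A, R, C, rowcnd, colcnd, amax, equed);
    else
        *equed = 'N';   /* an exactly zero row/column was found; skip scaling */
}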
+ */ void slaqgs(SuperMatrix *A, float *r, float *c, float rowcnd, float colcnd, float amax, char *equed) { -/* - Purpose - ======= - - SLAQGS equilibrates a general sparse M by N matrix A using the row and - scaling factors in the vectors R and C. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input/output) SuperMatrix* - On exit, the equilibrated matrix. See EQUED for the form of - the equilibrated matrix. The type of A can be: - Stype = NC; Dtype = SLU_S; Mtype = GE. - - R (input) float*, dimension (A->nrow) - The row scale factors for A. - - C (input) float*, dimension (A->ncol) - The column scale factors for A. - - ROWCND (input) float - Ratio of the smallest R(i) to the largest R(i). - - COLCND (input) float - Ratio of the smallest C(i) to the largest C(i). - - AMAX (input) float - Absolute value of largest matrix entry. - - EQUED (output) char* - Specifies the form of equilibration that was done. - = 'N': No equilibration - = 'R': Row equilibration, i.e., A has been premultiplied by - diag(R). - = 'C': Column equilibration, i.e., A has been postmultiplied - by diag(C). - = 'B': Both row and column equilibration, i.e., A has been - replaced by diag(R) * A * diag(C). - - Internal Parameters - =================== - - THRESH is a threshold value used to decide if row or column scaling - should be done based on the ratio of the row or column scaling - factors. If ROWCND < THRESH, row scaling is done, and if - COLCND < THRESH, column scaling is done. - - LARGE and SMALL are threshold values used to decide if row scaling - should be done based on the absolute size of the largest matrix - element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. - ===================================================================== -*/ #define THRESH (0.1) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sldperm.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sldperm.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sldperm.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sldperm.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,168 @@ + +/*! @file + * \brief Finds a row permutation so that the matrix has large entries on the diagonal + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ + +#include "slu_sdefs.h" + +extern void mc64id_(int_t*); +extern void mc64ad_(int_t*, int_t*, int_t*, int_t [], int_t [], double [], + int_t*, int_t [], int_t*, int_t[], int_t*, double [], + int_t [], int_t []); + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *   SLDPERM finds a row permutation so that the matrix has large
+ *   entries on the diagonal.
+ *
+ * Arguments
+ * =========
+ *
+ * job    (input) int
+ *        Control the action. Possible values for JOB are:
+ *        = 1 : Compute a row permutation of the matrix so that the
+ *              permuted matrix has as many entries on its diagonal as
+ *              possible. The values on the diagonal are of arbitrary size.
+ *              HSL subroutine MC21A/AD is used for this.
+ *        = 2 : Compute a row permutation of the matrix so that the smallest 
+ *              value on the diagonal of the permuted matrix is maximized.
+ *        = 3 : Compute a row permutation of the matrix so that the smallest
+ *              value on the diagonal of the permuted matrix is maximized.
+ *              The algorithm differs from the one used for JOB = 2 and may
+ *              have quite a different performance.
+ *        = 4 : Compute a row permutation of the matrix so that the sum
+ *              of the diagonal entries of the permuted matrix is maximized.
+ *        = 5 : Compute a row permutation of the matrix so that the product
+ *              of the diagonal entries of the permuted matrix is maximized
+ *              and vectors to scale the matrix so that the nonzero diagonal 
+ *              entries of the permuted matrix are one in absolute value and 
+ *              all the off-diagonal entries are less than or equal to one in 
+ *              absolute value.
+ *        Restriction: 1 <= JOB <= 5.
+ *
+ * n      (input) int
+ *        The order of the matrix.
+ *
+ * nnz    (input) int
+ *        The number of nonzeros in the matrix.
+ *
+ * adjncy (input) int*, of size nnz
+ *        The adjacency structure of the matrix, which contains the row
+ *        indices of the nonzeros.
+ *
+ * colptr (input) int*, of size n+1
+ *        The pointers to the beginning of each column in ADJNCY.
+ *
+ * nzval  (input) float*, of size nnz
+ *        The nonzero values of the matrix. nzval[k] is the value of
+ *        the entry corresponding to adjncy[k].
+ *        It is not used if job = 1.
+ *
+ * perm   (output) int*, of size n
+ *        The permutation vector. perm[i] = j means row i in the
+ *        original matrix is in row j of the permuted matrix.
+ *
+ * u      (output) float*, of size n
+ *        If job = 5, the natural logarithms of the row scaling factors. 
+ *
+ * v      (output) float*, of size n
+ *        If job = 5, the natural logarithms of the column scaling factors. 
+ *        The scaled matrix B has entries b_ij = a_ij * exp(u_i + v_j).
+ * 
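For job = 5 the caller turns the returned logarithms into scale factors and applies them together with the permutation, as the mc64 branch of sgsisx() does earlier in this patch. A minimal sketch for a compressed-column matrix, with R and C initially holding u and v:

#include <math.h>

/* Illustrative sketch: apply the job = 5 output of sldperm() to a CSC
 * matrix.  Afterwards entry (i,j) holds a_ij * exp(u_i + v_j) and the
 * row indices are renumbered through perm[]. */
void example_apply_mc64(int n, const int *colptr, int *rowind,
                        float *nzval, const int *perm,
                        float *R, float *C)
{
    int i, j;

    for (i = 0; i < n; ++i) {          /* logs -> scale factors */
        R[i] = (float) exp(R[i]);
        C[i] = (float) exp(C[i]);
    }
    for (j = 0; j < n; ++j)
        for (i = colptr[j]; i < colptr[j + 1]; ++i) {
            nzval[i] *= R[rowind[i]] * C[j];   /* scale the entry */
            rowind[i] = perm[rowind[i]];       /* permute its row */
        }
}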
+ */ + +int +sldperm(int_t job, int_t n, int_t nnz, int_t colptr[], int_t adjncy[], + float nzval[], int_t *perm, float u[], float v[]) +{ + int_t i, liw, ldw, num; + int_t *iw, icntl[10], info[10]; + double *dw; + double *nzval_d = (double *) SUPERLU_MALLOC(nnz * sizeof(double)); + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Enter sldperm()"); +#endif + liw = 5*n; + if ( job == 3 ) liw = 10*n + nnz; + if ( !(iw = intMalloc(liw)) ) ABORT("Malloc fails for iw[]"); + ldw = 3*n + nnz; + if ( !(dw = (double*) SUPERLU_MALLOC(ldw * sizeof(double))) ) + ABORT("Malloc fails for dw[]"); + + /* Increment one to get 1-based indexing. */ + for (i = 0; i <= n; ++i) ++colptr[i]; + for (i = 0; i < nnz; ++i) ++adjncy[i]; +#if ( DEBUGlevel>=2 ) + printf("LDPERM(): n %d, nnz %d\n", n, nnz); + slu_PrintInt10("colptr", n+1, colptr); + slu_PrintInt10("adjncy", nnz, adjncy); +#endif + + /* + * NOTE: + * ===== + * + * MC64AD assumes that column permutation vector is defined as: + * perm(i) = j means column i of permuted A is in column j of original A. + * + * Since a symmetric permutation preserves the diagonal entries. Then + * by the following relation: + * P'(A*P')P = P'A + * we can apply inverse(perm) to rows of A to get large diagonal entries. + * But, since 'perm' defined in MC64AD happens to be the reverse of + * SuperLU's definition of permutation vector, therefore, it is already + * an inverse for our purpose. We will thus use it directly. + * + */ + mc64id_(icntl); +#if 0 + /* Suppress error and warning messages. */ + icntl[0] = -1; + icntl[1] = -1; +#endif + + for (i = 0; i < nnz; ++i) nzval_d[i] = nzval[i]; + mc64ad_(&job, &n, &nnz, colptr, adjncy, nzval_d, &num, perm, + &liw, iw, &ldw, dw, icntl, info); + +#if ( DEBUGlevel>=2 ) + slu_PrintInt10("perm", n, perm); + printf(".. After MC64AD info %d\tsize of matching %d\n", info[0], num); +#endif + if ( info[0] == 1 ) { /* Structurally singular */ + printf(".. The last %d permutations:\n", n-num); + slu_PrintInt10("perm", n-num, &perm[num]); + } + + /* Restore to 0-based indexing. */ + for (i = 0; i <= n; ++i) --colptr[i]; + for (i = 0; i < nnz; ++i) --adjncy[i]; + for (i = 0; i < n; ++i) --perm[i]; + + if ( job == 5 ) + for (i = 0; i < n; ++i) { + u[i] = dw[i]; + v[i] = dw[n+i]; + } + + SUPERLU_FREE(iw); + SUPERLU_FREE(dw); + SUPERLU_FREE(nzval_d); + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Exit sldperm()"); +#endif + + return info[0]; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_cdefs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_cdefs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_cdefs.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_cdefs.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,282 @@ + +/*! @file slu_cdefs.h + * \brief Header file for real operations + * + *
 
+ * -- SuperLU routine (version 4.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * June 30, 2009
+ * 
+ * Global data structures used in LU factorization -
+ * 
+ *   nsuper: #supernodes = nsuper + 1, numbered [0, nsuper].
+ *   (xsup,supno): supno[i] is the supernode number to which i belongs;
+ *	xsup(s) points to the beginning of the s-th supernode.
+ *	e.g.   supno 0 1 2 2 3 3 3 4 4 4 4 4   (n=12)
+ *	        xsup 0 1 2 4 7 12
+ *	Note: dfs will be performed on the supernode representative, relative
+ *	      to the new row pivoting ordering
+ *
+ *   (xlsub,lsub): lsub[*] contains the compressed subscript of
+ *	rectangular supernodes; xlsub[j] points to the starting
+ *	location of the j-th column in lsub[*]. Note that xlsub 
+ *	is indexed by column.
+ *	Storage: original row subscripts
+ *
+ *      During the course of sparse LU factorization, we also use
+ *	(xlsub,lsub) for the purpose of symmetric pruning. For each
+ *	supernode {s,s+1,...,t=s+r} with first column s and last
+ *	column t, the subscript set
+ *		lsub[j], j=xlsub[s], .., xlsub[s+1]-1
+ *	is the structure of column s (i.e. structure of this supernode).
+ *	It is used for the storage of numerical values.
+ *	Furthermore,
+ *		lsub[j], j=xlsub[t], .., xlsub[t+1]-1
+ *	is the structure of the last column t of this supernode.
+ *	It is for the purpose of symmetric pruning. Therefore, the
+ *	structural subscripts can be rearranged without making physical
+ *	interchanges among the numerical values.
+ *
+ *	However, if the supernode has only one column, then we
+ *	only keep one set of subscripts. For any subscript interchange
+ *	performed, similar interchange must be done on the numerical
+ *	values.
+ *
+ *	The last column structures (for pruning) will be removed
+ *	after the numerical LU factorization phase.
+ *
+ *   (xlusup,lusup): lusup[*] contains the numerical values of the
+ *	rectangular supernodes; xlusup[j] points to the starting
+ *	location of the j-th column in storage vector lusup[*]
+ *	Note: xlusup is indexed by column.
+ *	Each rectangular supernode is stored by column-major
+ *	scheme, consistent with Fortran 2-dim array storage.
+ *
+ *   (xusub,ucol,usub): ucol[*] stores the numerical values of
+ *	U-columns outside the rectangular supernodes. The row
+ *	subscript of nonzero ucol[k] is stored in usub[k].
+ *	xusub[i] points to the starting location of column i in ucol.
+ *	Storage: new row subscripts; that is subscripts of PA.
+ * 
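As a minimal sketch of how these index arrays fit together (illustrative only; it assumes a populated GlobalLU_t as declared further below, and the helper name is hypothetical):

/* Hypothetical helper: structural sizes recorded for column j, i.e. the row
 * subscripts shared by its supernode in L plus the U entries kept in ucol. */
static int column_structure_size(const GlobalLU_t *Glu, int j)
{
    int s      = Glu->supno[j];                  /* supernode containing column j  */
    int first  = Glu->xsup[s];                   /* first column of that supernode */
    int l_rows = Glu->xlsub[first + 1] - Glu->xlsub[first];  /* shared L structure */
    int u_rows = Glu->xusub[j + 1] - Glu->xusub[j];          /* U entries of col j */
    return l_rows + u_rows;
}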
+ */ +#ifndef __SUPERLU_cSP_DEFS /* allow multiple inclusions */ +#define __SUPERLU_cSP_DEFS + +/* + * File name: csp_defs.h + * Purpose: Sparse matrix types and function prototypes + * History: + */ + +#ifdef _CRAY +#include +#include +#endif + +/* Define my integer type int_t */ +typedef int int_t; /* default */ + +#include +#include +#include "slu_Cnames.h" +#include "supermatrix.h" +#include "slu_util.h" +#include "slu_scomplex.h" + + + +typedef struct { + int *xsup; /* supernode and column mapping */ + int *supno; + int *lsub; /* compressed L subscripts */ + int *xlsub; + complex *lusup; /* L supernodes */ + int *xlusup; + complex *ucol; /* U columns */ + int *usub; + int *xusub; + int nzlmax; /* current max size of lsub */ + int nzumax; /* " " " ucol */ + int nzlumax; /* " " " lusup */ + int n; /* number of columns in the matrix */ + LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ + int num_expansions; + ExpHeader *expanders; /* Array of pointers to 4 types of memory */ + LU_stack_t stack; /* use user supplied memory */ +} GlobalLU_t; + + +/* -------- Prototypes -------- */ + +#ifdef __cplusplus +extern "C" { +#endif + +/*! \brief Driver routines */ +extern void +cgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +cgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, float *, float *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, + float *, float *, float *, float *, + mem_usage_t *, SuperLUStat_t *, int *); + /* ILU */ +extern void +cgsisv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +cgsisx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, float *, float *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, float *, float *, + mem_usage_t *, SuperLUStat_t *, int *); + + +/*! 
\brief Supernodal LU factor related */ +extern void +cCreate_CompCol_Matrix(SuperMatrix *, int, int, int, complex *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +cCreate_CompRow_Matrix(SuperMatrix *, int, int, int, complex *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +cCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); +extern void +cCreate_Dense_Matrix(SuperMatrix *, int, int, complex *, int, + Stype_t, Dtype_t, Mtype_t); +extern void +cCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, complex *, + int *, int *, int *, int *, int *, + Stype_t, Dtype_t, Mtype_t); +extern void +cCopy_Dense_Matrix(int, int, complex *, int, complex *, int); + +extern void countnz (const int, int *, int *, int *, GlobalLU_t *); +extern void ilu_countnz (const int, int *, int *, GlobalLU_t *); +extern void fixupL (const int, const int *, GlobalLU_t *); + +extern void callocateA (int, int, complex **, int **, int **); +extern void cgstrf (superlu_options_t*, SuperMatrix*, + int, int, int*, void *, int, int *, int *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); +extern int csnode_dfs (const int, const int, const int *, const int *, + const int *, int *, int *, GlobalLU_t *); +extern int csnode_bmod (const int, const int, const int, complex *, + complex *, GlobalLU_t *, SuperLUStat_t*); +extern void cpanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, complex *, int *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern void cpanel_bmod (const int, const int, const int, const int, + complex *, complex *, int *, int *, + GlobalLU_t *, SuperLUStat_t*); +extern int ccolumn_dfs (const int, const int, int *, int *, int *, int *, + int *, int *, int *, int *, int *, GlobalLU_t *); +extern int ccolumn_bmod (const int, const int, complex *, + complex *, int *, int *, int, + GlobalLU_t *, SuperLUStat_t*); +extern int ccopy_to_ucol (int, int, int *, int *, int *, + complex *, GlobalLU_t *); +extern int cpivotL (const int, const double, int *, int *, + int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); +extern void cpruneL (const int, const int *, const int, const int, + const int *, const int *, int *, GlobalLU_t *); +extern void creadmt (int *, int *, int *, complex **, int **, int **); +extern void cGenXtrue (int, int, complex *, int); +extern void cFillRHS (trans_t, int, complex *, int, SuperMatrix *, + SuperMatrix *); +extern void cgstrs (trans_t, SuperMatrix *, SuperMatrix *, int *, int *, + SuperMatrix *, SuperLUStat_t*, int *); +/* ILU */ +extern void cgsitrf (superlu_options_t*, SuperMatrix*, int, int, int*, + void *, int, int *, int *, SuperMatrix *, SuperMatrix *, + SuperLUStat_t*, int *); +extern int cldperm(int, int, int, int [], int [], complex [], + int [], float [], float []); +extern int ilu_csnode_dfs (const int, const int, const int *, const int *, + const int *, int *, GlobalLU_t *); +extern void ilu_cpanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, complex *, float *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern int ilu_ccolumn_dfs (const int, const int, int *, int *, int *, + int *, int *, int *, int *, int *, + GlobalLU_t *); +extern int ilu_ccopy_to_ucol (int, int, int *, int *, int *, + complex *, int, milu_t, double, int, + complex *, int *, GlobalLU_t *, int *); +extern int ilu_cpivotL (const int, const double, int *, int *, int, int *, + int *, int *, int *, double, milu_t, + complex, GlobalLU_t *, SuperLUStat_t*); +extern int ilu_cdrop_row (superlu_options_t *, int, 
int, double, + int, int *, double *, GlobalLU_t *, + float *, int *, int); + + +/*! \brief Driver related */ + +extern void cgsequ (SuperMatrix *, float *, float *, float *, + float *, float *, int *); +extern void claqgs (SuperMatrix *, float *, float *, float, + float, float, char *); +extern void cgscon (char *, SuperMatrix *, SuperMatrix *, + float, float *, SuperLUStat_t*, int *); +extern float cPivotGrowth(int, SuperMatrix *, int *, + SuperMatrix *, SuperMatrix *); +extern void cgsrfs (trans_t, SuperMatrix *, SuperMatrix *, + SuperMatrix *, int *, int *, char *, float *, + float *, SuperMatrix *, SuperMatrix *, + float *, float *, SuperLUStat_t*, int *); + +extern int sp_ctrsv (char *, char *, char *, SuperMatrix *, + SuperMatrix *, complex *, SuperLUStat_t*, int *); +extern int sp_cgemv (char *, complex, SuperMatrix *, complex *, + int, complex, complex *, int); + +extern int sp_cgemm (char *, char *, int, int, int, complex, + SuperMatrix *, complex *, int, complex, + complex *, int); +extern double slamch_(char *); + + +/*! \brief Memory-related */ +extern int cLUMemInit (fact_t, void *, int, int, int, int, int, + float, SuperMatrix *, SuperMatrix *, + GlobalLU_t *, int **, complex **); +extern void cSetRWork (int, int, complex *, complex **, complex **); +extern void cLUWorkFree (int *, complex *, GlobalLU_t *); +extern int cLUMemXpand (int, int, MemType, int *, GlobalLU_t *); + +extern complex *complexMalloc(int); +extern complex *complexCalloc(int); +extern float *floatMalloc(int); +extern float *floatCalloc(int); +extern int cmemory_usage(const int, const int, const int, const int); +extern int cQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); +extern int ilu_cQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); + +/*! \brief Auxiliary routines */ +extern void creadhb(int *, int *, int *, complex **, int **, int **); +extern void creadrb(int *, int *, int *, complex **, int **, int **); +extern void creadtriple(int *, int *, int *, complex **, int **, int **); +extern void cCompRow_to_CompCol(int, int, int, complex*, int*, int*, + complex **, int **, int **); +extern void cfill (complex *, int, complex); +extern void cinf_norm_error (int, SuperMatrix *, complex *); +extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, + complex, complex, complex *, complex *, char *); + +/*! \brief Routines for debugging */ +extern void cPrint_CompCol_Matrix(char *, SuperMatrix *); +extern void cPrint_SuperNode_Matrix(char *, SuperMatrix *); +extern void cPrint_Dense_Matrix(char *, SuperMatrix *); +extern void cprint_lu_col(char *, int, int, int *, GlobalLU_t *); +extern int print_double_vec(char *, int, double *); +extern void check_tempv(int, complex *); + +#ifdef __cplusplus + } +#endif + +#endif /* __SUPERLU_cSP_DEFS */ + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_Cnames.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_Cnames.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_Cnames.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_Cnames.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,362 @@ +/*! @file slu_Cnames.h + * \brief Macros defining how C routines will be called + * + *
+ * -- SuperLU routine (version 2.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * November 1, 1997
+ *
+ * These macros define how C routines will be called.  ADD_ assumes that
+ * they will be called by Fortran, which expects C routines to have an
+ * underscore appended to the name (Sun and Intel compilers expect this).
+ * NOCHANGE indicates that Fortran will be doing the calling, and that it
+ * expects the name called from Fortran to be identical to the name produced
+ * by the C compiler (RS/6000 systems do this).  UPCASE means that C routines
+ * called from Fortran are expected to be in all uppercase (Cray wants this).
+ * 
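For illustration, a sketch of what the naming schemes mean in practice; dnrm2 is used purely as an example and the wrapper below is hypothetical, not part of this header:

/* SuperLU's C sources spell Fortran-callable BLAS symbols with a trailing
 * underscore; the macros below rewrite that spelling to match F77_CALL_C,
 * e.g. for dgemm:  ADD_ -> dgemm_ (default), ADD__ -> dgemm__,
 * NOCHANGE -> dgemm, UPCASE -> DGEMM.                                      */
extern double dnrm2_(int *n, double *x, int *incx);  /* typical F77 BLAS prototype */

static double vector_norm(int n, double *x)          /* hypothetical wrapper */
{
    int incx = 1;
    return dnrm2_(&n, x, &incx);   /* the symbol name is remapped per F77_CALL_C */
}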
+ */ +#ifndef __SUPERLU_CNAMES /* allow multiple inclusions */ +#define __SUPERLU_CNAMES + +#include "scipy_slu_config.h" + +#define ADD_ 0 +#define ADD__ 1 +#define NOCHANGE 2 +#define UPCASE 3 +#define C_CALL 4 + +#ifdef UpCase +#define F77_CALL_C UPCASE +#endif + +#ifdef NoChange +#define F77_CALL_C NOCHANGE +#endif + +#ifdef Add_ +#define F77_CALL_C ADD_ +#endif + +#ifdef Add__ +#define F77_CALL_C ADD__ +#endif + +/* Default */ +#ifndef F77_CALL_C +#define F77_CALL_C ADD_ +#endif + + +#if (F77_CALL_C == ADD_) +/* + * These defines set up the naming scheme required to have a fortran 77 + * routine call a C routine + * No redefinition necessary to have following Fortran to C interface: + * FORTRAN CALL C DECLARATION + * call dgemm(...) void dgemm_(...) + * + * This is the default. + */ + +#endif + +#if (F77_CALL_C == ADD__) +/* + * These defines set up the naming scheme required to have a fortran 77 + * routine call a C routine + * for following Fortran to C interface: + * FORTRAN CALL C DECLARATION + * call dgemm(...) void dgemm__(...) + */ +/* BLAS */ +#define sswap_ sswap__ +#define saxpy_ saxpy__ +#define sasum_ sasum__ +#define isamax_ isamax__ +#define scopy_ scopy__ +#define sscal_ sscal__ +#define sger_ sger__ +#define snrm2_ snrm2__ +#define ssymv_ ssymv__ +#define sdot_ sdot__ +#define saxpy_ saxpy__ +#define ssyr2_ ssyr2__ +#define srot_ srot__ +#define sgemv_ sgemv__ +#define strsv_ strsv__ +#define sgemm_ sgemm__ +#define strsm_ strsm__ + +#define dswap_ dswap__ +#define daxpy_ daxpy__ +#define dasum_ dasum__ +#define idamax_ idamax__ +#define dcopy_ dcopy__ +#define dscal_ dscal__ +#define dger_ dger__ +#define dnrm2_ dnrm2__ +#define dsymv_ dsymv__ +#define ddot_ ddot__ +#define daxpy_ daxpy__ +#define dsyr2_ dsyr2__ +#define drot_ drot__ +#define dgemv_ dgemv__ +#define dtrsv_ dtrsv__ +#define dgemm_ dgemm__ +#define dtrsm_ dtrsm__ + +#define cswap_ cswap__ +#define caxpy_ caxpy__ +#define scasum_ scasum__ +#define icamax_ icamax__ +#define ccopy_ ccopy__ +#define cscal_ cscal__ +#define scnrm2_ scnrm2__ +#define caxpy_ caxpy__ +#define cgemv_ cgemv__ +#define ctrsv_ ctrsv__ +#define cgemm_ cgemm__ +#define ctrsm_ ctrsm__ +#define cgerc_ cgerc__ +#define chemv_ chemv__ +#define cher2_ cher2__ + +#define zswap_ zswap__ +#define zaxpy_ zaxpy__ +#define dzasum_ dzasum__ +#define izamax_ izamax__ +#define zcopy_ zcopy__ +#define zscal_ zscal__ +#define dznrm2_ dznrm2__ +#define zaxpy_ zaxpy__ +#define zgemv_ zgemv__ +#define ztrsv_ ztrsv__ +#define zgemm_ zgemm__ +#define ztrsm_ ztrsm__ +#define zgerc_ zgerc__ +#define zhemv_ zhemv__ +#define zher2_ zher2__ + +/* LAPACK */ +#define dlamch_ dlamch__ +#define slamch_ slamch__ +#define xerbla_ xerbla__ +#define lsame_ lsame__ +#define dlacon_ dlacon__ +#define slacon_ slacon__ +#define icmax1_ icmax1__ +#define scsum1_ scsum1__ +#define clacon_ clacon__ +#define dzsum1_ dzsum1__ +#define izmax1_ izmax1__ +#define zlacon_ zlacon__ + +/* Fortran interface */ +#define c_bridge_dgssv_ c_bridge_dgssv__ +#define c_fortran_sgssv_ c_fortran_sgssv__ +#define c_fortran_dgssv_ c_fortran_dgssv__ +#define c_fortran_cgssv_ c_fortran_cgssv__ +#define c_fortran_zgssv_ c_fortran_zgssv__ +#endif + +#if (F77_CALL_C == UPCASE) +/* + * These defines set up the naming scheme required to have a fortran 77 + * routine call a C routine + * following Fortran to C interface: + * FORTRAN CALL C DECLARATION + * call dgemm(...) void DGEMM(...) 
+ */ +/* BLAS */ +#define sswap_ SSWAP +#define saxpy_ SAXPY +#define sasum_ SASUM +#define isamax_ ISAMAX +#define scopy_ SCOPY +#define sscal_ SSCAL +#define sger_ SGER +#define snrm2_ SNRM2 +#define ssymv_ SSYMV +#define sdot_ SDOT +#define saxpy_ SAXPY +#define ssyr2_ SSYR2 +#define srot_ SROT +#define sgemv_ SGEMV +#define strsv_ STRSV +#define sgemm_ SGEMM +#define strsm_ STRSM + +#define dswap_ DSWAP +#define daxpy_ DAXPY +#define dasum_ SASUM +#define idamax_ ISAMAX +#define dcopy_ SCOPY +#define dscal_ SSCAL +#define dger_ SGER +#define dnrm2_ SNRM2 +#define dsymv_ SSYMV +#define ddot_ SDOT +#define daxpy_ SAXPY +#define dsyr2_ SSYR2 +#define drot_ SROT +#define dgemv_ SGEMV +#define dtrsv_ STRSV +#define dgemm_ SGEMM +#define dtrsm_ STRSM + +#define cswap_ CSWAP +#define caxpy_ CAXPY +#define scasum_ SCASUM +#define icamax_ ICAMAX +#define ccopy_ CCOPY +#define cscal_ CSCAL +#define scnrm2_ SCNRM2 +#define caxpy_ CAXPY +#define cgemv_ CGEMV +#define ctrsv_ CTRSV +#define cgemm_ CGEMM +#define ctrsm_ CTRSM +#define cgerc_ CGERC +#define chemv_ CHEMV +#define cher2_ CHER2 + +#define zswap_ ZSWAP +#define zaxpy_ ZAXPY +#define dzasum_ DZASUM +#define izamax_ IZAMAX +#define zcopy_ ZCOPY +#define zscal_ ZSCAL +#define dznrm2_ DZNRM2 +#define zaxpy_ ZAXPY +#define zgemv_ ZGEMV +#define ztrsv_ ZTRSV +#define zgemm_ ZGEMM +#define ztrsm_ ZTRSM +#define zgerc_ ZGERC +#define zhemv_ ZHEMV +#define zher2_ ZHER2 + +/* LAPACK */ +#define dlamch_ DLAMCH +#define slamch_ SLAMCH +#define xerbla_ XERBLA +#define lsame_ LSAME +#define dlacon_ DLACON +#define slacon_ SLACON +#define icmax1_ ICMAX1 +#define scsum1_ SCSUM1 +#define clacon_ CLACON +#define dzsum1_ DZSUM1 +#define izmax1_ IZMAX1 +#define zlacon_ ZLACON + +/* Fortran interface */ +#define c_bridge_dgssv_ C_BRIDGE_DGSSV +#define c_fortran_sgssv_ C_FORTRAN_SGSSV +#define c_fortran_dgssv_ C_FORTRAN_DGSSV +#define c_fortran_cgssv_ C_FORTRAN_CGSSV +#define c_fortran_zgssv_ C_FORTRAN_ZGSSV +#endif + +#if (F77_CALL_C == NOCHANGE) +/* + * These defines set up the naming scheme required to have a fortran 77 + * routine call a C routine + * for following Fortran to C interface: + * FORTRAN CALL C DECLARATION + * call dgemm(...) void dgemm(...) 
+ */ +/* BLAS */ +#define sswap_ sswap +#define saxpy_ saxpy +#define sasum_ sasum +#define isamax_ isamax +#define scopy_ scopy +#define sscal_ sscal +#define sger_ sger +#define snrm2_ snrm2 +#define ssymv_ ssymv +#define sdot_ sdot +#define saxpy_ saxpy +#define ssyr2_ ssyr2 +#define srot_ srot +#define sgemv_ sgemv +#define strsv_ strsv +#define sgemm_ sgemm +#define strsm_ strsm + +#define dswap_ dswap +#define daxpy_ daxpy +#define dasum_ dasum +#define idamax_ idamax +#define dcopy_ dcopy +#define dscal_ dscal +#define dger_ dger +#define dnrm2_ dnrm2 +#define dsymv_ dsymv +#define ddot_ ddot +#define daxpy_ daxpy +#define dsyr2_ dsyr2 +#define drot_ drot +#define dgemv_ dgemv +#define dtrsv_ dtrsv +#define dgemm_ dgemm +#define dtrsm_ dtrsm + +#define cswap_ cswap +#define caxpy_ caxpy +#define scasum_ scasum +#define icamax_ icamax +#define ccopy_ ccopy +#define cscal_ cscal +#define scnrm2_ scnrm2 +#define caxpy_ caxpy +#define cgemv_ cgemv +#define ctrsv_ ctrsv +#define cgemm_ cgemm +#define ctrsm_ ctrsm +#define cgerc_ cgerc +#define chemv_ chemv +#define cher2_ cher2 + +#define zswap_ zswap +#define zaxpy_ zaxpy +#define dzasum_ dzasum +#define izamax_ izamax +#define zcopy_ zcopy +#define zscal_ zscal +#define dznrm2_ dznrm2 +#define zaxpy_ zaxpy +#define zgemv_ zgemv +#define ztrsv_ ztrsv +#define zgemm_ zgemm +#define ztrsm_ ztrsm +#define zgerc_ zgerc +#define zhemv_ zhemv +#define zher2_ zher2 + +/* LAPACK */ +#define dlamch_ dlamch +#define slamch_ slamch +#define xerbla_ xerbla +#define lsame_ lsame +#define dlacon_ dlacon +#define slacon_ slacon +#define icmax1_ icmax1 +#define scsum1_ scsum1 +#define clacon_ clacon +#define dzsum1_ dzsum1 +#define izmax1_ izmax1 +#define zlacon_ zlacon + +/* Fortran interface */ +#define c_bridge_dgssv_ c_bridge_dgssv +#define c_fortran_sgssv_ c_fortran_sgssv +#define c_fortran_dgssv_ c_fortran_dgssv +#define c_fortran_cgssv_ c_fortran_cgssv +#define c_fortran_zgssv_ c_fortran_zgssv +#endif + +#endif /* __SUPERLU_CNAMES */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_dcomplex.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_dcomplex.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_dcomplex.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_dcomplex.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,78 @@ + +/*! @file slu_dcomplex.h + * \brief Header file for complex operations + *
 
+ *  -- SuperLU routine (version 2.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * November 15, 1997
+ *
+ * Contains definitions for various complex operations.
+ * This header file is to be included in source files z*.c
+ * 
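For illustration, a minimal usage sketch of the doublecomplex type and macros defined just below; all operands are passed by address, and the function itself is hypothetical:

/* Hypothetical example: basic arithmetic with the macros from this header. */
static doublecomplex complex_demo(void)
{
    doublecomplex a = {1.0, 2.0};     /* 1 + 2i */
    doublecomplex b = {3.0, -1.0};    /* 3 - i  */
    doublecomplex sum, prod;

    z_add(&sum, &a, &b);              /* sum  = a + b = 4 + i        */
    zz_mult(&prod, &a, &b);           /* prod = a * b = 5 + 5i       */
    zd_mult(&sum, &sum, 0.5);         /* sum  = 0.5 * sum = 2 + 0.5i */
    return prod;
}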
+ */ +#ifndef __SUPERLU_DCOMPLEX /* allow multiple inclusions */ +#define __SUPERLU_DCOMPLEX + + +#ifndef DCOMPLEX_INCLUDE +#define DCOMPLEX_INCLUDE + +typedef struct { double r, i; } doublecomplex; + + +/* Macro definitions */ + +/*! \brief Complex Addition c = a + b */ +#define z_add(c, a, b) { (c)->r = (a)->r + (b)->r; \ + (c)->i = (a)->i + (b)->i; } + +/*! \brief Complex Subtraction c = a - b */ +#define z_sub(c, a, b) { (c)->r = (a)->r - (b)->r; \ + (c)->i = (a)->i - (b)->i; } + +/*! \brief Complex-Double Multiplication */ +#define zd_mult(c, a, b) { (c)->r = (a)->r * (b); \ + (c)->i = (a)->i * (b); } + +/*! \brief Complex-Complex Multiplication */ +#define zz_mult(c, a, b) { \ + double cr, ci; \ + cr = (a)->r * (b)->r - (a)->i * (b)->i; \ + ci = (a)->i * (b)->r + (a)->r * (b)->i; \ + (c)->r = cr; \ + (c)->i = ci; \ + } + +#define zz_conj(a, b) { \ + (a)->r = (b)->r; \ + (a)->i = -((b)->i); \ + } + +/*! \brief Complex equality testing */ +#define z_eq(a, b) ( (a)->r == (b)->r && (a)->i == (b)->i ) + + +#ifdef __cplusplus +extern "C" { +#endif + +/* Prototypes for functions in dcomplex.c */ +void z_div(doublecomplex *, doublecomplex *, doublecomplex *); +double z_abs(doublecomplex *); /* exact */ +double z_abs1(doublecomplex *); /* approximate */ +void z_exp(doublecomplex *, doublecomplex *); +void d_cnjg(doublecomplex *r, doublecomplex *z); +double d_imag(doublecomplex *); +doublecomplex z_sgn(doublecomplex *); +doublecomplex z_sqrt(doublecomplex *); + + + +#ifdef __cplusplus + } +#endif + +#endif + +#endif /* __SUPERLU_DCOMPLEX */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_ddefs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_ddefs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_ddefs.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_ddefs.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,279 @@ + +/*! @file slu_ddefs.h + * \brief Header file for real operations + * + *
 
+ * -- SuperLU routine (version 4.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * June 30, 2009
+ * 
+ * Global data structures used in LU factorization -
+ * 
+ *   nsuper: #supernodes = nsuper + 1, numbered [0, nsuper].
+ *   (xsup,supno): supno[i] is the supernode number to which i belongs;
+ *	xsup(s) points to the beginning of the s-th supernode.
+ *	e.g.   supno 0 1 2 2 3 3 3 4 4 4 4 4   (n=12)
+ *	        xsup 0 1 2 4 7 12
+ *	Note: dfs will be performed on the supernode representative, relative
+ *	      to the new row pivoting ordering
+ *
+ *   (xlsub,lsub): lsub[*] contains the compressed subscript of
+ *	rectangular supernodes; xlsub[j] points to the starting
+ *	location of the j-th column in lsub[*]. Note that xlsub 
+ *	is indexed by column.
+ *	Storage: original row subscripts
+ *
+ *      During the course of sparse LU factorization, we also use
+ *	(xlsub,lsub) for the purpose of symmetric pruning. For each
+ *	supernode {s,s+1,...,t=s+r} with first column s and last
+ *	column t, the subscript set
+ *		lsub[j], j=xlsub[s], .., xlsub[s+1]-1
+ *	is the structure of column s (i.e. structure of this supernode).
+ *	It is used for the storage of numerical values.
+ *	Furthermore,
+ *		lsub[j], j=xlsub[t], .., xlsub[t+1]-1
+ *	is the structure of the last column t of this supernode.
+ *	It is for the purpose of symmetric pruning. Therefore, the
+ *	structural subscripts can be rearranged without making physical
+ *	interchanges among the numerical values.
+ *
+ *	However, if the supernode has only one column, then we
+ *	only keep one set of subscripts. For any subscript interchange
+ *	performed, similar interchange must be done on the numerical
+ *	values.
+ *
+ *	The last column structures (for pruning) will be removed
+ *	after the numerical LU factorization phase.
+ *
+ *   (xlusup,lusup): lusup[*] contains the numerical values of the
+ *	rectangular supernodes; xlusup[j] points to the starting
+ *	location of the j-th column in storage vector lusup[*]
+ *	Note: xlusup is indexed by column.
+ *	Each rectangular supernode is stored by column-major
+ *	scheme, consistent with Fortran 2-dim array storage.
+ *
+ *   (xusub,ucol,usub): ucol[*] stores the numerical values of
+ *	U-columns outside the rectangular supernodes. The row
+ *	subscript of nonzero ucol[k] is stored in usub[k].
+ *	xusub[i] points to the starting location of column i in ucol.
+ *	Storage: new row subscripts; that is subscripts of PA.
+ * 
+ */ +#ifndef __SUPERLU_dSP_DEFS /* allow multiple inclusions */ +#define __SUPERLU_dSP_DEFS + +/* + * File name: dsp_defs.h + * Purpose: Sparse matrix types and function prototypes + * History: + */ + +#ifdef _CRAY +#include +#include +#endif + +/* Define my integer type int_t */ +typedef int int_t; /* default */ + +#include +#include +#include "slu_Cnames.h" +#include "supermatrix.h" +#include "slu_util.h" + + + +typedef struct { + int *xsup; /* supernode and column mapping */ + int *supno; + int *lsub; /* compressed L subscripts */ + int *xlsub; + double *lusup; /* L supernodes */ + int *xlusup; + double *ucol; /* U columns */ + int *usub; + int *xusub; + int nzlmax; /* current max size of lsub */ + int nzumax; /* " " " ucol */ + int nzlumax; /* " " " lusup */ + int n; /* number of columns in the matrix */ + LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ + int num_expansions; + ExpHeader *expanders; /* Array of pointers to 4 types of memory */ + LU_stack_t stack; /* use user supplied memory */ +} GlobalLU_t; + + +/* -------- Prototypes -------- */ + +#ifdef __cplusplus +extern "C" { +#endif + +/*! \brief Driver routines */ +extern void +dgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +dgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, double *, double *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, + double *, double *, double *, double *, + mem_usage_t *, SuperLUStat_t *, int *); + /* ILU */ +extern void +dgsisv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +dgsisx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, double *, double *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, double *, double *, + mem_usage_t *, SuperLUStat_t *, int *); + + +/*! 
\brief Supernodal LU factor related */ +extern void +dCreate_CompCol_Matrix(SuperMatrix *, int, int, int, double *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +dCreate_CompRow_Matrix(SuperMatrix *, int, int, int, double *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +dCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); +extern void +dCreate_Dense_Matrix(SuperMatrix *, int, int, double *, int, + Stype_t, Dtype_t, Mtype_t); +extern void +dCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, double *, + int *, int *, int *, int *, int *, + Stype_t, Dtype_t, Mtype_t); +extern void +dCopy_Dense_Matrix(int, int, double *, int, double *, int); + +extern void countnz (const int, int *, int *, int *, GlobalLU_t *); +extern void ilu_countnz (const int, int *, int *, GlobalLU_t *); +extern void fixupL (const int, const int *, GlobalLU_t *); + +extern void dallocateA (int, int, double **, int **, int **); +extern void dgstrf (superlu_options_t*, SuperMatrix*, + int, int, int*, void *, int, int *, int *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); +extern int dsnode_dfs (const int, const int, const int *, const int *, + const int *, int *, int *, GlobalLU_t *); +extern int dsnode_bmod (const int, const int, const int, double *, + double *, GlobalLU_t *, SuperLUStat_t*); +extern void dpanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, double *, int *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern void dpanel_bmod (const int, const int, const int, const int, + double *, double *, int *, int *, + GlobalLU_t *, SuperLUStat_t*); +extern int dcolumn_dfs (const int, const int, int *, int *, int *, int *, + int *, int *, int *, int *, int *, GlobalLU_t *); +extern int dcolumn_bmod (const int, const int, double *, + double *, int *, int *, int, + GlobalLU_t *, SuperLUStat_t*); +extern int dcopy_to_ucol (int, int, int *, int *, int *, + double *, GlobalLU_t *); +extern int dpivotL (const int, const double, int *, int *, + int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); +extern void dpruneL (const int, const int *, const int, const int, + const int *, const int *, int *, GlobalLU_t *); +extern void dreadmt (int *, int *, int *, double **, int **, int **); +extern void dGenXtrue (int, int, double *, int); +extern void dFillRHS (trans_t, int, double *, int, SuperMatrix *, + SuperMatrix *); +extern void dgstrs (trans_t, SuperMatrix *, SuperMatrix *, int *, int *, + SuperMatrix *, SuperLUStat_t*, int *); +/* ILU */ +extern void dgsitrf (superlu_options_t*, SuperMatrix*, int, int, int*, + void *, int, int *, int *, SuperMatrix *, SuperMatrix *, + SuperLUStat_t*, int *); +extern int dldperm(int, int, int, int [], int [], double [], + int [], double [], double []); +extern int ilu_dsnode_dfs (const int, const int, const int *, const int *, + const int *, int *, GlobalLU_t *); +extern void ilu_dpanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, double *, double *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern int ilu_dcolumn_dfs (const int, const int, int *, int *, int *, + int *, int *, int *, int *, int *, + GlobalLU_t *); +extern int ilu_dcopy_to_ucol (int, int, int *, int *, int *, + double *, int, milu_t, double, int, + double *, int *, GlobalLU_t *, int *); +extern int ilu_dpivotL (const int, const double, int *, int *, int, int *, + int *, int *, int *, double, milu_t, + double, GlobalLU_t *, SuperLUStat_t*); +extern int ilu_ddrop_row (superlu_options_t *, int, int, double, + int, 
int *, double *, GlobalLU_t *, + double *, int *, int); + + +/*! \brief Driver related */ + +extern void dgsequ (SuperMatrix *, double *, double *, double *, + double *, double *, int *); +extern void dlaqgs (SuperMatrix *, double *, double *, double, + double, double, char *); +extern void dgscon (char *, SuperMatrix *, SuperMatrix *, + double, double *, SuperLUStat_t*, int *); +extern double dPivotGrowth(int, SuperMatrix *, int *, + SuperMatrix *, SuperMatrix *); +extern void dgsrfs (trans_t, SuperMatrix *, SuperMatrix *, + SuperMatrix *, int *, int *, char *, double *, + double *, SuperMatrix *, SuperMatrix *, + double *, double *, SuperLUStat_t*, int *); + +extern int sp_dtrsv (char *, char *, char *, SuperMatrix *, + SuperMatrix *, double *, SuperLUStat_t*, int *); +extern int sp_dgemv (char *, double, SuperMatrix *, double *, + int, double, double *, int); + +extern int sp_dgemm (char *, char *, int, int, int, double, + SuperMatrix *, double *, int, double, + double *, int); +extern double dlamch_(char *); + + +/*! \brief Memory-related */ +extern int dLUMemInit (fact_t, void *, int, int, int, int, int, + double, SuperMatrix *, SuperMatrix *, + GlobalLU_t *, int **, double **); +extern void dSetRWork (int, int, double *, double **, double **); +extern void dLUWorkFree (int *, double *, GlobalLU_t *); +extern int dLUMemXpand (int, int, MemType, int *, GlobalLU_t *); + +extern double *doubleMalloc(int); +extern double *doubleCalloc(int); +extern int dmemory_usage(const int, const int, const int, const int); +extern int dQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); +extern int ilu_dQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); + +/*! \brief Auxiliary routines */ +extern void dreadhb(int *, int *, int *, double **, int **, int **); +extern void dreadrb(int *, int *, int *, double **, int **, int **); +extern void dreadtriple(int *, int *, int *, double **, int **, int **); +extern void dCompRow_to_CompCol(int, int, int, double*, int*, int*, + double **, int **, int **); +extern void dfill (double *, int, double); +extern void dinf_norm_error (int, SuperMatrix *, double *); +extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, + double, double, double *, double *, char *); + +/*! \brief Routines for debugging */ +extern void dPrint_CompCol_Matrix(char *, SuperMatrix *); +extern void dPrint_SuperNode_Matrix(char *, SuperMatrix *); +extern void dPrint_Dense_Matrix(char *, SuperMatrix *); +extern void dprint_lu_col(char *, int, int, int *, GlobalLU_t *); +extern int print_double_vec(char *, int, double *); +extern void check_tempv(int, double *); + +#ifdef __cplusplus + } +#endif + +#endif /* __SUPERLU_dSP_DEFS */ + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_scomplex.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_scomplex.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_scomplex.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_scomplex.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,78 @@ + +/*! @file slu_scomplex.h + * \brief Header file for complex operations + *
 
+ *  -- SuperLU routine (version 2.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * November 15, 1997
+ *
+ * Contains definitions for various complex operations.
+ * This header file is to be included in source files c*.c
+ * 
+ */ +#ifndef __SUPERLU_SCOMPLEX /* allow multiple inclusions */ +#define __SUPERLU_SCOMPLEX + + +#ifndef SCOMPLEX_INCLUDE +#define SCOMPLEX_INCLUDE + +typedef struct { float r, i; } complex; + + +/* Macro definitions */ + +/*! \brief Complex Addition c = a + b */ +#define c_add(c, a, b) { (c)->r = (a)->r + (b)->r; \ + (c)->i = (a)->i + (b)->i; } + +/*! \brief Complex Subtraction c = a - b */ +#define c_sub(c, a, b) { (c)->r = (a)->r - (b)->r; \ + (c)->i = (a)->i - (b)->i; } + +/*! \brief Complex-Double Multiplication */ +#define cs_mult(c, a, b) { (c)->r = (a)->r * (b); \ + (c)->i = (a)->i * (b); } + +/*! \brief Complex-Complex Multiplication */ +#define cc_mult(c, a, b) { \ + float cr, ci; \ + cr = (a)->r * (b)->r - (a)->i * (b)->i; \ + ci = (a)->i * (b)->r + (a)->r * (b)->i; \ + (c)->r = cr; \ + (c)->i = ci; \ + } + +#define cc_conj(a, b) { \ + (a)->r = (b)->r; \ + (a)->i = -((b)->i); \ + } + +/*! \brief Complex equality testing */ +#define c_eq(a, b) ( (a)->r == (b)->r && (a)->i == (b)->i ) + + +#ifdef __cplusplus +extern "C" { +#endif + +/* Prototypes for functions in scomplex.c */ +void c_div(complex *, complex *, complex *); +double slu_c_abs(complex *); /* exact */ +double slu_c_abs1(complex *); /* approximate */ +void c_exp(complex *, complex *); +void r_cnjg(complex *, complex *); +double r_imag(complex *); +complex c_sgn(complex *); +complex c_sqrt(complex *); + + + +#ifdef __cplusplus + } +#endif + +#endif + +#endif /* __SUPERLU_SCOMPLEX */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_sdefs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_sdefs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_sdefs.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_sdefs.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,279 @@ + +/*! @file slu_sdefs.h + * \brief Header file for real operations + * + *
 
+ * -- SuperLU routine (version 4.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * June 30, 2009
+ * 
+ * Global data structures used in LU factorization -
+ * 
+ *   nsuper: #supernodes = nsuper + 1, numbered [0, nsuper].
+ *   (xsup,supno): supno[i] is the supernode number to which i belongs;
+ *	xsup(s) points to the beginning of the s-th supernode.
+ *	e.g.   supno 0 1 2 2 3 3 3 4 4 4 4 4   (n=12)
+ *	        xsup 0 1 2 4 7 12
+ *	Note: dfs will be performed on the supernode representative, relative
+ *	      to the new row pivoting ordering
+ *
+ *   (xlsub,lsub): lsub[*] contains the compressed subscript of
+ *	rectangular supernodes; xlsub[j] points to the starting
+ *	location of the j-th column in lsub[*]. Note that xlsub 
+ *	is indexed by column.
+ *	Storage: original row subscripts
+ *
+ *      During the course of sparse LU factorization, we also use
+ *	(xlsub,lsub) for the purpose of symmetric pruning. For each
+ *	supernode {s,s+1,...,t=s+r} with first column s and last
+ *	column t, the subscript set
+ *		lsub[j], j=xlsub[s], .., xlsub[s+1]-1
+ *	is the structure of column s (i.e. structure of this supernode).
+ *	It is used for the storage of numerical values.
+ *	Furthermore,
+ *		lsub[j], j=xlsub[t], .., xlsub[t+1]-1
+ *	is the structure of the last column t of this supernode.
+ *	It is for the purpose of symmetric pruning. Therefore, the
+ *	structural subscripts can be rearranged without making physical
+ *	interchanges among the numerical values.
+ *
+ *	However, if the supernode has only one column, then we
+ *	only keep one set of subscripts. For any subscript interchange
+ *	performed, similar interchange must be done on the numerical
+ *	values.
+ *
+ *	The last column structures (for pruning) will be removed
+ *	after the numerical LU factorization phase.
+ *
+ *   (xlusup,lusup): lusup[*] contains the numerical values of the
+ *	rectangular supernodes; xlusup[j] points to the starting
+ *	location of the j-th column in storage vector lusup[*]
+ *	Note: xlusup is indexed by column.
+ *	Each rectangular supernode is stored by column-major
+ *	scheme, consistent with Fortran 2-dim array storage.
+ *
+ *   (xusub,ucol,usub): ucol[*] stores the numerical values of
+ *	U-columns outside the rectangular supernodes. The row
+ *	subscript of nonzero ucol[k] is stored in usub[k].
+ *	xusub[i] points to the starting location of column i in ucol.
+ *	Storage: new row subscripts; that is subscripts of PA.
+ * 
+ */ +#ifndef __SUPERLU_sSP_DEFS /* allow multiple inclusions */ +#define __SUPERLU_sSP_DEFS + +/* + * File name: ssp_defs.h + * Purpose: Sparse matrix types and function prototypes + * History: + */ + +#ifdef _CRAY +#include +#include +#endif + +/* Define my integer type int_t */ +typedef int int_t; /* default */ + +#include +#include +#include "slu_Cnames.h" +#include "supermatrix.h" +#include "slu_util.h" + + + +typedef struct { + int *xsup; /* supernode and column mapping */ + int *supno; + int *lsub; /* compressed L subscripts */ + int *xlsub; + float *lusup; /* L supernodes */ + int *xlusup; + float *ucol; /* U columns */ + int *usub; + int *xusub; + int nzlmax; /* current max size of lsub */ + int nzumax; /* " " " ucol */ + int nzlumax; /* " " " lusup */ + int n; /* number of columns in the matrix */ + LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ + int num_expansions; + ExpHeader *expanders; /* Array of pointers to 4 types of memory */ + LU_stack_t stack; /* use user supplied memory */ +} GlobalLU_t; + + +/* -------- Prototypes -------- */ + +#ifdef __cplusplus +extern "C" { +#endif + +/*! \brief Driver routines */ +extern void +sgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +sgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, float *, float *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, + float *, float *, float *, float *, + mem_usage_t *, SuperLUStat_t *, int *); + /* ILU */ +extern void +sgsisv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +sgsisx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, float *, float *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, float *, float *, + mem_usage_t *, SuperLUStat_t *, int *); + + +/*! 
\brief Supernodal LU factor related */ +extern void +sCreate_CompCol_Matrix(SuperMatrix *, int, int, int, float *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +sCreate_CompRow_Matrix(SuperMatrix *, int, int, int, float *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +sCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); +extern void +sCreate_Dense_Matrix(SuperMatrix *, int, int, float *, int, + Stype_t, Dtype_t, Mtype_t); +extern void +sCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, float *, + int *, int *, int *, int *, int *, + Stype_t, Dtype_t, Mtype_t); +extern void +sCopy_Dense_Matrix(int, int, float *, int, float *, int); + +extern void countnz (const int, int *, int *, int *, GlobalLU_t *); +extern void ilu_countnz (const int, int *, int *, GlobalLU_t *); +extern void fixupL (const int, const int *, GlobalLU_t *); + +extern void sallocateA (int, int, float **, int **, int **); +extern void sgstrf (superlu_options_t*, SuperMatrix*, + int, int, int*, void *, int, int *, int *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); +extern int ssnode_dfs (const int, const int, const int *, const int *, + const int *, int *, int *, GlobalLU_t *); +extern int ssnode_bmod (const int, const int, const int, float *, + float *, GlobalLU_t *, SuperLUStat_t*); +extern void spanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, float *, int *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern void spanel_bmod (const int, const int, const int, const int, + float *, float *, int *, int *, + GlobalLU_t *, SuperLUStat_t*); +extern int scolumn_dfs (const int, const int, int *, int *, int *, int *, + int *, int *, int *, int *, int *, GlobalLU_t *); +extern int scolumn_bmod (const int, const int, float *, + float *, int *, int *, int, + GlobalLU_t *, SuperLUStat_t*); +extern int scopy_to_ucol (int, int, int *, int *, int *, + float *, GlobalLU_t *); +extern int spivotL (const int, const double, int *, int *, + int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); +extern void spruneL (const int, const int *, const int, const int, + const int *, const int *, int *, GlobalLU_t *); +extern void sreadmt (int *, int *, int *, float **, int **, int **); +extern void sGenXtrue (int, int, float *, int); +extern void sFillRHS (trans_t, int, float *, int, SuperMatrix *, + SuperMatrix *); +extern void sgstrs (trans_t, SuperMatrix *, SuperMatrix *, int *, int *, + SuperMatrix *, SuperLUStat_t*, int *); +/* ILU */ +extern void sgsitrf (superlu_options_t*, SuperMatrix*, int, int, int*, + void *, int, int *, int *, SuperMatrix *, SuperMatrix *, + SuperLUStat_t*, int *); +extern int sldperm(int, int, int, int [], int [], float [], + int [], float [], float []); +extern int ilu_ssnode_dfs (const int, const int, const int *, const int *, + const int *, int *, GlobalLU_t *); +extern void ilu_spanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, float *, float *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern int ilu_scolumn_dfs (const int, const int, int *, int *, int *, + int *, int *, int *, int *, int *, + GlobalLU_t *); +extern int ilu_scopy_to_ucol (int, int, int *, int *, int *, + float *, int, milu_t, double, int, + float *, int *, GlobalLU_t *, int *); +extern int ilu_spivotL (const int, const double, int *, int *, int, int *, + int *, int *, int *, double, milu_t, + float, GlobalLU_t *, SuperLUStat_t*); +extern int ilu_sdrop_row (superlu_options_t *, int, int, double, + int, int *, double *, GlobalLU_t 
*, + float *, int *, int); + + +/*! \brief Driver related */ + +extern void sgsequ (SuperMatrix *, float *, float *, float *, + float *, float *, int *); +extern void slaqgs (SuperMatrix *, float *, float *, float, + float, float, char *); +extern void sgscon (char *, SuperMatrix *, SuperMatrix *, + float, float *, SuperLUStat_t*, int *); +extern float sPivotGrowth(int, SuperMatrix *, int *, + SuperMatrix *, SuperMatrix *); +extern void sgsrfs (trans_t, SuperMatrix *, SuperMatrix *, + SuperMatrix *, int *, int *, char *, float *, + float *, SuperMatrix *, SuperMatrix *, + float *, float *, SuperLUStat_t*, int *); + +extern int sp_strsv (char *, char *, char *, SuperMatrix *, + SuperMatrix *, float *, SuperLUStat_t*, int *); +extern int sp_sgemv (char *, float, SuperMatrix *, float *, + int, float, float *, int); + +extern int sp_sgemm (char *, char *, int, int, int, float, + SuperMatrix *, float *, int, float, + float *, int); +extern double slamch_(char *); + + +/*! \brief Memory-related */ +extern int sLUMemInit (fact_t, void *, int, int, int, int, int, + float, SuperMatrix *, SuperMatrix *, + GlobalLU_t *, int **, float **); +extern void sSetRWork (int, int, float *, float **, float **); +extern void sLUWorkFree (int *, float *, GlobalLU_t *); +extern int sLUMemXpand (int, int, MemType, int *, GlobalLU_t *); + +extern float *floatMalloc(int); +extern float *floatCalloc(int); +extern int smemory_usage(const int, const int, const int, const int); +extern int sQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); +extern int ilu_sQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); + +/*! \brief Auxiliary routines */ +extern void sreadhb(int *, int *, int *, float **, int **, int **); +extern void sreadrb(int *, int *, int *, float **, int **, int **); +extern void sreadtriple(int *, int *, int *, float **, int **, int **); +extern void sCompRow_to_CompCol(int, int, int, float*, int*, int*, + float **, int **, int **); +extern void sfill (float *, int, float); +extern void sinf_norm_error (int, SuperMatrix *, float *); +extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, + float, float, float *, float *, char *); + +/*! \brief Routines for debugging */ +extern void sPrint_CompCol_Matrix(char *, SuperMatrix *); +extern void sPrint_SuperNode_Matrix(char *, SuperMatrix *); +extern void sPrint_Dense_Matrix(char *, SuperMatrix *); +extern void sprint_lu_col(char *, int, int, int *, GlobalLU_t *); +extern int print_double_vec(char *, int, double *); +extern void check_tempv(int, float *); + +#ifdef __cplusplus + } +#endif + +#endif /* __SUPERLU_sSP_DEFS */ + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_util.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_util.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_util.h 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_util.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,369 @@ +/** @file slu_util.h + * \brief Utility header file + * + * -- SuperLU routine (version 3.1) -- + * Univ. of California Berkeley, Xerox Palo Alto Research Center, + * and Lawrence Berkeley National Lab. 
+ * August 1, 2008 + * + */ + +#ifndef __SUPERLU_UTIL /* allow multiple inclusions */ +#define __SUPERLU_UTIL + +#include +#include +#include +/* +#ifndef __STDC__ +#include +#endif +*/ +#include + +#include "scipy_slu_config.h" + +/*********************************************************************** + * Macros + ***********************************************************************/ +#define FIRSTCOL_OF_SNODE(i) (xsup[i]) +/* No of marker arrays used in the symbolic factorization, + each of size n */ +#define NO_MARKER 3 +#define NUM_TEMPV(m,w,t,b) ( SUPERLU_MAX(m, (t + b)*w) ) + +#ifndef USER_ABORT +#define USER_ABORT(msg) superlu_abort_and_exit(msg) +#endif + +#define ABORT(err_msg) \ + { char msg[256];\ + sprintf(msg,"%s at line %d in file %s\n",err_msg,__LINE__, __FILE__);\ + USER_ABORT(msg); } + + +#ifndef USER_MALLOC +#if 1 +#define USER_MALLOC(size) superlu_malloc(size) +#else +/* The following may check out some uninitialized data */ +#define USER_MALLOC(size) memset (superlu_malloc(size), '\x0F', size) +#endif +#endif + +#define SUPERLU_MALLOC(size) USER_MALLOC(size) + +#ifndef USER_FREE +#define USER_FREE(addr) superlu_free(addr) +#endif + +#define SUPERLU_FREE(addr) USER_FREE(addr) + +#define CHECK_MALLOC(where) { \ + extern int superlu_malloc_total; \ + printf("%s: malloc_total %d Bytes\n", \ + where, superlu_malloc_total); \ +} + +#define SUPERLU_MAX(x, y) ( (x) > (y) ? (x) : (y) ) +#define SUPERLU_MIN(x, y) ( (x) < (y) ? (x) : (y) ) + +/********************************************************* + * Macros used for easy access of sparse matrix entries. * + *********************************************************/ +#define L_SUB_START(col) ( Lstore->rowind_colptr[col] ) +#define L_SUB(ptr) ( Lstore->rowind[ptr] ) +#define L_NZ_START(col) ( Lstore->nzval_colptr[col] ) +#define L_FST_SUPC(superno) ( Lstore->sup_to_col[superno] ) +#define U_NZ_START(col) ( Ustore->colptr[col] ) +#define U_SUB(ptr) ( Ustore->rowind[ptr] ) + + +/*********************************************************************** + * Constants + ***********************************************************************/ +#define EMPTY (-1) +/*#define NO (-1)*/ +#define FALSE 0 +#define TRUE 1 + +#define NO_MEMTYPE 4 /* 0: lusup; + 1: ucol; + 2: lsub; + 3: usub */ + +#define GluIntArray(n) (5 * (n) + 5) + +/* Dropping rules */ +#define NODROP ( 0x0000 ) +#define DROP_BASIC ( 0x0001 ) /* ILU(tau) */ +#define DROP_PROWS ( 0x0002 ) /* ILUTP: keep p maximum rows */ +#define DROP_COLUMN ( 0x0004 ) /* ILUTP: for j-th column, + p = gamma * nnz(A(:,j)) */ +#define DROP_AREA ( 0x0008 ) /* ILUTP: for j-th column, use + nnz(F(:,1:j)) / nnz(A(:,1:j)) + to limit memory growth */ +#define DROP_SECONDARY ( 0x000E ) /* PROWS | COLUMN | AREA */ +#define DROP_DYNAMIC ( 0x0010 ) /* adaptive tau */ +#define DROP_INTERP ( 0x0100 ) /* use interpolation */ + + +#if 1 +#define MILU_ALPHA (1.0e-2) /* multiple of drop_sum to be added to diagonal */ +#else +#define MILU_ALPHA 1.0 /* multiple of drop_sum to be added to diagonal */ +#endif + + +/*********************************************************************** + * Enumerate types + ***********************************************************************/ +typedef enum {NO, YES} yes_no_t; +typedef enum {DOFACT, SamePattern, SamePattern_SameRowPerm, FACTORED} fact_t; +typedef enum {NOROWPERM, LargeDiag, MY_PERMR} rowperm_t; +typedef enum {NATURAL, MMD_ATA, MMD_AT_PLUS_A, COLAMD, MY_PERMC}colperm_t; +typedef enum {NOTRANS, TRANS, CONJ} trans_t; +typedef enum {NOEQUIL, ROW, COL, BOTH} 
DiagScale_t; +typedef enum {NOREFINE, SINGLE=1, DOUBLE, EXTRA} IterRefine_t; +typedef enum {LUSUP, UCOL, LSUB, USUB} MemType; +typedef enum {HEAD, TAIL} stack_end_t; +typedef enum {SYSTEM, USER} LU_space_t; +typedef enum {ONE_NORM, TWO_NORM, INF_NORM} norm_t; +typedef enum {SILU, SMILU_1, SMILU_2, SMILU_3} milu_t; +#if 0 +typedef enum {NODROP = 0x0000, + DROP_BASIC = 0x0001, /* ILU(tau) */ + DROP_PROWS = 0x0002, /* ILUTP: keep p maximum rows */ + DROP_COLUMN = 0x0004, /* ILUTP: for j-th column, + p = gamma * nnz(A(:,j)) */ + DROP_AREA = 0x0008, /* ILUTP: for j-th column, use + nnz(F(:,1:j)) / nnz(A(:,1:j)) + to limit memory growth */ + DROP_SECONDARY = 0x000E, /* PROWS | COLUMN | AREA */ + DROP_DYNAMIC = 0x0010, + DROP_INTERP = 0x0100} rule_t; +#endif + + +/* + * The following enumerate type is used by the statistics variable + * to keep track of flop count and time spent at various stages. + * + * Note that not all of the fields are disjoint. + */ +typedef enum { + COLPERM, /* find a column ordering that minimizes fills */ + RELAX, /* find artificial supernodes */ + ETREE, /* compute column etree */ + EQUIL, /* equilibrate the original matrix */ + FACT, /* perform LU factorization */ + RCOND, /* estimate reciprocal condition number */ + SOLVE, /* forward and back solves */ + REFINE, /* perform iterative refinement */ + TRSV, /* fraction of FACT spent in xTRSV */ + GEMV, /* fraction of FACT spent in xGEMV */ + FERR, /* estimate error bounds after iterative refinement */ + NPHASES /* total number of phases */ +} PhaseType; + + +/*********************************************************************** + * Type definitions + ***********************************************************************/ +typedef float flops_t; +typedef unsigned char Logical; + +/* + *-- This contains the options used to control the solve process. + * + * Fact (fact_t) + * Specifies whether or not the factored form of the matrix + * A is supplied on entry, and if not, how the matrix A should + * be factorizaed. + * = DOFACT: The matrix A will be factorized from scratch, and the + * factors will be stored in L and U. + * = SamePattern: The matrix A will be factorized assuming + * that a factorization of a matrix with the same sparsity + * pattern was performed prior to this one. Therefore, this + * factorization will reuse column permutation vector + * ScalePermstruct->perm_c and the column elimination tree + * LUstruct->etree. + * = SamePattern_SameRowPerm: The matrix A will be factorized + * assuming that a factorization of a matrix with the same + * sparsity pattern and similar numerical values was performed + * prior to this one. Therefore, this factorization will reuse + * both row and column scaling factors R and C, both row and + * column permutation vectors perm_r and perm_c, and the + * data structure set up from the previous symbolic factorization. + * = FACTORED: On entry, L, U, perm_r and perm_c contain the + * factored form of A. If DiagScale is not NOEQUIL, the matrix + * A has been equilibrated with scaling factors R and C. + * + * Equil (yes_no_t) + * Specifies whether to equilibrate the system (scale A's row and + * columns to have unit norm). + * + * ColPerm (colperm_t) + * Specifies what type of column permutation to use to reduce fill. 
+ * = NATURAL: use the natural ordering + * = MMD_ATA: use minimum degree ordering on structure of A'*A + * = MMD_AT_PLUS_A: use minimum degree ordering on structure of A'+A + * = COLAMD: use approximate minimum degree column ordering + * = MY_PERMC: use the ordering specified in ScalePermstruct->perm_c[] + * + * Trans (trans_t) + * Specifies the form of the system of equations: + * = NOTRANS: A * X = B (No transpose) + * = TRANS: A**T * X = B (Transpose) + * = CONJ: A**H * X = B (Transpose) + * + * IterRefine (IterRefine_t) + * Specifies whether to perform iterative refinement. + * = NO: no iterative refinement + * = WorkingPrec: perform iterative refinement in working precision + * = ExtraPrec: perform iterative refinement in extra precision + * + * DiagPivotThresh (double, in [0.0, 1.0]) (only for sequential SuperLU) + * Specifies the threshold used for a diagonal entry to be an + * acceptable pivot. + * + * PivotGrowth (yes_no_t) + * Specifies whether to compute the reciprocal pivot growth. + * + * ConditionNumber (ues_no_t) + * Specifies whether to compute the reciprocal condition number. + * + * RowPerm (rowperm_t) (only for SuperLU_DIST or ILU) + * Specifies whether to permute rows of the original matrix. + * = NO: not to permute the rows + * = LargeDiag: make the diagonal large relative to the off-diagonal + * = MY_PERMR: use the permutation given in ScalePermstruct->perm_r[] + * + * SymmetricMode (yest_no_t) + * Specifies whether to use symmetric mode. + * + * PrintStat (yes_no_t) + * Specifies whether to print the solver's statistics. + * + * ReplaceTinyPivot (yes_no_t) (only for SuperLU_DIST) + * Specifies whether to replace the tiny diagonals by + * sqrt(epsilon)*||A|| during LU factorization. + * + * SolveInitialized (yes_no_t) (only for SuperLU_DIST) + * Specifies whether the initialization has been performed to the + * triangular solve. + * + * RefineInitialized (yes_no_t) (only for SuperLU_DIST) + * Specifies whether the initialization has been performed to the + * sparse matrix-vector multiplication routine needed in iterative + * refinement. + */ +typedef struct { + fact_t Fact; + yes_no_t Equil; + colperm_t ColPerm; + trans_t Trans; + IterRefine_t IterRefine; + double DiagPivotThresh; + yes_no_t PivotGrowth; + yes_no_t ConditionNumber; + rowperm_t RowPerm; + yes_no_t SymmetricMode; + yes_no_t PrintStat; + yes_no_t ReplaceTinyPivot; + yes_no_t SolveInitialized; + yes_no_t RefineInitialized; + double ILU_DropTol; /* threshold for dropping */ + double ILU_FillTol; /* threshold for zero pivot perturbation */ + double ILU_FillFactor; /* gamma in the secondary dropping */ + int ILU_DropRule; + norm_t ILU_Norm; + milu_t ILU_MILU; +} superlu_options_t; + +/*! 
\brief Headers for 4 types of dynamatically managed memory */ +typedef struct e_node { + int size; /* length of the memory that has been used */ + void *mem; /* pointer to the new malloc'd store */ +} ExpHeader; + +typedef struct { + int size; + int used; + int top1; /* grow upward, relative to &array[0] */ + int top2; /* grow downward */ + void *array; +} LU_stack_t; + +typedef struct { + int *panel_histo; /* histogram of panel size distribution */ + double *utime; /* running time at various phases */ + flops_t *ops; /* operation count at various phases */ + int TinyPivots; /* number of tiny pivots */ + int RefineSteps; /* number of iterative refinement steps */ + int expansions; /* number of memory expansions */ +} SuperLUStat_t; + +typedef struct { + float for_lu; + float total_needed; +} mem_usage_t; + + +/*********************************************************************** + * Prototypes + ***********************************************************************/ +#ifdef __cplusplus +extern "C" { +#endif + +extern void Destroy_SuperMatrix_Store(SuperMatrix *); +extern void Destroy_CompCol_Matrix(SuperMatrix *); +extern void Destroy_CompRow_Matrix(SuperMatrix *); +extern void Destroy_SuperNode_Matrix(SuperMatrix *); +extern void Destroy_CompCol_Permuted(SuperMatrix *); +extern void Destroy_Dense_Matrix(SuperMatrix *); +extern void get_perm_c(int, SuperMatrix *, int *); +extern void set_default_options(superlu_options_t *options); +extern void ilu_set_default_options(superlu_options_t *options); +extern void sp_preorder (superlu_options_t *, SuperMatrix*, int*, int*, + SuperMatrix*); +extern void superlu_abort_and_exit(char*); +extern void *superlu_malloc (size_t); +extern int *intMalloc (int); +extern int *intCalloc (int); +extern void superlu_free (void*); +extern void SetIWork (int, int, int, int *, int **, int **, int **, + int **, int **, int **, int **); +extern int sp_coletree (int *, int *, int *, int, int, int *); +extern void relax_snode (const int, int *, const int, int *, int *); +extern void heap_relax_snode (const int, int *, const int, int *, int *); +extern int mark_relax(int, int *, int *, int *, int *, int *, int *); +extern void ilu_relax_snode (const int, int *, const int, int *, + int *, int *); +extern void ilu_heap_relax_snode (const int, int *, const int, int *, + int *, int*); +extern void resetrep_col (const int, const int *, int *); +extern int spcoletree (int *, int *, int *, int, int, int *); +extern int *TreePostorder (int, int *); +extern double SuperLU_timer_ (); +extern int sp_ienv (int); +extern int lsame_ (char *, char *); +extern int xerbla_ (char *, int *); +extern void ifill (int *, int, int); +extern void snode_profile (int, int *); +extern void super_stats (int, int *); +extern void check_repfnz(int, int, int, int *); +extern void PrintSumm (char *, int, int, int); +extern void StatInit(SuperLUStat_t *); +extern void StatPrint (SuperLUStat_t *); +extern void StatFree(SuperLUStat_t *); +extern void print_panel_seg(int, int, int, int, int *, int *); +extern int print_int_vec(char *,int, int *); +extern int slu_PrintInt10(char *, int, int *); + +#ifdef __cplusplus + } +#endif + +#endif /* __SUPERLU_UTIL */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_zdefs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_zdefs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_zdefs.h 1970-01-01 01:00:00.000000000 +0100 +++ 
python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/slu_zdefs.h 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,282 @@ + +/*! @file slu_zdefs.h + * \brief Header file for real operations + * + *
 
+ * -- SuperLU routine (version 4.0) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * June 30, 2009
+ * 
+ * Global data structures used in LU factorization -
+ * 
+ *   nsuper: #supernodes = nsuper + 1, numbered [0, nsuper].
+ *   (xsup,supno): supno[i] is the supernode no to which i belongs;
+ *	xsup(s) points to the beginning of the s-th supernode.
+ *	e.g.   supno 0 1 2 2 3 3 3 4 4 4 4 4   (n=12)
+ *	        xsup 0 1 2 4 7 12
+ *	Note: dfs will be performed on supernode rep. relative to the new 
+ *	      row pivoting ordering
+ *
+ *   (xlsub,lsub): lsub[*] contains the compressed subscript of
+ *	rectangular supernodes; xlsub[j] points to the starting
+ *	location of the j-th column in lsub[*]. Note that xlsub 
+ *	is indexed by column.
+ *	Storage: original row subscripts
+ *
+ *      During the course of sparse LU factorization, we also use
+ *	(xlsub,lsub) for the purpose of symmetric pruning. For each
+ *	supernode {s,s+1,...,t=s+r} with first column s and last
+ *	column t, the subscript set
+ *		lsub[j], j=xlsub[s], .., xlsub[s+1]-1
+ *	is the structure of column s (i.e. structure of this supernode).
+ *	It is used for the storage of numerical values.
+ *	Furthermore,
+ *		lsub[j], j=xlsub[t], .., xlsub[t+1]-1
+ *	is the structure of the last column t of this supernode.
+ *	It is for the purpose of symmetric pruning. Therefore, the
+ *	structural subscripts can be rearranged without making physical
+ *	interchanges among the numerical values.
+ *
+ *	However, if the supernode has only one column, then we
+ *	only keep one set of subscripts. For any subscript interchange
+ *	performed, similar interchange must be done on the numerical
+ *	values.
+ *
+ *	The last column structures (for pruning) will be removed
+ *	after the numerical LU factorization phase.
+ *
+ *   (xlusup,lusup): lusup[*] contains the numerical values of the
+ *	rectangular supernodes; xlusup[j] points to the starting
+ *	location of the j-th column in storage vector lusup[*]
+ *	Note: xlusup is indexed by column.
+ *	Each rectangular supernode is stored by column-major
+ *	scheme, consistent with Fortran 2-dim array storage.
+ *
+ *   (xusub,ucol,usub): ucol[*] stores the numerical values of
+ *	U-columns outside the rectangular supernodes. The row
+ *	subscript of nonzero ucol[k] is stored in usub[k].
+ *	xusub[i] points to the starting location of column i in ucol.
+ *	Storage: new row subscripts; that is, subscripts of PA.
+ * 
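To make the (xsup, supno) and (xlsub, lsub) conventions above concrete, here is a small hypothetical helper that walks the supernodes of a computed factor. It assumes a GlobalLU_t populated by a factorization routine such as zgstrf() and takes nsuper as described above (supernodes are numbered 0..nsuper); it is a reading aid only, not library code.

#include <stdio.h>
#include "slu_zdefs.h"

/* Sketch: enumerate supernodes and the row structure shared by their columns.
 * Glu is assumed to have been filled in by a prior factorization (zgstrf). */
void walk_supernodes(const GlobalLU_t *Glu, int nsuper)
{
    for (int s = 0; s <= nsuper; ++s) {
        int first_col = Glu->xsup[s];           /* first column of supernode s  */
        int last_col  = Glu->xsup[s + 1] - 1;   /* last column of supernode s   */

        /* Per the description above, the compressed subscripts of the first
         * column give the row structure of the whole rectangular supernode. */
        int beg = Glu->xlsub[first_col];
        int end = Glu->xlsub[first_col + 1];

        printf("supernode %d: columns %d..%d, %d structural rows\n",
               s, first_col, last_col, end - beg);

        for (int k = beg; k < end; ++k) {
            int row = Glu->lsub[k];             /* original row subscript (see Storage note) */
            (void)row;                          /* numerical values live in lusup/xlusup     */
        }
    }
}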
+ */ +#ifndef __SUPERLU_zSP_DEFS /* allow multiple inclusions */ +#define __SUPERLU_zSP_DEFS + +/* + * File name: zsp_defs.h + * Purpose: Sparse matrix types and function prototypes + * History: + */ + +#ifdef _CRAY +#include +#include +#endif + +/* Define my integer type int_t */ +typedef int int_t; /* default */ + +#include +#include +#include "slu_Cnames.h" +#include "supermatrix.h" +#include "slu_util.h" +#include "slu_dcomplex.h" + + + +typedef struct { + int *xsup; /* supernode and column mapping */ + int *supno; + int *lsub; /* compressed L subscripts */ + int *xlsub; + doublecomplex *lusup; /* L supernodes */ + int *xlusup; + doublecomplex *ucol; /* U columns */ + int *usub; + int *xusub; + int nzlmax; /* current max size of lsub */ + int nzumax; /* " " " ucol */ + int nzlumax; /* " " " lusup */ + int n; /* number of columns in the matrix */ + LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ + int num_expansions; + ExpHeader *expanders; /* Array of pointers to 4 types of memory */ + LU_stack_t stack; /* use user supplied memory */ +} GlobalLU_t; + + +/* -------- Prototypes -------- */ + +#ifdef __cplusplus +extern "C" { +#endif + +/*! \brief Driver routines */ +extern void +zgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +zgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, double *, double *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, + double *, double *, double *, double *, + mem_usage_t *, SuperLUStat_t *, int *); + /* ILU */ +extern void +zgsisv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); +extern void +zgsisx(superlu_options_t *, SuperMatrix *, int *, int *, int *, + char *, double *, double *, SuperMatrix *, SuperMatrix *, + void *, int, SuperMatrix *, SuperMatrix *, double *, double *, + mem_usage_t *, SuperLUStat_t *, int *); + + +/*! 
\brief Supernodal LU factor related */ +extern void +zCreate_CompCol_Matrix(SuperMatrix *, int, int, int, doublecomplex *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +zCreate_CompRow_Matrix(SuperMatrix *, int, int, int, doublecomplex *, + int *, int *, Stype_t, Dtype_t, Mtype_t); +extern void +zCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); +extern void +zCreate_Dense_Matrix(SuperMatrix *, int, int, doublecomplex *, int, + Stype_t, Dtype_t, Mtype_t); +extern void +zCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, doublecomplex *, + int *, int *, int *, int *, int *, + Stype_t, Dtype_t, Mtype_t); +extern void +zCopy_Dense_Matrix(int, int, doublecomplex *, int, doublecomplex *, int); + +extern void countnz (const int, int *, int *, int *, GlobalLU_t *); +extern void ilu_countnz (const int, int *, int *, GlobalLU_t *); +extern void fixupL (const int, const int *, GlobalLU_t *); + +extern void zallocateA (int, int, doublecomplex **, int **, int **); +extern void zgstrf (superlu_options_t*, SuperMatrix*, + int, int, int*, void *, int, int *, int *, + SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); +extern int zsnode_dfs (const int, const int, const int *, const int *, + const int *, int *, int *, GlobalLU_t *); +extern int zsnode_bmod (const int, const int, const int, doublecomplex *, + doublecomplex *, GlobalLU_t *, SuperLUStat_t*); +extern void zpanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, doublecomplex *, int *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern void zpanel_bmod (const int, const int, const int, const int, + doublecomplex *, doublecomplex *, int *, int *, + GlobalLU_t *, SuperLUStat_t*); +extern int zcolumn_dfs (const int, const int, int *, int *, int *, int *, + int *, int *, int *, int *, int *, GlobalLU_t *); +extern int zcolumn_bmod (const int, const int, doublecomplex *, + doublecomplex *, int *, int *, int, + GlobalLU_t *, SuperLUStat_t*); +extern int zcopy_to_ucol (int, int, int *, int *, int *, + doublecomplex *, GlobalLU_t *); +extern int zpivotL (const int, const double, int *, int *, + int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); +extern void zpruneL (const int, const int *, const int, const int, + const int *, const int *, int *, GlobalLU_t *); +extern void zreadmt (int *, int *, int *, doublecomplex **, int **, int **); +extern void zGenXtrue (int, int, doublecomplex *, int); +extern void zFillRHS (trans_t, int, doublecomplex *, int, SuperMatrix *, + SuperMatrix *); +extern void zgstrs (trans_t, SuperMatrix *, SuperMatrix *, int *, int *, + SuperMatrix *, SuperLUStat_t*, int *); +/* ILU */ +extern void zgsitrf (superlu_options_t*, SuperMatrix*, int, int, int*, + void *, int, int *, int *, SuperMatrix *, SuperMatrix *, + SuperLUStat_t*, int *); +extern int zldperm(int, int, int, int [], int [], doublecomplex [], + int [], double [], double []); +extern int ilu_zsnode_dfs (const int, const int, const int *, const int *, + const int *, int *, GlobalLU_t *); +extern void ilu_zpanel_dfs (const int, const int, const int, SuperMatrix *, + int *, int *, doublecomplex *, double *, int *, int *, + int *, int *, int *, int *, GlobalLU_t *); +extern int ilu_zcolumn_dfs (const int, const int, int *, int *, int *, + int *, int *, int *, int *, int *, + GlobalLU_t *); +extern int ilu_zcopy_to_ucol (int, int, int *, int *, int *, + doublecomplex *, int, milu_t, double, int, + doublecomplex *, int *, GlobalLU_t *, int *); +extern int ilu_zpivotL (const int, const double, int *, int *, int, int *, + 
int *, int *, int *, double, milu_t, + doublecomplex, GlobalLU_t *, SuperLUStat_t*); +extern int ilu_zdrop_row (superlu_options_t *, int, int, double, + int, int *, double *, GlobalLU_t *, + double *, int *, int); + + +/*! \brief Driver related */ + +extern void zgsequ (SuperMatrix *, double *, double *, double *, + double *, double *, int *); +extern void zlaqgs (SuperMatrix *, double *, double *, double, + double, double, char *); +extern void zgscon (char *, SuperMatrix *, SuperMatrix *, + double, double *, SuperLUStat_t*, int *); +extern double zPivotGrowth(int, SuperMatrix *, int *, + SuperMatrix *, SuperMatrix *); +extern void zgsrfs (trans_t, SuperMatrix *, SuperMatrix *, + SuperMatrix *, int *, int *, char *, double *, + double *, SuperMatrix *, SuperMatrix *, + double *, double *, SuperLUStat_t*, int *); + +extern int sp_ztrsv (char *, char *, char *, SuperMatrix *, + SuperMatrix *, doublecomplex *, SuperLUStat_t*, int *); +extern int sp_zgemv (char *, doublecomplex, SuperMatrix *, doublecomplex *, + int, doublecomplex, doublecomplex *, int); + +extern int sp_zgemm (char *, char *, int, int, int, doublecomplex, + SuperMatrix *, doublecomplex *, int, doublecomplex, + doublecomplex *, int); +extern double dlamch_(char *); + + +/*! \brief Memory-related */ +extern int zLUMemInit (fact_t, void *, int, int, int, int, int, + double, SuperMatrix *, SuperMatrix *, + GlobalLU_t *, int **, doublecomplex **); +extern void zSetRWork (int, int, doublecomplex *, doublecomplex **, doublecomplex **); +extern void zLUWorkFree (int *, doublecomplex *, GlobalLU_t *); +extern int zLUMemXpand (int, int, MemType, int *, GlobalLU_t *); + +extern doublecomplex *doublecomplexMalloc(int); +extern doublecomplex *doublecomplexCalloc(int); +extern double *doubleMalloc(int); +extern double *doubleCalloc(int); +extern int zmemory_usage(const int, const int, const int, const int); +extern int zQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); +extern int ilu_zQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); + +/*! \brief Auxiliary routines */ +extern void zreadhb(int *, int *, int *, doublecomplex **, int **, int **); +extern void zreadrb(int *, int *, int *, doublecomplex **, int **, int **); +extern void zreadtriple(int *, int *, int *, doublecomplex **, int **, int **); +extern void zCompRow_to_CompCol(int, int, int, doublecomplex*, int*, int*, + doublecomplex **, int **, int **); +extern void zfill (doublecomplex *, int, doublecomplex); +extern void zinf_norm_error (int, SuperMatrix *, doublecomplex *); +extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, + doublecomplex, doublecomplex, doublecomplex *, doublecomplex *, char *); + +/*! 
\brief Routines for debugging */ +extern void zPrint_CompCol_Matrix(char *, SuperMatrix *); +extern void zPrint_SuperNode_Matrix(char *, SuperMatrix *); +extern void zPrint_Dense_Matrix(char *, SuperMatrix *); +extern void zprint_lu_col(char *, int, int, int *, GlobalLU_t *); +extern int print_double_vec(char *, int, double *); +extern void check_tempv(int, doublecomplex *); + +#ifdef __cplusplus + } +#endif + +#endif /* __SUPERLU_zSP_DEFS */ + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/smemory.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/smemory.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/smemory.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/smemory.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,54 +1,32 @@ -/* - * -- SuperLU routine (version 3.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 +/*! @file smemory.c + * \brief Memory details * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
*/ -#include "ssp_defs.h" +#include "slu_sdefs.h" -/* Constants */ -#define NO_MEMTYPE 4 /* 0: lusup; - 1: ucol; - 2: lsub; - 3: usub */ -#define GluIntArray(n) (5 * (n) + 5) /* Internal prototypes */ void *sexpand (int *, MemType,int, int, GlobalLU_t *); -int sLUWorkInit (int, int, int, int **, float **, LU_space_t); +int sLUWorkInit (int, int, int, int **, float **, GlobalLU_t *); void copy_mem_float (int, void *, void *); void sStackCompress (GlobalLU_t *); -void sSetupSpace (void *, int, LU_space_t *); -void *suser_malloc (int, int); -void suser_free (int, int); +void sSetupSpace (void *, int, GlobalLU_t *); +void *suser_malloc (int, int, GlobalLU_t *); +void suser_free (int, int, GlobalLU_t *); -/* External prototypes (in memory.c - prec-indep) */ +/* External prototypes (in memory.c - prec-independent) */ extern void copy_mem_int (int, void *, void *); extern void user_bcopy (char *, char *, int); -/* Headers for 4 types of dynamatically managed memory */ -typedef struct e_node { - int size; /* length of the memory that has been used */ - void *mem; /* pointer to the new malloc'd store */ -} ExpHeader; - -typedef struct { - int size; - int used; - int top1; /* grow upward, relative to &array[0] */ - int top2; /* grow downward */ - void *array; -} LU_stack_t; - -/* Variables local to this file */ -static ExpHeader *expanders = 0; /* Array of pointers to 4 types of memory */ -static LU_stack_t stack; -static int no_expand; /* Macros to manipulate stack */ -#define StackFull(x) ( x + stack.used >= stack.size ) +#define StackFull(x) ( x + Glu->stack.used >= Glu->stack.size ) #define NotDoubleAlign(addr) ( (long int)addr & 7 ) #define DoubleAlign(addr) ( ((long int)addr + 7) & ~7L ) #define TempSpace(m, w) ( (2*w + 4 + NO_MARKER) * m * sizeof(int) + \ @@ -58,66 +36,67 @@ -/* - * Setup the memory model to be used for factorization. +/*! \brief Setup the memory model to be used for factorization. + * * lwork = 0: use system malloc; * lwork > 0: use user-supplied work[] space. 
*/ -void sSetupSpace(void *work, int lwork, LU_space_t *MemModel) +void sSetupSpace(void *work, int lwork, GlobalLU_t *Glu) { if ( lwork == 0 ) { - *MemModel = SYSTEM; /* malloc/free */ + Glu->MemModel = SYSTEM; /* malloc/free */ } else if ( lwork > 0 ) { - *MemModel = USER; /* user provided space */ - stack.used = 0; - stack.top1 = 0; - stack.top2 = (lwork/4)*4; /* must be word addressable */ - stack.size = stack.top2; - stack.array = (void *) work; + Glu->MemModel = USER; /* user provided space */ + Glu->stack.used = 0; + Glu->stack.top1 = 0; + Glu->stack.top2 = (lwork/4)*4; /* must be word addressable */ + Glu->stack.size = Glu->stack.top2; + Glu->stack.array = (void *) work; } } -void *suser_malloc(int bytes, int which_end) +void *suser_malloc(int bytes, int which_end, GlobalLU_t *Glu) { void *buf; if ( StackFull(bytes) ) return (NULL); if ( which_end == HEAD ) { - buf = (char*) stack.array + stack.top1; - stack.top1 += bytes; + buf = (char*) Glu->stack.array + Glu->stack.top1; + Glu->stack.top1 += bytes; } else { - stack.top2 -= bytes; - buf = (char*) stack.array + stack.top2; + Glu->stack.top2 -= bytes; + buf = (char*) Glu->stack.array + Glu->stack.top2; } - stack.used += bytes; + Glu->stack.used += bytes; return buf; } -void suser_free(int bytes, int which_end) +void suser_free(int bytes, int which_end, GlobalLU_t *Glu) { if ( which_end == HEAD ) { - stack.top1 -= bytes; + Glu->stack.top1 -= bytes; } else { - stack.top2 += bytes; + Glu->stack.top2 += bytes; } - stack.used -= bytes; + Glu->stack.used -= bytes; } -/* +/*! \brief + * + *
  * mem_usage consists of the following fields:
  *    - for_lu (float)
  *      The amount of space used in bytes for the L\U data structures.
  *    - total_needed (float)
  *      The amount of space needed in bytes to perform factorization.
- *    - expansions (int)
- *      Number of memory expansions during the LU factorization.
+ * 
*/ int sQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) { @@ -132,33 +111,75 @@ dword = sizeof(float); /* For LU factors */ - mem_usage->for_lu = (float)( (4*n + 3) * iword + Lstore->nzval_colptr[n] * - dword + Lstore->rowind_colptr[n] * iword ); - mem_usage->for_lu += (float)( (n + 1) * iword + + mem_usage->for_lu = (float)( (4.0*n + 3.0) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0) * iword + Ustore->colptr[n] * (dword + iword) ); /* Working storage to support factorization */ mem_usage->total_needed = mem_usage->for_lu + - (float)( (2 * panel_size + 4 + NO_MARKER) * n * iword + - (panel_size + 1) * n * dword ); - - mem_usage->expansions = --no_expand; + (float)( (2.0 * panel_size + 4.0 + NO_MARKER) * n * iword + + (panel_size + 1.0) * n * dword ); return 0; } /* sQuerySpace */ -/* - * Allocate storage for the data structures common to all factor routines. - * For those unpredictable size, make a guess as FILL * nnz(A). + +/*! \brief + * + *
+ * mem_usage consists of the following fields:
+ *    - for_lu (float)
+ *      The amount of space used in bytes for the L\U data structures.
+ *    - total_needed (float)
+ *      The amount of space needed in bytes to perform factorization.
+ * 
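Both query routines report sizes in bytes through the mem_usage_t structure declared in slu_util.h, so a caller can summarize factor memory in a few lines. The fragment below is a hypothetical usage sketch; L and U are assumed to be the factor matrices returned by one of the drivers.

#include <stdio.h>
#include "slu_sdefs.h"

/* Sketch: report the memory held by the computed L/U factors.
 * L and U are assumed to come from a prior sgssv()/sgssvx() call. */
void report_lu_memory(SuperMatrix *L, SuperMatrix *U)
{
    mem_usage_t mu;

    if (sQuerySpace(L, U, &mu) == 0) {
        printf("L\\U data structures    : %.2f MB\n", mu.for_lu / 1e6);
        printf("total for factorization: %.2f MB\n", mu.total_needed / 1e6);
    }
}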
+ */ +int ilu_sQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) +{ + SCformat *Lstore; + NCformat *Ustore; + register int n, panel_size = sp_ienv(1); + register float iword, dword; + + Lstore = L->Store; + Ustore = U->Store; + n = L->ncol; + iword = sizeof(int); + dword = sizeof(double); + + /* For LU factors */ + mem_usage->for_lu = (float)( (4.0f * n + 3.0f) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0f) * iword + + Ustore->colptr[n] * (dword + iword) ); + + /* Working storage to support factorization. + ILU needs 5*n more integers than LU */ + mem_usage->total_needed = mem_usage->for_lu + + (float)( (2.0f * panel_size + 9.0f + NO_MARKER) * n * iword + + (panel_size + 1.0f) * n * dword ); + + return 0; +} /* ilu_sQuerySpace */ + + +/*! \brief Allocate storage for the data structures common to all factor routines. + * + *
+ * For those of unpredictable size, estimate as fill_ratio * nnz(A).
  * Return value:
  *     If lwork = -1, return the estimated amount of space required, plus n;
  *     otherwise, return the amount of space actually allocated when
  *     memory allocation failure occurred.
+ * 
*/ int sLUMemInit(fact_t fact, void *work, int lwork, int m, int n, int annz, - int panel_size, SuperMatrix *L, SuperMatrix *U, GlobalLU_t *Glu, - int **iwork, float **dwork) + int panel_size, float fill_ratio, SuperMatrix *L, SuperMatrix *U, + GlobalLU_t *Glu, int **iwork, float **dwork) { int info, iword, dword; SCformat *Lstore; @@ -170,32 +191,33 @@ float *ucol; int *usub, *xusub; int nzlmax, nzumax, nzlumax; - int FILL = sp_ienv(6); - Glu->n = n; - no_expand = 0; iword = sizeof(int); dword = sizeof(float); + Glu->n = n; + Glu->num_expansions = 0; - if ( !expanders ) - expanders = (ExpHeader*)SUPERLU_MALLOC(NO_MEMTYPE * sizeof(ExpHeader)); - if ( !expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); + if ( !Glu->expanders ) + Glu->expanders = (ExpHeader*)SUPERLU_MALLOC( NO_MEMTYPE * + sizeof(ExpHeader) ); + if ( !Glu->expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); if ( fact != SamePattern_SameRowPerm ) { /* Guess for L\U factors */ - nzumax = nzlumax = FILL * annz; - nzlmax = SUPERLU_MAX(1, FILL/4.) * annz; + nzumax = nzlumax = fill_ratio * annz; + nzlmax = SUPERLU_MAX(1, fill_ratio/4.) * annz; if ( lwork == -1 ) { return ( GluIntArray(n) * iword + TempSpace(m, panel_size) + (nzlmax+nzumax)*iword + (nzlumax+nzumax)*dword + n ); } else { - sSetupSpace(work, lwork, &Glu->MemModel); + sSetupSpace(work, lwork, Glu); } -#ifdef DEBUG - printf("sLUMemInit() called: annz %d, MemModel %d\n", - annz, Glu->MemModel); +#if ( PRNTlevel >= 1 ) + printf("sLUMemInit() called: fill_ratio %ld, nzlmax %ld, nzumax %ld\n", + fill_ratio, nzlmax, nzumax); + fflush(stdout); #endif /* Integer pointers for L\U factors */ @@ -206,11 +228,11 @@ xlusup = intMalloc(n+1); xusub = intMalloc(n+1); } else { - xsup = (int *)suser_malloc((n+1) * iword, HEAD); - supno = (int *)suser_malloc((n+1) * iword, HEAD); - xlsub = (int *)suser_malloc((n+1) * iword, HEAD); - xlusup = (int *)suser_malloc((n+1) * iword, HEAD); - xusub = (int *)suser_malloc((n+1) * iword, HEAD); + xsup = (int *)suser_malloc((n+1) * iword, HEAD, Glu); + supno = (int *)suser_malloc((n+1) * iword, HEAD, Glu); + xlsub = (int *)suser_malloc((n+1) * iword, HEAD, Glu); + xlusup = (int *)suser_malloc((n+1) * iword, HEAD, Glu); + xusub = (int *)suser_malloc((n+1) * iword, HEAD, Glu); } lusup = (float *) sexpand( &nzlumax, LUSUP, 0, 0, Glu ); @@ -225,7 +247,8 @@ SUPERLU_FREE(lsub); SUPERLU_FREE(usub); } else { - suser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, HEAD); + suser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, + HEAD, Glu); } nzlumax /= 2; nzumax /= 2; @@ -234,6 +257,11 @@ printf("Not enough memory to perform factorization.\n"); return (smemory_usage(nzlmax, nzumax, nzlumax, n) + n); } +#if ( PRNTlevel >= 1) + printf("sLUMemInit() reduce size: nzlmax %ld, nzumax %ld\n", + nzlmax, nzumax); + fflush(stdout); +#endif lusup = (float *) sexpand( &nzlumax, LUSUP, 0, 0, Glu ); ucol = (float *) sexpand( &nzumax, UCOL, 0, 0, Glu ); lsub = (int *) sexpand( &nzlmax, LSUB, 0, 0, Glu ); @@ -260,18 +288,18 @@ Glu->MemModel = SYSTEM; } else { Glu->MemModel = USER; - stack.top2 = (lwork/4)*4; /* must be word-addressable */ - stack.size = stack.top2; + Glu->stack.top2 = (lwork/4)*4; /* must be word-addressable */ + Glu->stack.size = Glu->stack.top2; } - lsub = expanders[LSUB].mem = Lstore->rowind; - lusup = expanders[LUSUP].mem = Lstore->nzval; - usub = expanders[USUB].mem = Ustore->rowind; - ucol = expanders[UCOL].mem = Ustore->nzval;; - expanders[LSUB].size = nzlmax; - expanders[LUSUP].size = nzlumax; - expanders[USUB].size = nzumax; - 
expanders[UCOL].size = nzumax; + lsub = Glu->expanders[LSUB].mem = Lstore->rowind; + lusup = Glu->expanders[LUSUP].mem = Lstore->nzval; + usub = Glu->expanders[USUB].mem = Ustore->rowind; + ucol = Glu->expanders[UCOL].mem = Ustore->nzval;; + Glu->expanders[LSUB].size = nzlmax; + Glu->expanders[LUSUP].size = nzlumax; + Glu->expanders[USUB].size = nzumax; + Glu->expanders[UCOL].size = nzumax; } Glu->xsup = xsup; @@ -287,20 +315,20 @@ Glu->nzumax = nzumax; Glu->nzlumax = nzlumax; - info = sLUWorkInit(m, n, panel_size, iwork, dwork, Glu->MemModel); + info = sLUWorkInit(m, n, panel_size, iwork, dwork, Glu); if ( info ) return ( info + smemory_usage(nzlmax, nzumax, nzlumax, n) + n); - ++no_expand; + ++Glu->num_expansions; return 0; } /* sLUMemInit */ -/* Allocate known working storage. Returns 0 if success, otherwise +/*! \brief Allocate known working storage. Returns 0 if success, otherwise returns the number of bytes allocated so far when failure occurred. */ int sLUWorkInit(int m, int n, int panel_size, int **iworkptr, - float **dworkptr, LU_space_t MemModel) + float **dworkptr, GlobalLU_t *Glu) { int isize, dsize, extra; float *old_ptr; @@ -311,19 +339,19 @@ dsize = (m * panel_size + NUM_TEMPV(m,panel_size,maxsuper,rowblk)) * sizeof(float); - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *iworkptr = (int *) intCalloc(isize/sizeof(int)); else - *iworkptr = (int *) suser_malloc(isize, TAIL); + *iworkptr = (int *) suser_malloc(isize, TAIL, Glu); if ( ! *iworkptr ) { fprintf(stderr, "sLUWorkInit: malloc fails for local iworkptr[]\n"); return (isize + n); } - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *dworkptr = (float *) SUPERLU_MALLOC(dsize); else { - *dworkptr = (float *) suser_malloc(dsize, TAIL); + *dworkptr = (float *) suser_malloc(dsize, TAIL, Glu); if ( NotDoubleAlign(*dworkptr) ) { old_ptr = *dworkptr; *dworkptr = (float*) DoubleAlign(*dworkptr); @@ -332,8 +360,8 @@ #ifdef DEBUG printf("sLUWorkInit: not aligned, extra %d\n", extra); #endif - stack.top2 -= extra; - stack.used += extra; + Glu->stack.top2 -= extra; + Glu->stack.used += extra; } } if ( ! *dworkptr ) { @@ -345,8 +373,7 @@ } -/* - * Set up pointers for real working arrays. +/*! \brief Set up pointers for real working arrays. */ void sSetRWork(int m, int panel_size, float *dworkptr, @@ -362,8 +389,7 @@ sfill (*tempv, NUM_TEMPV(m,panel_size,maxsuper,rowblk), zero); } -/* - * Free the working storage used by factor routines. +/*! \brief Free the working storage used by factor routines. */ void sLUWorkFree(int *iwork, float *dwork, GlobalLU_t *Glu) { @@ -371,18 +397,21 @@ SUPERLU_FREE (iwork); SUPERLU_FREE (dwork); } else { - stack.used -= (stack.size - stack.top2); - stack.top2 = stack.size; + Glu->stack.used -= (Glu->stack.size - Glu->stack.top2); + Glu->stack.top2 = Glu->stack.size; /* sStackCompress(Glu); */ } - SUPERLU_FREE (expanders); - expanders = 0; + SUPERLU_FREE (Glu->expanders); + Glu->expanders = NULL; } -/* Expand the data structures for L and U during the factorization. +/*! \brief Expand the data structures for L and U during the factorization. + * + *
  * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
*/ int sLUMemXpand(int jcol, @@ -446,8 +475,7 @@ for (i = 0; i < howmany; i++) dnew[i] = dold[i]; } -/* - * Expand the existing storage to accommodate more fill-ins. +/*! \brief Expand the existing storage to accommodate more fill-ins. */ void *sexpand ( @@ -463,12 +491,14 @@ float alpha; void *new_mem, *old_mem; int new_len, tries, lword, extra, bytes_to_copy; + ExpHeader *expanders = Glu->expanders; /* Array of 4 types of memory */ alpha = EXPAND; - if ( no_expand == 0 || keep_prev ) /* First time allocate requested */ + if ( Glu->num_expansions == 0 || keep_prev ) { + /* First time allocate requested */ new_len = *prev_len; - else { + } else { new_len = alpha * *prev_len; } @@ -476,9 +506,8 @@ else lword = sizeof(float); if ( Glu->MemModel == SYSTEM ) { - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ - if ( no_expand != 0 ) { + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); + if ( Glu->num_expansions != 0 ) { tries = 0; if ( keep_prev ) { if ( !new_mem ) return (NULL); @@ -487,8 +516,7 @@ if ( ++tries > 10 ) return (NULL); alpha = Reduce(alpha); new_len = alpha * *prev_len; - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); } } if ( type == LSUB || type == USUB ) { @@ -501,8 +529,8 @@ expanders[type].mem = (void *) new_mem; } else { /* MemModel == USER */ - if ( no_expand == 0 ) { - new_mem = suser_malloc(new_len * lword, HEAD); + if ( Glu->num_expansions == 0 ) { + new_mem = suser_malloc(new_len * lword, HEAD, Glu); if ( NotDoubleAlign(new_mem) && (type == LUSUP || type == UCOL) ) { old_mem = new_mem; @@ -511,12 +539,11 @@ #ifdef DEBUG printf("expand(): not aligned, extra %d\n", extra); #endif - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; } expanders[type].mem = (void *) new_mem; - } - else { + } else { tries = 0; extra = (new_len - *prev_len) * lword; if ( keep_prev ) { @@ -532,7 +559,7 @@ if ( type != USUB ) { new_mem = (void*)((char*)expanders[type + 1].mem + extra); - bytes_to_copy = (char*)stack.array + stack.top1 + bytes_to_copy = (char*)Glu->stack.array + Glu->stack.top1 - (char*)expanders[type + 1].mem; user_bcopy(expanders[type+1].mem, new_mem, bytes_to_copy); @@ -548,11 +575,11 @@ Glu->ucol = expanders[UCOL].mem = (void*)((char*)expanders[UCOL].mem + extra); } - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; if ( type == UCOL ) { - stack.top1 += extra; /* Add same amount for USUB */ - stack.used += extra; + Glu->stack.top1 += extra; /* Add same amount for USUB */ + Glu->stack.used += extra; } } /* if ... */ @@ -562,15 +589,14 @@ expanders[type].size = new_len; *prev_len = new_len; - if ( no_expand ) ++no_expand; + if ( Glu->num_expansions ) ++Glu->num_expansions; return (void *) expanders[type].mem; } /* sexpand */ -/* - * Compress the work[] array to remove fragmentation. +/*! \brief Compress the work[] array to remove fragmentation. 
*/ void sStackCompress(GlobalLU_t *Glu) @@ -610,9 +636,9 @@ usub = ito; last = (char*)usub + xusub[ndim] * iword; - fragment = (char*) (((char*)stack.array + stack.top1) - last); - stack.used -= (long int) fragment; - stack.top1 -= (long int) fragment; + fragment = (char*) (((char*)Glu->stack.array + Glu->stack.top1) - last); + Glu->stack.used -= (long int) fragment; + Glu->stack.top1 -= (long int) fragment; Glu->ucol = ucol; Glu->lsub = lsub; @@ -626,8 +652,7 @@ } -/* - * Allocate storage for original matrix A +/*! \brief Allocate storage for original matrix A */ void sallocateA(int n, int nnz, float **a, int **asub, int **xa) @@ -641,7 +666,7 @@ float *floatMalloc(int n) { float *buf; - buf = (float *) SUPERLU_MALLOC(n * sizeof(float)); + buf = (float *) SUPERLU_MALLOC((size_t)n * sizeof(float)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in floatMalloc()\n"); } @@ -653,7 +678,7 @@ float *buf; register int i; float zero = 0.0; - buf = (float *) SUPERLU_MALLOC(n * sizeof(float)); + buf = (float *) SUPERLU_MALLOC((size_t)n * sizeof(float)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in floatCalloc()\n"); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,32 @@ -/* +/*! @file spanel_bmod.c + * \brief Performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ /* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. + */ #include #include -#include "ssp_defs.h" +#include "slu_sdefs.h" /* * Function prototypes @@ -30,6 +35,25 @@ void smatvec(int, int, int, float *, float *, float *); extern void scheck_tempv(); +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *    Performs numeric block updates (sup-panel) in topological order.
+ *    It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ *    Special processing on the supernodal portion of L\U[*,j]
+ *
+ *    Before entering this routine, the original nonzeros in the panel 
+ *    were already copied into the spa[m,w].
+ *
+ *    Updated/Output parameters-
+ *    dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned 
+ *    collectively in the m-by-w vector dense[*]. 
+ * 
+ */ + void spanel_bmod ( const int m, /* in - number of rows in the matrix */ @@ -44,22 +68,7 @@ SuperLUStat_t *stat /* output */ ) { -/* - * Purpose - * ======= - * - * Performs numeric block updates (sup-panel) in topological order. - * It features: col-col, 2cols-col, 3cols-col, and sup-col updates. - * Special processing on the supernodal portion of L\U[*,j] - * - * Before entering this routine, the original nonzeros in the panel - * were already copied into the spa[m,w]. - * - * Updated/Output parameters- - * dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned - * collectively in the m-by-w vector dense[*]. - * - */ + #ifdef USE_VENDOR_BLAS #ifdef _CRAY diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,48 +1,32 @@ - -/* +/*! @file spanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "ssp_defs.h" -#include "util.h" -void -spanel_dfs ( - const int m, /* in - number of rows in the matrix */ - const int w, /* in */ - const int jcol, /* in */ - SuperMatrix *A, /* in - original matrix */ - int *perm_r, /* in */ - int *nseg, /* out */ - float *dense, /* out */ - int *panel_lsub, /* out */ - int *segrep, /* out */ - int *repfnz, /* out */ - int *xprune, /* out */ - int *marker, /* out */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* +#include "slu_sdefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -68,8 +52,29 @@
  *   repfnz: SuperA-col --> PA-row
  *   parent: SuperA-col --> SuperA-col
  *   xplore: SuperA-col --> index to L-structure
- *
+ * 
*/ + +void +spanel_dfs ( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + float *dense, /* out */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *xprune, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + NCPformat *Astore; float *a; int *asub; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_coletree.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_coletree.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_coletree.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_coletree.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,9 +1,30 @@ +/*! @file sp_coletree.c + * \brief Tree layout and computation routines + * + *
+ * -- SuperLU routine (version 3.1) --
+ * Univ. of California Berkeley, Xerox Palo Alto Research Center,
+ * and Lawrence Berkeley National Lab.
+ * August 1, 2008
+ *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
+*/ /* Elimination tree computation and layout routines */ #include #include -#include "dsp_defs.h" +#include "slu_ddefs.h" /* * Implementation of disjoint set union routines. @@ -24,7 +45,6 @@ * Implemented path-halving by XSL 07/05/95. */ -static int *pp; /* parent array for sets */ static int *mxCallocInt(int n) @@ -42,17 +62,19 @@ static void initialize_disjoint_sets ( - int n - ) + int n, + int **pp + ) { - pp = mxCallocInt(n); + (*pp) = mxCallocInt(n); } static int make_set ( - int i - ) + int i, + int *pp + ) { pp[i] = i; return i; @@ -61,9 +83,10 @@ static int link ( - int s, - int t - ) + int s, + int t, + int *pp + ) { pp[s] = t; return t; @@ -72,7 +95,10 @@ /* PATH HALVING */ static -int find (int i) +int find ( + int i, + int *pp + ) { register int p, gp; @@ -102,8 +128,8 @@ static void finalize_disjoint_sets ( - void - ) + int *pp + ) { SUPERLU_FREE(pp); } @@ -143,9 +169,10 @@ int row, col; int rroot; int p; + int *pp; root = mxCallocInt (nc); - initialize_disjoint_sets (nc); + initialize_disjoint_sets (nc, &pp); /* Compute firstcol[row] = first nonzero column in row */ @@ -163,17 +190,17 @@ centered at its first vertex, which has the same fill. */ for (col = 0; col < nc; col++) { - cset = make_set (col); + cset = make_set (col, pp); root[cset] = col; parent[col] = nc; /* Matlab */ for (p = acolst[col]; p < acolend[col]; p++) { row = firstcol[arow[p]]; if (row >= col) continue; - rset = find (row); + rset = find (row, pp); rroot = root[rset]; if (rroot != col) { parent[rroot] = col; - cset = link (cset, rset); + cset = link (cset, rset, pp); root[cset] = col; } } @@ -181,7 +208,7 @@ SUPERLU_FREE (root); SUPERLU_FREE (firstcol); - finalize_disjoint_sets (); + finalize_disjoint_sets (pp); return 0; } @@ -209,35 +236,88 @@ * Based on code written by John Gilbert at CMI in 1987. */ -static int *first_kid, *next_kid; /* Linked list of children. */ -static int *post, postnum; - static /* * Depth-first search from vertex v. */ void etdfs ( - int v - ) + int v, + int first_kid[], + int next_kid[], + int post[], + int *postnum + ) { int w; for (w = first_kid[v]; w != -1; w = next_kid[w]) { - etdfs (w); + etdfs (w, first_kid, next_kid, post, postnum); } /* post[postnum++] = v; in Matlab */ - post[v] = postnum++; /* Modified by X.Li on 2/14/95 */ + post[v] = (*postnum)++; /* Modified by X. Li on 08/10/07 */ } +static +/* + * Depth-first search from vertex n. No recursion. + * This routine was contributed by Cédric Doucet, CEDRAT Group, Meylan, France. + */ +void nr_etdfs (int n, int *parent, + int *first_kid, int *next_kid, + int *post, int postnum) +{ + int current = n, first, next; + + while (postnum != n){ + + /* no kid for the current node */ + first = first_kid[current]; + + /* no first kid for the current node */ + if (first == -1){ + + /* numbering this node because it has no kid */ + post[current] = postnum++; + + /* looking for the next kid */ + next = next_kid[current]; + + while (next == -1){ + + /* no more kids : back to the parent node */ + current = parent[current]; + + /* numbering the parent node */ + post[current] = postnum++; + + /* get the next kid */ + next = next_kid[current]; + } + + /* stopping criterion */ + if (postnum==n+1) return; + + /* updating current node */ + current = next; + } + /* updating current node */ + else { + current = first; + } + } +} + /* * Post order a tree */ int *TreePostorder( - int n, - int *parent -) + int n, + int *parent + ) { + int *first_kid, *next_kid; /* Linked list of children. 
*/ + int *post, postnum; int v, dad; /* Allocate storage for working arrays and results */ @@ -255,7 +335,13 @@ /* Depth-first search from dummy root vertex #n */ postnum = 0; - etdfs (n); +#if 0 + /* recursion */ + etdfs (n, first_kid, next_kid, post, &postnum); +#else + /* no recursion */ + nr_etdfs(n, parent, first_kid, next_kid, post, postnum); +#endif SUPERLU_FREE (first_kid); SUPERLU_FREE (next_kid); @@ -306,27 +392,28 @@ int row, col; int rroot; int p; + int *pp; root = mxCallocInt (n); - initialize_disjoint_sets (n); + initialize_disjoint_sets (n, &pp); for (col = 0; col < n; col++) { - cset = make_set (col); + cset = make_set (col, pp); root[cset] = col; parent[col] = n; /* Matlab */ for (p = acolst[col]; p < acolend[col]; p++) { row = arow[p]; if (row >= col) continue; - rset = find (row); + rset = find (row, pp); rroot = root[rset]; if (rroot != col) { parent[rroot] = col; - cset = link (cset, rset); + cset = link (cset, rset, pp); root[cset] = col; } } } SUPERLU_FREE (root); - finalize_disjoint_sets (); + finalize_disjoint_sets (pp); return 0; } /* SP_SYMETREE */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_ienv.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_ienv.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_ienv.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_ienv.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,11 +1,16 @@ +/*! @file sp_ienv.c + * \brief Chooses machine-dependent parameters for the local environment +*/ + /* * File name: sp_ienv.c * History: Modified from lapack routine ILAENV */ -int -sp_ienv(int ispec) -{ -/* +#include "slu_Cnames.h" + +/*! \brief + +
     Purpose   
     =======   
 
@@ -40,7 +45,11 @@
             < 0:  if SP_IENV = -k, the k-th argument had an illegal value. 
   
     ===================================================================== 
+
*/ +int +sp_ienv(int ispec) +{ int i; switch (ispec) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotgrowth.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotgrowth.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotgrowth.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotgrowth.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,21 +1,20 @@ - -/* +/*! @file spivotgrowth.c + * \brief Computes the reciprocal pivot growth factor + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ #include -#include "ssp_defs.h" -#include "util.h" +#include "slu_sdefs.h" -float -sPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, - SuperMatrix *L, SuperMatrix *U) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -43,8 +42,14 @@
  *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
  *          storage scheme, i.e., U has types: Stype = NC;
  *          Dtype = SLU_S; Mtype = TRU.
- *
+ * 
*/ + +float +sPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, + SuperMatrix *L, SuperMatrix *U) +{ + NCformat *Astore; SCformat *Lstore; NCformat *Ustore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,44 +1,36 @@ -/* +/*! @file spivotL.c + * \brief Performs numerical pivoting + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include #include -#include "ssp_defs.h" +#include "slu_sdefs.h" #undef DEBUG -int -spivotL( - const int jcol, /* in */ - const float u, /* in - diagonal pivoting threshold */ - int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ - int *perm_r, /* may be modified */ - int *iperm_r, /* in - inverse of perm_r */ - int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ - int *pivrow, /* out */ - GlobalLU_t *Glu, /* modified - global LU data structures */ - SuperLUStat_t *stat /* output */ - ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *   Performs the numerical pivoting on the current column of L,
@@ -57,8 +49,23 @@
  *
  *   Return value: 0      success;
  *                 i > 0  U(i,i) is exactly zero.
- *
+ * 
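For readers unfamiliar with the diagonal pivoting threshold u taken by this routine, the simplified sketch below shows the usual threshold rule: keep the diagonal entry whenever its magnitude is within a factor u of the largest magnitude in the column, and otherwise fall back to the largest entry. It only illustrates the role of u; it is not the spivotL() implementation, and the helper name is invented.

#include <math.h>

/* Simplified threshold-pivot selection for one column of length n.
 * diag is the position of the diagonal entry (or -1 if absent); returns the
 * chosen pivot position, or -1 when the column is exactly zero. */
static int choose_pivot(int n, const float *col, int diag, double u)
{
    double pivmax = 0.0;
    int    argmax = -1;

    for (int i = 0; i < n; ++i) {
        if (fabs(col[i]) > pivmax) {
            pivmax = fabs(col[i]);
            argmax = i;
        }
    }

    if (pivmax == 0.0)
        return -1;                                   /* exactly singular column          */
    if (diag >= 0 && fabs(col[diag]) >= u * pivmax)
        return diag;                                 /* diagonal passes the threshold    */
    return argmax;                                   /* otherwise plain partial pivoting */
}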
*/ + +int +spivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int *iperm_r, /* in - inverse of perm_r */ + int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ + int *pivrow, /* out */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + int fsupc; /* first column in the supernode */ int nsupc; /* no of columns in the supernode */ int nsupr; /* no of rows in the supernode */ @@ -100,7 +107,11 @@ Also search for user-specified pivot, and diagonal element. */ if ( *usepr ) *pivrow = iperm_r[jcol]; diagind = iperm_c[jcol]; +#ifdef SCIPY_SPECIFIC_FIX + pivmax = -1.0; +#else pivmax = 0.0; +#endif pivptr = nsupc; diag = EMPTY; old_pivptr = nsupc; @@ -115,9 +126,20 @@ } /* Test for singularity */ +#ifdef SCIPY_SPECIFIC_FIX + if (pivmax < 0.0) { + perm_r[diagind] = jcol; + *usepr = 0; + return (jcol+1); + } +#endif if ( pivmax == 0.0 ) { +#if 1 *pivrow = lsub_ptr[pivptr]; perm_r[*pivrow] = jcol; +#else + perm_r[diagind] = jcol; +#endif *usepr = 0; return (jcol+1); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_preorder.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_preorder.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_preorder.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sp_preorder.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,10 +1,12 @@ -#include "dsp_defs.h" +/*! @file sp_preorder.c + * \brief Permute and performs functions on columns of orginal matrix + */ +#include "slu_ddefs.h" -void -sp_preorder(superlu_options_t *options, SuperMatrix *A, int *perm_c, - int *etree, SuperMatrix *AC) -{ -/* + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -54,9 +56,12 @@
  *         The resulting matrix after applied the column permutation
  *         perm_c[] to matrix A. The type of AC can be:
  *         Stype = SLU_NCP; Dtype = A->Dtype; Mtype = SLU_GE.
- *
+ * 
*/ - +void +sp_preorder(superlu_options_t *options, SuperMatrix *A, int *perm_c, + int *etree, SuperMatrix *AC) +{ NCformat *Astore; NCPformat *ACstore; int *iwork, *post; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spruneL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spruneL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spruneL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/spruneL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,38 @@ - -/* +/*! @file spruneL.c + * \brief Prunes the L-structure + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "ssp_defs.h" -#include "util.h" + +#include "slu_sdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *   Prunes the L-structure of supernodes whose L-structure
+ *   contains the current pivot row "pivrow"
+ * 
+ */ void spruneL( @@ -35,13 +46,7 @@ GlobalLU_t *Glu /* modified - global LU data structures */ ) { -/* - * Purpose - * ======= - * Prunes the L-structure of supernodes whose L-structure - * contains the current pivot row "pivrow" - * - */ + float utemp; int jsupno, irep, irep1, kmin, kmax, krow, movnum; int i, ktemp, minloc, maxloc; @@ -108,8 +113,8 @@ kmax--; else if ( perm_r[lsub[kmin]] != EMPTY ) kmin++; - else { /* kmin below pivrow, and kmax above pivrow: - * interchange the two subscripts + else { /* kmin below pivrow (not yet pivoted), and kmax + * above pivrow: interchange the two subscripts */ ktemp = lsub[kmin]; lsub[kmin] = lsub[kmax]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadhb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadhb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadhb.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadhb.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,18 +1,85 @@ - -/* +/*! @file sreadhb.c + * \brief Read a matrix stored in Harwell-Boeing format + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Purpose
+ * =======
+ * 
+ * Read a FLOAT PRECISION matrix stored in Harwell-Boeing format 
+ * as described below.
+ * 
+ * Line 1 (A72,A8) 
+ *  	Col. 1 - 72   Title (TITLE) 
+ *	Col. 73 - 80  Key (KEY) 
+ * 
+ * Line 2 (5I14) 
+ * 	Col. 1 - 14   Total number of lines excluding header (TOTCRD) 
+ * 	Col. 15 - 28  Number of lines for pointers (PTRCRD) 
+ * 	Col. 29 - 42  Number of lines for row (or variable) indices (INDCRD) 
+ * 	Col. 43 - 56  Number of lines for numerical values (VALCRD) 
+ *	Col. 57 - 70  Number of lines for right-hand sides (RHSCRD) 
+ *                    (including starting guesses and solution vectors 
+ *		       if present) 
+ *           	      (zero indicates no right-hand side data is present) 
+ *
+ * Line 3 (A3, 11X, 4I14) 
+ *   	Col. 1 - 3    Matrix type (see below) (MXTYPE) 
+ * 	Col. 15 - 28  Number of rows (or variables) (NROW) 
+ * 	Col. 29 - 42  Number of columns (or elements) (NCOL) 
+ *	Col. 43 - 56  Number of row (or variable) indices (NNZERO) 
+ *	              (equal to number of entries for assembled matrices) 
+ * 	Col. 57 - 70  Number of elemental matrix entries (NELTVL) 
+ *	              (zero in the case of assembled matrices) 
+ * Line 4 (2A16, 2A20) 
+ * 	Col. 1 - 16   Format for pointers (PTRFMT) 
+ *	Col. 17 - 32  Format for row (or variable) indices (INDFMT) 
+ *	Col. 33 - 52  Format for numerical values of coefficient matrix (VALFMT) 
+ * 	Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) 
+ *
+ * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present 
+ *    	Col. 1 	      Right-hand side type: 
+ *	         	  F for full storage or M for same format as matrix 
+ *    	Col. 2        G if a starting vector(s) (Guess) is supplied. (RHSTYP) 
+ *    	Col. 3        X if an exact solution vector(s) is supplied. 
+ *	Col. 15 - 28  Number of right-hand sides (NRHS) 
+ *	Col. 29 - 42  Number of row indices (NRHSIX) 
+ *          	      (ignored in case of unassembled matrices) 
+ *
+ * The three character type field on line 3 describes the matrix type. 
+ * The following table lists the permitted values for each of the three 
+ * characters. As an example of the type field, RSA denotes that the matrix 
+ * is real, symmetric, and assembled. 
+ *
+ * First Character: 
+ *	R Real matrix 
+ *	C Complex matrix 
+ *	P Pattern only (no numerical values supplied) 
+ *
+ * Second Character: 
+ *	S Symmetric 
+ *	U Unsymmetric 
+ *	H Hermitian 
+ *	Z Skew symmetric 
+ *	R Rectangular 
+ *
+ * Third Character: 
+ *	A Assembled 
+ *	E Elemental matrices (unassembled) 
+ *
+ * 
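A minimal caller-side sketch may make the header layout above concrete. It assumes the stock single-precision SuperLU API declared in slu_sdefs.h; sreadhb() is assumed to take the matrix file on standard input, and SLU_GE and Destroy_CompCol_Matrix() come from the standard SuperLU headers rather than from this patch. It is an illustration only, not code from the package:

    /* Sketch only: read a Harwell-Boeing matrix and wrap it as an SLU_NC
     * SuperMatrix.  Assumes the single-precision SuperLU API (slu_sdefs.h);
     * sreadhb() reads the file from standard input in the stock sources. */
    #include <stdio.h>
    #include "slu_sdefs.h"

    int main(void)
    {
        SuperMatrix A;
        float *a;          /* nonzero values, packed by column          */
        int   *asub, *xa;  /* row indices / column pointers, zero-based */
        int   m, n, nnz;

        sreadhb(&m, &n, &nnz, &a, &asub, &xa);
        sCreate_CompCol_Matrix(&A, m, n, nnz, a, asub, xa,
                               SLU_NC, SLU_S, SLU_GE);
        printf("read %d x %d matrix with %d nonzeros\n", m, n, nnz);
        Destroy_CompCol_Matrix(&A);  /* frees a, asub, xa and the Store */
        return 0;
    }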
*/ #include #include -#include "ssp_defs.h" +#include "slu_sdefs.h" -/* Eat up the rest of the current line */ +/*! \brief Eat up the rest of the current line */ int sDumpLine(FILE *fp) { register int c; @@ -60,7 +127,7 @@ return 0; } -int sReadVector(FILE *fp, int n, int *where, int perline, int persize) +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) { register int i, j, item; char tmp, buf[100]; @@ -108,72 +175,6 @@ sreadhb(int *nrow, int *ncol, int *nonz, float **nzval, int **rowind, int **colptr) { -/* - * Purpose - * ======= - * - * Read a FLOAT PRECISION matrix stored in Harwell-Boeing format - * as described below. - * - * Line 1 (A72,A8) - * Col. 1 - 72 Title (TITLE) - * Col. 73 - 80 Key (KEY) - * - * Line 2 (5I14) - * Col. 1 - 14 Total number of lines excluding header (TOTCRD) - * Col. 15 - 28 Number of lines for pointers (PTRCRD) - * Col. 29 - 42 Number of lines for row (or variable) indices (INDCRD) - * Col. 43 - 56 Number of lines for numerical values (VALCRD) - * Col. 57 - 70 Number of lines for right-hand sides (RHSCRD) - * (including starting guesses and solution vectors - * if present) - * (zero indicates no right-hand side data is present) - * - * Line 3 (A3, 11X, 4I14) - * Col. 1 - 3 Matrix type (see below) (MXTYPE) - * Col. 15 - 28 Number of rows (or variables) (NROW) - * Col. 29 - 42 Number of columns (or elements) (NCOL) - * Col. 43 - 56 Number of row (or variable) indices (NNZERO) - * (equal to number of entries for assembled matrices) - * Col. 57 - 70 Number of elemental matrix entries (NELTVL) - * (zero in the case of assembled matrices) - * Line 4 (2A16, 2A20) - * Col. 1 - 16 Format for pointers (PTRFMT) - * Col. 17 - 32 Format for row (or variable) indices (INDFMT) - * Col. 33 - 52 Format for numerical values of coefficient matrix (VALFMT) - * Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) - * - * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present - * Col. 1 Right-hand side type: - * F for full storage or M for same format as matrix - * Col. 2 G if a starting vector(s) (Guess) is supplied. (RHSTYP) - * Col. 3 X if an exact solution vector(s) is supplied. - * Col. 15 - 28 Number of right-hand sides (NRHS) - * Col. 29 - 42 Number of row indices (NRHSIX) - * (ignored in case of unassembled matrices) - * - * The three character type field on line 3 describes the matrix type. - * The following table lists the permitted values for each of the three - * characters. As an example of the type field, RSA denotes that the matrix - * is real, symmetric, and assembled. 
- * - * First Character: - * R Real matrix - * C Complex matrix - * P Pattern only (no numerical values supplied) - * - * Second Character: - * S Symmetric - * U Unsymmetric - * H Hermitian - * Z Skew symmetric - * R Rectangular - * - * Third Character: - * A Assembled - * E Elemental matrices (unassembled) - * - */ register int i, numer_lines = 0, rhscrd = 0; int tmp, colnum, colsize, rownum, rowsize, valnum, valsize; @@ -244,8 +245,8 @@ printf("valnum %d, valsize %d\n", valnum, valsize); #endif - sReadVector(fp, *ncol+1, *colptr, colnum, colsize); - sReadVector(fp, *nonz, *rowind, rownum, rowsize); + ReadVector(fp, *ncol+1, *colptr, colnum, colsize); + ReadVector(fp, *nonz, *rowind, rownum, rowsize); if ( numer_lines ) { sReadValues(fp, *nonz, *nzval, valnum, valsize); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadrb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadrb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadrb.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sreadrb.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,237 @@ + +/*! @file sreadrb.c + * \brief Read a matrix stored in Rutherford-Boeing format + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ * + * Purpose + * ======= + * + * Read a FLOAT PRECISION matrix stored in Rutherford-Boeing format + * as described below. + * + * Line 1 (A72, A8) + * Col. 1 - 72 Title (TITLE) + * Col. 73 - 80 Matrix name / identifier (MTRXID) + * + * Line 2 (I14, 3(1X, I13)) + * Col. 1 - 14 Total number of lines excluding header (TOTCRD) + * Col. 16 - 28 Number of lines for pointers (PTRCRD) + * Col. 30 - 42 Number of lines for row (or variable) indices (INDCRD) + * Col. 44 - 56 Number of lines for numerical values (VALCRD) + * + * Line 3 (A3, 11X, 4(1X, I13)) + * Col. 1 - 3 Matrix type (see below) (MXTYPE) + * Col. 15 - 28 Compressed Column: Number of rows (NROW) + * Elemental: Largest integer used to index variable (MVAR) + * Col. 30 - 42 Compressed Column: Number of columns (NCOL) + * Elemental: Number of element matrices (NELT) + * Col. 44 - 56 Compressed Column: Number of entries (NNZERO) + * Elemental: Number of variable indeces (NVARIX) + * Col. 58 - 70 Compressed Column: Unused, explicitly zero + * Elemental: Number of elemental matrix entries (NELTVL) + * + * Line 4 (2A16, A20) + * Col. 1 - 16 Fortran format for pointers (PTRFMT) + * Col. 17 - 32 Fortran format for row (or variable) indices (INDFMT) + * Col. 33 - 52 Fortran format for numerical values of coefficient matrix + * (VALFMT) + * (blank in the case of matrix patterns) + * + * The three character type field on line 3 describes the matrix type. + * The following table lists the permitted values for each of the three + * characters. As an example of the type field, RSA denotes that the matrix + * is real, symmetric, and assembled. + * + * First Character: + * R Real matrix + * C Complex matrix + * I integer matrix + * P Pattern only (no numerical values supplied) + * Q Pattern only (numerical values supplied in associated auxiliary value + * file) + * + * Second Character: + * S Symmetric + * U Unsymmetric + * H Hermitian + * Z Skew symmetric + * R Rectangular + * + * Third Character: + * A Compressed column form + * E Elemental form + * + *
+ */ + +#include "slu_sdefs.h" + + +/*! \brief Eat up the rest of the current line */ +static int sDumpLine(FILE *fp) +{ + register int c; + while ((c = fgetc(fp)) != '\n') ; + return 0; +} + +static int sParseIntFormat(char *buf, int *num, int *size) +{ + char *tmp; + + tmp = buf; + while (*tmp++ != '(') ; + sscanf(tmp, "%d", num); + while (*tmp != 'I' && *tmp != 'i') ++tmp; + ++tmp; + sscanf(tmp, "%d", size); + return 0; +} + +static int sParseFloatFormat(char *buf, int *num, int *size) +{ + char *tmp, *period; + + tmp = buf; + while (*tmp++ != '(') ; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + while (*tmp != 'E' && *tmp != 'e' && *tmp != 'D' && *tmp != 'd' + && *tmp != 'F' && *tmp != 'f') { + /* May find kP before nE/nD/nF, like (1P6F13.6). In this case the + num picked up refers to P, which should be skipped. */ + if (*tmp=='p' || *tmp=='P') { + ++tmp; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + } else { + ++tmp; + } + } + ++tmp; + period = tmp; + while (*period != '.' && *period != ')') ++period ; + *period = '\0'; + *size = atoi(tmp); /*sscanf(tmp, "%2d", size);*/ + + return 0; +} + +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) +{ + register int i, j, item; + char tmp, buf[100]; + + i = 0; + while (i < n) { + fgets(buf, 100, fp); /* read a line at a time */ + for (j=0; j + * -- SuperLU routine (version 4.0) -- + * Lawrence Berkeley National Laboratory. + * June 30, 2009 + *
+ */ + +#include "slu_sdefs.h" + + +void +sreadtriple(int *m, int *n, int *nonz, + float **nzval, int **rowind, int **colptr) +{ +/* + * Output parameters + * ================= + * (a,asub,xa): asub[*] contains the row subscripts of nonzeros + * in columns of matrix A; a[*] the numerical values; + * row i of A is given by a[k],k=xa[i],...,xa[i+1]-1. + * + */ + int j, k, jsize, nnz, nz; + float *a, *val; + int *asub, *xa, *row, *col; + int zero_base = 0; + + /* Matrix format: + * First line: #rows, #cols, #non-zero + * Triplet in the rest of lines: + * row, col, value + */ + + scanf("%d%d", n, nonz); + *m = *n; + printf("m %d, n %d, nonz %d\n", *m, *n, *nonz); + sallocateA(*n, *nonz, nzval, rowind, colptr); /* Allocate storage */ + a = *nzval; + asub = *rowind; + xa = *colptr; + + val = (float *) SUPERLU_MALLOC(*nonz * sizeof(float)); + row = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + col = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + + for (j = 0; j < *n; ++j) xa[j] = 0; + + /* Read into the triplet array from a file */ + for (nnz = 0, nz = 0; nnz < *nonz; ++nnz) { + scanf("%d%d%f\n", &row[nz], &col[nz], &val[nz]); + + if ( nnz == 0 ) { /* first nonzero */ + if ( row[0] == 0 || col[0] == 0 ) { + zero_base = 1; + printf("triplet file: row/col indices are zero-based.\n"); + } else + printf("triplet file: row/col indices are one-based.\n"); + } + + if ( !zero_base ) { + /* Change to 0-based indexing. */ + --row[nz]; + --col[nz]; + } + + if (row[nz] < 0 || row[nz] >= *m || col[nz] < 0 || col[nz] >= *n + /*|| val[nz] == 0.*/) { + fprintf(stderr, "nz %d, (%d, %d) = %e out of bound, removed\n", + nz, row[nz], col[nz], val[nz]); + exit(-1); + } else { + ++xa[col[nz]]; + ++nz; + } + } + + *nonz = nz; + + /* Initialize the array of column pointers */ + k = 0; + jsize = xa[0]; + xa[0] = 0; + for (j = 1; j < *n; ++j) { + k += jsize; + jsize = xa[j]; + xa[j] = k; + } + + /* Copy the triplets into the column oriented storage */ + for (nz = 0; nz < *nonz; ++nz) { + j = col[nz]; + k = xa[j]; + asub[k] = row[nz]; + a[k] = val[nz]; + ++xa[j]; + } + + /* Reset the column pointers to the beginning of each column */ + for (j = *n; j > 0; --j) + xa[j] = xa[j-1]; + xa[0] = 0; + + SUPERLU_FREE(val); + SUPERLU_FREE(row); + SUPERLU_FREE(col); + +#ifdef CHK_INPUT + { + int i; + for (i = 0; i < *n; i++) { + printf("Col %d, xa %d\n", i, xa[i]); + for (k = xa[i]; k < xa[i+1]; k++) + printf("%d\t%16.10f\n", asub[k], a[k]); + } + } +#endif + +} + + +void sreadrhs(int m, float *b) +{ + FILE *fp, *fopen(); + int i; + /*int j;*/ + + if ( !(fp = fopen("b.dat", "r")) ) { + fprintf(stderr, "dreadrhs: file does not exist\n"); + exit(-1); + } + for (i = 0; i < m; ++i) + fscanf(fp, "%f\n", &b[i]); + + /* readpair_(j, &b[i]);*/ + fclose(fp); +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,29 +1,31 @@ -/* +/*! @file ssnode_bmod.c + * \brief Performs numeric block updates within the relaxed snode. + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "ssp_defs.h" + +#include "slu_sdefs.h" -/* - * Performs numeric block updates within the relaxed snode. +/*! \brief Performs numeric block updates within the relaxed snode. */ int ssnode_bmod ( diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,45 @@ - -/* +/*! @file ssnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "ssp_defs.h" -#include "util.h" + +#include "slu_sdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *    ssnode_dfs() - Determine the union of the row structures of those 
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore, 
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when run out of memory.
+ * 
+ */ int ssnode_dfs ( @@ -35,19 +53,7 @@ GlobalLU_t *Glu /* modified */ ) { -/* Purpose - * ======= - * ssnode_dfs() - Determine the union of the row structures of those - * columns within the relaxed snode. - * Note: The relaxed snodes are leaves of the supernodal etree, therefore, - * the portion outside the rectangular supernode must be zero. - * - * Return value - * ============ - * 0 success; - * >0 number of bytes allocated when run out of memory. - * - */ + register int i, k, ifrom, ito, nextl, new_next; int nsuper, krow, kmark, mem_error; int *xsup, *supno; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas2.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas2.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas2.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas2.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,17 +1,20 @@ -/* +/*! @file ssp_blas2.c + * \brief Sparse BLAS 2, using some dense BLAS 2 operations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ /* * File name: ssp_blas2.c * Purpose: Sparse BLAS 2, using some dense BLAS 2 operations. */ -#include "ssp_defs.h" +#include "slu_sdefs.h" /* * Function prototypes @@ -20,12 +23,9 @@ void slsolve(int, int, float*, float*); void smatvec(int, int, int, float*, float*, float*); - -int -sp_strsv(char *uplo, char *trans, char *diag, SuperMatrix *L, - SuperMatrix *U, float *x, SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief Solves one of the systems of equations A*x = b, or A'*x = b + * + *
  *   Purpose
  *   =======
  *
@@ -49,7 +49,7 @@
  *             On entry, trans specifies the equations to be solved as   
  *             follows:   
  *                trans = 'N' or 'n'   A*x = b.   
- *                trans = 'T' or 't'   A'*x = b.   
+ *                trans = 'T' or 't'   A'*x = b.
  *                trans = 'C' or 'c'   A'*x = b.   
  *
  *   diag   - (input) char*
@@ -75,8 +75,12 @@
  *
  *   info    - (output) int*
  *             If *info = -i, the i-th argument had an illegal value.
- *
+ * 
*/ +int +sp_strsv(char *uplo, char *trans, char *diag, SuperMatrix *L, + SuperMatrix *U, float *x, SuperLUStat_t *stat, int *info) +{ #ifdef _CRAY _fcd ftcs1 = _cptofcd("L", strlen("L")), ftcs2 = _cptofcd("N", strlen("N")), @@ -96,7 +100,8 @@ /* Test the input parameters */ *info = 0; if ( !lsame_(uplo,"L") && !lsame_(uplo, "U") ) *info = -1; - else if ( !lsame_(trans, "N") && !lsame_(trans, "T") ) *info = -2; + else if ( !lsame_(trans, "N") && !lsame_(trans, "T") && + !lsame_(trans, "C")) *info = -2; else if ( !lsame_(diag, "U") && !lsame_(diag, "N") ) *info = -3; else if ( L->nrow != L->ncol || L->nrow < 0 ) *info = -4; else if ( U->nrow != U->ncol || U->nrow < 0 ) *info = -5; @@ -298,68 +303,71 @@ +/*! \brief Performs one of the matrix-vector operations y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   sp_sgemv()  performs one of the matrix-vector operations   
+ *      y := alpha*A*x + beta*y,   or   y := alpha*A'*x + beta*y,   
+ *   where alpha and beta are scalars, x and y are vectors and A is a
+ *   sparse A->nrow by A->ncol matrix.   
+ *
+ *   Parameters   
+ *   ==========   
+ *
+ *   TRANS  - (input) char*
+ *            On entry, TRANS specifies the operation to be performed as   
+ *            follows:   
+ *               TRANS = 'N' or 'n'   y := alpha*A*x + beta*y.   
+ *               TRANS = 'T' or 't'   y := alpha*A'*x + beta*y.   
+ *               TRANS = 'C' or 'c'   y := alpha*A'*x + beta*y.   
+ *
+ *   ALPHA  - (input) float
+ *            On entry, ALPHA specifies the scalar alpha.   
+ *
+ *   A      - (input) SuperMatrix*
+ *            Matrix A with a sparse format, of dimension (A->nrow, A->ncol).
+ *            Currently, the type of A can be:
+ *                Stype = NC or NCP; Dtype = SLU_S; Mtype = GE. 
+ *            In the future, more general A can be handled.
+ *
+ *   X      - (input) float*, array of DIMENSION at least   
+ *            ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n'   
+ *            and at least   
+ *            ( 1 + ( m - 1 )*abs( INCX ) ) otherwise.   
+ *            Before entry, the incremented array X must contain the   
+ *            vector x.   
+ *
+ *   INCX   - (input) int
+ *            On entry, INCX specifies the increment for the elements of   
+ *            X. INCX must not be zero.   
+ *
+ *   BETA   - (input) float
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then Y need not be set on input.   
+ *
+ *   Y      - (output) float*,  array of DIMENSION at least   
+ *            ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n'   
+ *            and at least   
+ *            ( 1 + ( n - 1 )*abs( INCY ) ) otherwise.   
+ *            Before entry with BETA non-zero, the incremented array Y   
+ *            must contain the vector y. On exit, Y is overwritten by the 
+ *            updated vector y.
+ *	     
+ *   INCY   - (input) int
+ *            On entry, INCY specifies the increment for the elements of   
+ *            Y. INCY must not be zero.   
+ *
+ *   ==== Sparse Level 2 Blas routine.   
+ * 
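For reference, here is a minimal sketch of the calling convention: a 3x3 matrix in compressed-column (SLU_NC, SLU_S) storage multiplied into a vector. SLU_GE and Destroy_SuperMatrix_Store() are assumed from the stock SuperLU headers; the program would be linked against the SuperLU library and a BLAS. A sketch, not code from the package:

    /* Sketch only: y := A*x for a 3x3 matrix held column-wise (SLU_NC). */
    #include <stdio.h>
    #include "slu_sdefs.h"

    int main(void)
    {
        /* A = [1 0 2; 0 3 0; 4 0 5] */
        float aval[]   = {1.0, 4.0, 3.0, 2.0, 5.0};
        int   rowind[] = {0, 2, 1, 0, 2};
        int   colptr[] = {0, 2, 3, 5};
        float x[] = {1.0, 1.0, 1.0};
        float y[] = {0.0, 0.0, 0.0};
        SuperMatrix A;
        int i;

        sCreate_CompCol_Matrix(&A, 3, 3, 5, aval, rowind, colptr,
                               SLU_NC, SLU_S, SLU_GE);
        sp_sgemv("N", 1.0, &A, x, 1, 0.0, y, 1);      /* y = 1*A*x + 0*y */
        for (i = 0; i < 3; ++i) printf("%g ", y[i]);  /* expected: 3 3 9 */
        printf("\n");
        Destroy_SuperMatrix_Store(&A);  /* frees only the Store wrapper  */
        return 0;
    }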
+ */ int sp_sgemv(char *trans, float alpha, SuperMatrix *A, float *x, int incx, float beta, float *y, int incy) { -/* Purpose - ======= - - sp_sgemv() performs one of the matrix-vector operations - y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, - where alpha and beta are scalars, x and y are vectors and A is a - sparse A->nrow by A->ncol matrix. - - Parameters - ========== - - TRANS - (input) char* - On entry, TRANS specifies the operation to be performed as - follows: - TRANS = 'N' or 'n' y := alpha*A*x + beta*y. - TRANS = 'T' or 't' y := alpha*A'*x + beta*y. - TRANS = 'C' or 'c' y := alpha*A'*x + beta*y. - - ALPHA - (input) float - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Matrix A with a sparse format, of dimension (A->nrow, A->ncol). - Currently, the type of A can be: - Stype = NC or NCP; Dtype = SLU_S; Mtype = GE. - In the future, more general A can be handled. - - X - (input) float*, array of DIMENSION at least - ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( m - 1 )*abs( INCX ) ) otherwise. - Before entry, the incremented array X must contain the - vector x. - - INCX - (input) int - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - - BETA - (input) float - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - - Y - (output) float*, array of DIMENSION at least - ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( n - 1 )*abs( INCY ) ) otherwise. - Before entry with BETA non-zero, the incremented array Y - must contain the vector y. On exit, Y is overwritten by the - updated vector y. - - INCY - (input) int - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - - ==== Sparse Level 2 Blas routine. -*/ - /* Local variables */ NCformat *Astore; float *Aval; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas3.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas3.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas3.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_blas3.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,116 +1,122 @@ - -/* +/*! @file ssp_blas3.c + * \brief Sparse BLAS3, using some dense BLAS3 operations + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ /* * File name: sp_blas3.c * Purpose: Sparse BLAS3, using some dense BLAS3 operations. */ -#include "ssp_defs.h" -#include "util.h" +#include "slu_sdefs.h" + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ * 
+ *   sp_sgemm() performs one of the matrix-matrix operations   
+ * 
+ *      C := alpha*op( A )*op( B ) + beta*C,   
+ * 
+ *   where  op( X ) is one of 
+ * 
+ *      op( X ) = X   or   op( X ) = X'   or   op( X ) = conjg( X' ),
+ * 
+ *   alpha and beta are scalars, and A, B and C are matrices, with op( A ) 
+ *   an m by k matrix,  op( B )  a  k by n matrix and  C an m by n matrix. 
+ *   
+ * 
+ *   Parameters   
+ *   ==========   
+ * 
+ *   TRANSA - (input) char*
+ *            On entry, TRANSA specifies the form of op( A ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSA = 'N' or 'n',  op( A ) = A.   
+ *               TRANSA = 'T' or 't',  op( A ) = A'.   
+ *               TRANSA = 'C' or 'c',  op( A ) = conjg( A' ).   
+ *            Unchanged on exit.   
+ * 
+ *   TRANSB - (input) char*
+ *            On entry, TRANSB specifies the form of op( B ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSB = 'N' or 'n',  op( B ) = B.   
+ *               TRANSB = 'T' or 't',  op( B ) = B'.   
+ *               TRANSB = 'C' or 'c',  op( B ) = conjg( B' ).   
+ *            Unchanged on exit.   
+ * 
+ *   M      - (input) int   
+ *            On entry,  M  specifies  the number of rows of the matrix 
+ *	     op( A ) and of the matrix C.  M must be at least zero. 
+ *	     Unchanged on exit.   
+ * 
+ *   N      - (input) int
+ *            On entry,  N specifies the number of columns of the matrix 
+ *	     op( B ) and the number of columns of the matrix C. N must be 
+ *	     at least zero.
+ *	     Unchanged on exit.   
+ * 
+ *   K      - (input) int
+ *            On entry, K specifies the number of columns of the matrix 
+ *	     op( A ) and the number of rows of the matrix op( B ). K must 
+ *	     be at least  zero.   
+ *           Unchanged on exit.
+ *      
+ *   ALPHA  - (input) float
+ *            On entry, ALPHA specifies the scalar alpha.   
+ * 
+ *   A      - (input) SuperMatrix*
+ *            Matrix A with a sparse format, of dimension (A->nrow, A->ncol).
+ *            Currently, the type of A can be:
+ *                Stype = NC or NCP; Dtype = SLU_S; Mtype = GE. 
+ *            In the future, more general A can be handled.
+ * 
+ *   B      - FLOAT PRECISION array of DIMENSION ( LDB, kb ), where kb is 
+ *            n when TRANSB = 'N' or 'n',  and is  k otherwise.   
+ *            Before entry with  TRANSB = 'N' or 'n',  the leading k by n 
+ *            part of the array B must contain the matrix B, otherwise 
+ *            the leading n by k part of the array B must contain the 
+ *            matrix B.   
+ *            Unchanged on exit.   
+ * 
+ *   LDB    - (input) int
+ *            On entry, LDB specifies the first dimension of B as declared 
+ *            in the calling (sub) program. LDB must be at least max( 1, n ).  
+ *            Unchanged on exit.   
+ * 
+ *   BETA   - (input) float
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then C need not be set on input.   
+ *  
+ *   C      - FLOAT PRECISION array of DIMENSION ( LDC, n ).   
+ *            Before entry, the leading m by n part of the array C must 
+ *            contain the matrix C,  except when beta is zero, in which 
+ *            case C need not be set on entry.   
+ *            On exit, the array C is overwritten by the m by n matrix 
+ *	     ( alpha*op( A )*B + beta*C ).   
+ *  
+ *   LDC    - (input) int
+ *            On entry, LDC specifies the first dimension of C as declared 
+ *            in the calling (sub)program. LDC must be at least max(1,m).   
+ *            Unchanged on exit.   
+ *  
+ *   ==== Sparse Level 3 Blas routine.   
+ * 
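Analogously to the Level 2 sketch earlier, a hypothetical call with the same 3x3 sparse A and a 3x2 dense B could read as follows (again assuming SLU_GE and Destroy_SuperMatrix_Store() from the stock SuperLU headers; a sketch only):

    /* Sketch only: C := A*B with a 3x2 dense B stored column-major
     * (ldb = ldc = 3). */
    #include <stdio.h>
    #include "slu_sdefs.h"

    int main(void)
    {
        float aval[]   = {1.0, 4.0, 3.0, 2.0, 5.0};
        int   rowind[] = {0, 2, 1, 0, 2};
        int   colptr[] = {0, 2, 3, 5};
        float b[] = {1.0, 1.0, 1.0,  1.0, 2.0, 3.0};  /* B = [1 1; 1 2; 1 3] */
        float c[6] = {0.0};
        SuperMatrix A;
        int i, j;

        sCreate_CompCol_Matrix(&A, 3, 3, 5, aval, rowind, colptr,
                               SLU_NC, SLU_S, SLU_GE);
        sp_sgemm("N", "N", 3, 2, 3, 1.0, &A, b, 3, 0.0, c, 3);
        for (i = 0; i < 3; ++i) {            /* expected rows: 3 7 / 3 6 / 9 19 */
            for (j = 0; j < 2; ++j) printf("%g ", c[i + j*3]);
            printf("\n");
        }
        Destroy_SuperMatrix_Store(&A);
        return 0;
    }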
+ */ int sp_sgemm(char *transa, char *transb, int m, int n, int k, float alpha, SuperMatrix *A, float *b, int ldb, float beta, float *c, int ldc) { -/* Purpose - ======= - - sp_s performs one of the matrix-matrix operations - - C := alpha*op( A )*op( B ) + beta*C, - - where op( X ) is one of - - op( X ) = X or op( X ) = X' or op( X ) = conjg( X' ), - - alpha and beta are scalars, and A, B and C are matrices, with op( A ) - an m by k matrix, op( B ) a k by n matrix and C an m by n matrix. - - - Parameters - ========== - - TRANSA - (input) char* - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - TRANSA = 'N' or 'n', op( A ) = A. - TRANSA = 'T' or 't', op( A ) = A'. - TRANSA = 'C' or 'c', op( A ) = conjg( A' ). - Unchanged on exit. - - TRANSB - (input) char* - On entry, TRANSB specifies the form of op( B ) to be used in - the matrix multiplication as follows: - TRANSB = 'N' or 'n', op( B ) = B. - TRANSB = 'T' or 't', op( B ) = B'. - TRANSB = 'C' or 'c', op( B ) = conjg( B' ). - Unchanged on exit. - - M - (input) int - On entry, M specifies the number of rows of the matrix - op( A ) and of the matrix C. M must be at least zero. - Unchanged on exit. - - N - (input) int - On entry, N specifies the number of columns of the matrix - op( B ) and the number of columns of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - (input) int - On entry, K specifies the number of columns of the matrix - op( A ) and the number of rows of the matrix op( B ). K must - be at least zero. - Unchanged on exit. - - ALPHA - (input) float - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Matrix A with a sparse format, of dimension (A->nrow, A->ncol). - Currently, the type of A can be: - Stype = NC or NCP; Dtype = SLU_S; Mtype = GE. - In the future, more general A can be handled. - - B - FLOAT PRECISION array of DIMENSION ( LDB, kb ), where kb is - n when TRANSB = 'N' or 'n', and is k otherwise. - Before entry with TRANSB = 'N' or 'n', the leading k by n - part of the array B must contain the matrix B, otherwise - the leading n by k part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - (input) int - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least max( 1, n ). - Unchanged on exit. - - BETA - (input) float - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then C need not be set on input. - - C - FLOAT PRECISION array of DIMENSION ( LDC, n ). - Before entry, the leading m by n part of the array C must - contain the matrix C, except when beta is zero, in which - case C need not be set on entry. - On exit, the array C is overwritten by the m by n matrix - ( alpha*op( A )*B + beta*C ). - - LDC - (input) int - On entry, LDC specifies the first dimension of C as declared - in the calling (sub)program. LDC must be at least max(1,m). - Unchanged on exit. - - ==== Sparse Level 3 Blas routine. -*/ int incx = 1, incy = 1; int j; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_defs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_defs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_defs.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/ssp_defs.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,234 +0,0 @@ - -/* - * -- SuperLU routine (version 3.0) -- - * Univ. 
of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 - * - */ -#ifndef __SUPERLU_sSP_DEFS /* allow multiple inclusions */ -#define __SUPERLU_sSP_DEFS - -/* - * File name: ssp_defs.h - * Purpose: Sparse matrix types and function prototypes - * History: - */ - -#ifdef _CRAY -#include -#include -#endif - -/* Define my integer type int_t */ -typedef int int_t; /* default */ - -#include "Cnames.h" -#include "supermatrix.h" -#include "util.h" - - -/* - * Global data structures used in LU factorization - - * - * nsuper: #supernodes = nsuper + 1, numbered [0, nsuper]. - * (xsup,supno): supno[i] is the supernode no to which i belongs; - * xsup(s) points to the beginning of the s-th supernode. - * e.g. supno 0 1 2 2 3 3 3 4 4 4 4 4 (n=12) - * xsup 0 1 2 4 7 12 - * Note: dfs will be performed on supernode rep. relative to the new - * row pivoting ordering - * - * (xlsub,lsub): lsub[*] contains the compressed subscript of - * rectangular supernodes; xlsub[j] points to the starting - * location of the j-th column in lsub[*]. Note that xlsub - * is indexed by column. - * Storage: original row subscripts - * - * During the course of sparse LU factorization, we also use - * (xlsub,lsub) for the purpose of symmetric pruning. For each - * supernode {s,s+1,...,t=s+r} with first column s and last - * column t, the subscript set - * lsub[j], j=xlsub[s], .., xlsub[s+1]-1 - * is the structure of column s (i.e. structure of this supernode). - * It is used for the storage of numerical values. - * Furthermore, - * lsub[j], j=xlsub[t], .., xlsub[t+1]-1 - * is the structure of the last column t of this supernode. - * It is for the purpose of symmetric pruning. Therefore, the - * structural subscripts can be rearranged without making physical - * interchanges among the numerical values. - * - * However, if the supernode has only one column, then we - * only keep one set of subscripts. For any subscript interchange - * performed, similar interchange must be done on the numerical - * values. - * - * The last column structures (for pruning) will be removed - * after the numercial LU factorization phase. - * - * (xlusup,lusup): lusup[*] contains the numerical values of the - * rectangular supernodes; xlusup[j] points to the starting - * location of the j-th column in storage vector lusup[*] - * Note: xlusup is indexed by column. - * Each rectangular supernode is stored by column-major - * scheme, consistent with Fortran 2-dim array storage. - * - * (xusub,ucol,usub): ucol[*] stores the numerical values of - * U-columns outside the rectangular supernodes. The row - * subscript of nonzero ucol[k] is stored in usub[k]. - * xusub[i] points to the starting location of column i in ucol. - * Storage: new row subscripts; that is subscripts of PA. 
- */ -typedef struct { - int *xsup; /* supernode and column mapping */ - int *supno; - int *lsub; /* compressed L subscripts */ - int *xlsub; - float *lusup; /* L supernodes */ - int *xlusup; - float *ucol; /* U columns */ - int *usub; - int *xusub; - int nzlmax; /* current max size of lsub */ - int nzumax; /* " " " ucol */ - int nzlumax; /* " " " lusup */ - int n; /* number of columns in the matrix */ - LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ -} GlobalLU_t; - -typedef struct { - float for_lu; - float total_needed; - int expansions; -} mem_usage_t; - -#ifdef __cplusplus -extern "C" { -#endif - -/* Driver routines */ -extern void -sgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); -extern void -sgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, - char *, float *, float *, SuperMatrix *, SuperMatrix *, - void *, int, SuperMatrix *, SuperMatrix *, - float *, float *, float *, float *, - mem_usage_t *, SuperLUStat_t *, int *); - -/* Supernodal LU factor related */ -extern void -sCreate_CompCol_Matrix(SuperMatrix *, int, int, int, float *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -sCreate_CompRow_Matrix(SuperMatrix *, int, int, int, float *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -sCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); -extern void -sCreate_Dense_Matrix(SuperMatrix *, int, int, float *, int, - Stype_t, Dtype_t, Mtype_t); -extern void -sCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, float *, - int *, int *, int *, int *, int *, - Stype_t, Dtype_t, Mtype_t); -extern void -sCopy_Dense_Matrix(int, int, float *, int, float *, int); - -extern void countnz (const int, int *, int *, int *, GlobalLU_t *); -extern void fixupL (const int, const int *, GlobalLU_t *); - -extern void sallocateA (int, int, float **, int **, int **); -extern void sgstrf (superlu_options_t*, SuperMatrix*, float, - int, int, int*, void *, int, int *, int *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); -extern int ssnode_dfs (const int, const int, const int *, const int *, - const int *, int *, int *, GlobalLU_t *); -extern int ssnode_bmod (const int, const int, const int, float *, - float *, GlobalLU_t *, SuperLUStat_t*); -extern void spanel_dfs (const int, const int, const int, SuperMatrix *, - int *, int *, float *, int *, int *, int *, - int *, int *, int *, int *, GlobalLU_t *); -extern void spanel_bmod (const int, const int, const int, const int, - float *, float *, int *, int *, - GlobalLU_t *, SuperLUStat_t*); -extern int scolumn_dfs (const int, const int, int *, int *, int *, int *, - int *, int *, int *, int *, int *, GlobalLU_t *); -extern int scolumn_bmod (const int, const int, float *, - float *, int *, int *, int, - GlobalLU_t *, SuperLUStat_t*); -extern int scopy_to_ucol (int, int, int *, int *, int *, - float *, GlobalLU_t *); -extern int spivotL (const int, const float, int *, int *, - int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); -extern void spruneL (const int, const int *, const int, const int, - const int *, const int *, int *, GlobalLU_t *); -extern void sreadmt (int *, int *, int *, float **, int **, int **); -extern void sGenXtrue (int, int, float *, int); -extern void sFillRHS (trans_t, int, float *, int, SuperMatrix *, - SuperMatrix *); -extern void sgstrs (trans_t, SuperMatrix *, SuperMatrix *, int *, int *, - SuperMatrix *, SuperLUStat_t*, int *); - - -/* Driver related */ - -extern void sgsequ (SuperMatrix *, float *, 
float *, float *, - float *, float *, int *); -extern void slaqgs (SuperMatrix *, float *, float *, float, - float, float, char *); -extern void sgscon (char *, SuperMatrix *, SuperMatrix *, - float, float *, SuperLUStat_t*, int *); -extern float sPivotGrowth(int, SuperMatrix *, int *, - SuperMatrix *, SuperMatrix *); -extern void sgsrfs (trans_t, SuperMatrix *, SuperMatrix *, - SuperMatrix *, int *, int *, char *, float *, - float *, SuperMatrix *, SuperMatrix *, - float *, float *, SuperLUStat_t*, int *); - -extern int sp_strsv (char *, char *, char *, SuperMatrix *, - SuperMatrix *, float *, SuperLUStat_t*, int *); -extern int sp_sgemv (char *, float, SuperMatrix *, float *, - int, float, float *, int); - -extern int sp_sgemm (char *, char *, int, int, int, float, - SuperMatrix *, float *, int, float, - float *, int); - -/* Memory-related */ -extern int sLUMemInit (fact_t, void *, int, int, int, int, int, - SuperMatrix *, SuperMatrix *, - GlobalLU_t *, int **, float **); -extern void sSetRWork (int, int, float *, float **, float **); -extern void sLUWorkFree (int *, float *, GlobalLU_t *); -extern int sLUMemXpand (int, int, MemType, int *, GlobalLU_t *); - -extern float *floatMalloc(int); -extern float *floatCalloc(int); -extern int smemory_usage(const int, const int, const int, const int); -extern int sQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); - -/* Auxiliary routines */ -extern void sreadhb(int *, int *, int *, float **, int **, int **); -extern void sCompRow_to_CompCol(int, int, int, float*, int*, int*, - float **, int **, int **); -extern void sfill (float *, int, float); -extern void sinf_norm_error (int, SuperMatrix *, float *); -extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, - float, float, float *, float *, char *); - -/* Routines for debugging */ -extern void sPrint_CompCol_Matrix(char *, SuperMatrix *); -extern void sPrint_SuperNode_Matrix(char *, SuperMatrix *); -extern void sPrint_Dense_Matrix(char *, SuperMatrix *); -extern void print_lu_col(char *, int, int, int *, GlobalLU_t *); -extern void check_tempv(int, float *); - -#ifdef __cplusplus - } -#endif - -#endif /* __SUPERLU_sSP_DEFS */ - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/superlu_timer.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/superlu_timer.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/superlu_timer.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/superlu_timer.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,11 +1,15 @@ -/* +/*! @file superlu_timer.c + * \brief Returns the time used + * + *
  * Purpose
  * ======= 
- *	Returns the time in seconds used by the process.
+ * 
+ * Returns the time in seconds used by the process.
  *
  * Note: the timer function call is machine dependent. Use conditional
  *       compilation to choose the appropriate function.
- *
+ * 
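The usual idiom is simply to bracket the region of interest with two calls and take the difference in seconds; a minimal sketch, assuming the declaration of SuperLU_timer_() is picked up through slu_sdefs.h:

    /* Sketch only: the common timing idiom around a region of interest. */
    #include <stdio.h>
    #include "slu_sdefs.h"

    int main(void)
    {
        double t = SuperLU_timer_();
        /* ... work to be timed, e.g. a call to sgstrf() ... */
        t = SuperLU_timer_() - t;
        printf("elapsed: %.3f seconds\n", t);
        return 0;
    }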
*/ @@ -15,11 +19,23 @@ * nanoseconds. */ #include - + double SuperLU_timer_() { return ( (double)gethrtime() / 1e9 ); } +#elif _WIN32 + +#include + +double SuperLU_timer_() +{ + clock_t t; + t=clock(); + + return ((double)t)/CLOCKS_PER_SEC; +} + #else #ifndef NO_TIMER @@ -32,13 +48,14 @@ #ifndef CLK_TCK #define CLK_TCK 60 #endif - +/*! \brief Timer function + */ double SuperLU_timer_() { #ifdef NO_TIMER - /* no sys/times.h on WIN32 */ - double tmp; - tmp = 0.0; + /* no sys/times.h on WIN32 */ + double tmp; + tmp = 0.0; #else struct tms use; double tmp; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/supermatrix.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/supermatrix.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/supermatrix.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/supermatrix.h 2010-07-26 15:48:34.000000000 +0100 @@ -1,18 +1,24 @@ +/*! @file supermatrix.h + * \brief Defines matrix types + */ #ifndef __SUPERLU_SUPERMATRIX /* allow multiple inclusions */ #define __SUPERLU_SUPERMATRIX + /******************************************** * The matrix types are defined as follows. * ********************************************/ typedef enum { SLU_NC, /* column-wise, no supernode */ - SLU_NR, /* row-wize, no supernode */ - SLU_SC, /* column-wise, supernode */ - SLU_SR, /* row-wise, supernode */ SLU_NCP, /* column-wise, column-permuted, no supernode (The consecutive columns of nonzeros, after permutation, may not be stored contiguously.) */ - SLU_DN /* Fortran style column-wise storage for dense matrix */ + SLU_NR, /* row-wize, no supernode */ + SLU_SC, /* column-wise, supernode */ + SLU_SCP, /* supernode, column-wise, permuted */ + SLU_SR, /* row-wise, supernode */ + SLU_DN, /* Fortran style column-wise storage for dense matrix */ + SLU_NR_loc /* distributed compressed row format */ } Stype_t; typedef enum { @@ -49,10 +55,10 @@ * The storage schemes are defined as follows. * ***********************************************/ -/* Stype == NC (Also known as Harwell-Boeing sparse matrix format) */ +/* Stype == SLU_NC (Also known as Harwell-Boeing sparse matrix format) */ typedef struct { int_t nnz; /* number of nonzeros in the matrix */ - void *nzval; /* pointer to array of nonzero values, packed by column */ + void *nzval; /* pointer to array of nonzero values, packed by column */ int_t *rowind; /* pointer to array of row indices of the nonzeros */ int_t *colptr; /* pointer to array of beginning of columns in nzval[] and rowind[] */ @@ -62,21 +68,20 @@ beyond the last column, so that colptr[ncol] = nnz. */ } NCformat; -/* Stype == NR (Also known as row compressed storage (RCS). */ +/* Stype == SLU_NR */ typedef struct { - int_t nnz; /* number of nonzeros in the matrix */ - void *nzval; /* pointer to array of nonzero values, packed by row */ - int_t *colind; /* pointer to array of column indices of the nonzeros */ - int_t *rowptr; /* pointer to array of beginning of rows in nzval[] - and colind[] */ - /* Note: - Zero-based indexing is used; - nzval[] and colind[] are of the same length, nnz; - rowptr[] has nrow+1 entries, the last one pointing - beyond the last column, so that rowptr[nrow] = nnz. 
*/ + int_t nnz; /* number of nonzeros in the matrix */ + void *nzval; /* pointer to array of nonzero values, packed by raw */ + int_t *colind; /* pointer to array of columns indices of the nonzeros */ + int_t *rowptr; /* pointer to array of beginning of rows in nzval[] + and colind[] */ + /* Note: + Zero-based indexing is used; + rowptr[] has nrow+1 entries, the last one pointing + beyond the last row, so that rowptr[nrow] = nnz. */ } NRformat; -/* Stype == SC */ +/* Stype == SLU_SC */ typedef struct { int_t nnz; /* number of nonzeros in the matrix */ int_t nsuper; /* number of supernodes, minus 1 */ @@ -85,9 +90,9 @@ int_t *rowind; /* pointer to array of compressed row indices of rectangular supernodes */ int_t *rowind_colptr;/* pointer to array of beginning of columns in rowind[] */ - int_t *col_to_sup; /* col_to_sup[j] is the supernode number to which column + int_t *col_to_sup; /* col_to_sup[j] is the supernode number to which column j belongs; mapping from column to supernode number. */ - int_t *sup_to_col; /* sup_to_col[s] points to the start of the s-th + int_t *sup_to_col; /* sup_to_col[s] points to the start of the s-th supernode; mapping from supernode number to column. e.g.: col_to_sup: 0 1 2 2 3 3 3 4 4 4 4 4 4 (ncol=12) sup_to_col: 0 1 2 4 7 12 (nsuper=4) */ @@ -101,7 +106,39 @@ entries are defined. */ } SCformat; -/* Stype == NCP */ +/* Stype == SLU_SCP */ +typedef struct { + int_t nnz; /* number of nonzeros in the matrix */ + int_t nsuper; /* number of supernodes */ + void *nzval; /* pointer to array of nonzero values, packed by column */ + int_t *nzval_colbeg;/* nzval_colbeg[j] points to beginning of column j + in nzval[] */ + int_t *nzval_colend;/* nzval_colend[j] points to one past the last element + of column j in nzval[] */ + int_t *rowind; /* pointer to array of compressed row indices of + rectangular supernodes */ + int_t *rowind_colbeg;/* rowind_colbeg[j] points to beginning of column j + in rowind[] */ + int_t *rowind_colend;/* rowind_colend[j] points to one past the last element + of column j in rowind[] */ + int_t *col_to_sup; /* col_to_sup[j] is the supernode number to which column + j belongs; mapping from column to supernode. */ + int_t *sup_to_colbeg; /* sup_to_colbeg[s] points to the start of the s-th + supernode; mapping from supernode to column.*/ + int_t *sup_to_colend; /* sup_to_colend[s] points to one past the end of the + s-th supernode; mapping from supernode number to + column. + e.g.: col_to_sup: 0 1 2 2 3 3 3 4 4 4 4 4 4 (ncol=12) + sup_to_colbeg: 0 1 2 4 7 (nsuper=4) + sup_to_colend: 1 2 4 7 12 */ + /* Note: + Zero-based indexing is used; + nzval_colptr[], rowind_colptr[], col_to_sup and + sup_to_col[] have ncol+1 entries, the last one + pointing beyond the last column. */ +} SCPformat; + +/* Stype == SLU_NCP */ typedef struct { int_t nnz; /* number of nonzeros in the matrix */ void *nzval; /* pointer to array of nonzero values, packed by column */ @@ -118,23 +155,26 @@ postmultiplied by a column permutation matrix. */ } NCPformat; -/* Stype == DN */ +/* Stype == SLU_DN */ typedef struct { int_t lda; /* leading dimension */ void *nzval; /* array of size lda*ncol to represent a dense matrix */ } DNformat; - - -/********************************************************* - * Macros used for easy access of sparse matrix entries. 
* - *********************************************************/ -#define L_SUB_START(col) ( Lstore->rowind_colptr[col] ) -#define L_SUB(ptr) ( Lstore->rowind[ptr] ) -#define L_NZ_START(col) ( Lstore->nzval_colptr[col] ) -#define L_FST_SUPC(superno) ( Lstore->sup_to_col[superno] ) -#define U_NZ_START(col) ( Ustore->colptr[col] ) -#define U_SUB(ptr) ( Ustore->rowind[ptr] ) +/* Stype == SLU_NR_loc (Distributed Compressed Row Format) */ +typedef struct { + int_t nnz_loc; /* number of nonzeros in the local submatrix */ + int_t m_loc; /* number of rows local to this processor */ + int_t fst_row; /* global index of the first row */ + void *nzval; /* pointer to array of nonzero values, packed by row */ + int_t *rowptr; /* pointer to array of beginning of rows in nzval[] + and colind[] */ + int_t *colind; /* pointer to array of column indices of the nonzeros */ + /* Note: + Zero-based indexing is used; + rowptr[] has n_loc + 1 entries, the last one pointing + beyond the last row, so that rowptr[n_loc] = nnz_loc.*/ +} NRformat_loc; #endif /* __SUPERLU_SUPERMATRIX */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sutil.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sutil.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sutil.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/sutil.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,26 +1,29 @@ -/* - * -- SuperLU routine (version 3.0) -- +/*! @file sutil.c + * \brief Matrix utility functions + * + *
+ * -- SuperLU routine (version 3.1) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
- * October 15, 2003
+ * August 1, 2008
+ *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include -#include "ssp_defs.h" +#include "slu_sdefs.h" void sCreate_CompCol_Matrix(SuperMatrix *A, int m, int n, int nnz, @@ -64,7 +67,7 @@ Astore->rowptr = rowptr; } -/* Copy matrix A into matrix B. */ +/*! \brief Copy matrix A into matrix B. */ void sCopy_CompCol_Matrix(SuperMatrix *A, SuperMatrix *B) { @@ -108,12 +111,7 @@ sCopy_Dense_Matrix(int M, int N, float *X, int ldx, float *Y, int ldy) { -/* - * - * Purpose - * ======= - * - * Copies a two-dimensional matrix X to another matrix Y. +/*! \brief Copies a two-dimensional matrix X to another matrix Y. */ int i, j; @@ -150,8 +148,7 @@ } -/* - * Convert a row compressed storage into a column compressed storage. +/*! \brief Convert a row compressed storage into a column compressed storage. */ void sCompRow_to_CompCol(int m, int n, int nnz, @@ -266,23 +263,24 @@ void sPrint_Dense_Matrix(char *what, SuperMatrix *A) { - DNformat *Astore; - register int i; + DNformat *Astore = (DNformat *) A->Store; + register int i, j, lda = Astore->lda; float *dp; printf("\nDense matrix %s:\n", what); printf("Stype %d, Dtype %d, Mtype %d\n", A->Stype,A->Dtype,A->Mtype); - Astore = (DNformat *) A->Store; dp = (float *) Astore->nzval; - printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,Astore->lda); + printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,lda); printf("\nnzval: "); - for (i = 0; i < A->nrow; ++i) printf("%f ", dp[i]); + for (j = 0; j < A->ncol; ++j) { + for (i = 0; i < A->nrow; ++i) printf("%f ", dp[i + j*lda]); + printf("\n"); + } printf("\n"); fflush(stdout); } -/* - * Diagnostic print of column "jcol" in the U/L factor. +/*! \brief Diagnostic print of column "jcol" in the U/L factor. */ void sprint_lu_col(char *msg, int jcol, int pivrow, int *xprune, GlobalLU_t *Glu) @@ -324,9 +322,7 @@ } -/* - * Check whether tempv[] == 0. This should be true before and after - * calling any numeric routines, i.e., "panel_bmod" and "column_bmod". +/*! \brief Check whether tempv[] == 0. This should be true before and after calling any numeric routines, i.e., "panel_bmod" and "column_bmod". */ void scheck_tempv(int n, float *tempv) { @@ -352,8 +348,7 @@ } } -/* - * Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's +/*! \brief Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's */ void sFillRHS(trans_t trans, int nrhs, float *x, int ldx, @@ -382,8 +377,7 @@ } -/* - * Fills a float precision array with a given value. +/*! \brief Fills a float precision array with a given value. */ void sfill(float *a, int alen, float dval) @@ -394,8 +388,7 @@ -/* - * Check the inf-norm of the error vector +/*! \brief Check the inf-norm of the error vector */ void sinf_norm_error(int nrhs, SuperMatrix *X, float *xtrue) { @@ -421,7 +414,7 @@ -/* Print performance of the code. */ +/*! \brief Print performance of the code. 
*/ void sPrintPerf(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage, float rpg, float rcond, float *ferr, @@ -449,9 +442,9 @@ printf("\tNo of nonzeros in factor U = %d\n", Ustore->nnz); printf("\tNo of nonzeros in L+U = %d\n", Lstore->nnz + Ustore->nnz); - printf("L\\U MB %.3f\ttotal MB needed %.3f\texpansions %d\n", - mem_usage->for_lu/1e6, mem_usage->total_needed/1e6, - mem_usage->expansions); + printf("L\\U MB %.3f\ttotal MB needed %.3f\n", + mem_usage->for_lu/1e6, mem_usage->total_needed/1e6); + printf("Number of memory expansions: %d\n", stat->expansions); printf("\tFactor\tMflops\tSolve\tMflops\tEtree\tEquil\tRcond\tRefine\n"); printf("PERF:%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f\n", diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.c 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,39 +1,39 @@ -/* +/*! @file util.c + * \brief Utility functions + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include -#include "dsp_defs.h" -#include "util.h" +#include "slu_ddefs.h" -/* - * Global statistics variale +/*! \brief Global statistics variale */ void superlu_abort_and_exit(char* msg) { - fprintf(stderr, msg); + fprintf(stderr, "%s\n", msg); exit (-1); } -/* - * Set the default values for the options argument. +/*! \brief Set the default values for the options argument. */ void set_default_options(superlu_options_t *options) { @@ -49,7 +49,57 @@ options->PrintStat = YES; } -/* Deallocate the structure pointing to the actual storage of the matrix. */ +/*! \brief Set the default values for the options argument for ILU. + */ +void ilu_set_default_options(superlu_options_t *options) +{ + set_default_options(options); + + /* further options for incomplete factorization */ + options->DiagPivotThresh = 0.1; + options->RowPerm = LargeDiag; + options->DiagPivotThresh = 0.1; + options->ILU_FillFactor = 10.0; + options->ILU_DropTol = 1e-4; + options->ILU_DropRule = DROP_BASIC | DROP_AREA; + options->ILU_Norm = INF_NORM; + options->ILU_MILU = SMILU_2; /* SILU */ + options->ILU_FillTol = 1e-2; +} + +/*! \brief Print the options setting. + */ +void print_options(superlu_options_t *options) +{ + printf(".. options:\n"); + printf("\tFact\t %8d\n", options->Fact); + printf("\tEquil\t %8d\n", options->Equil); + printf("\tColPerm\t %8d\n", options->ColPerm); + printf("\tDiagPivotThresh %8.4f\n", options->DiagPivotThresh); + printf("\tTrans\t %8d\n", options->Trans); + printf("\tIterRefine\t%4d\n", options->IterRefine); + printf("\tSymmetricMode\t%4d\n", options->SymmetricMode); + printf("\tPivotGrowth\t%4d\n", options->PivotGrowth); + printf("\tConditionNumber\t%4d\n", options->ConditionNumber); + printf("..\n"); +} + +/*! \brief Print the options setting. + */ +void print_ilu_options(superlu_options_t *options) +{ + printf(".. ILU options:\n"); + printf("\tDiagPivotThresh\t%6.2e\n", options->DiagPivotThresh); + printf("\ttau\t%6.2e\n", options->ILU_DropTol); + printf("\tgamma\t%6.2f\n", options->ILU_FillFactor); + printf("\tDropRule\t%0x\n", options->ILU_DropRule); + printf("\tMILU\t%d\n", options->ILU_MILU); + printf("\tMILU_ALPHA\t%6.2e\n", MILU_ALPHA); + printf("\tDiagFillTol\t%6.2e\n", options->ILU_FillTol); + printf("..\n"); +} + +/*! \brief Deallocate the structure pointing to the actual storage of the matrix. */ void Destroy_SuperMatrix_Store(SuperMatrix *A) { @@ -86,7 +136,7 @@ SUPERLU_FREE ( A->Store ); } -/* A is of type Stype==NCP */ +/*! \brief A is of type Stype==NCP */ void Destroy_CompCol_Permuted(SuperMatrix *A) { @@ -95,7 +145,7 @@ SUPERLU_FREE ( A->Store ); } -/* A is of type Stype==DN */ +/*! \brief A is of type Stype==DN */ void Destroy_Dense_Matrix(SuperMatrix *A) { @@ -104,8 +154,7 @@ SUPERLU_FREE ( A->Store ); } -/* - * Reset repfnz[] for the current column +/*! 
\brief Reset repfnz[] for the current column */ void resetrep_col (const int nseg, const int *segrep, int *repfnz) @@ -119,9 +168,7 @@ } -/* - * Count the total number of nonzeros in factors L and U, and in the - * symmetrically reduced L. +/*! \brief Count the total number of nonzeros in factors L and U, and in the symmetrically reduced L. */ void countnz(const int n, int *xprune, int *nnzL, int *nnzU, GlobalLU_t *Glu) @@ -158,12 +205,41 @@ /* printf("\tNo of nonzeros in symm-reduced L = %d\n", nnzL0);*/ } +/*! \brief Count the total number of nonzeros in factors L and U. + */ +void +ilu_countnz(const int n, int *nnzL, int *nnzU, GlobalLU_t *Glu) +{ + int nsuper, fsupc, i, j; + int jlen, irep; + int *xsup, *xlsub; + xsup = Glu->xsup; + xlsub = Glu->xlsub; + *nnzL = 0; + *nnzU = (Glu->xusub)[n]; + nsuper = (Glu->supno)[n]; -/* - * Fix up the data storage lsub for L-subscripts. It removes the subscript - * sets for structural pruning, and applies permuation to the remaining - * subscripts. + if ( n <= 0 ) return; + + /* + * For each supernode + */ + for (i = 0; i <= nsuper; i++) { + fsupc = xsup[i]; + jlen = xlsub[fsupc+1] - xlsub[fsupc]; + + for (j = fsupc; j < xsup[i+1]; j++) { + *nnzL += jlen; + *nnzU += j - fsupc + 1; + jlen--; + } + irep = xsup[i+1] - 1; + } +} + + +/*! \brief Fix up the data storage lsub for L-subscripts. It removes the subscript sets for structural pruning, and applies permuation to the remaining subscripts. */ void fixupL(const int n, const int *perm_r, GlobalLU_t *Glu) @@ -199,8 +275,7 @@ } -/* - * Diagnostic print of segment info after panel_dfs(). +/*! \brief Diagnostic print of segment info after panel_dfs(). */ void print_panel_seg(int n, int w, int jcol, int nseg, int *segrep, int *repfnz) @@ -234,6 +309,9 @@ stat->utime[i] = 0.; stat->ops[i] = 0.; } + stat->TinyPivots = 0; + stat->RefineSteps = 0; + stat->expansions = 0; } @@ -255,6 +333,8 @@ printf("Solve flops = %e\tMflops = %8.2f\n", ops[SOLVE], ops[SOLVE]*1e-6/utime[SOLVE]); + printf("Number of memory expansions: %d\n", stat->expansions); + } @@ -283,8 +363,7 @@ -/* - * Fills an integer array with a given value. +/*! \brief Fills an integer array with a given value. */ void ifill(int *a, int alen, int ival) { @@ -294,8 +373,7 @@ -/* - * Get the statistics of the supernodes +/*! \brief Get the statistics of the supernodes */ #define NBUCKS 10 static int max_sup_size; @@ -350,8 +428,7 @@ -/* - * Check whether repfnz[] == EMPTY after reset. +/*! \brief Check whether repfnz[] == EMPTY after reset. */ void check_repfnz(int n, int w, int jcol, int *repfnz) { @@ -367,7 +444,7 @@ } -/* Print a summary of the testing results. */ +/*! \brief Print a summary of the testing results. 
*/ void PrintSumm(char *type, int nfail, int nrun, int nerrs) { @@ -389,3 +466,19 @@ for (i = 0; i < n; ++i) printf("%d\t%d\n", i, vec[i]); return 0; } + +int slu_PrintInt10(char *name, int len, int *x) +{ + register i; + + printf("%10s:", name); + for (i = 0; i < len; ++i) + { + if ( i % 10 == 0 ) printf("\n\t[%2d-%2d]", i, i + 9); + printf("%6d", x[i]); + } + printf("\n"); + return 0; +} + + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/util.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,272 +0,0 @@ -#ifndef __SUPERLU_UTIL /* allow multiple inclusions */ -#define __SUPERLU_UTIL - -#include -#include -#include -#ifndef __STDC__ -#include -#endif -#include - -/*********************************************************************** - * Macros - ***********************************************************************/ -#define FIRSTCOL_OF_SNODE(i) (xsup[i]) -/* No of marker arrays used in the symbolic factorization, - each of size n */ -#define NO_MARKER 3 -#define NUM_TEMPV(m,w,t,b) ( SUPERLU_MAX(m, (t + b)*w) ) - -#ifndef USER_ABORT -#define USER_ABORT(msg) superlu_python_module_abort(msg) -#endif - -#define ABORT(err_msg) \ - { char msg[256];\ - sprintf(msg,"%s at line %d in file %s\n",err_msg,__LINE__, __FILE__);\ - USER_ABORT(msg); } - - -#ifndef USER_MALLOC -#if 1 -#define USER_MALLOC(size) superlu_python_module_malloc(size) -#else -/* The following may check out some uninitialized data */ -#define USER_MALLOC(size) memset (superlu_malloc(size), '\x0F', size) -#endif -#endif - -#define SUPERLU_MALLOC(size) USER_MALLOC(size) - -#ifndef USER_FREE -#define USER_FREE(addr) superlu_python_module_free(addr) -#endif - -#define SUPERLU_FREE(addr) USER_FREE(addr) - -#define CHECK_MALLOC(where) { \ - extern int superlu_malloc_total; \ - printf("%s: malloc_total %d Bytes\n", \ - where, superlu_malloc_total); \ -} - -#define SUPERLU_MAX(x, y) ( (x) > (y) ? (x) : (y) ) -#define SUPERLU_MIN(x, y) ( (x) < (y) ? (x) : (y) ) - -/*********************************************************************** - * Constants - ***********************************************************************/ -#define EMPTY (-1) -/*#define NO (-1)*/ -#define FALSE 0 -#define TRUE 1 - -/*********************************************************************** - * Enumerate types - ***********************************************************************/ -typedef enum {NO, YES} yes_no_t; -typedef enum {DOFACT, SamePattern, SamePattern_SameRowPerm, FACTORED} fact_t; -typedef enum {NOROWPERM, LargeDiag, MY_PERMR} rowperm_t; -typedef enum {NATURAL, MMD_ATA, MMD_AT_PLUS_A, COLAMD, MY_PERMC}colperm_t; -typedef enum {NOTRANS, TRANS, CONJ} trans_t; -typedef enum {NOEQUIL, ROW, COL, BOTH} DiagScale_t; -typedef enum {NOREFINE, SINGLE=1, DOUBLE, EXTRA} IterRefine_t; -typedef enum {LUSUP, UCOL, LSUB, USUB} MemType; -typedef enum {HEAD, TAIL} stack_end_t; -typedef enum {SYSTEM, USER} LU_space_t; - -/* - * The following enumerate type is used by the statistics variable - * to keep track of flop count and time spent at various stages. - * - * Note that not all of the fields are disjoint. 
- */ -typedef enum { - COLPERM, /* find a column ordering that minimizes fills */ - RELAX, /* find artificial supernodes */ - ETREE, /* compute column etree */ - EQUIL, /* equilibrate the original matrix */ - FACT, /* perform LU factorization */ - RCOND, /* estimate reciprocal condition number */ - SOLVE, /* forward and back solves */ - REFINE, /* perform iterative refinement */ - FLOAT, /* time spent in floating-point operations */ - TRSV, /* fraction of FACT spent in xTRSV */ - GEMV, /* fraction of FACT spent in xGEMV */ - FERR, /* estimate error bounds after iterative refinement */ - NPHASES /* total number of phases */ -} PhaseType; - - -/*********************************************************************** - * Type definitions - ***********************************************************************/ -typedef float flops_t; -typedef unsigned char Logical; - -/* - *-- This contains the options used to control the solve process. - * - * Fact (fact_t) - * Specifies whether or not the factored form of the matrix - * A is supplied on entry, and if not, how the matrix A should - * be factorizaed. - * = DOFACT: The matrix A will be factorized from scratch, and the - * factors will be stored in L and U. - * = SamePattern: The matrix A will be factorized assuming - * that a factorization of a matrix with the same sparsity - * pattern was performed prior to this one. Therefore, this - * factorization will reuse column permutation vector - * ScalePermstruct->perm_c and the column elimination tree - * LUstruct->etree. - * = SamePattern_SameRowPerm: The matrix A will be factorized - * assuming that a factorization of a matrix with the same - * sparsity pattern and similar numerical values was performed - * prior to this one. Therefore, this factorization will reuse - * both row and column scaling factors R and C, and the - * both row and column permutation vectors perm_r and perm_c, - * distributed data structure set up from the previous symbolic - * factorization. - * = FACTORED: On entry, L, U, perm_r and perm_c contain the - * factored form of A. If DiagScale is not NOEQUIL, the matrix - * A has been equilibrated with scaling factors R and C. - * - * Equil (yes_no_t) - * Specifies whether to equilibrate the system (scale A's row and - * columns to have unit norm). - * - * ColPerm (colperm_t) - * Specifies what type of column permutation to use to reduce fill. - * = NATURAL: use the natural ordering - * = MMD_ATA: use minimum degree ordering on structure of A'*A - * = MMD_AT_PLUS_A: use minimum degree ordering on structure of A'+A - * = COLAMD: use approximate minimum degree column ordering - * = MY_PERMC: use the ordering specified in ScalePermstruct->perm_c[] - * - * Trans (trans_t) - * Specifies the form of the system of equations: - * = NOTRANS: A * X = B (No transpose) - * = TRANS: A**T * X = B (Transpose) - * = CONJ: A**H * X = B (Transpose) - * - * IterRefine (IterRefine_t) - * Specifies whether to perform iterative refinement. - * = NO: no iterative refinement - * = WorkingPrec: perform iterative refinement in working precision - * = ExtraPrec: perform iterative refinement in extra precision - * - * PrintStat (yes_no_t) - * Specifies whether to print the solver's statistics. - * - * DiagPivotThresh (double, in [0.0, 1.0]) (only for sequential SuperLU) - * Specifies the threshold used for a diagonal entry to be an - * acceptable pivot. - * - * PivotGrowth (yes_no_t) - * Specifies whether to compute the reciprocal pivot growth. 
- * - * ConditionNumber (ues_no_t) - * Specifies whether to compute the reciprocal condition number. - * - * RowPerm (rowperm_t) (only for SuperLU_DIST) - * Specifies whether to permute rows of the original matrix. - * = NO: not to permute the rows - * = LargeDiag: make the diagonal large relative to the off-diagonal - * = MY_PERMR: use the permutation given in ScalePermstruct->perm_r[] - * - * ReplaceTinyPivot (yes_no_t) (only for SuperLU_DIST) - * Specifies whether to replace the tiny diagonals by - * sqrt(epsilon)*||A|| during LU factorization. - * - * SolveInitialized (yes_no_t) (only for SuperLU_DIST) - * Specifies whether the initialization has been performed to the - * triangular solve. - * - * RefineInitialized (yes_no_t) (only for SuperLU_DIST) - * Specifies whether the initialization has been performed to the - * sparse matrix-vector multiplication routine needed in iterative - * refinement. - */ -typedef struct { - fact_t Fact; - yes_no_t Equil; - colperm_t ColPerm; - trans_t Trans; - IterRefine_t IterRefine; - yes_no_t PrintStat; - yes_no_t SymmetricMode; - double DiagPivotThresh; - yes_no_t PivotGrowth; - yes_no_t ConditionNumber; - rowperm_t RowPerm; - yes_no_t ReplaceTinyPivot; - yes_no_t SolveInitialized; - yes_no_t RefineInitialized; -} superlu_options_t; - -typedef struct { - int *panel_histo; /* histogram of panel size distribution */ - double *utime; /* running time at various phases */ - flops_t *ops; /* operation count at various phases */ - int TinyPivots; /* number of tiny pivots */ - int RefineSteps; /* number of iterative refinement steps */ -} SuperLUStat_t; - - -/*********************************************************************** - * Prototypes - ***********************************************************************/ -#ifdef __cplusplus -extern "C" { -#endif - - /* Added for SciPy */ -extern void superlu_python_module_abort(char *); -extern void *superlu_python_module_malloc (size_t); -extern void superlu_python_module_free (void *); - /* Added for SciPy */ - -extern void Destroy_SuperMatrix_Store(SuperMatrix *); -extern void Destroy_CompCol_Matrix(SuperMatrix *); -extern void Destroy_CompRow_Matrix(SuperMatrix *); -extern void Destroy_SuperNode_Matrix(SuperMatrix *); -extern void Destroy_CompCol_Permuted(SuperMatrix *); -extern void Destroy_Dense_Matrix(SuperMatrix *); -extern void get_perm_c(int, SuperMatrix *, int *); -extern void set_default_options(superlu_options_t *options); -extern void sp_preorder (superlu_options_t *, SuperMatrix*, int*, int*, - SuperMatrix*); -/* extern void superlu_abort_and_exit(char*); -extern void *superlu_malloc (size_t); */ -extern int *intMalloc (int); -extern int *intCalloc (int); -/* extern void superlu_free (void*); */ -extern void SetIWork (int, int, int, int *, int **, int **, int **, - int **, int **, int **, int **); -extern int sp_coletree (int *, int *, int *, int, int, int *); -extern void relax_snode (const int, int *, const int, int *, int *); -extern void heap_relax_snode (const int, int *, const int, int *, int *); -extern void resetrep_col (const int, const int *, int *); -extern int spcoletree (int *, int *, int *, int, int, int *); -extern int *TreePostorder (int, int *); -extern double SuperLU_timer_ (void); -extern int sp_ienv (int); -extern int lsame_ (char *, char *); -extern int xerbla_ (char *, int *); -extern void ifill (int *, int, int); -extern void snode_profile (int, int *); -extern void super_stats (int, int *); -extern void PrintSumm (char *, int, int, int); -extern void StatInit(SuperLUStat_t 
*); -extern void StatPrint (SuperLUStat_t *); -extern void StatFree(SuperLUStat_t *); -extern void print_panel_seg(int, int, int, int, int *, int *); -extern void check_repfnz(int, int, int, int *); - - -#ifdef __cplusplus - } -#endif - -#endif /* __SUPERLU_UTIL */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/xerbla.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/xerbla.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/xerbla.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/xerbla.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,3 +1,6 @@ +#include +#include "slu_Cnames.h" + /* Subroutine */ int xerbla_(char *srname, int *info) { /* -- LAPACK auxiliary routine (version 2.0) -- diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,29 @@ -/* +/*! @file zcolumn_bmod.c + * \brief performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ *  Permission is hereby granted to use or copy this program for any
+ *  purpose, provided the above notices are retained on all copies.
+ *  Permission to modify the code and to distribute modified code is
+ *  granted, provided the above notices are retained, and a notice that
+ *  the code was modified is included with the above copyright notice.
+ * 
 */
 #include
 #include
-#include "zsp_defs.h"
+#include "slu_zdefs.h"
 /*
  * Function prototypes
@@ -32,8 +34,17 @@
-/* Return value:   0 - successful return
+/*! \brief
+ *
+ *
+ * Purpose:
+ * ========
+ * Performs numeric block updates (sup-col) in topological order.
+ * It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ * Special processing on the supernodal portion of L\U[*,j]
+ * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
 */
 int
 zcolumn_bmod (
@@ -48,14 +59,7 @@
 	     SuperLUStat_t *stat     /* output */
 	     )
 {
-/*
- * Purpose:
- * ========
- *    Performs numeric block updates (sup-col) in topological order.
- *    It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
- *    Special processing on the supernodal portion of L\U[*,j]
- *
- */
+
 #ifdef _CRAY
     _fcd ftcs1 = _cptofcd("L", strlen("L")),
          ftcs2 = _cptofcd("N", strlen("N")),
diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_dfs.c
--- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_dfs.c	2010-04-05 08:55:23.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcolumn_dfs.c	2010-07-26 15:48:34.000000000 +0100
@@ -1,50 +1,38 @@
-
-/*
+/*! @file zcolumn_dfs.c
+ * \brief Performs a symbolic factorization
+ *
+ *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
- */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
- 
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
- 
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -#include "zsp_defs.h" +#include "slu_zdefs.h" -/* What type of supernodes we want */ +/*! \brief What type of supernodes we want */ #define T2_SUPER -int -zcolumn_dfs( - const int m, /* in - number of rows in the matrix */ - const int jcol, /* in */ - int *perm_r, /* in */ - int *nseg, /* modified - with new segments appended */ - int *lsub_col, /* in - defines the RHS vector to start the dfs */ - int *segrep, /* modified - with new segments appended */ - int *repfnz, /* modified */ - int *xprune, /* modified */ - int *marker, /* modified */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* + +/*! \brief + * + *
  * Purpose
  * =======
- *   "column_dfs" performs a symbolic factorization on column jcol, and
+ *   ZCOLUMN_DFS performs a symbolic factorization on column jcol, and
  *   decide the supernode boundary.
  *
  *   This routine does not use numeric values, but only use the RHS 
@@ -72,8 +60,25 @@
  * ============
  *     0  success;
  *   > 0  number of bytes allocated when run out of space.
- *
+ * 
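Going by the prototype that follows, one symbolic step for a column jcol looks roughly like the sketch below. The surrounding panel loop, the integer work arrays (allocated beforehand, e.g. with SetIWork()) and the GlobalLU_t structure Glu are assumed to exist already; this is an illustration, not code from the patch.

    /* Illustrative only: symbolic DFS for one column of the panel. */
    nseg = 0;
    if ( zcolumn_dfs(m, jcol, perm_r, &nseg, lsub_col, segrep, repfnz,
                     xprune, marker, parent, xplore, &Glu) != 0 )
        ABORT("zcolumn_dfs: out of space");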
 */
+int
+zcolumn_dfs(
+	   const int  m,         /* in - number of rows in the matrix */
+	   const int  jcol,      /* in */
+	   int        *perm_r,   /* in */
+	   int        *nseg,     /* modified - with new segments appended */
+	   int        *lsub_col, /* in - defines the RHS vector to start the dfs */
+	   int        *segrep,   /* modified - with new segments appended */
+	   int        *repfnz,   /* modified */
+	   int        *xprune,   /* modified */
+	   int        *marker,   /* modified */
+	   int        *parent,   /* working array */
+	   int        *xplore,   /* working array */
+	   GlobalLU_t *Glu       /* modified */
+	   )
+{
+
     int  jcolp1, jcolm1, jsuper, nsuper, nextl;
     int  k, krep, krow, kmark, kperm;
     int  *marker2;           /* Used for small panel LU */
diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcopy_to_ucol.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcopy_to_ucol.c
--- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcopy_to_ucol.c	2010-04-05 08:55:23.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zcopy_to_ucol.c	2010-07-26 15:48:34.000000000 +0100
@@ -1,27 +1,26 @@
-
-/*
+/*! @file zcopy_to_ucol.c
+ * \brief Copy a computed column of U to the compressed data structure
+ *
+ *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
 */
-/*
-  Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
-
-  THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
-  EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
-
-  Permission is hereby granted to use or copy this program for any
-  purpose, provided the above notices are retained on all copies.
-  Permission to modify the code and to distribute modified code is
-  granted, provided the above notices are retained, and a notice that
-  the code was modified is included with the above copyright notice.
-*/
-#include "zsp_defs.h"
-#include "util.h"
+#include "slu_zdefs.h"
 int
 zcopy_to_ucol(
@@ -47,7 +46,6 @@
     doublecomplex *ucol;
     int *usub, *xusub;
     int nzumax;
-    doublecomplex zero = {0.0, 0.0};
     xsup    = Glu->xsup;
diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zdiagonal.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zdiagonal.c
--- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zdiagonal.c	1970-01-01 01:00:00.000000000 +0100
+++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zdiagonal.c	2010-07-26 15:48:34.000000000 +0100
@@ -0,0 +1,133 @@
+
+/*! @file zdiagonal.c
+ * \brief Auxiliary routines to work with diagonal elements
+ *
+ *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory
+ * June 30, 2009
+ * 
+ */ + +#include "slu_zdefs.h" + +int zfill_diag(int n, NCformat *Astore) +/* fill explicit zeros on the diagonal entries, so that the matrix is not + structurally singular. */ +{ + doublecomplex *nzval = (doublecomplex *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + doublecomplex *nzval_new; + doublecomplex zero = {1.0, 0.0}; + int *rowind_new; + int i, j, diag; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = doublecomplexMalloc(nnz + fill); + rowind_new = intMalloc(nnz + fill); + fill = 0; + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + nzval_new[j + fill] = nzval[j]; + } + if (diag < 0) + { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill] = zero; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + Astore->nnz += fill; + return fill; +} + +int zdominate(int n, NCformat *Astore) +/* make the matrix diagonally dominant */ +{ + doublecomplex *nzval = (doublecomplex *)Astore->nzval; + int *rowind = Astore->rowind; + int *colptr = Astore->colptr; + int nnz = colptr[n]; + int fill = 0; + doublecomplex *nzval_new; + int *rowind_new; + int i, j, diag; + double s; + + for (i = 0; i < n; i++) + { + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + if (rowind[j] == i) diag = j; + if (diag < 0) fill++; + } + if (fill) + { + nzval_new = doublecomplexMalloc(nnz + fill); + rowind_new = intMalloc(nnz+ fill); + fill = 0; + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i] - fill; j < colptr[i + 1]; j++) + { + if ((rowind_new[j + fill] = rowind[j]) == i) diag = j; + nzval_new[j + fill] = nzval[j]; + s += z_abs1(&nzval_new[j + fill]); + } + if (diag >= 0) { + nzval_new[diag+fill].r = s * 3.0; + nzval_new[diag+fill].i = 0.0; + } else { + rowind_new[colptr[i + 1] + fill] = i; + nzval_new[colptr[i + 1] + fill].r = s * 3.0; + nzval_new[colptr[i + 1] + fill].i = 0.0; + fill++; + } + colptr[i + 1] += fill; + } + Astore->nzval = nzval_new; + Astore->rowind = rowind_new; + SUPERLU_FREE(nzval); + SUPERLU_FREE(rowind); + } + else + { + for (i = 0; i < n; i++) + { + s = 1e-6; + diag = -1; + for (j = colptr[i]; j < colptr[i + 1]; j++) + { + if (rowind[j] == i) diag = j; + s += z_abs1(&nzval[j]); + } + nzval[diag].r = s * 3.0; + nzval[diag].i = 0.0; + } + } + Astore->nnz += fill; + return fill; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgscon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgscon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgscon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgscon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,69 +1,80 @@ -/* +/*! @file zgscon.c + * \brief Estimates reciprocal of the condition number of a general matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routines ZGECON.
+ * 
 */
+
 /*
  * File name: zgscon.c
  * History:   Modified from lapack routines ZGECON.
  */
 #include
-#include "zsp_defs.h"
+#include "slu_zdefs.h"
+
+/*! \brief
+ *
+ *
+ *   Purpose   
+ *   =======   
+ *
+ *   ZGSCON estimates the reciprocal of the condition number of a general 
+ *   real matrix A, in either the 1-norm or the infinity-norm, using   
+ *   the LU factorization computed by ZGETRF.   
+ *
+ *   An estimate is obtained for norm(inv(A)), and the reciprocal of the   
+ *   condition number is computed as   
+ *      RCOND = 1 / ( norm(A) * norm(inv(A)) ).   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ * 
+ *   Arguments   
+ *   =========   
+ *
+ *    NORM    (input) char*
+ *            Specifies whether the 1-norm condition number or the   
+ *            infinity-norm condition number is required:   
+ *            = '1' or 'O':  1-norm;   
+ *            = 'I':         Infinity-norm.
+ *	    
+ *    L       (input) SuperMatrix*
+ *            The factor L from the factorization Pr*A*Pc=L*U as computed by
+ *            zgstrf(). Use compressed row subscripts storage for supernodes,
+ *            i.e., L has types: Stype = SLU_SC, Dtype = SLU_Z, Mtype = SLU_TRLU.
+ * 
+ *    U       (input) SuperMatrix*
+ *            The factor U from the factorization Pr*A*Pc=L*U as computed by
+ *            zgstrf(). Use column-wise storage scheme, i.e., U has types:
+ *            Stype = SLU_NC, Dtype = SLU_Z, Mtype = SLU_TRU.
+ *	    
+ *    ANORM   (input) double
+ *            If NORM = '1' or 'O', the 1-norm of the original matrix A.   
+ *            If NORM = 'I', the infinity-norm of the original matrix A.
+ *	    
+ *    RCOND   (output) double*
+ *           The reciprocal of the condition number of the matrix A,   
+ *           computed as RCOND = 1/(norm(A) * norm(inv(A))).
+ *	    
+ *    INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *
+ *    ===================================================================== 
+ * 
+ */ void zgscon(char *norm, SuperMatrix *L, SuperMatrix *U, double anorm, double *rcond, SuperLUStat_t *stat, int *info) { -/* - Purpose - ======= - - ZGSCON estimates the reciprocal of the condition number of a general - real matrix A, in either the 1-norm or the infinity-norm, using - the LU factorization computed by ZGETRF. - - An estimate is obtained for norm(inv(A)), and the reciprocal of the - condition number is computed as - RCOND = 1 / ( norm(A) * norm(inv(A)) ). - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - NORM (input) char* - Specifies whether the 1-norm condition number or the - infinity-norm condition number is required: - = '1' or 'O': 1-norm; - = 'I': Infinity-norm. - - L (input) SuperMatrix* - The factor L from the factorization Pr*A*Pc=L*U as computed by - zgstrf(). Use compressed row subscripts storage for supernodes, - i.e., L has types: Stype = SLU_SC, Dtype = SLU_Z, Mtype = SLU_TRLU. - - U (input) SuperMatrix* - The factor U from the factorization Pr*A*Pc=L*U as computed by - zgstrf(). Use column-wise storage scheme, i.e., U has types: - Stype = SLU_NC, Dtype = SLU_Z, Mtype = TRU. - - ANORM (input) double - If NORM = '1' or 'O', the 1-norm of the original matrix A. - If NORM = 'I', the infinity-norm of the original matrix A. - - RCOND (output) double* - The reciprocal of the condition number of the matrix A, - computed as RCOND = 1/(norm(A) * norm(inv(A))). - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - ===================================================================== -*/ /* Local variables */ int kase, kase1, onenrm, i; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsequ.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsequ.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsequ.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsequ.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,81 +1,90 @@ - -/* +/*! @file zgsequ.c + * \brief Computes row and column scalings + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from LAPACK routine ZGEEQU
+ * 
 */
 /*
  * File name: zgsequ.c
  * History:   Modified from LAPACK routine ZGEEQU
  */
 #include
-#include "zsp_defs.h"
-#include "util.h"
+#include "slu_zdefs.h"
+
+
+/*! \brief
+ *
+ *
+ * Purpose   
+ *   =======   
+ *
+ *   ZGSEQU computes row and column scalings intended to equilibrate an   
+ *   M-by-N sparse matrix A and reduce its condition number. R returns the row
+ *   scale factors and C the column scale factors, chosen to try to make   
+ *   the largest element in each row and column of the matrix B with   
+ *   elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1.   
+ *
+ *   R(i) and C(j) are restricted to be between SMLNUM = smallest safe   
+ *   number and BIGNUM = largest safe number.  Use of these scaling   
+ *   factors is not guaranteed to reduce the condition number of A but   
+ *   works well in practice.   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input) SuperMatrix*
+ *           The matrix of dimension (A->nrow, A->ncol) whose equilibration
+ *           factors are to be computed. The type of A can be:
+ *           Stype = SLU_NC; Dtype = SLU_Z; Mtype = SLU_GE.
+ *	    
+ *   R       (output) double*, size A->nrow
+ *           If INFO = 0 or INFO > M, R contains the row scale factors   
+ *           for A.
+ *	    
+ *   C       (output) double*, size A->ncol
+ *           If INFO = 0,  C contains the column scale factors for A.
+ *	    
+ *   ROWCND  (output) double*
+ *           If INFO = 0 or INFO > M, ROWCND contains the ratio of the   
+ *           smallest R(i) to the largest R(i).  If ROWCND >= 0.1 and   
+ *           AMAX is neither too large nor too small, it is not worth   
+ *           scaling by R.
+ *	    
+ *   COLCND  (output) double*
+ *           If INFO = 0, COLCND contains the ratio of the smallest   
+ *           C(i) to the largest C(i).  If COLCND >= 0.1, it is not   
+ *           worth scaling by C.
+ *	    
+ *   AMAX    (output) double*
+ *           Absolute value of largest matrix element.  If AMAX is very   
+ *           close to overflow or very close to underflow, the matrix   
+ *           should be scaled.
+ *	    
+ *   INFO    (output) int*
+ *           = 0:  successful exit   
+ *           < 0:  if INFO = -i, the i-th argument had an illegal value   
+ *           > 0:  if INFO = i,  and i is   
+ *                 <= A->nrow:  the i-th row of A is exactly zero   
+ *                 >  A->ncol:  the (i-M)-th column of A is exactly zero   
+ *
+ *   ===================================================================== 
+ * 
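As the zgsisx() driver added later in this patch does, the scale factors computed here are normally applied with zlaqgs() when equilibration is worthwhile. A condensed sketch (A, R and C assumed to be allocated; error handling omitted):

    double rowcnd, colcnd, amax;
    int    info;
    char   equed[1];

    zgsequ(&A, R, C, &rowcnd, &colcnd, &amax, &info);
    if ( info == 0 )                   /* no exactly-zero row or column found */
        zlaqgs(&A, R, C, rowcnd, colcnd, amax, equed);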
+ */ void zgsequ(SuperMatrix *A, double *r, double *c, double *rowcnd, double *colcnd, double *amax, int *info) { -/* - Purpose - ======= - - ZGSEQU computes row and column scalings intended to equilibrate an - M-by-N sparse matrix A and reduce its condition number. R returns the row - scale factors and C the column scale factors, chosen to try to make - the largest element in each row and column of the matrix B with - elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1. - - R(i) and C(j) are restricted to be between SMLNUM = smallest safe - number and BIGNUM = largest safe number. Use of these scaling - factors is not guaranteed to reduce the condition number of A but - works well in practice. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input) SuperMatrix* - The matrix of dimension (A->nrow, A->ncol) whose equilibration - factors are to be computed. The type of A can be: - Stype = SLU_NC; Dtype = SLU_Z; Mtype = SLU_GE. - - R (output) double*, size A->nrow - If INFO = 0 or INFO > M, R contains the row scale factors - for A. - - C (output) double*, size A->ncol - If INFO = 0, C contains the column scale factors for A. - - ROWCND (output) double* - If INFO = 0 or INFO > M, ROWCND contains the ratio of the - smallest R(i) to the largest R(i). If ROWCND >= 0.1 and - AMAX is neither too large nor too small, it is not worth - scaling by R. - - COLCND (output) double* - If INFO = 0, COLCND contains the ratio of the smallest - C(i) to the largest C(i). If COLCND >= 0.1, it is not - worth scaling by C. - - AMAX (output) double* - Absolute value of largest matrix element. If AMAX is very - close to overflow or very close to underflow, the matrix - should be scaled. - - INFO (output) int* - = 0: successful exit - < 0: if INFO = -i, the i-th argument had an illegal value - > 0: if INFO = i, and i is - <= A->nrow: the i-th row of A is exactly zero - > A->ncol: the (i-M)-th column of A is exactly zero - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsisx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsisx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsisx.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsisx.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,693 @@ + +/*! @file zgsisx.c + * \brief Gives the approximate solutions of linear equations A*X=B or A'*X=B + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */
+#include "slu_zdefs.h"
+
+/*! \brief
+ *
+ *
+ * Purpose
+ * =======
+ *
+ * ZGSISX gives the approximate solutions of linear equations A*X=B or A'*X=B,
+ * using the ILU factorization from zgsitrf(). An estimation of
+ * the condition number is provided. It performs the following steps:
+ *
+ *   1. If A is stored column-wise (A->Stype = SLU_NC):
+ *  
+ *	1.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A is
+ *	     overwritten by diag(R)*A*diag(C) and B by diag(R)*B
+ *	     (if options->Trans=NOTRANS) or diag(C)*B (if options->Trans
+ *	     = TRANS or CONJ).
+ *
+ *	1.2. Permute columns of A, forming A*Pc, where Pc is a permutation
+ *	     matrix that usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	1.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the matrix A (after equilibration if options->Equil = YES)
+ *	     as Pr*A*Pc = L*U, with Pr determined by partial pivoting.
+ *
+ *	1.4. Compute the reciprocal pivot growth factor.
+ *
+ *	1.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n),
+ *	     and info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	1.6. The system of equations is solved for X using the factored form
+ *	     of A.
+ *
+ *	1.7. options->IterRefine is not used
+ *
+ *	1.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *	1.9. options for ILU only
+ *	     1) If options->RowPerm = LargeDiag, MC64 is used to scale and
+ *		permute the matrix to an I-matrix, that is Pr*Dr*A*Dc has
+ *		entries of modulus 1 on the diagonal and off-diagonal entries
+ *		of modulus at most 1. If MC64 fails, zgsequ() is used to
+ *		equilibrate the system.
+ *	     2) options->ILU_DropTol = tau is the threshold for dropping.
+ *		For L, it is used directly (for the whole row in a supernode);
+ *		For U, ||A(:,i)||_oo * tau is used as the threshold
+ *	        for the	i-th column.
+ *		If a secondary dropping rule is required, tau will
+ *	        also be used to compute the second threshold.
+ *	     3) options->ILU_FillFactor = gamma, used as the initial guess
+ *		of memory growth.
+ *		If a secondary dropping rule is required, it will also
+ *              be used as an upper bound of the memory.
+ *	     4) options->ILU_DropRule specifies the dropping rule.
+ *		Option		Explanation
+ *		======		===========
+ *		DROP_BASIC:	Basic dropping rule, supernodal based ILU.
+ *		DROP_PROWS:	Supernodal based ILUTP, p = gamma * nnz(A) / n.
+ *		DROP_COLUMN:	Variation of ILUTP, for j-th column,
+ *				p = gamma * nnz(A(:,j)).
+ *		DROP_AREA;	Variation of ILUTP, for j-th column, use
+ *				nnz(F(:,1:j)) / nnz(A(:,1:j)) to control the
+ *				memory.
+ *		DROP_DYNAMIC:	Modify the threshold tau during the
+ *				factorization.
+ *				If nnz(L(:,1:j)) / nnz(A(:,1:j)) < gamma
+ *				    tau_L(j) := MIN(1, tau_L(j-1) * 2);
+ *				Otherwise
+ *				    tau_L(j) := MIN(1, tau_L(j-1) * 2);
+ *				tau_U(j) uses the similar rule.
+ *				NOTE: the thresholds used by L and U are
+ *				independent.
+ *		DROP_INTERP:	Compute the second dropping threshold by
+ *				interpolation instead of sorting (default).
+ *				In this case, the actual fill ratio is not
+ *				guaranteed smaller than gamma.
+ *		DROP_PROWS, DROP_COLUMN and DROP_AREA are mutually exclusive.
+ *		( The default option is DROP_BASIC | DROP_AREA. )
+ *	     5) options->ILU_Norm is the criterion of computing the average
+ *		value of a row in L.
+ *		options->ILU_Norm	average(x[1:n])
+ *		=================	===============
+ *		ONE_NORM		||x||_1 / n
+ *		TWO_NORM		||x||_2 / sqrt(n)
+ *		INF_NORM		max{|x[i]|}
+ *	     6) options->ILU_MILU specifies the type of MILU's variation.
+ *		= SILU (default): do not perform MILU;
+ *		= SMILU_1 (not recommended):
+ *		    U(i,i) := U(i,i) + sum(dropped entries);
+ *		= SMILU_2:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(dropped entries);
+ *		= SMILU_3:
+ *		    U(i,i) := U(i,i) + SGN(U(i,i)) * sum(|dropped entries|);
+ *		NOTE: Even SMILU_1 does not preserve the column sum because of
+ *		late dropping.
+ *	     7) options->ILU_FillTol is used as the perturbation when
+ *		encountering zero pivots. If some U(i,i) = 0, so that U is
+ *		exactly singular, then
+ *		   U(i,i) := ||A(:,i)|| * options->ILU_FillTol ** (1 - i / n).
+ *
+ *   2. If A is stored row-wise (A->Stype = SLU_NR), apply the above algorithm
+ *	to the transpose of A:
+ *
+ *	2.1. If options->Equil = YES or options->RowPerm = LargeDiag, scaling
+ *	     factors are computed to equilibrate the system:
+ *	     options->Trans = NOTRANS:
+ *		 diag(R)*A*diag(C) *inv(diag(C))*X = diag(R)*B
+ *	     options->Trans = TRANS:
+ *		 (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
+ *	     options->Trans = CONJ:
+ *		 (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
+ *	     Whether or not the system will be equilibrated depends on the
+ *	     scaling of the matrix A, but if equilibration is used, A' is
+ *	     overwritten by diag(R)*A'*diag(C) and B by diag(R)*B
+ *	     (if trans='N') or diag(C)*B (if trans = 'T' or 'C').
+ *
+ *	2.2. Permute columns of transpose(A) (rows of A),
+ *	     forming transpose(A)*Pc, where Pc is a permutation matrix that
+ *	     usually preserves sparsity.
+ *	     For more details of this step, see sp_preorder.c.
+ *
+ *	2.3. If options->Fact != FACTORED, the LU decomposition is used to
+ *	     factor the transpose(A) (after equilibration if
+ *	     options->Equil = YES) as Pr*transpose(A)*Pc = L*U with the
+ *	     permutation Pr determined by partial pivoting.
+ *
+ *	2.4. Compute the reciprocal pivot growth factor.
+ *
+ *	2.5. If some U(i,i) = 0, so that U is exactly singular, then the
+ *	     routine fills a small number on the diagonal entry, that is
+ *		 U(i,i) = ||A(:,i)||_oo * options->ILU_FillTol ** (1 - i / n).
+ *	     And info will be increased by 1. The factored form of A is used
+ *	     to estimate the condition number of the preconditioner. If the
+ *	     reciprocal of the condition number is less than machine precision,
+ *	     info = A->ncol+1 is returned as a warning, but the routine still
+ *	     goes on to solve for X.
+ *
+ *	2.6. The system of equations is solved for X using the factored form
+ *	     of transpose(A).
+ *
+ *	2.7. options->IterRefine is not used.
+ *
+ *	2.8. If equilibration was used, the matrix X is premultiplied by
+ *	     diag(C) (if options->Trans = NOTRANS) or diag(R)
+ *	     (if options->Trans = TRANS or CONJ) so that it solves the
+ *	     original system before equilibration.
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the LU decomposition will be performed and how the
+ *	   system will be solved.
+ *
+ * A	   (input/output) SuperMatrix*
+ *	   Matrix A in A*X=B, of dimension (A->nrow, A->ncol). The number
+ *	   of the linear equations is A->nrow. Currently, the type of A can be:
+ *	   Stype = SLU_NC or SLU_NR, Dtype = SLU_Z, Mtype = SLU_GE.
+ *	   In the future, more general A may be handled.
+ *
+ *	   On entry, if options->Fact = FACTORED and equed is not 'N',
+ *	   then A must have been equilibrated by the scaling factors in
+ *	   R and/or C.
+ *	   On exit, A is not modified if options->Equil = NO, or if
+ *	   options->Equil = YES but equed = 'N' on exit.
+ *	   Otherwise, if options->Equil = YES and equed is not 'N',
+ *	   A is scaled as follows:
+ *	   If A->Stype = SLU_NC:
+ *	     equed = 'R':  A := diag(R) * A
+ *	     equed = 'C':  A := A * diag(C)
+ *	     equed = 'B':  A := diag(R) * A * diag(C).
+ *	   If A->Stype = SLU_NR:
+ *	     equed = 'R':  transpose(A) := diag(R) * transpose(A)
+ *	     equed = 'C':  transpose(A) := transpose(A) * diag(C)
+ *	     equed = 'B':  transpose(A) := diag(R) * transpose(A) * diag(C).
+ *
+ * perm_c  (input/output) int*
+ *	   If A->Stype = SLU_NC, Column permutation vector of size A->ncol,
+ *	   which defines the permutation matrix Pc; perm_c[i] = j means
+ *	   column i of A is in position j in A*Pc.
+ *	   On exit, perm_c may be overwritten by the product of the input
+ *	   perm_c and a permutation that postorders the elimination tree
+ *	   of Pc'*A'*A*Pc; perm_c is not changed if the elimination tree
+ *	   is already in postorder.
+ *
+ *	   If A->Stype = SLU_NR, column permutation vector of size A->nrow,
+ *	   which describes permutation of columns of transpose(A) 
+ *	   (rows of A) as described above.
+ *
+ * perm_r  (input/output) int*
+ *	   If A->Stype = SLU_NC, row permutation vector of size A->nrow, 
+ *	   which defines the permutation matrix Pr, and is determined
+ *	   by partial pivoting.  perm_r[i] = j means row i of A is in 
+ *	   position j in Pr*A.
+ *
+ *	   If A->Stype = SLU_NR, permutation vector of size A->ncol, which
+ *	   determines permutation of rows of transpose(A)
+ *	   (columns of A) as described above.
+ *
+ *	   If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	   will try to use the input perm_r, unless a certain threshold
+ *	   criterion is violated. In that case, perm_r is overwritten by a
+ *	   new permutation determined by partial pivoting or diagonal
+ *	   threshold pivoting.
+ *	   Otherwise, perm_r is output argument.
+ *
+ * etree   (input/output) int*,  dimension (A->ncol)
+ *	   Elimination tree of Pc'*A'*A*Pc.
+ *	   If options->Fact != FACTORED and options->Fact != DOFACT,
+ *	   etree is an input argument, otherwise it is an output argument.
+ *	   Note: etree is a vector of parent pointers for a forest whose
+ *	   vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *
+ * equed   (input/output) char*
+ *	   Specifies the form of equilibration that was done.
+ *	   = 'N': No equilibration.
+ *	   = 'R': Row equilibration, i.e., A was premultiplied by diag(R).
+ *	   = 'C': Column equilibration, i.e., A was postmultiplied by diag(C).
+ *	   = 'B': Both row and column equilibration, i.e., A was replaced 
+ *		  by diag(R)*A*diag(C).
+ *	   If options->Fact = FACTORED, equed is an input argument,
+ *	   otherwise it is an output argument.
+ *
+ * R	   (input/output) double*, dimension (A->nrow)
+ *	   The row scale factors for A or transpose(A).
+ *	   If equed = 'R' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the left by diag(R).
+ *	   If equed = 'N' or 'C', R is not accessed.
+ *	   If options->Fact = FACTORED, R is an input argument,
+ *	       otherwise, R is output.
+ *	   If options->Fact = FACTORED and equed = 'R' or 'B', each element
+ *	       of R must be positive.
+ *
+ * C	   (input/output) double*, dimension (A->ncol)
+ *	   The column scale factors for A or transpose(A).
+ *	   If equed = 'C' or 'B', A (if A->Stype = SLU_NC) or transpose(A)
+ *	       (if A->Stype = SLU_NR) is multiplied on the right by diag(C).
+ *	   If equed = 'N' or 'R', C is not accessed.
+ *	   If options->Fact = FACTORED, C is an input argument,
+ *	       otherwise, C is output.
+ *	   If options->Fact = FACTORED and equed = 'C' or 'B', each element
+ *	       of C must be positive.
+ *
+ * L	   (output) SuperMatrix*
+ *	   The factor L from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses compressed row subscripts storage for supernodes, i.e.,
+ *	   L has types: Stype = SLU_SC, Dtype = SLU_Z, Mtype = SLU_TRLU.
+ *
+ * U	   (output) SuperMatrix*
+ *	   The factor U from the factorization
+ *	       Pr*A*Pc=L*U		(if A->Stype = SLU_NC) or
+ *	       Pr*transpose(A)*Pc=L*U	(if A->Stype = SLU_NR).
+ *	   Uses column-wise storage scheme, i.e., U has types:
+ *	   Stype = SLU_NC, Dtype = SLU_Z, Mtype = SLU_TRU.
+ *
+ * work    (workspace/output) void*, size (lwork) (in bytes)
+ *	   User supplied workspace, should be large enough
+ *	   to hold data structures for factors L and U.
+ *	   On exit, if fact is not 'F', L and U point to this array.
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 mem_usage->total_needed; no other side effects.
+ *
+ *	   See argument 'mem_usage' for memory usage statistics.
+ *
+ * B	   (input/output) SuperMatrix*
+ *	   B has types: Stype = SLU_DN, Dtype = SLU_Z, Mtype = SLU_GE.
+ *	   On entry, the right hand side matrix.
+ *	   If B->ncol = 0, only LU decomposition is performed, the triangular
+ *			   solve is skipped.
+ *	   On exit,
+ *	      if equed = 'N', B is not modified; otherwise
+ *	      if A->Stype = SLU_NC:
+ *		 if options->Trans = NOTRANS and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *	      if A->Stype = SLU_NR:
+ *		 if options->Trans = NOTRANS and equed = 'C' or 'B',
+ *		    B is overwritten by diag(C)*B;
+ *		 if options->Trans = TRANS or CONJ and equed = 'R' or 'B',
+ *		    B is overwritten by diag(R)*B.
+ *
+ * X	   (output) SuperMatrix*
+ *	   X has types: Stype = SLU_DN, Dtype = SLU_Z, Mtype = SLU_GE.
+ *	   If info = 0 or info = A->ncol+1, X contains the solution matrix
+ *	   to the original system of equations. Note that A and B are modified
+ *	   on exit if equed is not 'N', and the solution to the equilibrated
+ *	   system is inv(diag(C))*X if options->Trans = NOTRANS and
+ *	   equed = 'C' or 'B', or inv(diag(R))*X if options->Trans = 'T' or 'C'
+ *	   and equed = 'R' or 'B'.
+ *
+ * recip_pivot_growth (output) double*
+ *	   The reciprocal pivot growth factor max_j( norm(A_j)/norm(U_j) ).
+ *	   The infinity norm is used. If recip_pivot_growth is much less
+ *	   than 1, the stability of the LU factorization could be poor.
+ *
+ * rcond   (output) double*
+ *	   The estimate of the reciprocal condition number of the matrix A
+ *	   after equilibration (if done). If rcond is less than the machine
+ *	   precision (in particular, if rcond = 0), the matrix is singular
+ *	   to working precision. This condition is indicated by a return
+ *	   code of info > 0.
+ *
+ * mem_usage (output) mem_usage_t*
+ *	   Record the memory usage statistics, consisting of following fields:
+ *	   - for_lu (float)
+ *	     The amount of space used in bytes for L\U data structures.
+ *	   - total_needed (float)
+ *	     The amount of space needed in bytes to perform factorization.
+ *	   - expansions (int)
+ *	     The number of memory expansions during the LU factorization.
+ *
+ * stat   (output) SuperLUStat_t*
+ *	  Record the statistics on runtime and floating-point operation count.
+ *	  See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info    (output) int*
+ *	   = 0: successful exit
+ *	   < 0: if info = -i, the i-th argument had an illegal value
+ *	   > 0: if info = i, and i is
+ *		<= A->ncol: number of zero pivots. They are replaced by small
+ *		      entries due to options->ILU_FillTol.
+ *		= A->ncol+1: U is nonsingular, but RCOND is less than machine
+ *		      precision, meaning that the matrix is singular to
+ *		      working precision. Nevertheless, the solution and
+ *		      error bounds are computed because there are a number
+ *		      of situations where the computed solution can be more
+ *		      accurate than the value of RCOND would suggest.
+ *		> A->ncol+1: number of bytes allocated when memory allocation
+ *		      failure occurred, plus A->ncol.
+ * 
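A minimal calling sequence consistent with the argument list above might look as follows. The construction of A, B and X (e.g. with zCreate_CompCol_Matrix and the other zCreate_* helpers) and the allocation of perm_c, perm_r, etree, R and C are assumed to have been done already, so this is only a sketch of the intended usage.

    superlu_options_t options;
    SuperLUStat_t     stat;
    mem_usage_t       mem_usage;
    SuperMatrix       L, U;
    char              equed[1];
    double            rpg, rcond;
    int               info;

    ilu_set_default_options(&options); /* ILU defaults added in util.c earlier in this patch */
    StatInit(&stat);
    zgsisx(&options, &A, perm_c, perm_r, etree, equed, R, C, &L, &U,
           NULL, 0,                    /* work = NULL, lwork = 0: internal malloc */
           &B, &X, &rpg, &rcond, &mem_usage, &stat, &info);
    StatFree(&stat);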
+ */ + +void +zgsisx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, double *R, double *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, + double *recip_pivot_growth, double *rcond, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info) +{ + + DNformat *Bstore, *Xstore; + doublecomplex *Bmat, *Xmat; + int ldb, ldx, nrhs; + SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ + SuperMatrix AC; /* Matrix postmultiplied by Pc */ + int colequ, equil, nofact, notran, rowequ, permc_spec, mc64; + trans_t trant; + char norm[1]; + int i, j, info1; + double amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; + int relax, panel_size; + double diag_pivot_thresh; + double t0; /* temporary time */ + double *utime; + + int *perm = NULL; + + /* External functions */ + extern double zlangs(char *, SuperMatrix *); + + Bstore = B->Store; + Xstore = X->Store; + Bmat = Bstore->nzval; + Xmat = Xstore->nzval; + ldb = Bstore->lda; + ldx = Xstore->lda; + nrhs = B->ncol; + + *info = 0; + nofact = (options->Fact != FACTORED); + equil = (options->Equil == YES); + notran = (options->Trans == NOTRANS); + mc64 = (options->RowPerm == LargeDiag); + if ( nofact ) { + *(unsigned char *)equed = 'N'; + rowequ = FALSE; + colequ = FALSE; + } else { + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + smlnum = dlamch_("Safe minimum"); + bignum = 1. / smlnum; + } + + /* Test the input parameters */ + if (!nofact && options->Fact != DOFACT && options->Fact != SamePattern && + options->Fact != SamePattern_SameRowPerm && + !notran && options->Trans != TRANS && options->Trans != CONJ && + !equil && options->Equil != NO) + *info = -1; + else if ( A->nrow != A->ncol || A->nrow < 0 || + (A->Stype != SLU_NC && A->Stype != SLU_NR) || + A->Dtype != SLU_Z || A->Mtype != SLU_GE ) + *info = -2; + else if (options->Fact == FACTORED && + !(rowequ || colequ || lsame_(equed, "N"))) + *info = -6; + else { + if (rowequ) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, R[j]); + rcmax = SUPERLU_MAX(rcmax, R[j]); + } + if (rcmin <= 0.) *info = -7; + else if ( A->nrow > 0) + rowcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else rowcnd = 1.; + } + if (colequ && *info == 0) { + rcmin = bignum; + rcmax = 0.; + for (j = 0; j < A->nrow; ++j) { + rcmin = SUPERLU_MIN(rcmin, C[j]); + rcmax = SUPERLU_MAX(rcmax, C[j]); + } + if (rcmin <= 0.) *info = -8; + else if (A->nrow > 0) + colcnd = SUPERLU_MAX(rcmin,smlnum) / SUPERLU_MIN(rcmax,bignum); + else colcnd = 1.; + } + if (*info == 0) { + if ( lwork < -1 ) *info = -12; + else if ( B->ncol < 0 || Bstore->lda < SUPERLU_MAX(0, A->nrow) || + B->Stype != SLU_DN || B->Dtype != SLU_Z || + B->Mtype != SLU_GE ) + *info = -13; + else if ( X->ncol < 0 || Xstore->lda < SUPERLU_MAX(0, A->nrow) || + (B->ncol != 0 && B->ncol != X->ncol) || + X->Stype != SLU_DN || + X->Dtype != SLU_Z || X->Mtype != SLU_GE ) + *info = -14; + } + } + if (*info != 0) { + i = -(*info); + xerbla_("zgsisx", &i); + return; + } + + /* Initialization for factor parameters */ + panel_size = sp_ienv(1); + relax = sp_ienv(2); + diag_pivot_thresh = options->DiagPivotThresh; + + utime = stat->utime; + + /* Convert A to SLU_NC format when necessary. 
*/ + if ( A->Stype == SLU_NR ) { + NRformat *Astore = A->Store; + AA = (SuperMatrix *) SUPERLU_MALLOC( sizeof(SuperMatrix) ); + zCreate_CompCol_Matrix(AA, A->ncol, A->nrow, Astore->nnz, + Astore->nzval, Astore->colind, Astore->rowptr, + SLU_NC, A->Dtype, A->Mtype); + if ( notran ) { /* Reverse the transpose argument. */ + trant = TRANS; + notran = 0; + } else { + trant = NOTRANS; + notran = 1; + } + } else { /* A->Stype == SLU_NC */ + trant = options->Trans; + AA = A; + } + + if ( nofact ) { + register int i, j; + NCformat *Astore = AA->Store; + int nnz = Astore->nnz; + int *colptr = Astore->colptr; + int *rowind = Astore->rowind; + doublecomplex *nzval = (doublecomplex *)Astore->nzval; + int n = AA->nrow; + + if ( mc64 ) { + *equed = 'B'; + rowequ = colequ = 1; + t0 = SuperLU_timer_(); + if ((perm = intMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for perm[]"); + + info1 = zldperm(5, n, nnz, colptr, rowind, nzval, perm, R, C); + + if (info1 > 0) { /* MC64 fails, call zgsequ() later */ + mc64 = 0; + SUPERLU_FREE(perm); + perm = NULL; + } else { + for (i = 0; i < n; i++) { + R[i] = exp(R[i]); + C[i] = exp(C[i]); + } + /* permute and scale the matrix */ + for (j = 0; j < n; j++) { + for (i = colptr[j]; i < colptr[j + 1]; i++) { + zd_mult(&nzval[i], &nzval[i], R[rowind[i]] * C[j]); + rowind[i] = perm[rowind[i]]; + } + } + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + if ( !mc64 & equil ) { + t0 = SuperLU_timer_(); + /* Compute row and column scalings to equilibrate the matrix A. */ + zgsequ(AA, R, C, &rowcnd, &colcnd, &amax, &info1); + + if ( info1 == 0 ) { + /* Equilibrate matrix A. */ + zlaqgs(AA, R, C, rowcnd, colcnd, amax, equed); + rowequ = lsame_(equed, "R") || lsame_(equed, "B"); + colequ = lsame_(equed, "C") || lsame_(equed, "B"); + } + utime[EQUIL] = SuperLU_timer_() - t0; + } + } + + if ( nrhs > 0 ) { + /* Scale the right hand side if equilibration was performed. */ + if ( notran ) { + if ( rowequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + zd_mult(&Bmat[i+j*ldb], &Bmat[i+j*ldb], R[i]); + } + } + } else if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + zd_mult(&Bmat[i+j*ldb], &Bmat[i+j*ldb], C[i]); + } + } + } + + if ( nofact ) { + + t0 = SuperLU_timer_(); + /* + * Gnet column permutation vector perm_c[], according to permc_spec: + * permc_spec = NATURAL: natural ordering + * permc_spec = MMD_AT_PLUS_A: minimum degree on structure of A'+A + * permc_spec = MMD_ATA: minimum degree on structure of A'*A + * permc_spec = COLAMD: approximate minimum degree column ordering + * permc_spec = MY_PERMC: the ordering already supplied in perm_c[] + */ + permc_spec = options->ColPerm; + if ( permc_spec != MY_PERMC && options->Fact == DOFACT ) + get_perm_c(permc_spec, AA, perm_c); + utime[COLPERM] = SuperLU_timer_() - t0; + + t0 = SuperLU_timer_(); + sp_preorder(options, AA, perm_c, etree, &AC); + utime[ETREE] = SuperLU_timer_() - t0; + + /* Compute the LU factorization of A*Pc. */ + t0 = SuperLU_timer_(); + zgsitrf(options, &AC, relax, panel_size, etree, work, lwork, + perm_c, perm_r, L, U, stat, info); + utime[FACT] = SuperLU_timer_() - t0; + + if ( lwork == -1 ) { + mem_usage->total_needed = *info - A->ncol; + return; + } + } + + if ( options->PivotGrowth ) { + if ( *info > 0 ) return; + + /* Compute the reciprocal pivot growth factor *recip_pivot_growth. */ + *recip_pivot_growth = zPivotGrowth(A->ncol, AA, perm_c, L, U); + } + + if ( options->ConditionNumber ) { + /* Estimate the reciprocal of the condition number of A. 
*/ + t0 = SuperLU_timer_(); + if ( notran ) { + *(unsigned char *)norm = '1'; + } else { + *(unsigned char *)norm = 'I'; + } + anorm = zlangs(norm, AA); + zgscon(norm, L, U, anorm, rcond, stat, &info1); + utime[RCOND] = SuperLU_timer_() - t0; + } + + if ( nrhs > 0 ) { + /* Compute the solution matrix X. */ + for (j = 0; j < nrhs; j++) /* Save a copy of the right hand sides */ + for (i = 0; i < B->nrow; i++) + Xmat[i + j*ldx] = Bmat[i + j*ldb]; + + t0 = SuperLU_timer_(); + zgstrs (trant, L, U, perm_c, perm_r, X, stat, &info1); + utime[SOLVE] = SuperLU_timer_() - t0; + + /* Transform the solution matrix X to a solution of the original + system. */ + if ( notran ) { + if ( colequ ) { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + zd_mult(&Xmat[i+j*ldx], &Xmat[i+j*ldx], C[i]); + } + } + } else { + if ( rowequ ) { + if (perm) { + doublecomplex *tmp; + int n = A->nrow; + + if ((tmp = doublecomplexMalloc(n)) == NULL) + ABORT("SUPERLU_MALLOC fails for tmp[]"); + for (j = 0; j < nrhs; j++) { + for (i = 0; i < n; i++) + tmp[i] = Xmat[i + j * ldx]; /*dcopy*/ + for (i = 0; i < n; i++) + zd_mult(&Xmat[i+j*ldx], &tmp[perm[i]], R[i]); + } + SUPERLU_FREE(tmp); + } else { + for (j = 0; j < nrhs; ++j) + for (i = 0; i < A->nrow; ++i) { + zd_mult(&Xmat[i+j*ldx], &Xmat[i+j*ldx], R[i]); + } + } + } + } + } /* end if nrhs > 0 */ + + if ( options->ConditionNumber ) { + /* Set INFO = A->ncol+1 if the matrix is singular to working precision. */ + if ( *rcond < dlamch_("E") && *info == 0) *info = A->ncol + 1; + } + + if (perm) SUPERLU_FREE(perm); + + if ( nofact ) { + ilu_zQuerySpace(L, U, mem_usage); + Destroy_CompCol_Permuted(&AC); + } + if ( A->Stype == SLU_NR ) { + Destroy_SuperMatrix_Store(AA); + SUPERLU_FREE(AA); + } + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsitrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsitrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsitrf.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsitrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,628 @@ + +/*! @file zgsitf.c + * \brief Computes an ILU factorization of a general sparse matrix + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ + +#include "slu_zdefs.h" + +#ifdef DEBUG +int num_drop_L; +#endif + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ * ZGSITRF computes an ILU factorization of a general sparse m-by-n
+ * matrix A using partial pivoting with row interchanges.
+ * The factorization has the form
+ *     Pr * A = L * U
+ * where Pr is a row permutation matrix, L is lower triangular with unit
+ * diagonal elements (lower trapezoidal if A->nrow > A->ncol), and U is upper
+ * triangular (upper trapezoidal if A->nrow < A->ncol).
+ *
+ * See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ * Arguments
+ * =========
+ *
+ * options (input) superlu_options_t*
+ *	   The structure defines the input parameters to control
+ *	   how the ILU decomposition will be performed.
+ *
+ * A	    (input) SuperMatrix*
+ *	    Original matrix A, permuted by columns, of dimension
+ *	    (A->nrow, A->ncol). The type of A can be:
+ *	    Stype = SLU_NCP; Dtype = SLU_Z; Mtype = SLU_GE.
+ *
+ * relax    (input) int
+ *	    To control degree of relaxing supernodes. If the number
+ *	    of nodes (columns) in a subtree of the elimination tree is less
+ *	    than relax, this subtree is considered as one supernode,
+ *	    regardless of the row structures of those columns.
+ *
+ * panel_size (input) int
+ *	    A panel consists of at most panel_size consecutive columns.
+ *
+ * etree    (input) int*, dimension (A->ncol)
+ *	    Elimination tree of A'*A.
+ *	    Note: etree is a vector of parent pointers for a forest whose
+ *	    vertices are the integers 0 to A->ncol-1; etree[root]==A->ncol.
+ *	    On input, the columns of A should be permuted so that the
+ *	    etree is in a certain postorder.
+ *
+ * work     (input/output) void*, size (lwork) (in bytes)
+ *	    User-supplied work space and space for the output data structures.
+ *	    Not referenced if lwork = 0;
+ *
+ * lwork   (input) int
+ *	   Specifies the size of work array in bytes.
+ *	   = 0:  allocate space internally by system malloc;
+ *	   > 0:  use user-supplied work array of length lwork in bytes,
+ *		 returns error if space runs out.
+ *	   = -1: the routine guesses the amount of space needed without
+ *		 performing the factorization, and returns it in
+ *		 *info; no other side effects.
+ *
+ * perm_c   (input) int*, dimension (A->ncol)
+ *	    Column permutation vector, which defines the
+ *	    permutation matrix Pc; perm_c[i] = j means column i of A is
+ *	    in position j in A*Pc.
+ *	    When searching for diagonal, perm_c[*] is applied to the
+ *	    row subscripts of A, so that diagonal threshold pivoting
+ *	    can find the diagonal of A, rather than that of A*Pc.
+ *
+ * perm_r   (input/output) int*, dimension (A->nrow)
+ *	    Row permutation vector which defines the permutation matrix Pr,
+ *	    perm_r[i] = j means row i of A is in position j in Pr*A.
+ *	    If options->Fact = SamePattern_SameRowPerm, the pivoting routine
+ *	       will try to use the input perm_r, unless a certain threshold
+ *	       criterion is violated. In that case, perm_r is overwritten by
+ *	       a new permutation determined by partial pivoting or diagonal
+ *	       threshold pivoting.
+ *	    Otherwise, perm_r is an output argument.
+ *
+ * L	    (output) SuperMatrix*
+ *	    The factor L from the factorization Pr*A=L*U; use compressed row
+ *	    subscripts storage for supernodes, i.e., L has type:
+ *	    Stype = SLU_SC, Dtype = SLU_Z, Mtype = SLU_TRLU.
+ *
+ * U	    (output) SuperMatrix*
+ *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
+ *	    storage scheme, i.e., U has types: Stype = SLU_NC,
+ *	    Dtype = SLU_Z, Mtype = SLU_TRU.
+ *
+ * stat     (output) SuperLUStat_t*
+ *	    Record the statistics on runtime and floating-point operation count.
+ *	    See slu_util.h for the definition of 'SuperLUStat_t'.
+ *
+ * info     (output) int*
+ *	    = 0: successful exit
+ *	    < 0: if info = -i, the i-th argument had an illegal value
+ *	    > 0: if info = i, and i is
+ *	       <= A->ncol: number of zero pivots. They are replaced by small
+ *		  entries according to options->ILU_FillTol.
+ *	       > A->ncol: number of bytes allocated when memory allocation
+ *		  failure occurred, plus A->ncol. If lwork = -1, it is
+ *		  the estimated amount of space needed, plus A->ncol.
+ *
+ * ======================================================================
+ *
+ * Local Working Arrays:
+ * ======================
+ *   m = number of rows in the matrix
+ *   n = number of columns in the matrix
+ *
+ *   marker[0:3*m-1]: marker[i] = j means that node i has been
+ *	reached when working on column j.
+ *	Storage: relative to original row subscripts
+ *	NOTE: There are 4 of them:
+ *	      marker/marker1 are used for panel dfs, see (ilu_)dpanel_dfs.c;
+ *	      marker2 is used for inner-factorization, see (ilu_)dcolumn_dfs.c;
+ *	      marker_relax (has its own space) is used for relaxed supernodes.
+ *
+ *   parent[0:m-1]: parent vector used during dfs
+ *	Storage: relative to new row subscripts
+ *
+ *   xplore[0:m-1]: xplore[i] gives the location of the next (dfs)
+ *	unexplored neighbor of i in lsub[*]
+ *
+ *   segrep[0:nseg-1]: contains the list of supernodal representatives
+ *	in topological order of the dfs. A supernode representative is the
+ *	last column of a supernode.
+ *	The maximum size of segrep[] is n.
+ *
+ *   repfnz[0:W*m-1]: for a nonzero segment U[*,j] that ends at a
+ *	supernodal representative r, repfnz[r] is the location of the first
+ *	nonzero in this segment.  It is also used during the dfs: repfnz[r]>0
+ *	indicates the supernode r has been explored.
+ *	NOTE: There are W of them, each used for one column of a panel.
+ *
+ *   panel_lsub[0:W*m-1]: temporary for the nonzero row indices below
+ *	the panel diagonal. These are filled in during dpanel_dfs(), and are
+ *	used later in the inner LU factorization within the panel.
+ *	panel_lsub[]/dense[] pair forms the SPA data structure.
+ *	NOTE: There are W of them.
+ *
+ *   dense[0:W*m-1]: sparse accumulating (SPA) vector for intermediate values;
+ *		   NOTE: there are W of them.
+ *
+ *   tempv[0:*]: real temporary used for dense numeric kernels;
+ *	The size of this array is defined by NUM_TEMPV() in slu_util.h.
+ *	It is also used by the dropping routine ilu_ddrop_row().
+ * 
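/*
 * Editorial sketch (not part of the upstream patch): how the ILU routine
 * documented above is normally driven, mirroring the zgsisx code earlier
 * in this diff.  The helper name and its arguments are illustrative only;
 * A must be an SLU_NC SuperMatrix, and perm_c/perm_r/etree are caller-
 * allocated arrays of the sizes described above.
 */
#include "slu_zdefs.h"

static void ilu_factor_sketch(superlu_options_t *options, SuperMatrix *A,
                              int *perm_c, int *perm_r, int *etree,
                              SuperMatrix *L, SuperMatrix *U,
                              SuperLUStat_t *stat, int *info)
{
    SuperMatrix AC;                     /* A postmultiplied by Pc (SLU_NCP) */
    int panel_size = sp_ienv(1);
    int relax      = sp_ienv(2);
    int permc_spec = options->ColPerm;

    /* Column ordering and symbolic preordering, as in zgsisx. */
    get_perm_c(permc_spec, A, perm_c);
    sp_preorder(options, A, perm_c, etree, &AC);

    /* Incomplete LU of A*Pc; lwork = 0 lets SuperLU allocate internally. */
    zgsitrf(options, &AC, relax, panel_size, etree,
            NULL, 0, perm_c, perm_r, L, U, stat, info);

    Destroy_CompCol_Permuted(&AC);
}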
+ */ + +void +zgsitrf(superlu_options_t *options, SuperMatrix *A, int relax, int panel_size, + int *etree, void *work, int lwork, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperLUStat_t *stat, int *info) +{ + /* Local working arrays */ + NCPformat *Astore; + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ + int *iperm_c; /* inverse of perm_c */ + int *swap, *iswap; /* swap is used to store the row permutation + during the factorization. Initially, it is set + to iperm_c (row indeces of Pc*A*Pc'). + iswap is the inverse of swap. After the + factorization, it is equal to perm_r. */ + int *iwork; + doublecomplex *zwork; + int *segrep, *repfnz, *parent, *xplore; + int *panel_lsub; /* dense[]/panel_lsub[] pair forms a w-wide SPA */ + int *marker, *marker_relax; + doublecomplex *dense, *tempv; + double *dtempv; + int *relax_end, *relax_fsupc; + doublecomplex *a; + int *asub; + int *xa_begin, *xa_end; + int *xsup, *supno; + int *xlsub, *xlusup, *xusub; + int nzlumax; + double *amax; + doublecomplex drop_sum; + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + int *iwork2; /* used by the second dropping rule */ + + /* Local scalars */ + fact_t fact = options->Fact; + double diag_pivot_thresh = options->DiagPivotThresh; + double drop_tol = options->ILU_DropTol; /* tau */ + double fill_ini = options->ILU_FillTol; /* tau^hat */ + double gamma = options->ILU_FillFactor; + int drop_rule = options->ILU_DropRule; + milu_t milu = options->ILU_MILU; + double fill_tol; + int pivrow; /* pivotal row number in the original matrix A */ + int nseg1; /* no of segments in U-column above panel row jcol */ + int nseg; /* no of segments in each U-column */ + register int jcol; + register int kcol; /* end column of a relaxed snode */ + register int icol; + register int i, k, jj, new_next, iinfo; + int m, n, min_mn, jsupno, fsupc, nextlu, nextu; + int w_def; /* upper bound on panel width */ + int usepr, iperm_r_allocated = 0; + int nnzL, nnzU; + int *panel_histo = stat->panel_histo; + flops_t *ops = stat->ops; + + int last_drop;/* the last column which the dropping rules applied */ + int quota; + int nnzAj; /* number of nonzeros in A(:,1:j) */ + int nnzLj, nnzUj; + double tol_L = drop_tol, tol_U = drop_tol; + doublecomplex zero = {0.0, 0.0}; + + /* Executable */ + iinfo = 0; + m = A->nrow; + n = A->ncol; + min_mn = SUPERLU_MIN(m, n); + Astore = A->Store; + a = Astore->nzval; + asub = Astore->rowind; + xa_begin = Astore->colbeg; + xa_end = Astore->colend; + + /* Allocate storage common to the factor routines */ + *info = zLUMemInit(fact, work, lwork, m, n, Astore->nnz, panel_size, + gamma, L, U, &Glu, &iwork, &zwork); + if ( *info ) return; + + xsup = Glu.xsup; + supno = Glu.supno; + xlsub = Glu.xlsub; + xlusup = Glu.xlusup; + xusub = Glu.xusub; + + SetIWork(m, n, panel_size, iwork, &segrep, &parent, &xplore, + &repfnz, &panel_lsub, &marker_relax, &marker); + zSetRWork(m, panel_size, zwork, &dense, &tempv); + + usepr = (fact == SamePattern_SameRowPerm); + if ( usepr ) { + /* Compute the inverse of perm_r */ + iperm_r = (int *) intMalloc(m); + for (k = 0; k < m; ++k) iperm_r[perm_r[k]] = k; + iperm_r_allocated = 1; + } + + iperm_c = (int *) intMalloc(n); + for (k = 0; k < n; ++k) iperm_c[perm_c[k]] = k; + swap = (int *)intMalloc(n); + for (k = 0; k < n; k++) swap[k] = iperm_c[k]; + iswap = (int *)intMalloc(n); + for (k = 0; k < n; k++) iswap[k] = perm_c[k]; + amax = (double *) doubleMalloc(panel_size); + if (drop_rule & 
DROP_SECONDARY) + iwork2 = (int *)intMalloc(n); + else + iwork2 = NULL; + + nnzAj = 0; + nnzLj = 0; + nnzUj = 0; + last_drop = SUPERLU_MAX(min_mn - 2 * sp_ienv(3), (int)(min_mn * 0.95)); + + /* Identify relaxed snodes */ + relax_end = (int *) intMalloc(n); + relax_fsupc = (int *) intMalloc(n); + if ( options->SymmetricMode == YES ) + ilu_heap_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + else + ilu_relax_snode(n, etree, relax, marker, relax_end, relax_fsupc); + + ifill (perm_r, m, EMPTY); + ifill (marker, m * NO_MARKER, EMPTY); + supno[0] = -1; + xsup[0] = xlsub[0] = xusub[0] = xlusup[0] = 0; + w_def = panel_size; + + /* Mark the rows used by relaxed supernodes */ + ifill (marker_relax, m, EMPTY); + i = mark_relax(m, relax_end, relax_fsupc, xa_begin, xa_end, + asub, marker_relax); +#if ( PRNTlevel >= 1) + printf("%d relaxed supernodes.\n", i); +#endif + + /* + * Work on one "panel" at a time. A panel is one of the following: + * (a) a relaxed supernode at the bottom of the etree, or + * (b) panel_size contiguous columns, defined by the user + */ + for (jcol = 0; jcol < min_mn; ) { + + if ( relax_end[jcol] != EMPTY ) { /* start of a relaxed snode */ + kcol = relax_end[jcol]; /* end of the relaxed snode */ + panel_histo[kcol-jcol+1]++; + + /* Drop small rows in the previous supernode. */ + if (jcol > 0 && jcol < last_drop) { + int first = xsup[supno[jcol - 1]]; + int last = jcol - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / min_mn); + + /* Drop small rows */ + dtempv = (double *) tempv; + i = ilu_zdrop_row(options, first, last, tol_L, quota, &nnzLj, + &fill_tol, &Glu, dtempv, iwork2, 0); + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } + + /* -------------------------------------- + * Factorize the relaxed supernode(jcol:kcol) + * -------------------------------------- */ + /* Determine the union of the row structure of the snode */ + if ( (*info = ilu_zsnode_dfs(jcol, kcol, asub, xa_begin, xa_end, + marker, &Glu)) != 0 ) + return; + + nextu = xusub[jcol]; + nextlu = xlusup[jcol]; + jsupno = supno[jcol]; + fsupc = xsup[jsupno]; + new_next = nextlu + (xlsub[fsupc+1]-xlsub[fsupc])*(kcol-jcol+1); + nzlumax = Glu.nzlumax; + while ( new_next > nzlumax ) { + if ((*info = zLUMemXpand(jcol, nextlu, LUSUP, &nzlumax, &Glu))) + return; + } + + for (icol = jcol; icol <= kcol; icol++) { + xusub[icol+1] = nextu; + + amax[0] = 0.0; + /* Scatter into SPA dense[*] */ + for (k = xa_begin[icol]; k < xa_end[icol]; k++) { + register double tmp = z_abs1 (&a[k]); + if (tmp > amax[0]) amax[0] = tmp; + dense[asub[k]] = a[k]; + } + nnzAj += xa_end[icol] - xa_begin[icol]; + if (amax[0] == 0.0) { + amax[0] = fill_ini; +#if ( PRNTlevel >= 1) + printf("Column %d is entirely zero!\n", icol); + fflush(stdout); +#endif + } + + /* Numeric update within the snode */ + 
zsnode_bmod(icol, jsupno, fsupc, dense, tempv, &Glu, stat); + + if (usepr) pivrow = iperm_r[icol]; + fill_tol = pow(fill_ini, 1.0 - (double)icol / (double)min_mn); + if ( (*info = ilu_zpivotL(icol, diag_pivot_thresh, &usepr, + perm_r, iperm_c[icol], swap, iswap, + marker_relax, &pivrow, + amax[0] * fill_tol, milu, zero, + &Glu, stat)) ) { + iinfo++; + marker[pivrow] = kcol; + } + + } + + jcol = kcol + 1; + + } else { /* Work on one panel of panel_size columns */ + + /* Adjust panel_size so that a panel won't overlap with the next + * relaxed snode. + */ + panel_size = w_def; + for (k = jcol + 1; k < SUPERLU_MIN(jcol+panel_size, min_mn); k++) + if ( relax_end[k] != EMPTY ) { + panel_size = k - jcol; + break; + } + if ( k == min_mn ) panel_size = min_mn - jcol; + panel_histo[panel_size]++; + + /* symbolic factor on a panel of columns */ + ilu_zpanel_dfs(m, panel_size, jcol, A, perm_r, &nseg1, + dense, amax, panel_lsub, segrep, repfnz, + marker, parent, xplore, &Glu); + + /* numeric sup-panel updates in topological order */ + zpanel_bmod(m, panel_size, jcol, nseg1, dense, + tempv, segrep, repfnz, &Glu, stat); + + /* Sparse LU within the panel, and below panel diagonal */ + for (jj = jcol; jj < jcol + panel_size; jj++) { + + k = (jj - jcol) * m; /* column index for w-wide arrays */ + + nseg = nseg1; /* Begin after all the panel segments */ + + nnzAj += xa_end[jj] - xa_begin[jj]; + + if ((*info = ilu_zcolumn_dfs(m, jj, perm_r, &nseg, + &panel_lsub[k], segrep, &repfnz[k], + marker, parent, xplore, &Glu))) + return; + + /* Numeric updates */ + if ((*info = zcolumn_bmod(jj, (nseg - nseg1), &dense[k], + tempv, &segrep[nseg1], &repfnz[k], + jcol, &Glu, stat)) != 0) return; + + /* Make a fill-in position if the column is entirely zero */ + if (xlsub[jj + 1] == xlsub[jj]) { + register int i, row; + int nextl; + int nzlmax = Glu.nzlmax; + int *lsub = Glu.lsub; + int *marker2 = marker + 2 * m; + + /* Allocate memory */ + nextl = xlsub[jj] + 1; + if (nextl >= nzlmax) { + int error = zLUMemXpand(jj, nextl, LSUB, &nzlmax, &Glu); + if (error) { *info = error; return; } + lsub = Glu.lsub; + } + xlsub[jj + 1]++; + assert(xlusup[jj]==xlusup[jj+1]); + xlusup[jj + 1]++; + Glu.lusup[xlusup[jj]] = zero; + + /* Choose a row index (pivrow) for fill-in */ + for (i = jj; i < n; i++) + if (marker_relax[swap[i]] <= jj) break; + row = swap[i]; + marker2[row] = jj; + lsub[xlsub[jj]] = row; +#ifdef DEBUG + printf("Fill col %d.\n", jj); + fflush(stdout); +#endif + } + + /* Computer the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * jj / m; + else if (drop_rule & DROP_COLUMN) + quota = gamma * (xa_end[jj] - xa_begin[jj]) * + (jj + 1) / m; + else if (drop_rule & DROP_AREA) + quota = gamma * 0.9 * nnzAj * 0.5 - nnzUj; + else + quota = m; + + /* Copy the U-segments to ucol[*] and drop small entries */ + if ((*info = ilu_zcopy_to_ucol(jj, nseg, segrep, &repfnz[k], + perm_r, &dense[k], drop_rule, + milu, amax[jj - jcol] * tol_U, + quota, &drop_sum, &nnzUj, &Glu, + iwork2)) != 0) + return; + + /* Reset the dropping threshold if required */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * 0.9 * nnzAj * 0.5 < nnzLj) + tol_U = SUPERLU_MIN(1.0, tol_U * 2.0); + else + tol_U = SUPERLU_MAX(drop_tol, tol_U * 0.5); + } + + zd_mult(&drop_sum, &drop_sum, MILU_ALPHA); + if (usepr) pivrow = iperm_r[jj]; + fill_tol = pow(fill_ini, 1.0 - (double)jj / (double)min_mn); + if ( (*info = ilu_zpivotL(jj, diag_pivot_thresh, &usepr, perm_r, + iperm_c[jj], swap, iswap, + marker_relax, &pivrow, + amax[jj - jcol] * fill_tol, 
milu, + drop_sum, &Glu, stat)) ) { + iinfo++; + marker[m + pivrow] = jj; + marker[2 * m + pivrow] = jj; + } + + /* Reset repfnz[] for this column */ + resetrep_col (nseg, segrep, &repfnz[k]); + + /* Start a new supernode, drop the previous one */ + if (jj > 0 && supno[jj] > supno[jj - 1] && jj < last_drop) { + int first = xsup[supno[jj - 1]]; + int last = jj - 1; + int quota; + + /* Compute the quota */ + if (drop_rule & DROP_PROWS) + quota = gamma * Astore->nnz / m * (m - first) / m + * (last - first + 1); + else if (drop_rule & DROP_COLUMN) { + int i; + quota = 0; + for (i = first; i <= last; i++) + quota += xa_end[i] - xa_begin[i]; + quota = gamma * quota * (m - first) / m; + } else if (drop_rule & DROP_AREA) + quota = gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) + / m) - nnzLj; + else + quota = m * n; + fill_tol = pow(fill_ini, 1.0 - 0.5 * (first + last) / + (double)min_mn); + + /* Drop small rows */ + dtempv = (double *) tempv; + i = ilu_zdrop_row(options, first, last, tol_L, quota, + &nnzLj, &fill_tol, &Glu, dtempv, iwork2, + 1); + + /* Reset the parameters */ + if (drop_rule & DROP_DYNAMIC) { + if (gamma * nnzAj * (1.0 - 0.5 * (last + 1.0) / m) + < nnzLj) + tol_L = SUPERLU_MIN(1.0, tol_L * 2.0); + else + tol_L = SUPERLU_MAX(drop_tol, tol_L * 0.5); + } + if (fill_tol < 0) iinfo -= (int)fill_tol; +#ifdef DEBUG + num_drop_L += i * (last - first + 1); +#endif + } /* if start a new supernode */ + + } /* for */ + + jcol += panel_size; /* Move to the next panel */ + + } /* else */ + + } /* for */ + + *info = iinfo; + + if ( m > n ) { + k = 0; + for (i = 0; i < m; ++i) + if ( perm_r[i] == EMPTY ) { + perm_r[i] = n + k; + ++k; + } + } + + ilu_countnz(min_mn, &nnzL, &nnzU, &Glu); + fixupL(min_mn, perm_r, &Glu); + + zLUWorkFree(iwork, zwork, &Glu); /* Free work space and compress storage */ + + if ( fact == SamePattern_SameRowPerm ) { + /* L and U structures may have changed due to possibly different + pivoting, even though the storage is available. + There could also be memory expansions, so the array locations + may have changed, */ + ((SCformat *)L->Store)->nnz = nnzL; + ((SCformat *)L->Store)->nsuper = Glu.supno[n]; + ((SCformat *)L->Store)->nzval = Glu.lusup; + ((SCformat *)L->Store)->nzval_colptr = Glu.xlusup; + ((SCformat *)L->Store)->rowind = Glu.lsub; + ((SCformat *)L->Store)->rowind_colptr = Glu.xlsub; + ((NCformat *)U->Store)->nnz = nnzU; + ((NCformat *)U->Store)->nzval = Glu.ucol; + ((NCformat *)U->Store)->rowind = Glu.usub; + ((NCformat *)U->Store)->colptr = Glu.xusub; + } else { + zCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, + Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, + Glu.xsup, SLU_SC, SLU_Z, SLU_TRLU); + zCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, + Glu.usub, Glu.xusub, SLU_NC, SLU_Z, SLU_TRU); + } + + ops[FACT] += ops[TRSV] + ops[GEMV]; + + if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); + SUPERLU_FREE (iperm_c); + SUPERLU_FREE (relax_end); + SUPERLU_FREE (swap); + SUPERLU_FREE (iswap); + SUPERLU_FREE (relax_fsupc); + SUPERLU_FREE (amax); + if ( iwork2 ) SUPERLU_FREE (iwork2); + +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsrfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsrfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsrfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgsrfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,26 @@ -/* +/*! 
@file zgsrfs.c + * \brief Improves the computed solution to a system of linear equations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Modified from lapack routine ZGERFS
+ * 
*/ /* * File name: zgsrfs.c * History: Modified from lapack routine ZGERFS */ #include -#include "zsp_defs.h" +#include "slu_zdefs.h" -void -zgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, char *equed, double *R, double *C, - SuperMatrix *B, SuperMatrix *X, double *ferr, double *berr, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  *   Purpose   
  *   =======   
  *
@@ -123,7 +124,15 @@
  *
  *    ITMAX is the maximum number of steps of iterative refinement.   
  *
- */  
+ * 
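/*
 * Editorial sketch (not part of the upstream patch): the componentwise
 * backward-error guard that the hunk below changes.  res_i and denom_i
 * stand for |r_i| and (|A|*|x| + |b|)_i; safe1/safe2 are the underflow
 * guards derived from dlamch_("Safe minimum").  The helper name is
 * illustrative only.
 */
#include "slu_zdefs.h"

static double berr_term_sketch(double s, double res_i, double denom_i,
                               double safe1, double safe2)
{
    if (denom_i > safe2)
        s = SUPERLU_MAX(s, res_i / denom_i);
    else if (denom_i != 0.0)
        s = SUPERLU_MAX(s, (res_i + safe1) / denom_i);
    /* denom_i == 0.0 implies the true residual is exactly zero. */
    return s;
}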
+ */ +void +zgsrfs(trans_t trans, SuperMatrix *A, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, char *equed, double *R, double *C, + SuperMatrix *B, SuperMatrix *X, double *ferr, double *berr, + SuperLUStat_t *stat, int *info) +{ + #define ITMAX 5 @@ -224,6 +233,8 @@ nz = A->ncol + 1; eps = dlamch_("Epsilon"); safmin = dlamch_("Safe minimum"); + /* Set SAFE1 essentially to be the underflow threshold times the + number of additions in each row. */ safe1 = nz * safmin; safe2 = safe1 / eps; @@ -274,7 +285,7 @@ where abs(Z) is the componentwise absolute value of the matrix or vector Z. If the i-th component of the denominator is less than SAFE2, then SAFE1 is added to the i-th component of the - numerator and denominator before dividing. */ + numerator before dividing. */ for (i = 0; i < A->nrow; ++i) rwork[i] = z_abs1( &Bptr[i] ); @@ -297,11 +308,13 @@ } s = 0.; for (i = 0; i < A->nrow; ++i) { - if (rwork[i] > safe2) + if (rwork[i] > safe2) { s = SUPERLU_MAX( s, z_abs1(&work[i]) / rwork[i] ); - else - s = SUPERLU_MAX( s, (z_abs1(&work[i]) + safe1) / - (rwork[i] + safe1) ); + } else if ( rwork[i] != 0.0 ) { + s = SUPERLU_MAX( s, (z_abs1(&work[i]) + safe1) / rwork[i] ); + } + /* If rwork[i] is exactly 0.0, then we know the true + residual also must be exactly 0.0. */ } berr[j] = s; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssv.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssv.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssv.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssv.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,20 +1,19 @@ - -/* +/*! @file zgssv.c + * \brief Solves the system of linear equations A*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "zsp_defs.h" +#include "slu_zdefs.h" -void -zgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, - SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -127,15 +126,21 @@
  *                so the solution could not be computed.
  *             > A->ncol: number of bytes allocated when memory allocation
  *                failure occurred, plus A->ncol.
- *   
+ * 
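/*
 * Editorial sketch (not part of the upstream patch): minimal use of the
 * simple driver zgssv after this change.  nzval/rowind/colptr describe A
 * in compressed-column form and rhs holds B; the helper name and argument
 * list are illustrative only.
 */
#include "slu_zdefs.h"

static void zgssv_sketch(int m, int n, int nnz,
                         doublecomplex *nzval, int *rowind, int *colptr,
                         doublecomplex *rhs, int nrhs)
{
    SuperMatrix A, B, L, U;
    superlu_options_t options;
    SuperLUStat_t stat;
    int *perm_c = intMalloc(n);
    int *perm_r = intMalloc(m);
    int info;

    zCreate_CompCol_Matrix(&A, m, n, nnz, nzval, rowind, colptr,
                           SLU_NC, SLU_Z, SLU_GE);
    zCreate_Dense_Matrix(&B, m, nrhs, rhs, m, SLU_DN, SLU_Z, SLU_GE);

    set_default_options(&options);
    StatInit(&stat);

    zgssv(&options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info);
    /* On success (info == 0) the solution overwrites the data in rhs. */

    StatFree(&stat);
    Destroy_SuperMatrix_Store(&A);   /* user still owns nzval/rowind/colptr */
    Destroy_SuperMatrix_Store(&B);
    Destroy_SuperNode_Matrix(&L);
    Destroy_CompCol_Matrix(&U);
    SUPERLU_FREE(perm_c);
    SUPERLU_FREE(perm_r);
}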
*/ + +void +zgssv(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + SuperMatrix *L, SuperMatrix *U, SuperMatrix *B, + SuperLUStat_t *stat, int *info ) +{ + DNformat *Bstore; SuperMatrix *AA;/* A in SLU_NC format used by the factorization routine.*/ SuperMatrix AC; /* Matrix postmultiplied by Pc */ int lwork = 0, *etree, i; /* Set default values for some parameters */ - double drop_tol = 0.; int panel_size; /* panel size */ int relax; /* no of columns in a relaxed snodes */ int permc_spec; @@ -201,8 +206,8 @@ relax, panel_size, sp_ienv(3), sp_ienv(4));*/ t = SuperLU_timer_(); /* Compute the LU factorization of A. */ - zgstrf(options, &AC, drop_tol, relax, panel_size, - etree, NULL, lwork, perm_c, perm_r, L, U, stat, info); + zgstrf(options, &AC, relax, panel_size, etree, + NULL, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t; t = SuperLU_timer_(); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssvx.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssvx.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssvx.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgssvx.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,22 +1,19 @@ -/* +/*! @file zgssvx.c + * \brief Solves the system of linear equations A*X=B or A'*X=B + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ -#include "zsp_defs.h" +#include "slu_zdefs.h" -void -zgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, - int *etree, char *equed, double *R, double *C, - SuperMatrix *L, SuperMatrix *U, void *work, int lwork, - SuperMatrix *B, SuperMatrix *X, double *recip_pivot_growth, - double *rcond, double *ferr, double *berr, - mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -314,7 +311,7 @@
  *
  * stat   (output) SuperLUStat_t*
  *        Record the statistics on runtime and floating-point operation count.
- *        See util.h for the definition of 'SuperLUStat_t'.
+ *        See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info    (output) int*
  *         = 0: successful exit   
@@ -332,9 +329,19 @@
  *                    accurate than the value of RCOND would suggest.   
  *              > A->ncol+1: number of bytes allocated when memory allocation
  *                    failure occurred, plus A->ncol.
- *
+ * 
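/*
 * Editorial sketch (not part of the upstream patch): calling the expert
 * driver zgssvx with equilibration, condition estimation and error bounds.
 * A, B and X are SuperMatrix handles (SLU_NC / SLU_DN) already created by
 * the caller; the helper name and argument list are illustrative only.
 */
#include "slu_zdefs.h"

static void zgssvx_sketch(SuperMatrix *A, SuperMatrix *B, SuperMatrix *X,
                          int m, int n, int nrhs)
{
    SuperMatrix L, U;
    superlu_options_t options;
    SuperLUStat_t stat;
    mem_usage_t mem_usage;
    char equed[1];
    int *perm_c = intMalloc(n), *perm_r = intMalloc(m), *etree = intMalloc(n);
    double *R = (double *) SUPERLU_MALLOC(m * sizeof(double));
    double *C = (double *) SUPERLU_MALLOC(n * sizeof(double));
    double *ferr = (double *) SUPERLU_MALLOC(nrhs * sizeof(double));
    double *berr = (double *) SUPERLU_MALLOC(nrhs * sizeof(double));
    double rpg, rcond;
    int info;

    set_default_options(&options);
    options.Equil = YES;                /* scale A via zgsequ/zlaqgs */
    StatInit(&stat);

    zgssvx(&options, A, perm_c, perm_r, etree, equed, R, C,
           &L, &U, NULL, 0,             /* work = NULL, lwork = 0 */
           B, X, &rpg, &rcond, ferr, berr, &mem_usage, &stat, &info);

    /* info == 0: X holds the solution; rcond, ferr and berr are valid. */
    StatFree(&stat);
    /* ... free perm_c/perm_r/etree/R/C/ferr/berr and destroy L, U ... */
}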
*/ +void +zgssvx(superlu_options_t *options, SuperMatrix *A, int *perm_c, int *perm_r, + int *etree, char *equed, double *R, double *C, + SuperMatrix *L, SuperMatrix *U, void *work, int lwork, + SuperMatrix *B, SuperMatrix *X, double *recip_pivot_growth, + double *rcond, double *ferr, double *berr, + mem_usage_t *mem_usage, SuperLUStat_t *stat, int *info ) +{ + + DNformat *Bstore, *Xstore; doublecomplex *Bmat, *Xmat; int ldb, ldx, nrhs; @@ -346,13 +353,12 @@ int i, j, info1; double amax, anorm, bignum, smlnum, colcnd, rowcnd, rcmax, rcmin; int relax, panel_size; - double diag_pivot_thresh, drop_tol; + double diag_pivot_thresh; double t0; /* temporary time */ double *utime; /* External functions */ extern double zlangs(char *, SuperMatrix *); - extern double dlamch_(char *); Bstore = B->Store; Xstore = X->Store; @@ -443,7 +449,6 @@ panel_size = sp_ienv(1); relax = sp_ienv(2); diag_pivot_thresh = options->DiagPivotThresh; - drop_tol = 0.0; utime = stat->utime; @@ -455,7 +460,7 @@ Astore->nzval, Astore->colind, Astore->rowptr, SLU_NC, A->Dtype, A->Mtype); if ( notran ) { /* Reverse the transpose argument. */ - trant = CONJ; + trant = TRANS; notran = 0; } else { trant = NOTRANS; @@ -523,8 +528,8 @@ /* Compute the LU factorization of A*Pc. */ t0 = SuperLU_timer_(); - zgstrf(options, &AC, drop_tol, relax, panel_size, - etree, work, lwork, perm_c, perm_r, L, U, stat, info); + zgstrf(options, &AC, relax, panel_size, etree, + work, lwork, perm_c, perm_r, L, U, stat, info); utime[FACT] = SuperLU_timer_() - t0; if ( lwork == -1 ) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrf.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrf.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrf.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrf.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,33 +1,32 @@ -/* +/*! @file zgstrf.c + * \brief Computes an LU factorization of a general sparse matrix + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
+ * 
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "zsp_defs.h" -void -zgstrf (superlu_options_t *options, SuperMatrix *A, double drop_tol, - int relax, int panel_size, int *etree, void *work, int lwork, - int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, - SuperLUStat_t *stat, int *info) -{ -/* +#include "slu_zdefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -53,11 +52,6 @@
  *          (A->nrow, A->ncol). The type of A can be:
  *          Stype = SLU_NCP; Dtype = SLU_Z; Mtype = SLU_GE.
  *
- * drop_tol (input) double (NOT IMPLEMENTED)
- *	    Drop tolerance parameter. At step j of the Gaussian elimination,
- *          if abs(A_ij)/(max_i abs(A_ij)) < drop_tol, drop entry A_ij.
- *          0 <= drop_tol <= 1. The default value of drop_tol is 0.
- *
  * relax    (input) int
  *          To control degree of relaxing supernodes. If the number
  *          of nodes (columns) in a subtree of the elimination tree is less
@@ -117,7 +111,7 @@
  *
  * stat     (output) SuperLUStat_t*
  *          Record the statistics on runtime and floating-point operation count.
- *          See util.h for the definition of 'SuperLUStat_t'.
+ *          See slu_util.h for the definition of 'SuperLUStat_t'.
  *
  * info     (output) int*
  *          = 0: successful exit
@@ -177,13 +171,20 @@
  *	    	   NOTE: there are W of them.
  *
  *   tempv[0:*]: real temporary used for dense numeric kernels;
- *	The size of this array is defined by NUM_TEMPV() in zsp_defs.h.
- *
+ *	The size of this array is defined by NUM_TEMPV() in slu_zdefs.h.
+ * 
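/*
 * Editorial sketch (not part of the upstream patch): the computational
 * sequence after this change -- zgstrf no longer takes the unimplemented
 * drop_tol argument.  The helper name and arguments are illustrative only;
 * A is an SLU_NC SuperMatrix, B an SLU_DN right-hand side.
 */
#include "slu_zdefs.h"

static void factor_solve_sketch(superlu_options_t *options, SuperMatrix *A,
                                int *perm_c, int *perm_r, int *etree,
                                SuperMatrix *L, SuperMatrix *U, SuperMatrix *B,
                                SuperLUStat_t *stat, int *info)
{
    SuperMatrix AC;
    int panel_size = sp_ienv(1), relax = sp_ienv(2);

    get_perm_c(options->ColPerm, A, perm_c);
    sp_preorder(options, A, perm_c, etree, &AC);

    /* Old call: zgstrf(options, &AC, drop_tol, relax, panel_size, ...) */
    zgstrf(options, &AC, relax, panel_size, etree,
           NULL, 0, perm_c, perm_r, L, U, stat, info);

    if (*info == 0)
        zgstrs(NOTRANS, L, U, perm_c, perm_r, B, stat, info);

    Destroy_CompCol_Permuted(&AC);
}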
*/ + +void +zgstrf (superlu_options_t *options, SuperMatrix *A, + int relax, int panel_size, int *etree, void *work, int lwork, + int *perm_c, int *perm_r, SuperMatrix *L, SuperMatrix *U, + SuperLUStat_t *stat, int *info) +{ /* Local working arrays */ NCPformat *Astore; - int *iperm_r; /* inverse of perm_r; - used when options->Fact == SamePattern_SameRowPerm */ + int *iperm_r = NULL; /* inverse of perm_r; used when + options->Fact == SamePattern_SameRowPerm */ int *iperm_c; /* inverse of perm_c */ int *iwork; doublecomplex *zwork; @@ -199,7 +200,8 @@ int *xsup, *supno; int *xlsub, *xlusup, *xusub; int nzlumax; - static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ + double fill_ratio = sp_ienv(6); /* estimated fill ratio */ + static GlobalLU_t Glu; /* persistent to facilitate multiple factors. */ /* Local scalars */ fact_t fact = options->Fact; @@ -230,7 +232,7 @@ /* Allocate storage common to the factor routines */ *info = zLUMemInit(fact, work, lwork, m, n, Astore->nnz, - panel_size, L, U, &Glu, &iwork, &zwork); + panel_size, fill_ratio, L, U, &Glu, &iwork, &zwork); if ( *info ) return; xsup = Glu.xsup; @@ -417,7 +419,7 @@ ((NCformat *)U->Store)->rowind = Glu.usub; ((NCformat *)U->Store)->colptr = Glu.xusub; } else { - zCreate_SuperNode_Matrix(L, A->nrow, A->ncol, nnzL, Glu.lusup, + zCreate_SuperNode_Matrix(L, A->nrow, min_mn, nnzL, Glu.lusup, Glu.xlusup, Glu.lsub, Glu.xlsub, Glu.supno, Glu.xsup, SLU_SC, SLU_Z, SLU_TRLU); zCreate_CompCol_Matrix(U, min_mn, min_mn, nnzU, Glu.ucol, @@ -425,6 +427,7 @@ } ops[FACT] += ops[TRSV] + ops[GEMV]; + stat->expansions = --(Glu.num_expansions); if ( iperm_r_allocated ) SUPERLU_FREE (iperm_r); SUPERLU_FREE (iperm_c); diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zgstrs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,25 +1,27 @@ -/* +/*! @file zgstrs.c + * \brief Solves a system using LU factorization + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "zsp_defs.h" +#include "slu_zdefs.h" /* @@ -29,13 +31,9 @@ void zlsolve(int, int, doublecomplex*, doublecomplex*); void zmatvec(int, int, int, doublecomplex*, doublecomplex*, doublecomplex*); - -void -zgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, - int *perm_c, int *perm_r, SuperMatrix *B, - SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -85,8 +83,15 @@
  * info    (output) int*
  * 	   = 0: successful exit
  *	   < 0: if info = -i, the i-th argument had an illegal value
- *
+ * 
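/*
 * Editorial sketch (not part of the upstream patch): the trans_t argument
 * selects which system the existing L/U factors solve -- the distinction
 * this hunk restores ("T" vs "C" triangular solves).  The helper name is
 * illustrative only.
 */
#include "slu_zdefs.h"

static void zgstrs_sketch(trans_t trans, SuperMatrix *L, SuperMatrix *U,
                          int *perm_c, int *perm_r, SuperMatrix *B,
                          SuperLUStat_t *stat, int *info)
{
    /* trans = NOTRANS : A * X = B
     * trans = TRANS   : A' * X = B        (plain transpose, "T" solves)
     * trans = CONJ    : conj(A') * X = B  (conjugate transpose, "C" solves) */
    zgstrs(trans, L, U, perm_c, perm_r, B, stat, info);
}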
*/ + +void +zgstrs (trans_t trans, SuperMatrix *L, SuperMatrix *U, + int *perm_c, int *perm_r, SuperMatrix *B, + SuperLUStat_t *stat, int *info) +{ + #ifdef _CRAY _fcd ftcs1, ftcs2, ftcs3, ftcs4; #endif @@ -293,7 +298,7 @@ stat->ops[SOLVE] = solve_ops; - } else { /* Solve A'*X=B */ + } else { /* Solve A'*X=B or CONJ(A)*X=B */ /* Permute right hand sides to form Pc'*B. */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; @@ -302,30 +307,23 @@ } stat->ops[SOLVE] = 0; - if (trans == TRANS) { - - for (k = 0; k < nrhs; ++k) { - - /* Multiply by inv(U'). */ - sp_ztrsv("U", "T", "N", L, U, &Bmat[k*ldb], stat, info); - - /* Multiply by inv(L'). */ - sp_ztrsv("L", "T", "U", L, U, &Bmat[k*ldb], stat, info); - - } - } - else { - for (k = 0; k < nrhs; ++k) { - /* Multiply by inv(U'). */ + for (k = 0; k < nrhs; ++k) { + /* Multiply by inv(U'). */ + sp_ztrsv("U", "T", "N", L, U, &Bmat[k*ldb], stat, info); + + /* Multiply by inv(L'). */ + sp_ztrsv("L", "T", "U", L, U, &Bmat[k*ldb], stat, info); + } + } else { /* trans == CONJ */ + for (k = 0; k < nrhs; ++k) { + /* Multiply by conj(inv(U')). */ sp_ztrsv("U", "C", "N", L, U, &Bmat[k*ldb], stat, info); - /* Multiply by inv(L'). */ + /* Multiply by conj(inv(L')). */ sp_ztrsv("L", "C", "U", L, U, &Bmat[k*ldb], stat, info); - - } - } - + } + } /* Compute the final solution X := Pr'*X (=inv(Pr)*X) */ for (i = 0; i < nrhs; i++) { rhs_work = &Bmat[i*ldb]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlacon.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlacon.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlacon.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlacon.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,67 +1,74 @@ - -/* +/*! @file zlacon.c + * \brief Estimates the 1-norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ #include -#include "Cnames.h" -#include "dcomplex.h" +#include "slu_Cnames.h" +#include "slu_dcomplex.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   ZLACON estimates the 1-norm of a square matrix A.   
+ *   Reverse communication is used for evaluating matrix-vector products. 
+ * 
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   N      (input) INT
+ *          The order of the matrix.  N >= 1.   
+ *
+ *   V      (workspace) DOUBLE COMPLEX PRECISION array, dimension (N)   
+ *          On the final return, V = A*W,  where  EST = norm(V)/norm(W)   
+ *          (W is not returned).   
+ *
+ *   X      (input/output) DOUBLE COMPLEX PRECISION array, dimension (N)   
+ *          On an intermediate return, X should be overwritten by   
+ *                A * X,   if KASE=1,   
+ *                A' * X,  if KASE=2,
+ *          where A' is the conjugate transpose of A,
+ *         and ZLACON must be re-called with all the other parameters   
+ *          unchanged.   
+ *
+ *
+ *   EST    (output) DOUBLE PRECISION   
+ *          An estimate (a lower bound) for norm(A).   
+ *
+ *   KASE   (input/output) INT
+ *          On the initial call to ZLACON, KASE should be 0.   
+ *          On an intermediate return, KASE will be 1 or 2, indicating   
+ *          whether X should be overwritten by A * X  or A' * X.   
+ *          On the final return from ZLACON, KASE will again be 0.   
+ *
+ *   Further Details   
+ *   ======= =======   
+ *
+ *   Contributed by Nick Higham, University of Manchester.   
+ *   Originally named CONEST, dated March 16, 1988.   
+ *
+ *   Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of 
+ *   a real or complex matrix, with applications to condition estimation", 
+ *   ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988.   
+ *   ===================================================================== 
+ * 
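/*
 * Editorial sketch (not part of the upstream patch): the reverse-
 * communication loop described above.  apply_A and apply_AH are
 * placeholders for user code that overwrites x with A*x resp. A'*x
 * (conjugate transpose); they are not SuperLU routines.
 */
#include "slu_zdefs.h"

extern void apply_A (int n, doublecomplex *x);   /* x := A  * x (user code) */
extern void apply_AH(int n, doublecomplex *x);   /* x := A' * x (user code) */

static double norm_estimate_sketch(int n, doublecomplex *x, doublecomplex *v)
{
    double est = 0.0;
    int kase = 0;

    do {
        zlacon_(&n, v, x, &est, &kase);
        if (kase == 1)
            apply_A(n, x);
        else if (kase == 2)
            apply_AH(n, x);
    } while (kase != 0);

    return est;    /* a lower bound on norm1(A) */
}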
+ */ int zlacon_(int *n, doublecomplex *v, doublecomplex *x, double *est, int *kase) { -/* - Purpose - ======= - - ZLACON estimates the 1-norm of a square matrix A. - Reverse communication is used for evaluating matrix-vector products. - - - Arguments - ========= - - N (input) INT - The order of the matrix. N >= 1. - - V (workspace) DOUBLE COMPLEX PRECISION array, dimension (N) - On the final return, V = A*W, where EST = norm(V)/norm(W) - (W is not returned). - - X (input/output) DOUBLE COMPLEX PRECISION array, dimension (N) - On an intermediate return, X should be overwritten by - A * X, if KASE=1, - A' * X, if KASE=2, - where A' is the conjugate transpose of A, - and ZLACON must be re-called with all the other parameters - unchanged. - - - EST (output) DOUBLE PRECISION - An estimate (a lower bound) for norm(A). - - KASE (input/output) INT - On the initial call to ZLACON, KASE should be 0. - On an intermediate return, KASE will be 1 or 2, indicating - whether X should be overwritten by A * X or A' * X. - On the final return from ZLACON, KASE will again be 0. - - Further Details - ======= ======= - - Contributed by Nick Higham, University of Manchester. - Originally named CONEST, dated March 16, 1988. - - Reference: N.J. Higham, "FORTRAN codes for estimating the one-norm of - a real or complex matrix, with applications to condition estimation", - ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988. - ===================================================================== -*/ + /* Table of constant values */ int c__1 = 1; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlangs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlangs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlangs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlangs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,58 +1,65 @@ - -/* +/*! @file zlangs.c + * \brief Returns the value of the one norm + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Modified from lapack routine ZLANGE 
+ * 
*/ /* * File name: zlangs.c * History: Modified from lapack routine ZLANGE */ #include -#include "zsp_defs.h" -#include "util.h" +#include "slu_zdefs.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   ZLANGS returns the value of the one norm, or the Frobenius norm, or 
+ *   the infinity norm, or the element of largest absolute value of a 
+ *   complex matrix A.   
+ *
+ *   Description   
+ *   ===========   
+ *
+ *   ZLANGE returns the value   
+ *
+ *      ZLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm'   
+ *               (   
+ *               ( norm1(A),         NORM = '1', 'O' or 'o'   
+ *               (   
+ *               ( normI(A),         NORM = 'I' or 'i'   
+ *               (   
+ *               ( normF(A),         NORM = 'F', 'f', 'E' or 'e'   
+ *
+ *   where  norm1  denotes the  one norm of a matrix (maximum column sum), 
+ *   normI  denotes the  infinity norm  of a matrix  (maximum row sum) and 
+ *   normF  denotes the  Frobenius norm of a matrix (square root of sum of 
+ *   squares).  Note that  max(abs(A(i,j)))  is not a  matrix norm.   
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   NORM    (input) CHARACTER*1   
+ *           Specifies the value to be returned in ZLANGE as described above.   
+ *   A       (input) SuperMatrix*
+ *           The M by N sparse matrix A. 
+ *
+ *  =====================================================================
+ * 
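/*
 * Editorial sketch (not part of the upstream patch): zlangs as the drivers
 * in this diff use it to feed zgscon -- "1" for norm1(A), "I" for normI(A).
 * The helper name is illustrative only.
 */
#include "slu_zdefs.h"

static double one_norm_sketch(SuperMatrix *A)
{
    char norm[1];
    *(unsigned char *)norm = '1';       /* same idiom as in zgsisx above */
    return zlangs(norm, A);
}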
+ */ double zlangs(char *norm, SuperMatrix *A) { -/* - Purpose - ======= - - ZLANGS returns the value of the one norm, or the Frobenius norm, or - the infinity norm, or the element of largest absolute value of a - real matrix A. - - Description - =========== - - ZLANGE returns the value - - ZLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm' - ( - ( norm1(A), NORM = '1', 'O' or 'o' - ( - ( normI(A), NORM = 'I' or 'i' - ( - ( normF(A), NORM = 'F', 'f', 'E' or 'e' - - where norm1 denotes the one norm of a matrix (maximum column sum), - normI denotes the infinity norm of a matrix (maximum row sum) and - normF denotes the Frobenius norm of a matrix (square root of sum of - squares). Note that max(abs(A(i,j))) is not a matrix norm. - - Arguments - ========= - - NORM (input) CHARACTER*1 - Specifies the value to be returned in ZLANGE as described above. - A (input) SuperMatrix* - The M by N sparse matrix A. - - ===================================================================== -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlaqgs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlaqgs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlaqgs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zlaqgs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,80 +1,88 @@ - -/* +/*! @file zlaqgs.c + * \brief Equlibrates a general sprase matrix + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
+ * Modified from LAPACK routine ZLAQGE
+ * 
*/ /* * File name: zlaqgs.c * History: Modified from LAPACK routine ZLAQGE */ #include -#include "zsp_defs.h" -#include "util.h" +#include "slu_zdefs.h" + +/*! \brief + * + *
+ *   Purpose   
+ *   =======   
+ *
+ *   ZLAQGS equilibrates a general sparse M by N matrix A using the row and   
+ *   column scaling factors in the vectors R and C.   
+ *
+ *   See supermatrix.h for the definition of 'SuperMatrix' structure.
+ *
+ *   Arguments   
+ *   =========   
+ *
+ *   A       (input/output) SuperMatrix*
+ *           On exit, the equilibrated matrix.  See EQUED for the form of 
+ *           the equilibrated matrix. The type of A can be:
+ *	    Stype = SLU_NC; Dtype = SLU_Z; Mtype = SLU_GE.
+ *	    
+ *   R       (input) double*, dimension (A->nrow)
+ *           The row scale factors for A.
+ *	    
+ *   C       (input) double*, dimension (A->ncol)
+ *           The column scale factors for A.
+ *	    
+ *   ROWCND  (input) double
+ *           Ratio of the smallest R(i) to the largest R(i).
+ *	    
+ *   COLCND  (input) double
+ *           Ratio of the smallest C(i) to the largest C(i).
+ *	    
+ *   AMAX    (input) double
+ *           Absolute value of largest matrix entry.
+ *	    
+ *   EQUED   (output) char*
+ *           Specifies the form of equilibration that was done.   
+ *           = 'N':  No equilibration   
+ *           = 'R':  Row equilibration, i.e., A has been premultiplied by  
+ *                   diag(R).   
+ *           = 'C':  Column equilibration, i.e., A has been postmultiplied  
+ *                   by diag(C).   
+ *           = 'B':  Both row and column equilibration, i.e., A has been
+ *                   replaced by diag(R) * A * diag(C).   
+ *
+ *   Internal Parameters   
+ *   ===================   
+ *
+ *   THRESH is a threshold value used to decide if row or column scaling   
+ *   should be done based on the ratio of the row or column scaling   
+ *   factors.  If ROWCND < THRESH, row scaling is done, and if   
+ *   COLCND < THRESH, column scaling is done.   
+ *
+ *   LARGE and SMALL are threshold values used to decide if row scaling   
+ *   should be done based on the absolute size of the largest matrix   
+ *   element.  If AMAX > LARGE or AMAX < SMALL, row scaling is done.   
+ *
+ *   ===================================================================== 
+ * 
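/*
 * Editorial sketch (not part of the upstream patch): the equilibration
 * sequence used by the drivers in this diff -- zgsequ computes the scale
 * factors, zlaqgs applies them and reports what was done in equed.
 * R and C must hold A->nrow resp. A->ncol doubles; the helper name is
 * illustrative only.
 */
#include "slu_zdefs.h"

static void equilibrate_sketch(SuperMatrix *A, double *R, double *C,
                               char *equed)
{
    double rowcnd, colcnd, amax;
    int info;

    zgsequ(A, R, C, &rowcnd, &colcnd, &amax, &info);
    if (info == 0)
        zlaqgs(A, R, C, rowcnd, colcnd, amax, equed);   /* sets equed */
    else
        *equed = 'N';   /* zgsequ found an exactly zero row or column */
}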
+ */ void zlaqgs(SuperMatrix *A, double *r, double *c, double rowcnd, double colcnd, double amax, char *equed) { -/* - Purpose - ======= - - ZLAQGS equilibrates a general sparse M by N matrix A using the row and - scaling factors in the vectors R and C. - - See supermatrix.h for the definition of 'SuperMatrix' structure. - - Arguments - ========= - - A (input/output) SuperMatrix* - On exit, the equilibrated matrix. See EQUED for the form of - the equilibrated matrix. The type of A can be: - Stype = NC; Dtype = SLU_Z; Mtype = GE. - - R (input) double*, dimension (A->nrow) - The row scale factors for A. - - C (input) double*, dimension (A->ncol) - The column scale factors for A. - - ROWCND (input) double - Ratio of the smallest R(i) to the largest R(i). - - COLCND (input) double - Ratio of the smallest C(i) to the largest C(i). - - AMAX (input) double - Absolute value of largest matrix entry. - - EQUED (output) char* - Specifies the form of equilibration that was done. - = 'N': No equilibration - = 'R': Row equilibration, i.e., A has been premultiplied by - diag(R). - = 'C': Column equilibration, i.e., A has been postmultiplied - by diag(C). - = 'B': Both row and column equilibration, i.e., A has been - replaced by diag(R) * A * diag(C). - - Internal Parameters - =================== - - THRESH is a threshold value used to decide if row or column scaling - should be done based on the ratio of the row or column scaling - factors. If ROWCND < THRESH, row scaling is done, and if - COLCND < THRESH, column scaling is done. - - LARGE and SMALL are threshold values used to decide if row scaling - should be done based on the absolute size of the largest matrix - element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. - ===================================================================== -*/ #define THRESH (0.1) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zldperm.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zldperm.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zldperm.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zldperm.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,168 @@ + +/*! @file + * \brief Finds a row permutation so that the matrix has large entries on the diagonal + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ */ + +#include "slu_zdefs.h" + +extern void mc64id_(int_t*); +extern void mc64ad_(int_t*, int_t*, int_t*, int_t [], int_t [], double [], + int_t*, int_t [], int_t*, int_t[], int_t*, double [], + int_t [], int_t []); + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *   ZLDPERM finds a row permutation so that the matrix has large
+ *   entries on the diagonal.
+ *
+ * Arguments
+ * =========
+ *
+ * job    (input) int
+ *        Control the action. Possible values for JOB are:
+ *        = 1 : Compute a row permutation of the matrix so that the
+ *              permuted matrix has as many entries on its diagonal as
+ *              possible. The values on the diagonal are of arbitrary size.
+ *              HSL subroutine MC21A/AD is used for this.
+ *        = 2 : Compute a row permutation of the matrix so that the smallest 
+ *              value on the diagonal of the permuted matrix is maximized.
+ *        = 3 : Compute a row permutation of the matrix so that the smallest
+ *              value on the diagonal of the permuted matrix is maximized.
+ *              The algorithm differs from the one used for JOB = 2 and may
+ *              have quite a different performance.
+ *        = 4 : Compute a row permutation of the matrix so that the sum
+ *              of the diagonal entries of the permuted matrix is maximized.
+ *        = 5 : Compute a row permutation of the matrix so that the product
+ *              of the diagonal entries of the permuted matrix is maximized
+ *              and vectors to scale the matrix so that the nonzero diagonal 
+ *              entries of the permuted matrix are one in absolute value and 
+ *              all the off-diagonal entries are less than or equal to one in 
+ *              absolute value.
+ *        Restriction: 1 <= JOB <= 5.
+ *
+ * n      (input) int
+ *        The order of the matrix.
+ *
+ * nnz    (input) int
+ *        The number of nonzeros in the matrix.
+ *
+ * adjncy (input) int*, of size nnz
+ *        The adjacency structure of the matrix, which contains the row
+ *        indices of the nonzeros.
+ *
+ * colptr (input) int*, of size n+1
+ *        The pointers to the beginning of each column in ADJNCY.
+ *
+ * nzval  (input) doublecomplex*, of size nnz
+ *        The nonzero values of the matrix. nzval[k] is the value of
+ *        the entry corresponding to adjncy[k].
+ *        It is not used if job = 1.
+ *
+ * perm   (output) int*, of size n
+ *        The permutation vector. perm[i] = j means row i in the
+ *        original matrix is in row j of the permuted matrix.
+ *
+ * u      (output) double*, of size n
+ *        If job = 5, the natural logarithms of the row scaling factors. 
+ *
+ * v      (output) double*, of size n
+ *        If job = 5, the natural logarithms of the column scaling factors. 
+ *        The scaled matrix B has entries b_ij = a_ij * exp(u_i + v_j).
+ * 
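/*
 * Editorial sketch (not part of the upstream patch): how zgsisx in this
 * diff uses zldperm with job = 5.  The returned scale factors are natural
 * logarithms, so they are exponentiated before scaling, and perm is then
 * applied to the row indices.  The helper name is illustrative only.
 */
#include "slu_zdefs.h"
#include <math.h>

static int large_diag_sketch(int n, int nnz, int *colptr, int *rowind,
                             doublecomplex *nzval, int *perm,
                             double *R, double *C)
{
    int i, j, info;

    info = zldperm(5, n, nnz, colptr, rowind, nzval, perm, R, C);
    if (info > 0) return info;          /* MC64 failed (e.g. structurally
                                           singular); caller falls back */

    for (i = 0; i < n; i++) { R[i] = exp(R[i]); C[i] = exp(C[i]); }

    /* Permute and scale the matrix, as zgsisx does above. */
    for (j = 0; j < n; j++)
        for (i = colptr[j]; i < colptr[j + 1]; i++) {
            zd_mult(&nzval[i], &nzval[i], R[rowind[i]] * C[j]);
            rowind[i] = perm[rowind[i]];
        }
    return 0;
}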
+ */ + +int +zldperm(int_t job, int_t n, int_t nnz, int_t colptr[], int_t adjncy[], + doublecomplex nzval[], int_t *perm, double u[], double v[]) +{ + int_t i, liw, ldw, num; + int_t *iw, icntl[10], info[10]; + double *dw; + double *nzval_d = (double *) SUPERLU_MALLOC(nnz * sizeof(double)); + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Enter zldperm()"); +#endif + liw = 5*n; + if ( job == 3 ) liw = 10*n + nnz; + if ( !(iw = intMalloc(liw)) ) ABORT("Malloc fails for iw[]"); + ldw = 3*n + nnz; + if ( !(dw = (double*) SUPERLU_MALLOC(ldw * sizeof(double))) ) + ABORT("Malloc fails for dw[]"); + + /* Increment one to get 1-based indexing. */ + for (i = 0; i <= n; ++i) ++colptr[i]; + for (i = 0; i < nnz; ++i) ++adjncy[i]; +#if ( DEBUGlevel>=2 ) + printf("LDPERM(): n %d, nnz %d\n", n, nnz); + slu_PrintInt10("colptr", n+1, colptr); + slu_PrintInt10("adjncy", nnz, adjncy); +#endif + + /* + * NOTE: + * ===== + * + * MC64AD assumes that column permutation vector is defined as: + * perm(i) = j means column i of permuted A is in column j of original A. + * + * Since a symmetric permutation preserves the diagonal entries. Then + * by the following relation: + * P'(A*P')P = P'A + * we can apply inverse(perm) to rows of A to get large diagonal entries. + * But, since 'perm' defined in MC64AD happens to be the reverse of + * SuperLU's definition of permutation vector, therefore, it is already + * an inverse for our purpose. We will thus use it directly. + * + */ + mc64id_(icntl); +#if 0 + /* Suppress error and warning messages. */ + icntl[0] = -1; + icntl[1] = -1; +#endif + + for (i = 0; i < nnz; ++i) nzval_d[i] = z_abs1(&nzval[i]); + mc64ad_(&job, &n, &nnz, colptr, adjncy, nzval_d, &num, perm, + &liw, iw, &ldw, dw, icntl, info); + +#if ( DEBUGlevel>=2 ) + slu_PrintInt10("perm", n, perm); + printf(".. After MC64AD info %d\tsize of matching %d\n", info[0], num); +#endif + if ( info[0] == 1 ) { /* Structurally singular */ + printf(".. The last %d permutations:\n", n-num); + slu_PrintInt10("perm", n-num, &perm[num]); + } + + /* Restore to 0-based indexing. */ + for (i = 0; i <= n; ++i) --colptr[i]; + for (i = 0; i < nnz; ++i) --adjncy[i]; + for (i = 0; i < n; ++i) --perm[i]; + + if ( job == 5 ) + for (i = 0; i < n; ++i) { + u[i] = dw[i]; + v[i] = dw[n+i]; + } + + SUPERLU_FREE(iw); + SUPERLU_FREE(dw); + SUPERLU_FREE(nzval_d); + +#if ( DEBUGlevel>=1 ) + CHECK_MALLOC(0, "Exit zldperm()"); +#endif + + return info[0]; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zmemory.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zmemory.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zmemory.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zmemory.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,54 +1,32 @@ -/* - * -- SuperLU routine (version 3.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 +/*! @file zmemory.c + * \brief Memory details * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
*/ -#include "zsp_defs.h" +#include "slu_zdefs.h" -/* Constants */ -#define NO_MEMTYPE 4 /* 0: lusup; - 1: ucol; - 2: lsub; - 3: usub */ -#define GluIntArray(n) (5 * (n) + 5) /* Internal prototypes */ void *zexpand (int *, MemType,int, int, GlobalLU_t *); -int zLUWorkInit (int, int, int, int **, doublecomplex **, LU_space_t); +int zLUWorkInit (int, int, int, int **, doublecomplex **, GlobalLU_t *); void copy_mem_doublecomplex (int, void *, void *); void zStackCompress (GlobalLU_t *); -void zSetupSpace (void *, int, LU_space_t *); -void *zuser_malloc (int, int); -void zuser_free (int, int); +void zSetupSpace (void *, int, GlobalLU_t *); +void *zuser_malloc (int, int, GlobalLU_t *); +void zuser_free (int, int, GlobalLU_t *); -/* External prototypes (in memory.c - prec-indep) */ +/* External prototypes (in memory.c - prec-independent) */ extern void copy_mem_int (int, void *, void *); extern void user_bcopy (char *, char *, int); -/* Headers for 4 types of dynamatically managed memory */ -typedef struct e_node { - int size; /* length of the memory that has been used */ - void *mem; /* pointer to the new malloc'd store */ -} ExpHeader; - -typedef struct { - int size; - int used; - int top1; /* grow upward, relative to &array[0] */ - int top2; /* grow downward */ - void *array; -} LU_stack_t; - -/* Variables local to this file */ -static ExpHeader *expanders = 0; /* Array of pointers to 4 types of memory */ -static LU_stack_t stack; -static int no_expand; /* Macros to manipulate stack */ -#define StackFull(x) ( x + stack.used >= stack.size ) +#define StackFull(x) ( x + Glu->stack.used >= Glu->stack.size ) #define NotDoubleAlign(addr) ( (long int)addr & 7 ) #define DoubleAlign(addr) ( ((long int)addr + 7) & ~7L ) #define TempSpace(m, w) ( (2*w + 4 + NO_MARKER) * m * sizeof(int) + \ @@ -58,66 +36,67 @@ -/* - * Setup the memory model to be used for factorization. +/*! \brief Setup the memory model to be used for factorization. + * * lwork = 0: use system malloc; * lwork > 0: use user-supplied work[] space. 
*/ -void zSetupSpace(void *work, int lwork, LU_space_t *MemModel) +void zSetupSpace(void *work, int lwork, GlobalLU_t *Glu) { if ( lwork == 0 ) { - *MemModel = SYSTEM; /* malloc/free */ + Glu->MemModel = SYSTEM; /* malloc/free */ } else if ( lwork > 0 ) { - *MemModel = USER; /* user provided space */ - stack.used = 0; - stack.top1 = 0; - stack.top2 = (lwork/4)*4; /* must be word addressable */ - stack.size = stack.top2; - stack.array = (void *) work; + Glu->MemModel = USER; /* user provided space */ + Glu->stack.used = 0; + Glu->stack.top1 = 0; + Glu->stack.top2 = (lwork/4)*4; /* must be word addressable */ + Glu->stack.size = Glu->stack.top2; + Glu->stack.array = (void *) work; } } -void *zuser_malloc(int bytes, int which_end) +void *zuser_malloc(int bytes, int which_end, GlobalLU_t *Glu) { void *buf; if ( StackFull(bytes) ) return (NULL); if ( which_end == HEAD ) { - buf = (char*) stack.array + stack.top1; - stack.top1 += bytes; + buf = (char*) Glu->stack.array + Glu->stack.top1; + Glu->stack.top1 += bytes; } else { - stack.top2 -= bytes; - buf = (char*) stack.array + stack.top2; + Glu->stack.top2 -= bytes; + buf = (char*) Glu->stack.array + Glu->stack.top2; } - stack.used += bytes; + Glu->stack.used += bytes; return buf; } -void zuser_free(int bytes, int which_end) +void zuser_free(int bytes, int which_end, GlobalLU_t *Glu) { if ( which_end == HEAD ) { - stack.top1 -= bytes; + Glu->stack.top1 -= bytes; } else { - stack.top2 += bytes; + Glu->stack.top2 += bytes; } - stack.used -= bytes; + Glu->stack.used -= bytes; } -/* +/*! \brief + * + *
  * mem_usage consists of the following fields:
  *    - for_lu (float)
  *      The amount of space used in bytes for the L\U data structures.
  *    - total_needed (float)
  *      The amount of space needed in bytes to perform factorization.
- *    - expansions (int)
- *      Number of memory expansions during the LU factorization.
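+ *
+ *    A hedged usage sketch (illustration only; L and U are assumed to be
+ *    the factors produced earlier, e.g. by zgstrf, and <stdio.h> to be
+ *    available):
+ *
+ *        mem_usage_t mu;
+ *        zQuerySpace(&L, &U, &mu);
+ *        printf("LU storage %.2f MB, total needed %.2f MB\n",
+ *               mu.for_lu / 1e6, mu.total_needed / 1e6);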
+ * 
*/ int zQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) { @@ -132,33 +111,75 @@ dword = sizeof(doublecomplex); /* For LU factors */ - mem_usage->for_lu = (float)( (4*n + 3) * iword + Lstore->nzval_colptr[n] * - dword + Lstore->rowind_colptr[n] * iword ); - mem_usage->for_lu += (float)( (n + 1) * iword + + mem_usage->for_lu = (float)( (4.0*n + 3.0) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0) * iword + Ustore->colptr[n] * (dword + iword) ); /* Working storage to support factorization */ mem_usage->total_needed = mem_usage->for_lu + - (float)( (2 * panel_size + 4 + NO_MARKER) * n * iword + - (panel_size + 1) * n * dword ); - - mem_usage->expansions = --no_expand; + (float)( (2.0 * panel_size + 4.0 + NO_MARKER) * n * iword + + (panel_size + 1.0) * n * dword ); return 0; } /* zQuerySpace */ -/* - * Allocate storage for the data structures common to all factor routines. - * For those unpredictable size, make a guess as FILL * nnz(A). + +/*! \brief + * + *
+ * mem_usage consists of the following fields:
+ *    - for_lu (float)
+ *      The amount of space used in bytes for the L\U data structures.
+ *    - total_needed (float)
+ *      The amount of space needed in bytes to perform factorization.
+ * 
+ */ +int ilu_zQuerySpace(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage) +{ + SCformat *Lstore; + NCformat *Ustore; + register int n, panel_size = sp_ienv(1); + register float iword, dword; + + Lstore = L->Store; + Ustore = U->Store; + n = L->ncol; + iword = sizeof(int); + dword = sizeof(double); + + /* For LU factors */ + mem_usage->for_lu = (float)( (4.0f * n + 3.0f) * iword + + Lstore->nzval_colptr[n] * dword + + Lstore->rowind_colptr[n] * iword ); + mem_usage->for_lu += (float)( (n + 1.0f) * iword + + Ustore->colptr[n] * (dword + iword) ); + + /* Working storage to support factorization. + ILU needs 5*n more integers than LU */ + mem_usage->total_needed = mem_usage->for_lu + + (float)( (2.0f * panel_size + 9.0f + NO_MARKER) * n * iword + + (panel_size + 1.0f) * n * dword ); + + return 0; +} /* ilu_zQuerySpace */ + + +/*! \brief Allocate storage for the data structures common to all factor routines. + * + *
+ * For those of unpredictable size, estimate as fill_ratio * nnz(A).
  * Return value:
  *     If lwork = -1, return the estimated amount of space required, plus n;
  *     otherwise, return the amount of space actually allocated when
  *     memory allocation failure occurred.
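+ *
+ *     A hedged illustration of the lwork = -1 query convention described
+ *     above (the arguments simply mirror this routine's parameter list and
+ *     are assumed to be prepared by the caller, normally zgstrf):
+ *
+ *         est = zLUMemInit(fact, NULL, -1, m, n, annz, panel_size,
+ *                          fill_ratio, L, U, Glu, &iwork, &dwork);
+ *
+ *     The value returned in est is the estimated number of bytes required,
+ *     plus n; for any other lwork the routine proceeds to allocate.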
+ * 
*/ int zLUMemInit(fact_t fact, void *work, int lwork, int m, int n, int annz, - int panel_size, SuperMatrix *L, SuperMatrix *U, GlobalLU_t *Glu, - int **iwork, doublecomplex **dwork) + int panel_size, double fill_ratio, SuperMatrix *L, SuperMatrix *U, + GlobalLU_t *Glu, int **iwork, doublecomplex **dwork) { int info, iword, dword; SCformat *Lstore; @@ -170,32 +191,33 @@ doublecomplex *ucol; int *usub, *xusub; int nzlmax, nzumax, nzlumax; - int FILL = sp_ienv(6); - Glu->n = n; - no_expand = 0; iword = sizeof(int); dword = sizeof(doublecomplex); + Glu->n = n; + Glu->num_expansions = 0; - if ( !expanders ) - expanders = (ExpHeader*)SUPERLU_MALLOC(NO_MEMTYPE * sizeof(ExpHeader)); - if ( !expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); + if ( !Glu->expanders ) + Glu->expanders = (ExpHeader*)SUPERLU_MALLOC( NO_MEMTYPE * + sizeof(ExpHeader) ); + if ( !Glu->expanders ) ABORT("SUPERLU_MALLOC fails for expanders"); if ( fact != SamePattern_SameRowPerm ) { /* Guess for L\U factors */ - nzumax = nzlumax = FILL * annz; - nzlmax = SUPERLU_MAX(1, FILL/4.) * annz; + nzumax = nzlumax = fill_ratio * annz; + nzlmax = SUPERLU_MAX(1, fill_ratio/4.) * annz; if ( lwork == -1 ) { return ( GluIntArray(n) * iword + TempSpace(m, panel_size) + (nzlmax+nzumax)*iword + (nzlumax+nzumax)*dword + n ); } else { - zSetupSpace(work, lwork, &Glu->MemModel); + zSetupSpace(work, lwork, Glu); } -#ifdef DEBUG - printf("zLUMemInit() called: annz %d, MemModel %d\n", - annz, Glu->MemModel); +#if ( PRNTlevel >= 1 ) + printf("zLUMemInit() called: fill_ratio %ld, nzlmax %ld, nzumax %ld\n", + fill_ratio, nzlmax, nzumax); + fflush(stdout); #endif /* Integer pointers for L\U factors */ @@ -206,11 +228,11 @@ xlusup = intMalloc(n+1); xusub = intMalloc(n+1); } else { - xsup = (int *)zuser_malloc((n+1) * iword, HEAD); - supno = (int *)zuser_malloc((n+1) * iword, HEAD); - xlsub = (int *)zuser_malloc((n+1) * iword, HEAD); - xlusup = (int *)zuser_malloc((n+1) * iword, HEAD); - xusub = (int *)zuser_malloc((n+1) * iword, HEAD); + xsup = (int *)zuser_malloc((n+1) * iword, HEAD, Glu); + supno = (int *)zuser_malloc((n+1) * iword, HEAD, Glu); + xlsub = (int *)zuser_malloc((n+1) * iword, HEAD, Glu); + xlusup = (int *)zuser_malloc((n+1) * iword, HEAD, Glu); + xusub = (int *)zuser_malloc((n+1) * iword, HEAD, Glu); } lusup = (doublecomplex *) zexpand( &nzlumax, LUSUP, 0, 0, Glu ); @@ -225,7 +247,8 @@ SUPERLU_FREE(lsub); SUPERLU_FREE(usub); } else { - zuser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, HEAD); + zuser_free((nzlumax+nzumax)*dword+(nzlmax+nzumax)*iword, + HEAD, Glu); } nzlumax /= 2; nzumax /= 2; @@ -234,6 +257,11 @@ printf("Not enough memory to perform factorization.\n"); return (zmemory_usage(nzlmax, nzumax, nzlumax, n) + n); } +#if ( PRNTlevel >= 1) + printf("zLUMemInit() reduce size: nzlmax %ld, nzumax %ld\n", + nzlmax, nzumax); + fflush(stdout); +#endif lusup = (doublecomplex *) zexpand( &nzlumax, LUSUP, 0, 0, Glu ); ucol = (doublecomplex *) zexpand( &nzumax, UCOL, 0, 0, Glu ); lsub = (int *) zexpand( &nzlmax, LSUB, 0, 0, Glu ); @@ -260,18 +288,18 @@ Glu->MemModel = SYSTEM; } else { Glu->MemModel = USER; - stack.top2 = (lwork/4)*4; /* must be word-addressable */ - stack.size = stack.top2; + Glu->stack.top2 = (lwork/4)*4; /* must be word-addressable */ + Glu->stack.size = Glu->stack.top2; } - lsub = expanders[LSUB].mem = Lstore->rowind; - lusup = expanders[LUSUP].mem = Lstore->nzval; - usub = expanders[USUB].mem = Ustore->rowind; - ucol = expanders[UCOL].mem = Ustore->nzval;; - expanders[LSUB].size = nzlmax; - 
expanders[LUSUP].size = nzlumax; - expanders[USUB].size = nzumax; - expanders[UCOL].size = nzumax; + lsub = Glu->expanders[LSUB].mem = Lstore->rowind; + lusup = Glu->expanders[LUSUP].mem = Lstore->nzval; + usub = Glu->expanders[USUB].mem = Ustore->rowind; + ucol = Glu->expanders[UCOL].mem = Ustore->nzval;; + Glu->expanders[LSUB].size = nzlmax; + Glu->expanders[LUSUP].size = nzlumax; + Glu->expanders[USUB].size = nzumax; + Glu->expanders[UCOL].size = nzumax; } Glu->xsup = xsup; @@ -287,20 +315,20 @@ Glu->nzumax = nzumax; Glu->nzlumax = nzlumax; - info = zLUWorkInit(m, n, panel_size, iwork, dwork, Glu->MemModel); + info = zLUWorkInit(m, n, panel_size, iwork, dwork, Glu); if ( info ) return ( info + zmemory_usage(nzlmax, nzumax, nzlumax, n) + n); - ++no_expand; + ++Glu->num_expansions; return 0; } /* zLUMemInit */ -/* Allocate known working storage. Returns 0 if success, otherwise +/*! \brief Allocate known working storage. Returns 0 if success, otherwise returns the number of bytes allocated so far when failure occurred. */ int zLUWorkInit(int m, int n, int panel_size, int **iworkptr, - doublecomplex **dworkptr, LU_space_t MemModel) + doublecomplex **dworkptr, GlobalLU_t *Glu) { int isize, dsize, extra; doublecomplex *old_ptr; @@ -311,19 +339,19 @@ dsize = (m * panel_size + NUM_TEMPV(m,panel_size,maxsuper,rowblk)) * sizeof(doublecomplex); - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *iworkptr = (int *) intCalloc(isize/sizeof(int)); else - *iworkptr = (int *) zuser_malloc(isize, TAIL); + *iworkptr = (int *) zuser_malloc(isize, TAIL, Glu); if ( ! *iworkptr ) { fprintf(stderr, "zLUWorkInit: malloc fails for local iworkptr[]\n"); return (isize + n); } - if ( MemModel == SYSTEM ) + if ( Glu->MemModel == SYSTEM ) *dworkptr = (doublecomplex *) SUPERLU_MALLOC(dsize); else { - *dworkptr = (doublecomplex *) zuser_malloc(dsize, TAIL); + *dworkptr = (doublecomplex *) zuser_malloc(dsize, TAIL, Glu); if ( NotDoubleAlign(*dworkptr) ) { old_ptr = *dworkptr; *dworkptr = (doublecomplex*) DoubleAlign(*dworkptr); @@ -332,8 +360,8 @@ #ifdef DEBUG printf("zLUWorkInit: not aligned, extra %d\n", extra); #endif - stack.top2 -= extra; - stack.used += extra; + Glu->stack.top2 -= extra; + Glu->stack.used += extra; } } if ( ! *dworkptr ) { @@ -345,8 +373,7 @@ } -/* - * Set up pointers for real working arrays. +/*! \brief Set up pointers for real working arrays. */ void zSetRWork(int m, int panel_size, doublecomplex *dworkptr, @@ -362,8 +389,7 @@ zfill (*tempv, NUM_TEMPV(m,panel_size,maxsuper,rowblk), zero); } -/* - * Free the working storage used by factor routines. +/*! \brief Free the working storage used by factor routines. */ void zLUWorkFree(int *iwork, doublecomplex *dwork, GlobalLU_t *Glu) { @@ -371,18 +397,21 @@ SUPERLU_FREE (iwork); SUPERLU_FREE (dwork); } else { - stack.used -= (stack.size - stack.top2); - stack.top2 = stack.size; + Glu->stack.used -= (Glu->stack.size - Glu->stack.top2); + Glu->stack.top2 = Glu->stack.size; /* zStackCompress(Glu); */ } - SUPERLU_FREE (expanders); - expanders = 0; + SUPERLU_FREE (Glu->expanders); + Glu->expanders = NULL; } -/* Expand the data structures for L and U during the factorization. +/*! \brief Expand the data structures for L and U during the factorization. + * + *
  * Return value:   0 - successful return
  *               > 0 - number of bytes allocated when run out of space
+ * 
*/ int zLUMemXpand(int jcol, @@ -446,8 +475,7 @@ for (i = 0; i < howmany; i++) dnew[i] = dold[i]; } -/* - * Expand the existing storage to accommodate more fill-ins. +/*! \brief Expand the existing storage to accommodate more fill-ins. */ void *zexpand ( @@ -463,12 +491,14 @@ float alpha; void *new_mem, *old_mem; int new_len, tries, lword, extra, bytes_to_copy; + ExpHeader *expanders = Glu->expanders; /* Array of 4 types of memory */ alpha = EXPAND; - if ( no_expand == 0 || keep_prev ) /* First time allocate requested */ + if ( Glu->num_expansions == 0 || keep_prev ) { + /* First time allocate requested */ new_len = *prev_len; - else { + } else { new_len = alpha * *prev_len; } @@ -476,9 +506,8 @@ else lword = sizeof(doublecomplex); if ( Glu->MemModel == SYSTEM ) { - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ - if ( no_expand != 0 ) { + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); + if ( Glu->num_expansions != 0 ) { tries = 0; if ( keep_prev ) { if ( !new_mem ) return (NULL); @@ -487,8 +516,7 @@ if ( ++tries > 10 ) return (NULL); alpha = Reduce(alpha); new_len = alpha * *prev_len; - new_mem = (void *) SUPERLU_MALLOC(new_len * lword); -/* new_mem = (void *) calloc(new_len, lword); */ + new_mem = (void *) SUPERLU_MALLOC((size_t)new_len * lword); } } if ( type == LSUB || type == USUB ) { @@ -501,8 +529,8 @@ expanders[type].mem = (void *) new_mem; } else { /* MemModel == USER */ - if ( no_expand == 0 ) { - new_mem = zuser_malloc(new_len * lword, HEAD); + if ( Glu->num_expansions == 0 ) { + new_mem = zuser_malloc(new_len * lword, HEAD, Glu); if ( NotDoubleAlign(new_mem) && (type == LUSUP || type == UCOL) ) { old_mem = new_mem; @@ -511,12 +539,11 @@ #ifdef DEBUG printf("expand(): not aligned, extra %d\n", extra); #endif - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; } expanders[type].mem = (void *) new_mem; - } - else { + } else { tries = 0; extra = (new_len - *prev_len) * lword; if ( keep_prev ) { @@ -532,7 +559,7 @@ if ( type != USUB ) { new_mem = (void*)((char*)expanders[type + 1].mem + extra); - bytes_to_copy = (char*)stack.array + stack.top1 + bytes_to_copy = (char*)Glu->stack.array + Glu->stack.top1 - (char*)expanders[type + 1].mem; user_bcopy(expanders[type+1].mem, new_mem, bytes_to_copy); @@ -548,11 +575,11 @@ Glu->ucol = expanders[UCOL].mem = (void*)((char*)expanders[UCOL].mem + extra); } - stack.top1 += extra; - stack.used += extra; + Glu->stack.top1 += extra; + Glu->stack.used += extra; if ( type == UCOL ) { - stack.top1 += extra; /* Add same amount for USUB */ - stack.used += extra; + Glu->stack.top1 += extra; /* Add same amount for USUB */ + Glu->stack.used += extra; } } /* if ... */ @@ -562,15 +589,14 @@ expanders[type].size = new_len; *prev_len = new_len; - if ( no_expand ) ++no_expand; + if ( Glu->num_expansions ) ++Glu->num_expansions; return (void *) expanders[type].mem; } /* zexpand */ -/* - * Compress the work[] array to remove fragmentation. +/*! \brief Compress the work[] array to remove fragmentation. 
*/ void zStackCompress(GlobalLU_t *Glu) @@ -610,9 +636,9 @@ usub = ito; last = (char*)usub + xusub[ndim] * iword; - fragment = (char*) (((char*)stack.array + stack.top1) - last); - stack.used -= (long int) fragment; - stack.top1 -= (long int) fragment; + fragment = (char*) (((char*)Glu->stack.array + Glu->stack.top1) - last); + Glu->stack.used -= (long int) fragment; + Glu->stack.top1 -= (long int) fragment; Glu->ucol = ucol; Glu->lsub = lsub; @@ -626,8 +652,7 @@ } -/* - * Allocate storage for original matrix A +/*! \brief Allocate storage for original matrix A */ void zallocateA(int n, int nnz, doublecomplex **a, int **asub, int **xa) @@ -641,7 +666,7 @@ doublecomplex *doublecomplexMalloc(int n) { doublecomplex *buf; - buf = (doublecomplex *) SUPERLU_MALLOC(n * sizeof(doublecomplex)); + buf = (doublecomplex *) SUPERLU_MALLOC((size_t)n * sizeof(doublecomplex)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in doublecomplexMalloc()\n"); } @@ -653,7 +678,7 @@ doublecomplex *buf; register int i; doublecomplex zero = {0.0, 0.0}; - buf = (doublecomplex *) SUPERLU_MALLOC(n * sizeof(doublecomplex)); + buf = (doublecomplex *) SUPERLU_MALLOC((size_t)n * sizeof(doublecomplex)); if ( !buf ) { ABORT("SUPERLU_MALLOC failed for buf in doublecomplexCalloc()\n"); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,32 @@ -/* +/*! @file zpanel_bmod.c + * \brief Performs numeric block updates + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ /* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. + */ #include #include -#include "zsp_defs.h" +#include "slu_zdefs.h" /* * Function prototypes @@ -30,6 +35,25 @@ void zmatvec(int, int, int, doublecomplex *, doublecomplex *, doublecomplex *); extern void zcheck_tempv(); +/*! \brief + * + *
+ * Purpose
+ * =======
+ *
+ *    Performs numeric block updates (sup-panel) in topological order.
+ *    It features: col-col, 2cols-col, 3cols-col, and sup-col updates.
+ *    Special processing on the supernodal portion of L\U[*,j]
+ *
+ *    Before entering this routine, the original nonzeros in the panel 
+ *    were already copied into the spa[m,w].
+ *
+ *    Updated/Output parameters:
+ *    dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned 
+ *    collectively in the m-by-w vector dense[*]. 
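+ *
+ *    (Illustrative note added here, not in the original text: assuming the
+ *    spa buffer is stored column-major with leading dimension m, the value
+ *    for matrix row i in panel column jcol+k would be reached as
+ *
+ *        doublecomplex *dense_col = dense + k * m;
+ *        doublecomplex  d_ik      = dense_col[i];
+ *
+ *    with 0 <= k < w.)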
+ * 
+ */ + void zpanel_bmod ( const int m, /* in - number of rows in the matrix */ @@ -44,22 +68,7 @@ SuperLUStat_t *stat /* output */ ) { -/* - * Purpose - * ======= - * - * Performs numeric block updates (sup-panel) in topological order. - * It features: col-col, 2cols-col, 3cols-col, and sup-col updates. - * Special processing on the supernodal portion of L\U[*,j] - * - * Before entering this routine, the original nonzeros in the panel - * were already copied into the spa[m,w]. - * - * Updated/Output parameters- - * dense[0:m-1,w]: L[*,j:j+w-1] and U[*,j:j+w-1] are returned - * collectively in the m-by-w vector dense[*]. - * - */ + #ifdef USE_VENDOR_BLAS #ifdef _CRAY diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpanel_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,48 +1,32 @@ - -/* +/*! @file zpanel_dfs.c + * \brief Peforms a symbolic factorization on a panel of symbols + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "zsp_defs.h" -#include "util.h" -void -zpanel_dfs ( - const int m, /* in - number of rows in the matrix */ - const int w, /* in */ - const int jcol, /* in */ - SuperMatrix *A, /* in - original matrix */ - int *perm_r, /* in */ - int *nseg, /* out */ - doublecomplex *dense, /* out */ - int *panel_lsub, /* out */ - int *segrep, /* out */ - int *repfnz, /* out */ - int *xprune, /* out */ - int *marker, /* out */ - int *parent, /* working array */ - int *xplore, /* working array */ - GlobalLU_t *Glu /* modified */ - ) -{ -/* +#include "slu_zdefs.h" + +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -68,8 +52,29 @@
  *   repfnz: SuperA-col --> PA-row
  *   parent: SuperA-col --> SuperA-col
  *   xplore: SuperA-col --> index to L-structure
- *
+ * 
*/ + +void +zpanel_dfs ( + const int m, /* in - number of rows in the matrix */ + const int w, /* in */ + const int jcol, /* in */ + SuperMatrix *A, /* in - original matrix */ + int *perm_r, /* in */ + int *nseg, /* out */ + doublecomplex *dense, /* out */ + int *panel_lsub, /* out */ + int *segrep, /* out */ + int *repfnz, /* out */ + int *xprune, /* out */ + int *marker, /* out */ + int *parent, /* working array */ + int *xplore, /* working array */ + GlobalLU_t *Glu /* modified */ + ) +{ + NCPformat *Astore; doublecomplex *a; int *asub; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotgrowth.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotgrowth.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotgrowth.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotgrowth.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,21 +1,20 @@ - -/* +/*! @file zpivotgrowth.c + * \brief Computes the reciprocal pivot growth factor + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ #include -#include "zsp_defs.h" -#include "util.h" +#include "slu_zdefs.h" -double -zPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, - SuperMatrix *L, SuperMatrix *U) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *
@@ -43,8 +42,14 @@
  *	    The factor U from the factorization Pr*A*Pc=L*U. Use column-wise
  *          storage scheme, i.e., U has types: Stype = NC;
  *          Dtype = SLU_Z; Mtype = TRU.
- *
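+ *
+ * A hedged usage sketch (illustration only; A, perm_c, L and U are assumed
+ * to come from a prior factorization Pr*A*Pc = L*U, e.g. via zgstrf):
+ *
+ *     double rpg = zPivotGrowth(A.ncol, &A, perm_c, &L, &U);
+ *
+ * The returned rpg is the reciprocal pivot growth; a value much smaller
+ * than 1 suggests large pivot growth and possible loss of accuracy.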
+ * 
*/ + +double +zPivotGrowth(int ncols, SuperMatrix *A, int *perm_c, + SuperMatrix *L, SuperMatrix *U) +{ + NCformat *Astore; SCformat *Lstore; NCformat *Ustore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpivotL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,44 +1,36 @@ -/* +/*! @file zpivotL.c + * \brief Performs numerical pivoting + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include #include -#include "zsp_defs.h" +#include "slu_zdefs.h" #undef DEBUG -int -zpivotL( - const int jcol, /* in */ - const double u, /* in - diagonal pivoting threshold */ - int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ - int *perm_r, /* may be modified */ - int *iperm_r, /* in - inverse of perm_r */ - int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ - int *pivrow, /* out */ - GlobalLU_t *Glu, /* modified - global LU data structures */ - SuperLUStat_t *stat /* output */ - ) -{ -/* +/*! \brief + * + *
  * Purpose
  * =======
  *   Performs the numerical pivoting on the current column of L,
@@ -57,8 +49,23 @@
  *
  *   Return value: 0      success;
  *                 i > 0  U(i,i) is exactly zero.
- *
+ * 
*/ + +int +zpivotL( + const int jcol, /* in */ + const double u, /* in - diagonal pivoting threshold */ + int *usepr, /* re-use the pivot sequence given by perm_r/iperm_r */ + int *perm_r, /* may be modified */ + int *iperm_r, /* in - inverse of perm_r */ + int *iperm_c, /* in - used to find diagonal of Pc*A*Pc' */ + int *pivrow, /* out */ + GlobalLU_t *Glu, /* modified - global LU data structures */ + SuperLUStat_t *stat /* output */ + ) +{ + doublecomplex one = {1.0, 0.0}; int fsupc; /* first column in the supernode */ int nsupc; /* no of columns in the supernode */ @@ -101,7 +108,11 @@ Also search for user-specified pivot, and diagonal element. */ if ( *usepr ) *pivrow = iperm_r[jcol]; diagind = iperm_c[jcol]; +#ifdef SCIPY_SPECIFIC_FIX + pivmax = -1.0; +#else pivmax = 0.0; +#endif pivptr = nsupc; diag = EMPTY; old_pivptr = nsupc; @@ -116,9 +127,20 @@ } /* Test for singularity */ +#ifdef SCIPY_SPECIFIC_FIX + if (pivmax < 0.0) { + perm_r[diagind] = jcol; + *usepr = 0; + return (jcol+1); + } +#endif if ( pivmax == 0.0 ) { +#if 1 *pivrow = lsub_ptr[pivptr]; perm_r[*pivrow] = jcol; +#else + perm_r[diagind] = jcol; +#endif *usepr = 0; return (jcol+1); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpruneL.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpruneL.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpruneL.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zpruneL.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,38 @@ - -/* +/*! @file zpruneL.c + * \brief Prunes the L-structure + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ *
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "zsp_defs.h" -#include "util.h" + +#include "slu_zdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *   Prunes the L-structure of supernodes whose L-structure
+ *   contains the current pivot row "pivrow"
+ * 
+ */ void zpruneL( @@ -35,13 +46,7 @@ GlobalLU_t *Glu /* modified - global LU data structures */ ) { -/* - * Purpose - * ======= - * Prunes the L-structure of supernodes whose L-structure - * contains the current pivot row "pivrow" - * - */ + doublecomplex utemp; int jsupno, irep, irep1, kmin, kmax, krow, movnum; int i, ktemp, minloc, maxloc; @@ -108,8 +113,8 @@ kmax--; else if ( perm_r[lsub[kmin]] != EMPTY ) kmin++; - else { /* kmin below pivrow, and kmax above pivrow: - * interchange the two subscripts + else { /* kmin below pivrow (not yet pivoted), and kmax + * above pivrow: interchange the two subscripts */ ktemp = lsub[kmin]; lsub[kmin] = lsub[kmax]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadhb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadhb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadhb.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadhb.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,18 +1,85 @@ - -/* +/*! @file zreadhb.c + * \brief Read a matrix stored in Harwell-Boeing format + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Purpose
+ * =======
+ * 
+ * Read a DOUBLE COMPLEX PRECISION matrix stored in Harwell-Boeing format 
+ * as described below.
+ * 
+ * Line 1 (A72,A8) 
+ *  	Col. 1 - 72   Title (TITLE) 
+ *	Col. 73 - 80  Key (KEY) 
+ * 
+ * Line 2 (5I14) 
+ * 	Col. 1 - 14   Total number of lines excluding header (TOTCRD) 
+ * 	Col. 15 - 28  Number of lines for pointers (PTRCRD) 
+ * 	Col. 29 - 42  Number of lines for row (or variable) indices (INDCRD) 
+ * 	Col. 43 - 56  Number of lines for numerical values (VALCRD) 
+ *	Col. 57 - 70  Number of lines for right-hand sides (RHSCRD) 
+ *                    (including starting guesses and solution vectors 
+ *		       if present) 
+ *           	      (zero indicates no right-hand side data is present) 
+ *
+ * Line 3 (A3, 11X, 4I14) 
+ *   	Col. 1 - 3    Matrix type (see below) (MXTYPE) 
+ * 	Col. 15 - 28  Number of rows (or variables) (NROW) 
+ * 	Col. 29 - 42  Number of columns (or elements) (NCOL) 
+ *	Col. 43 - 56  Number of row (or variable) indices (NNZERO) 
+ *	              (equal to number of entries for assembled matrices) 
+ * 	Col. 57 - 70  Number of elemental matrix entries (NELTVL) 
+ *	              (zero in the case of assembled matrices) 
+ * Line 4 (2A16, 2A20) 
+ * 	Col. 1 - 16   Format for pointers (PTRFMT) 
+ *	Col. 17 - 32  Format for row (or variable) indices (INDFMT) 
+ *	Col. 33 - 52  Format for numerical values of coefficient matrix (VALFMT) 
+ * 	Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) 
+ *
+ * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present 
+ *    	Col. 1 	      Right-hand side type: 
+ *	         	  F for full storage or M for same format as matrix 
+ *    	Col. 2        G if a starting vector(s) (Guess) is supplied. (RHSTYP) 
+ *    	Col. 3        X if an exact solution vector(s) is supplied. 
+ *	Col. 15 - 28  Number of right-hand sides (NRHS) 
+ *	Col. 29 - 42  Number of row indices (NRHSIX) 
+ *          	      (ignored in case of unassembled matrices) 
+ *
+ * The three character type field on line 3 describes the matrix type. 
+ * The following table lists the permitted values for each of the three 
+ * characters. As an example of the type field, RSA denotes that the matrix 
+ * is real, symmetric, and assembled. 
+ *
+ * First Character: 
+ *	R Real matrix 
+ *	C Complex matrix 
+ *	P Pattern only (no numerical values supplied) 
+ *
+ * Second Character: 
+ *	S Symmetric 
+ *	U Unsymmetric 
+ *	H Hermitian 
+ *	Z Skew symmetric 
+ *	R Rectangular 
+ *
+ * Third Character: 
+ *	A Assembled 
+ *	E Elemental matrices (unassembled) 
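+ *
+ * A hedged usage sketch (illustration only): the Harwell-Boeing file is
+ * assumed to be supplied on standard input, as in the SuperLU example
+ * drivers, and zCreate_CompCol_Matrix together with the SLU_* enum values
+ * is assumed to be available from slu_zdefs.h:
+ *
+ *     int m, n, nnz;
+ *     doublecomplex *a;
+ *     int *asub, *xa;
+ *     SuperMatrix A;
+ *     zreadhb(&m, &n, &nnz, &a, &asub, &xa);
+ *     zCreate_CompCol_Matrix(&A, m, n, nnz, a, asub, xa,
+ *                            SLU_NC, SLU_Z, SLU_GE);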
+ *
+ * 
*/ #include #include -#include "zsp_defs.h" +#include "slu_zdefs.h" -/* Eat up the rest of the current line */ +/*! \brief Eat up the rest of the current line */ int zDumpLine(FILE *fp) { register int c; @@ -60,7 +127,7 @@ return 0; } -int zReadVector(FILE *fp, int n, int *where, int perline, int persize) +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) { register int i, j, item; char tmp, buf[100]; @@ -80,7 +147,7 @@ return 0; } -/* Read complex numbers as pairs of (real, imaginary) */ +/*! \brief Read complex numbers as pairs of (real, imaginary) */ int zReadValues(FILE *fp, int n, doublecomplex *destination, int perline, int persize) { register int i, j, k, s, pair; @@ -118,72 +185,6 @@ zreadhb(int *nrow, int *ncol, int *nonz, doublecomplex **nzval, int **rowind, int **colptr) { -/* - * Purpose - * ======= - * - * Read a DOUBLE COMPLEX PRECISION matrix stored in Harwell-Boeing format - * as described below. - * - * Line 1 (A72,A8) - * Col. 1 - 72 Title (TITLE) - * Col. 73 - 80 Key (KEY) - * - * Line 2 (5I14) - * Col. 1 - 14 Total number of lines excluding header (TOTCRD) - * Col. 15 - 28 Number of lines for pointers (PTRCRD) - * Col. 29 - 42 Number of lines for row (or variable) indices (INDCRD) - * Col. 43 - 56 Number of lines for numerical values (VALCRD) - * Col. 57 - 70 Number of lines for right-hand sides (RHSCRD) - * (including starting guesses and solution vectors - * if present) - * (zero indicates no right-hand side data is present) - * - * Line 3 (A3, 11X, 4I14) - * Col. 1 - 3 Matrix type (see below) (MXTYPE) - * Col. 15 - 28 Number of rows (or variables) (NROW) - * Col. 29 - 42 Number of columns (or elements) (NCOL) - * Col. 43 - 56 Number of row (or variable) indices (NNZERO) - * (equal to number of entries for assembled matrices) - * Col. 57 - 70 Number of elemental matrix entries (NELTVL) - * (zero in the case of assembled matrices) - * Line 4 (2A16, 2A20) - * Col. 1 - 16 Format for pointers (PTRFMT) - * Col. 17 - 32 Format for row (or variable) indices (INDFMT) - * Col. 33 - 52 Format for numerical values of coefficient matrix (VALFMT) - * Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT) - * - * Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present - * Col. 1 Right-hand side type: - * F for full storage or M for same format as matrix - * Col. 2 G if a starting vector(s) (Guess) is supplied. (RHSTYP) - * Col. 3 X if an exact solution vector(s) is supplied. - * Col. 15 - 28 Number of right-hand sides (NRHS) - * Col. 29 - 42 Number of row indices (NRHSIX) - * (ignored in case of unassembled matrices) - * - * The three character type field on line 3 describes the matrix type. - * The following table lists the permitted values for each of the three - * characters. As an example of the type field, RSA denotes that the matrix - * is real, symmetric, and assembled. 
- * - * First Character: - * R Real matrix - * C Complex matrix - * P Pattern only (no numerical values supplied) - * - * Second Character: - * S Symmetric - * U Unsymmetric - * H Hermitian - * Z Skew symmetric - * R Rectangular - * - * Third Character: - * A Assembled - * E Elemental matrices (unassembled) - * - */ register int i, numer_lines = 0, rhscrd = 0; int tmp, colnum, colsize, rownum, rowsize, valnum, valsize; @@ -254,8 +255,8 @@ printf("valnum %d, valsize %d\n", valnum, valsize); #endif - zReadVector(fp, *ncol+1, *colptr, colnum, colsize); - zReadVector(fp, *nonz, *rowind, rownum, rowsize); + ReadVector(fp, *ncol+1, *colptr, colnum, colsize); + ReadVector(fp, *nonz, *rowind, rownum, rowsize); if ( numer_lines ) { zReadValues(fp, *nonz, *nzval, valnum, valsize); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadrb.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadrb.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadrb.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zreadrb.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,246 @@ + +/*! @file zreadrb.c + * \brief Read a matrix stored in Rutherford-Boeing format + * + *
+ * -- SuperLU routine (version 4.0) --
+ * Lawrence Berkeley National Laboratory.
+ * June 30, 2009
+ * 
+ * + * Purpose + * ======= + * + * Read a DOUBLE COMPLEX PRECISION matrix stored in Rutherford-Boeing format + * as described below. + * + * Line 1 (A72, A8) + * Col. 1 - 72 Title (TITLE) + * Col. 73 - 80 Matrix name / identifier (MTRXID) + * + * Line 2 (I14, 3(1X, I13)) + * Col. 1 - 14 Total number of lines excluding header (TOTCRD) + * Col. 16 - 28 Number of lines for pointers (PTRCRD) + * Col. 30 - 42 Number of lines for row (or variable) indices (INDCRD) + * Col. 44 - 56 Number of lines for numerical values (VALCRD) + * + * Line 3 (A3, 11X, 4(1X, I13)) + * Col. 1 - 3 Matrix type (see below) (MXTYPE) + * Col. 15 - 28 Compressed Column: Number of rows (NROW) + * Elemental: Largest integer used to index variable (MVAR) + * Col. 30 - 42 Compressed Column: Number of columns (NCOL) + * Elemental: Number of element matrices (NELT) + * Col. 44 - 56 Compressed Column: Number of entries (NNZERO) + * Elemental: Number of variable indeces (NVARIX) + * Col. 58 - 70 Compressed Column: Unused, explicitly zero + * Elemental: Number of elemental matrix entries (NELTVL) + * + * Line 4 (2A16, A20) + * Col. 1 - 16 Fortran format for pointers (PTRFMT) + * Col. 17 - 32 Fortran format for row (or variable) indices (INDFMT) + * Col. 33 - 52 Fortran format for numerical values of coefficient matrix + * (VALFMT) + * (blank in the case of matrix patterns) + * + * The three character type field on line 3 describes the matrix type. + * The following table lists the permitted values for each of the three + * characters. As an example of the type field, RSA denotes that the matrix + * is real, symmetric, and assembled. + * + * First Character: + * R Real matrix + * C Complex matrix + * I integer matrix + * P Pattern only (no numerical values supplied) + * Q Pattern only (numerical values supplied in associated auxiliary value + * file) + * + * Second Character: + * S Symmetric + * U Unsymmetric + * H Hermitian + * Z Skew symmetric + * R Rectangular + * + * Third Character: + * A Compressed column form + * E Elemental form + * + *
+ */ + +#include "slu_zdefs.h" + + +/*! \brief Eat up the rest of the current line */ +static int zDumpLine(FILE *fp) +{ + register int c; + while ((c = fgetc(fp)) != '\n') ; + return 0; +} + +static int zParseIntFormat(char *buf, int *num, int *size) +{ + char *tmp; + + tmp = buf; + while (*tmp++ != '(') ; + sscanf(tmp, "%d", num); + while (*tmp != 'I' && *tmp != 'i') ++tmp; + ++tmp; + sscanf(tmp, "%d", size); + return 0; +} + +static int zParseFloatFormat(char *buf, int *num, int *size) +{ + char *tmp, *period; + + tmp = buf; + while (*tmp++ != '(') ; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + while (*tmp != 'E' && *tmp != 'e' && *tmp != 'D' && *tmp != 'd' + && *tmp != 'F' && *tmp != 'f') { + /* May find kP before nE/nD/nF, like (1P6F13.6). In this case the + num picked up refers to P, which should be skipped. */ + if (*tmp=='p' || *tmp=='P') { + ++tmp; + *num = atoi(tmp); /*sscanf(tmp, "%d", num);*/ + } else { + ++tmp; + } + } + ++tmp; + period = tmp; + while (*period != '.' && *period != ')') ++period ; + *period = '\0'; + *size = atoi(tmp); /*sscanf(tmp, "%2d", size);*/ + + return 0; +} + +static int ReadVector(FILE *fp, int n, int *where, int perline, int persize) +{ + register int i, j, item; + char tmp, buf[100]; + + i = 0; + while (i < n) { + fgets(buf, 100, fp); /* read a line at a time */ + for (j=0; j + * -- SuperLU routine (version 4.0) -- + * Lawrence Berkeley National Laboratory. + * June 30, 2009 + *
+ */ + +#include "slu_zdefs.h" + + +void +zreadtriple(int *m, int *n, int *nonz, + doublecomplex **nzval, int **rowind, int **colptr) +{ +/* + * Output parameters + * ================= + * (a,asub,xa): asub[*] contains the row subscripts of nonzeros + * in columns of matrix A; a[*] the numerical values; + * row i of A is given by a[k],k=xa[i],...,xa[i+1]-1. + * + */ + int j, k, jsize, nnz, nz; + doublecomplex *a, *val; + int *asub, *xa, *row, *col; + int zero_base = 0; + + /* Matrix format: + * First line: #rows, #cols, #non-zero + * Triplet in the rest of lines: + * row, col, value + */ + + scanf("%d%d", n, nonz); + *m = *n; + printf("m %d, n %d, nonz %d\n", *m, *n, *nonz); + zallocateA(*n, *nonz, nzval, rowind, colptr); /* Allocate storage */ + a = *nzval; + asub = *rowind; + xa = *colptr; + + val = (doublecomplex *) SUPERLU_MALLOC(*nonz * sizeof(doublecomplex)); + row = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + col = (int *) SUPERLU_MALLOC(*nonz * sizeof(int)); + + for (j = 0; j < *n; ++j) xa[j] = 0; + + /* Read into the triplet array from a file */ + for (nnz = 0, nz = 0; nnz < *nonz; ++nnz) { + scanf("%d%d%lf%lf\n", &row[nz], &col[nz], &val[nz].r, &val[nz].i); + + if ( nnz == 0 ) { /* first nonzero */ + if ( row[0] == 0 || col[0] == 0 ) { + zero_base = 1; + printf("triplet file: row/col indices are zero-based.\n"); + } else + printf("triplet file: row/col indices are one-based.\n"); + } + + if ( !zero_base ) { + /* Change to 0-based indexing. */ + --row[nz]; + --col[nz]; + } + + if (row[nz] < 0 || row[nz] >= *m || col[nz] < 0 || col[nz] >= *n + /*|| val[nz] == 0.*/) { + fprintf(stderr, "nz %d, (%d, %d) = (%e,%e) out of bound, removed\n", + nz, row[nz], col[nz], val[nz].r, val[nz].i); + exit(-1); + } else { + ++xa[col[nz]]; + ++nz; + } + } + + *nonz = nz; + + /* Initialize the array of column pointers */ + k = 0; + jsize = xa[0]; + xa[0] = 0; + for (j = 1; j < *n; ++j) { + k += jsize; + jsize = xa[j]; + xa[j] = k; + } + + /* Copy the triplets into the column oriented storage */ + for (nz = 0; nz < *nonz; ++nz) { + j = col[nz]; + k = xa[j]; + asub[k] = row[nz]; + a[k] = val[nz]; + ++xa[j]; + } + + /* Reset the column pointers to the beginning of each column */ + for (j = *n; j > 0; --j) + xa[j] = xa[j-1]; + xa[0] = 0; + + SUPERLU_FREE(val); + SUPERLU_FREE(row); + SUPERLU_FREE(col); + +#ifdef CHK_INPUT + { + int i; + for (i = 0; i < *n; i++) { + printf("Col %d, xa %d\n", i, xa[i]); + for (k = xa[i]; k < xa[i+1]; k++) + printf("%d\t%16.10f\n", asub[k], a[k]); + } + } +#endif + +} + + +void zreadrhs(int m, doublecomplex *b) +{ + FILE *fp, *fopen(); + int i; + /*int j;*/ + + if ( !(fp = fopen("b.dat", "r")) ) { + fprintf(stderr, "dreadrhs: file does not exist\n"); + exit(-1); + } + for (i = 0; i < m; ++i) + fscanf(fp, "%lf%lf\n", &b[i].r, &b[i].i); + + /* readpair_(j, &b[i]);*/ + fclose(fp); +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_bmod.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_bmod.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_bmod.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_bmod.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,29 +1,31 @@ -/* +/*! @file zsnode_bmod.c + * \brief Performs numeric block updates within the relaxed snode. + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "zsp_defs.h" + +#include "slu_zdefs.h" -/* - * Performs numeric block updates within the relaxed snode. +/*! \brief Performs numeric block updates within the relaxed snode. */ int zsnode_bmod ( diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_dfs.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_dfs.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_dfs.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsnode_dfs.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,27 +1,45 @@ - -/* +/*! @file zsnode_dfs.c + * \brief Determines the union of row structures of columns within the relaxed node + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
  *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ -#include "zsp_defs.h" -#include "util.h" + +#include "slu_zdefs.h" + +/*! \brief + * + *
+ * Purpose
+ * =======
+ *    zsnode_dfs() - Determine the union of the row structures of those 
+ *    columns within the relaxed snode.
+ *    Note: The relaxed snodes are leaves of the supernodal etree, therefore, 
+ *    the portion outside the rectangular supernode must be zero.
+ *
+ * Return value
+ * ============
+ *     0   success;
+ *    >0   number of bytes allocated when it runs out of memory.
+ * 
+ */ int zsnode_dfs ( @@ -35,19 +53,7 @@ GlobalLU_t *Glu /* modified */ ) { -/* Purpose - * ======= - * zsnode_dfs() - Determine the union of the row structures of those - * columns within the relaxed snode. - * Note: The relaxed snodes are leaves of the supernodal etree, therefore, - * the portion outside the rectangular supernode must be zero. - * - * Return value - * ============ - * 0 success; - * >0 number of bytes allocated when run out of memory. - * - */ + register int i, k, ifrom, ito, nextl, new_next; int nsuper, krow, kmark, mem_error; int *xsup, *supno; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas2.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas2.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas2.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas2.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,17 +1,20 @@ -/* +/*! @file zsp_blas2.c + * \brief Sparse BLAS 2, using some dense BLAS 2 operations + * + *
  * -- SuperLU routine (version 3.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * October 15, 2003
- *
+ * 
*/ /* * File name: zsp_blas2.c * Purpose: Sparse BLAS 2, using some dense BLAS 2 operations. */ -#include "zsp_defs.h" +#include "slu_zdefs.h" /* * Function prototypes @@ -20,12 +23,9 @@ void zlsolve(int, int, doublecomplex*, doublecomplex*); void zmatvec(int, int, int, doublecomplex*, doublecomplex*, doublecomplex*); - -int -sp_ztrsv(char *uplo, char *trans, char *diag, SuperMatrix *L, - SuperMatrix *U, doublecomplex *x, SuperLUStat_t *stat, int *info) -{ -/* +/*! \brief Solves one of the systems of equations A*x = b, or A'*x = b + * + *
  *   Purpose
  *   =======
  *
@@ -49,8 +49,8 @@
  *             On entry, trans specifies the equations to be solved as   
  *             follows:   
  *                trans = 'N' or 'n'   A*x = b.   
- *                trans = 'T' or 't'   A'*x = b.   
- *                trans = 'C' or 'c'   A'*x = b.   
+ *                trans = 'T' or 't'   A'*x = b.
+ *                trans = 'C' or 'c'   A^H*x = b.   
  *
  *   diag   - (input) char*
  *             On entry, diag specifies whether or not A is unit   
@@ -75,8 +75,12 @@
  *
  *   info    - (output) int*
  *             If *info = -i, the i-th argument had an illegal value.
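+ *
+ *   A hedged usage sketch (illustration only): assuming L and U are the
+ *   factors computed by zgstrf, with a unit-diagonal L and a non-unit U
+ *   as is conventional for SuperLU, and x initially holds the (permuted)
+ *   right-hand side, a forward and a back substitution would look like
+ *
+ *       sp_ztrsv("L", "N", "U", &L, &U, x, &stat, &info);
+ *       sp_ztrsv("U", "N", "N", &L, &U, x, &stat, &info);
+ *
+ *   leaving the solution in x; stat is an initialized SuperLUStat_t and
+ *   info an int receiving the error code described above.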
- *
+ * 
*/ +int +sp_ztrsv(char *uplo, char *trans, char *diag, SuperMatrix *L, + SuperMatrix *U, doublecomplex *x, SuperLUStat_t *stat, int *info) +{ #ifdef _CRAY _fcd ftcs1 = _cptofcd("L", strlen("L")), ftcs2 = _cptofcd("N", strlen("N")), @@ -85,8 +89,8 @@ SCformat *Lstore; NCformat *Ustore; doublecomplex *Lval, *Uval; - doublecomplex temp; int incx = 1, incy = 1; + doublecomplex temp; doublecomplex alpha = {1.0, 0.0}, beta = {1.0, 0.0}; doublecomplex comp_zero = {0.0, 0.0}; int nrow; @@ -98,7 +102,8 @@ /* Test the input parameters */ *info = 0; if ( !lsame_(uplo,"L") && !lsame_(uplo, "U") ) *info = -1; - else if ( !lsame_(trans, "N") && !lsame_(trans, "T") && !lsame_(trans,"C") ) *info = -2; + else if ( !lsame_(trans, "N") && !lsame_(trans, "T") && + !lsame_(trans, "C")) *info = -2; else if ( !lsame_(diag, "U") && !lsame_(diag, "N") ) *info = -3; else if ( L->nrow != L->ncol || L->nrow < 0 ) *info = -4; else if ( U->nrow != U->ncol || U->nrow < 0 ) *info = -5; @@ -131,7 +136,8 @@ luptr = L_NZ_START(fsupc); nrow = nsupr - nsupc; - solve_ops += 4 * nsupc * (nsupc - 1); + /* 1 z_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc - 1) + 10 * nsupc; solve_ops += 8 * nrow * nsupc; if ( nsupc == 1 ) { @@ -184,7 +190,8 @@ nsupc = L_FST_SUPC(k+1) - fsupc; luptr = L_NZ_START(fsupc); - solve_ops += 4 * nsupc * (nsupc + 1); + /* 1 z_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc + 1) + 10 * nsupc; if ( nsupc == 1 ) { z_div(&x[fsupc], &x[fsupc], &Lval[luptr]); @@ -219,7 +226,7 @@ } /* for k ... */ } - } else if (lsame_(trans, "T") ) { /* Form x := inv(A')*x */ + } else if ( lsame_(trans, "T") ) { /* Form x := inv(A')*x */ if ( lsame_(uplo, "L") ) { /* Form x := inv(L')*x */ @@ -249,13 +256,13 @@ solve_ops += 4 * nsupc * (nsupc - 1); #ifdef _CRAY ftcs1 = _cptofcd("L", strlen("L")); - ftcs2 = _cptofcd(trans, strlen("T")); + ftcs2 = _cptofcd("T", strlen("T")); ftcs3 = _cptofcd("U", strlen("U")); CTRSV(ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #else - ztrsv_("L", trans, "U", &nsupc, &Lval[luptr], &nsupr, - &x[fsupc], &incx); + ztrsv_("L", "T", "U", &nsupc, &Lval[luptr], &nsupr, + &x[fsupc], &incx); #endif } } @@ -278,26 +285,27 @@ } } - solve_ops += 4 * nsupc * (nsupc + 1); + /* 1 z_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc + 1) + 10 * nsupc; if ( nsupc == 1 ) { z_div(&x[fsupc], &x[fsupc], &Lval[luptr]); } else { #ifdef _CRAY ftcs1 = _cptofcd("U", strlen("U")); - ftcs2 = _cptofcd(trans, strlen("T")); + ftcs2 = _cptofcd("T", strlen("T")); ftcs3 = _cptofcd("N", strlen("N")); CTRSV( ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #else - ztrsv_("U", trans, "N", &nsupc, &Lval[luptr], &nsupr, - &x[fsupc], &incx); + ztrsv_("U", "T", "N", &nsupc, &Lval[luptr], &nsupr, + &x[fsupc], &incx); #endif } } /* for k ... 
*/ } - } else { /* Form x := conj(inv(A'))*x */ - + } else { /* Form x := conj(inv(A'))*x */ + if ( lsame_(uplo, "L") ) { /* Form x := conj(inv(L'))*x */ if ( L->nrow == 0 ) return 0; /* Quick return */ @@ -321,19 +329,19 @@ z_sub(&x[jcol], &x[jcol], &comp_zero); iptr++; } - } - - if ( nsupc > 1 ) { + } + + if ( nsupc > 1 ) { solve_ops += 4 * nsupc * (nsupc - 1); #ifdef _CRAY ftcs1 = _cptofcd("L", strlen("L")); ftcs2 = _cptofcd(trans, strlen("T")); ftcs3 = _cptofcd("U", strlen("U")); - CTRSV(ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, + ZTRSV(ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #else ztrsv_("L", trans, "U", &nsupc, &Lval[luptr], &nsupr, - &x[fsupc], &incx); + &x[fsupc], &incx); #endif } } @@ -357,25 +365,26 @@ } } - solve_ops += 4 * nsupc * (nsupc + 1); - + /* 1 z_div costs 10 flops */ + solve_ops += 4 * nsupc * (nsupc + 1) + 10 * nsupc; + if ( nsupc == 1 ) { - zz_conj(&temp, &Lval[luptr]) + zz_conj(&temp, &Lval[luptr]); z_div(&x[fsupc], &x[fsupc], &temp); } else { #ifdef _CRAY ftcs1 = _cptofcd("U", strlen("U")); ftcs2 = _cptofcd(trans, strlen("T")); ftcs3 = _cptofcd("N", strlen("N")); - CTRSV( ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, + ZTRSV( ftcs1, ftcs2, ftcs3, &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #else ztrsv_("U", trans, "N", &nsupc, &Lval[luptr], &nsupr, &x[fsupc], &incx); #endif - } - } /* for k ... */ - } + } + } /* for k ... */ + } } stat->ops[SOLVE] += solve_ops; @@ -385,64 +394,68 @@ +/*! \brief Performs one of the matrix-vector operations y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y + * + *
  
+ *   Purpose   
+ *   =======   
+ *
+ *   sp_zgemv()  performs one of the matrix-vector operations   
+ *      y := alpha*A*x + beta*y,   or   y := alpha*A'*x + beta*y,   
+ *   where alpha and beta are scalars, x and y are vectors and A is a
+ *   sparse A->nrow by A->ncol matrix.   
+ *
+ *   Parameters   
+ *   ==========   
+ *
+ *   TRANS  - (input) char*
+ *            On entry, TRANS specifies the operation to be performed as   
+ *            follows:   
+ *               TRANS = 'N' or 'n'   y := alpha*A*x + beta*y.   
+ *               TRANS = 'T' or 't'   y := alpha*A'*x + beta*y.   
+ *               TRANS = 'C' or 'c'   y := alpha*A'*x + beta*y.   
+ *
+ *   ALPHA  - (input) doublecomplex
+ *            On entry, ALPHA specifies the scalar alpha.   
+ *
+ *   A      - (input) SuperMatrix*
+ *            Before entry, the leading m by n part of the array A must   
+ *            contain the matrix of coefficients.   
+ *
+ *   X      - (input) doublecomplex*, array of DIMENSION at least   
+ *            ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n'   
+ *           and at least   
+ *            ( 1 + ( m - 1 )*abs( INCX ) ) otherwise.   
+ *            Before entry, the incremented array X must contain the   
+ *            vector x.   
+ * 
+ *   INCX   - (input) int
+ *            On entry, INCX specifies the increment for the elements of   
+ *            X. INCX must not be zero.   
+ *
+ *   BETA   - (input) doublecomplex
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then Y need not be set on input.   
+ *
+ *   Y      - (output) doublecomplex*,  array of DIMENSION at least   
+ *            ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n'   
+ *            and at least   
+ *            ( 1 + ( n - 1 )*abs( INCY ) ) otherwise.   
+ *            Before entry with BETA non-zero, the incremented array Y   
+ *            must contain the vector y. On exit, Y is overwritten by the 
+ *            updated vector y.
+ *	      
+ *   INCY   - (input) int
+ *            On entry, INCY specifies the increment for the elements of   
+ *            Y. INCY must not be zero.   
+ *
+ *    ==== Sparse Level 2 Blas routine.   
+ * 
+*/ int sp_zgemv(char *trans, doublecomplex alpha, SuperMatrix *A, doublecomplex *x, int incx, doublecomplex beta, doublecomplex *y, int incy) { -/* Purpose - ======= - - sp_zgemv() performs one of the matrix-vector operations - y := alpha*A*x + beta*y, or y := alpha*A'*x + beta*y, - where alpha and beta are scalars, x and y are vectors and A is a - sparse A->nrow by A->ncol matrix. - - Parameters - ========== - - TRANS - (input) char* - On entry, TRANS specifies the operation to be performed as - follows: - TRANS = 'N' or 'n' y := alpha*A*x + beta*y. - TRANS = 'T' or 't' y := alpha*A'*x + beta*y. - TRANS = 'C' or 'c' y := alpha*A'*x + beta*y. - - ALPHA - (input) doublecomplex - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Before entry, the leading m by n part of the array A must - contain the matrix of coefficients. - - X - (input) doublecomplex*, array of DIMENSION at least - ( 1 + ( n - 1 )*abs( INCX ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( m - 1 )*abs( INCX ) ) otherwise. - Before entry, the incremented array X must contain the - vector x. - - INCX - (input) int - On entry, INCX specifies the increment for the elements of - X. INCX must not be zero. - - BETA - (input) doublecomplex - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then Y need not be set on input. - - Y - (output) doublecomplex*, array of DIMENSION at least - ( 1 + ( m - 1 )*abs( INCY ) ) when TRANS = 'N' or 'n' - and at least - ( 1 + ( n - 1 )*abs( INCY ) ) otherwise. - Before entry with BETA non-zero, the incremented array Y - must contain the vector y. On exit, Y is overwritten by the - updated vector y. - - INCY - (input) int - On entry, INCY specifies the increment for the elements of - Y. INCY must not be zero. - - ==== Sparse Level 2 Blas routine. -*/ /* Local variables */ NCformat *Astore; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas3.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas3.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas3.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_blas3.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,116 +1,122 @@ - -/* +/*! @file zsp_blas3.c + * \brief Sparse BLAS3, using some dense BLAS3 operations + * + *
  * -- SuperLU routine (version 2.0) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
  * November 15, 1997
- *
+ * 
*/ /* * File name: sp_blas3.c * Purpose: Sparse BLAS3, using some dense BLAS3 operations. */ -#include "zsp_defs.h" -#include "util.h" +#include "slu_zdefs.h" + +/*! \brief + * + *
+ * Purpose   
+ *   =======   
+ * 
 + *   sp_zgemm() performs one of the matrix-matrix operations   
+ * 
+ *      C := alpha*op( A )*op( B ) + beta*C,   
+ * 
+ *   where  op( X ) is one of 
+ * 
+ *      op( X ) = X   or   op( X ) = X'   or   op( X ) = conjg( X' ),
+ * 
+ *   alpha and beta are scalars, and A, B and C are matrices, with op( A ) 
+ *   an m by k matrix,  op( B )  a  k by n matrix and  C an m by n matrix. 
+ *   
+ * 
+ *   Parameters   
+ *   ==========   
+ * 
+ *   TRANSA - (input) char*
+ *            On entry, TRANSA specifies the form of op( A ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSA = 'N' or 'n',  op( A ) = A.   
+ *               TRANSA = 'T' or 't',  op( A ) = A'.   
+ *               TRANSA = 'C' or 'c',  op( A ) = conjg( A' ).   
+ *            Unchanged on exit.   
+ * 
+ *   TRANSB - (input) char*
+ *            On entry, TRANSB specifies the form of op( B ) to be used in 
+ *            the matrix multiplication as follows:   
+ *               TRANSB = 'N' or 'n',  op( B ) = B.   
+ *               TRANSB = 'T' or 't',  op( B ) = B'.   
+ *               TRANSB = 'C' or 'c',  op( B ) = conjg( B' ).   
+ *            Unchanged on exit.   
+ * 
+ *   M      - (input) int   
+ *            On entry,  M  specifies  the number of rows of the matrix 
+ *	     op( A ) and of the matrix C.  M must be at least zero. 
+ *	     Unchanged on exit.   
+ * 
+ *   N      - (input) int
+ *            On entry,  N specifies the number of columns of the matrix 
+ *	     op( B ) and the number of columns of the matrix C. N must be 
+ *	     at least zero.
+ *	     Unchanged on exit.   
+ * 
+ *   K      - (input) int
+ *            On entry, K specifies the number of columns of the matrix 
+ *	     op( A ) and the number of rows of the matrix op( B ). K must 
+ *	     be at least  zero.   
+ *           Unchanged on exit.
+ *      
+ *   ALPHA  - (input) doublecomplex
+ *            On entry, ALPHA specifies the scalar alpha.   
+ * 
+ *   A      - (input) SuperMatrix*
+ *            Matrix A with a sparse format, of dimension (A->nrow, A->ncol).
+ *            Currently, the type of A can be:
+ *                Stype = NC or NCP; Dtype = SLU_Z; Mtype = GE. 
+ *            In the future, more general A can be handled.
+ * 
+ *   B      - DOUBLE COMPLEX PRECISION array of DIMENSION ( LDB, kb ), where kb is 
+ *            n when TRANSB = 'N' or 'n',  and is  k otherwise.   
+ *            Before entry with  TRANSB = 'N' or 'n',  the leading k by n 
+ *            part of the array B must contain the matrix B, otherwise 
+ *            the leading n by k part of the array B must contain the 
+ *            matrix B.   
+ *            Unchanged on exit.   
+ * 
+ *   LDB    - (input) int
+ *            On entry, LDB specifies the first dimension of B as declared 
+ *            in the calling (sub) program. LDB must be at least max( 1, n ).  
+ *            Unchanged on exit.   
+ * 
+ *   BETA   - (input) doublecomplex
+ *            On entry, BETA specifies the scalar beta. When BETA is   
+ *            supplied as zero then C need not be set on input.   
+ *  
+ *   C      - DOUBLE COMPLEX PRECISION array of DIMENSION ( LDC, n ).   
+ *            Before entry, the leading m by n part of the array C must 
+ *            contain the matrix C,  except when beta is zero, in which 
+ *            case C need not be set on entry.   
+ *            On exit, the array C is overwritten by the m by n matrix 
+ *	     ( alpha*op( A )*B + beta*C ).   
+ *  
+ *   LDC    - (input) int
+ *            On entry, LDC specifies the first dimension of C as declared 
+ *            in the calling (sub)program. LDC must be at least max(1,m).   
+ *            Unchanged on exit.   
+ *  
+ *   ==== Sparse Level 3 Blas routine.   
+ * 
+ */ int sp_zgemm(char *transa, char *transb, int m, int n, int k, doublecomplex alpha, SuperMatrix *A, doublecomplex *b, int ldb, doublecomplex beta, doublecomplex *c, int ldc) { -/* Purpose - ======= - - sp_z performs one of the matrix-matrix operations - - C := alpha*op( A )*op( B ) + beta*C, - - where op( X ) is one of - - op( X ) = X or op( X ) = X' or op( X ) = conjg( X' ), - - alpha and beta are scalars, and A, B and C are matrices, with op( A ) - an m by k matrix, op( B ) a k by n matrix and C an m by n matrix. - - - Parameters - ========== - - TRANSA - (input) char* - On entry, TRANSA specifies the form of op( A ) to be used in - the matrix multiplication as follows: - TRANSA = 'N' or 'n', op( A ) = A. - TRANSA = 'T' or 't', op( A ) = A'. - TRANSA = 'C' or 'c', op( A ) = conjg( A' ). - Unchanged on exit. - - TRANSB - (input) char* - On entry, TRANSB specifies the form of op( B ) to be used in - the matrix multiplication as follows: - TRANSB = 'N' or 'n', op( B ) = B. - TRANSB = 'T' or 't', op( B ) = B'. - TRANSB = 'C' or 'c', op( B ) = conjg( B' ). - Unchanged on exit. - - M - (input) int - On entry, M specifies the number of rows of the matrix - op( A ) and of the matrix C. M must be at least zero. - Unchanged on exit. - - N - (input) int - On entry, N specifies the number of columns of the matrix - op( B ) and the number of columns of the matrix C. N must be - at least zero. - Unchanged on exit. - - K - (input) int - On entry, K specifies the number of columns of the matrix - op( A ) and the number of rows of the matrix op( B ). K must - be at least zero. - Unchanged on exit. - - ALPHA - (input) doublecomplex - On entry, ALPHA specifies the scalar alpha. - - A - (input) SuperMatrix* - Matrix A with a sparse format, of dimension (A->nrow, A->ncol). - Currently, the type of A can be: - Stype = NC or NCP; Dtype = SLU_Z; Mtype = GE. - In the future, more general A can be handled. - - B - DOUBLE COMPLEX PRECISION array of DIMENSION ( LDB, kb ), where kb is - n when TRANSB = 'N' or 'n', and is k otherwise. - Before entry with TRANSB = 'N' or 'n', the leading k by n - part of the array B must contain the matrix B, otherwise - the leading n by k part of the array B must contain the - matrix B. - Unchanged on exit. - - LDB - (input) int - On entry, LDB specifies the first dimension of B as declared - in the calling (sub) program. LDB must be at least max( 1, n ). - Unchanged on exit. - - BETA - (input) doublecomplex - On entry, BETA specifies the scalar beta. When BETA is - supplied as zero then C need not be set on input. - - C - DOUBLE COMPLEX PRECISION array of DIMENSION ( LDC, n ). - Before entry, the leading m by n part of the array C must - contain the matrix C, except when beta is zero, in which - case C need not be set on entry. - On exit, the array C is overwritten by the m by n matrix - ( alpha*op( A )*B + beta*C ). - - LDC - (input) int - On entry, LDC specifies the first dimension of C as declared - in the calling (sub)program. LDC must be at least max(1,m). - Unchanged on exit. - - ==== Sparse Level 3 Blas routine. 
-*/ int incx = 1, incy = 1; int j; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_defs.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_defs.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_defs.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zsp_defs.h 1970-01-01 01:00:00.000000000 +0100 @@ -1,237 +0,0 @@ - -/* - * -- SuperLU routine (version 3.0) -- - * Univ. of California Berkeley, Xerox Palo Alto Research Center, - * and Lawrence Berkeley National Lab. - * October 15, 2003 - * - */ -#ifndef __SUPERLU_zSP_DEFS /* allow multiple inclusions */ -#define __SUPERLU_zSP_DEFS - -/* - * File name: zsp_defs.h - * Purpose: Sparse matrix types and function prototypes - * History: - */ - -#ifdef _CRAY -#include -#include -#endif - -/* Define my integer type int_t */ -typedef int int_t; /* default */ - -#include "Cnames.h" -#include "supermatrix.h" -#include "util.h" -#include "dcomplex.h" - - -/* - * Global data structures used in LU factorization - - * - * nsuper: #supernodes = nsuper + 1, numbered [0, nsuper]. - * (xsup,supno): supno[i] is the supernode no to which i belongs; - * xsup(s) points to the beginning of the s-th supernode. - * e.g. supno 0 1 2 2 3 3 3 4 4 4 4 4 (n=12) - * xsup 0 1 2 4 7 12 - * Note: dfs will be performed on supernode rep. relative to the new - * row pivoting ordering - * - * (xlsub,lsub): lsub[*] contains the compressed subscript of - * rectangular supernodes; xlsub[j] points to the starting - * location of the j-th column in lsub[*]. Note that xlsub - * is indexed by column. - * Storage: original row subscripts - * - * During the course of sparse LU factorization, we also use - * (xlsub,lsub) for the purpose of symmetric pruning. For each - * supernode {s,s+1,...,t=s+r} with first column s and last - * column t, the subscript set - * lsub[j], j=xlsub[s], .., xlsub[s+1]-1 - * is the structure of column s (i.e. structure of this supernode). - * It is used for the storage of numerical values. - * Furthermore, - * lsub[j], j=xlsub[t], .., xlsub[t+1]-1 - * is the structure of the last column t of this supernode. - * It is for the purpose of symmetric pruning. Therefore, the - * structural subscripts can be rearranged without making physical - * interchanges among the numerical values. - * - * However, if the supernode has only one column, then we - * only keep one set of subscripts. For any subscript interchange - * performed, similar interchange must be done on the numerical - * values. - * - * The last column structures (for pruning) will be removed - * after the numercial LU factorization phase. - * - * (xlusup,lusup): lusup[*] contains the numerical values of the - * rectangular supernodes; xlusup[j] points to the starting - * location of the j-th column in storage vector lusup[*] - * Note: xlusup is indexed by column. - * Each rectangular supernode is stored by column-major - * scheme, consistent with Fortran 2-dim array storage. - * - * (xusub,ucol,usub): ucol[*] stores the numerical values of - * U-columns outside the rectangular supernodes. The row - * subscript of nonzero ucol[k] is stored in usub[k]. - * xusub[i] points to the starting location of column i in ucol. - * Storage: new row subscripts; that is subscripts of PA. 
- */ -typedef struct { - int *xsup; /* supernode and column mapping */ - int *supno; - int *lsub; /* compressed L subscripts */ - int *xlsub; - doublecomplex *lusup; /* L supernodes */ - int *xlusup; - doublecomplex *ucol; /* U columns */ - int *usub; - int *xusub; - int nzlmax; /* current max size of lsub */ - int nzumax; /* " " " ucol */ - int nzlumax; /* " " " lusup */ - int n; /* number of columns in the matrix */ - LU_space_t MemModel; /* 0 - system malloc'd; 1 - user provided */ -} GlobalLU_t; - -typedef struct { - float for_lu; - float total_needed; - int expansions; -} mem_usage_t; - -#ifdef __cplusplus -extern "C" { -#endif - -/* Driver routines */ -extern void -zgssv(superlu_options_t *, SuperMatrix *, int *, int *, SuperMatrix *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t *, int *); -extern void -zgssvx(superlu_options_t *, SuperMatrix *, int *, int *, int *, - char *, double *, double *, SuperMatrix *, SuperMatrix *, - void *, int, SuperMatrix *, SuperMatrix *, - double *, double *, double *, double *, - mem_usage_t *, SuperLUStat_t *, int *); - -/* Supernodal LU factor related */ -extern void -zCreate_CompCol_Matrix(SuperMatrix *, int, int, int, doublecomplex *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -zCreate_CompRow_Matrix(SuperMatrix *, int, int, int, doublecomplex *, - int *, int *, Stype_t, Dtype_t, Mtype_t); -extern void -zCopy_CompCol_Matrix(SuperMatrix *, SuperMatrix *); -extern void -zCreate_Dense_Matrix(SuperMatrix *, int, int, doublecomplex *, int, - Stype_t, Dtype_t, Mtype_t); -extern void -zCreate_SuperNode_Matrix(SuperMatrix *, int, int, int, doublecomplex *, - int *, int *, int *, int *, int *, - Stype_t, Dtype_t, Mtype_t); -extern void -zCopy_Dense_Matrix(int, int, doublecomplex *, int, doublecomplex *, int); - -extern void countnz (const int, int *, int *, int *, GlobalLU_t *); -extern void fixupL (const int, const int *, GlobalLU_t *); - -extern void zallocateA (int, int, doublecomplex **, int **, int **); -extern void zgstrf (superlu_options_t*, SuperMatrix*, double, - int, int, int*, void *, int, int *, int *, - SuperMatrix *, SuperMatrix *, SuperLUStat_t*, int *); -extern int zsnode_dfs (const int, const int, const int *, const int *, - const int *, int *, int *, GlobalLU_t *); -extern int zsnode_bmod (const int, const int, const int, doublecomplex *, - doublecomplex *, GlobalLU_t *, SuperLUStat_t*); -extern void zpanel_dfs (const int, const int, const int, SuperMatrix *, - int *, int *, doublecomplex *, int *, int *, int *, - int *, int *, int *, int *, GlobalLU_t *); -extern void zpanel_bmod (const int, const int, const int, const int, - doublecomplex *, doublecomplex *, int *, int *, - GlobalLU_t *, SuperLUStat_t*); -extern int zcolumn_dfs (const int, const int, int *, int *, int *, int *, - int *, int *, int *, int *, int *, GlobalLU_t *); -extern int zcolumn_bmod (const int, const int, doublecomplex *, - doublecomplex *, int *, int *, int, - GlobalLU_t *, SuperLUStat_t*); -extern int zcopy_to_ucol (int, int, int *, int *, int *, - doublecomplex *, GlobalLU_t *); -extern int zpivotL (const int, const double, int *, int *, - int *, int *, int *, GlobalLU_t *, SuperLUStat_t*); -extern void zpruneL (const int, const int *, const int, const int, - const int *, const int *, int *, GlobalLU_t *); -extern void zreadmt (int *, int *, int *, doublecomplex **, int **, int **); -extern void zGenXtrue (int, int, doublecomplex *, int); -extern void zFillRHS (trans_t, int, doublecomplex *, int, SuperMatrix *, - SuperMatrix *); -extern void zgstrs 
(trans_t, SuperMatrix *, SuperMatrix *, int *, int *, - SuperMatrix *, SuperLUStat_t*, int *); - - -/* Driver related */ - -extern void zgsequ (SuperMatrix *, double *, double *, double *, - double *, double *, int *); -extern void zlaqgs (SuperMatrix *, double *, double *, double, - double, double, char *); -extern void zgscon (char *, SuperMatrix *, SuperMatrix *, - double, double *, SuperLUStat_t*, int *); -extern double zPivotGrowth(int, SuperMatrix *, int *, - SuperMatrix *, SuperMatrix *); -extern void zgsrfs (trans_t, SuperMatrix *, SuperMatrix *, - SuperMatrix *, int *, int *, char *, double *, - double *, SuperMatrix *, SuperMatrix *, - double *, double *, SuperLUStat_t*, int *); - -extern int sp_ztrsv (char *, char *, char *, SuperMatrix *, - SuperMatrix *, doublecomplex *, SuperLUStat_t*, int *); -extern int sp_zgemv (char *, doublecomplex, SuperMatrix *, doublecomplex *, - int, doublecomplex, doublecomplex *, int); - -extern int sp_zgemm (char *, char *, int, int, int, doublecomplex, - SuperMatrix *, doublecomplex *, int, doublecomplex, - doublecomplex *, int); - -/* Memory-related */ -extern int zLUMemInit (fact_t, void *, int, int, int, int, int, - SuperMatrix *, SuperMatrix *, - GlobalLU_t *, int **, doublecomplex **); -extern void zSetRWork (int, int, doublecomplex *, doublecomplex **, doublecomplex **); -extern void zLUWorkFree (int *, doublecomplex *, GlobalLU_t *); -extern int zLUMemXpand (int, int, MemType, int *, GlobalLU_t *); - -extern doublecomplex *doublecomplexMalloc(int); -extern doublecomplex *doublecomplexCalloc(int); -extern double *doubleMalloc(int); -extern double *doubleCalloc(int); -extern int zmemory_usage(const int, const int, const int, const int); -extern int zQuerySpace (SuperMatrix *, SuperMatrix *, mem_usage_t *); - -/* Auxiliary routines */ -extern void zreadhb(int *, int *, int *, doublecomplex **, int **, int **); -extern void zCompRow_to_CompCol(int, int, int, doublecomplex*, int*, int*, - doublecomplex **, int **, int **); -extern void zfill (doublecomplex *, int, doublecomplex); -extern void zinf_norm_error (int, SuperMatrix *, doublecomplex *); -extern void PrintPerf (SuperMatrix *, SuperMatrix *, mem_usage_t *, - doublecomplex, doublecomplex, doublecomplex *, doublecomplex *, char *); - -/* Routines for debugging */ -extern void zPrint_CompCol_Matrix(char *, SuperMatrix *); -extern void zPrint_SuperNode_Matrix(char *, SuperMatrix *); -extern void zPrint_Dense_Matrix(char *, SuperMatrix *); -extern void print_lu_col(char *, int, int, int *, GlobalLU_t *); -extern void check_tempv(int, doublecomplex *); - -#ifdef __cplusplus - } -#endif - -#endif /* __SUPERLU_zSP_DEFS */ - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zutil.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zutil.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zutil.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/SuperLU/SRC/zutil.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,26 +1,29 @@ -/* - * -- SuperLU routine (version 3.0) -- +/*! @file zutil.c + * \brief Matrix utility functions + * + *
+ * -- SuperLU routine (version 3.1) --
  * Univ. of California Berkeley, Xerox Palo Alto Research Center,
  * and Lawrence Berkeley National Lab.
- * October 15, 2003
+ * August 1, 2008
+ *
+ * Copyright (c) 1994 by Xerox Corporation.  All rights reserved.
  *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY
+ * EXPRESSED OR IMPLIED.  ANY USE IS AT YOUR OWN RISK.
+ * 
+ * Permission is hereby granted to use or copy this program for any
+ * purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is
+ * granted, provided the above notices are retained, and a notice that
+ * the code was modified is included with the above copyright notice.
+ * 
*/ -/* - Copyright (c) 1994 by Xerox Corporation. All rights reserved. - - THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY - EXPRESSED OR IMPLIED. ANY USE IS AT YOUR OWN RISK. - - Permission is hereby granted to use or copy this program for any - purpose, provided the above notices are retained on all copies. - Permission to modify the code and to distribute modified code is - granted, provided the above notices are retained, and a notice that - the code was modified is included with the above copyright notice. -*/ + #include -#include "zsp_defs.h" +#include "slu_zdefs.h" void zCreate_CompCol_Matrix(SuperMatrix *A, int m, int n, int nnz, @@ -64,7 +67,7 @@ Astore->rowptr = rowptr; } -/* Copy matrix A into matrix B. */ +/*! \brief Copy matrix A into matrix B. */ void zCopy_CompCol_Matrix(SuperMatrix *A, SuperMatrix *B) { @@ -108,12 +111,7 @@ zCopy_Dense_Matrix(int M, int N, doublecomplex *X, int ldx, doublecomplex *Y, int ldy) { -/* - * - * Purpose - * ======= - * - * Copies a two-dimensional matrix X to another matrix Y. +/*! \brief Copies a two-dimensional matrix X to another matrix Y. */ int i, j; @@ -150,8 +148,7 @@ } -/* - * Convert a row compressed storage into a column compressed storage. +/*! \brief Convert a row compressed storage into a column compressed storage. */ void zCompRow_to_CompCol(int m, int n, int nnz, @@ -240,7 +237,8 @@ for (j = c; j < c + nsup; ++j) { d = Astore->nzval_colptr[j]; for (i = rowind_colptr[c]; i < rowind_colptr[c+1]; ++i) { - printf("%d\t%d\t%e\t%e\n", rowind[i], j, dp[d++], dp[d++]); + printf("%d\t%d\t%e\t%e\n", rowind[i], j, dp[d], dp[d+1]); + d += 2; } } } @@ -266,23 +264,24 @@ void zPrint_Dense_Matrix(char *what, SuperMatrix *A) { - DNformat *Astore; - register int i; + DNformat *Astore = (DNformat *) A->Store; + register int i, j, lda = Astore->lda; double *dp; printf("\nDense matrix %s:\n", what); printf("Stype %d, Dtype %d, Mtype %d\n", A->Stype,A->Dtype,A->Mtype); - Astore = (DNformat *) A->Store; dp = (double *) Astore->nzval; - printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,Astore->lda); + printf("nrow %d, ncol %d, lda %d\n", A->nrow,A->ncol,lda); printf("\nnzval: "); - for (i = 0; i < 2*A->nrow; ++i) printf("%f ", dp[i]); + for (j = 0; j < A->ncol; ++j) { + for (i = 0; i < 2*A->nrow; ++i) printf("%f ", dp[i + j*2*lda]); + printf("\n"); + } printf("\n"); fflush(stdout); } -/* - * Diagnostic print of column "jcol" in the U/L factor. +/*! \brief Diagnostic print of column "jcol" in the U/L factor. */ void zprint_lu_col(char *msg, int jcol, int pivrow, int *xprune, GlobalLU_t *Glu) @@ -324,9 +323,7 @@ } -/* - * Check whether tempv[] == 0. This should be true before and after - * calling any numeric routines, i.e., "panel_bmod" and "column_bmod". +/*! \brief Check whether tempv[] == 0. This should be true before and after calling any numeric routines, i.e., "panel_bmod" and "column_bmod". */ void zcheck_tempv(int n, doublecomplex *tempv) { @@ -353,8 +350,7 @@ } } -/* - * Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's +/*! \brief Let rhs[i] = sum of i-th row of A, so the solution vector is all 1's */ void zFillRHS(trans_t trans, int nrhs, doublecomplex *x, int ldx, @@ -383,8 +379,7 @@ } -/* - * Fills a doublecomplex precision array with a given value. +/*! \brief Fills a doublecomplex precision array with a given value. */ void zfill(doublecomplex *a, int alen, doublecomplex dval) @@ -395,8 +390,7 @@ -/* - * Check the inf-norm of the error vector +/*! 
\brief Check the inf-norm of the error vector */ void zinf_norm_error(int nrhs, SuperMatrix *X, doublecomplex *xtrue) { @@ -424,7 +418,7 @@ -/* Print performance of the code. */ +/*! \brief Print performance of the code. */ void zPrintPerf(SuperMatrix *L, SuperMatrix *U, mem_usage_t *mem_usage, double rpg, double rcond, double *ferr, @@ -452,9 +446,9 @@ printf("\tNo of nonzeros in factor U = %d\n", Ustore->nnz); printf("\tNo of nonzeros in L+U = %d\n", Lstore->nnz + Ustore->nnz); - printf("L\\U MB %.3f\ttotal MB needed %.3f\texpansions %d\n", - mem_usage->for_lu/1e6, mem_usage->total_needed/1e6, - mem_usage->expansions); + printf("L\\U MB %.3f\ttotal MB needed %.3f\n", + mem_usage->for_lu/1e6, mem_usage->total_needed/1e6); + printf("Number of memory expansions: %d\n", stat->expansions); printf("\tFactor\tMflops\tSolve\tMflops\tEtree\tEquil\tRcond\tRefine\n"); printf("PERF:%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f%8.2f\n", diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superlumodule.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superlumodule.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superlumodule.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superlumodule.c 2010-07-26 15:48:34.000000000 +0100 @@ -0,0 +1,256 @@ +/* -*-c-*- */ +/* + * _superlu module + * + * Python interface to SuperLU decompositions. + */ + +/* Copyright 1999 Travis Oliphant + * + * Permision to copy and modified this file is granted under + * the revised BSD license. No warranty is expressed or IMPLIED + */ + +#include +#include + +#include "_superluobject.h" + +extern jmp_buf _superlu_py_jmpbuf; + +/* + * Data-type dependent implementations for Xgssv and Xgstrf; + * + * These have to included from separate files because of SuperLU include + * structure. 
+ */ + +static PyObject * +Py_gssv(PyObject *self, PyObject *args, PyObject *kwdict) +{ + PyObject *Py_B=NULL, *Py_X=NULL; + PyArrayObject *nzvals=NULL; + PyArrayObject *colind=NULL, *rowptr=NULL; + int N, nnz; + int info; + int csc=0; + int *perm_r=NULL, *perm_c=NULL; + SuperMatrix A, B, L, U; + superlu_options_t options; + SuperLUStat_t stat; + PyObject *option_dict = NULL; + int type; + int ssv_finished = 0; + + static char *kwlist[] = {"N","nnz","nzvals","colind","rowptr","B", "csc", + "options",NULL}; + + /* Get input arguments */ + if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iiO!O!O!O|iO", kwlist, + &N, &nnz, &PyArray_Type, &nzvals, + &PyArray_Type, &colind, &PyArray_Type, + &rowptr, &Py_B, &csc, &option_dict)) { + return NULL; + } + + if (!_CHECK_INTEGER(colind) || !_CHECK_INTEGER(rowptr)) { + PyErr_SetString(PyExc_TypeError, + "colind and rowptr must be of type cint"); + return NULL; + } + + type = PyArray_TYPE(nzvals); + if (!CHECK_SLU_TYPE(type)) { + PyErr_SetString(PyExc_TypeError, + "nzvals is not of a type supported by SuperLU"); + return NULL; + } + + if (!set_superlu_options_from_dict(&options, 0, option_dict, NULL, NULL)) { + return NULL; + } + + /* Create Space for output */ + Py_X = PyArray_CopyFromObject(Py_B, type, 1, 2); + if (Py_X == NULL) return NULL; + + if (csc) { + if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, + type)) { + Py_DECREF(Py_X); + return NULL; + } + } + else { + if (NRFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, + type)) { + Py_DECREF(Py_X); + return NULL; + } + } + + if (DenseSuper_from_Numeric(&B, Py_X)) { + Destroy_SuperMatrix_Store(&A); + Py_DECREF(Py_X); + return NULL; + } + + /* B and Py_X share same data now but Py_X "owns" it */ + + /* Setup options */ + + if (setjmp(_superlu_py_jmpbuf)) { + goto fail; + } + else { + perm_c = intMalloc(N); + perm_r = intMalloc(N); + StatInit(&stat); + + /* Compute direct inverse of sparse Matrix */ + gssv(type, &options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info); + } + ssv_finished = 1; + + SUPERLU_FREE(perm_r); + SUPERLU_FREE(perm_c); + Destroy_SuperMatrix_Store(&A); /* holds just a pointer to the data */ + Destroy_SuperMatrix_Store(&B); + Destroy_SuperNode_Matrix(&L); + Destroy_CompCol_Matrix(&U); + StatFree(&stat); + + return Py_BuildValue("Ni", Py_X, info); + +fail: + SUPERLU_FREE(perm_r); + SUPERLU_FREE(perm_c); + Destroy_SuperMatrix_Store(&A); /* holds just a pointer to the data */ + Destroy_SuperMatrix_Store(&B); + if (ssv_finished) { + /* Avoid trying to free partially initialized matrices; + might leak some memory, but avoids a crash */ + Destroy_SuperNode_Matrix(&L); + Destroy_CompCol_Matrix(&U); + } + StatFree(&stat); + Py_XDECREF(Py_X); + return NULL; +} + +static PyObject * +Py_gstrf(PyObject *self, PyObject *args, PyObject *keywds) +{ + /* default value for SuperLU parameters*/ + int N, nnz; + PyArrayObject *rowind, *colptr, *nzvals; + SuperMatrix A; + PyObject *result; + PyObject *option_dict = NULL; + int type; + int ilu = 0; + + static char *kwlist[] = {"N","nnz","nzvals","colind","rowptr", + "options", "ilu", + NULL}; + + int res = PyArg_ParseTupleAndKeywords( + args, keywds, "iiO!O!O!|Oi", kwlist, + &N, &nnz, + &PyArray_Type, &nzvals, + &PyArray_Type, &rowind, + &PyArray_Type, &colptr, + &option_dict, + &ilu); + + if (!res) + return NULL; + + if (!_CHECK_INTEGER(colptr) || !_CHECK_INTEGER(rowind)) { + PyErr_SetString(PyExc_TypeError, + "rowind and colptr must be of type cint"); + return NULL; + } + + type = PyArray_TYPE(nzvals); + if 
(!CHECK_SLU_TYPE(type)) { + PyErr_SetString(PyExc_TypeError, + "nzvals is not of a type supported by SuperLU"); + return NULL; + } + + if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, rowind, colptr, + type)) { + goto fail; + } + + result = newSciPyLUObject(&A, option_dict, type, ilu); + if (result == NULL) { + goto fail; + } + + /* arrays of input matrix will not be freed */ + Destroy_SuperMatrix_Store(&A); + return result; + +fail: + /* arrays of input matrix will not be freed */ + Destroy_SuperMatrix_Store(&A); + return NULL; +} + +static char gssv_doc[] = "Direct inversion of sparse matrix.\n\nX = gssv(A,B) solves A*X = B for X."; + +static char gstrf_doc[] = "gstrf(A, ...)\n\ +\n\ +performs a factorization of the sparse matrix A=*(N,nnz,nzvals,rowind,colptr) and \n\ +returns a factored_lu object.\n\ +\n\ +arguments\n\ +---------\n\ +\n\ +Matrix to be factorized is represented as N,nnz,nzvals,rowind,colptr\n\ + as separate arguments. This is compressed sparse column representation.\n\ +\n\ +N number of rows and columns \n\ +nnz number of non-zero elements\n\ +nzvals non-zero values \n\ +rowind row-index for this column (same size as nzvals)\n\ +colptr index into rowind for first non-zero value in this column\n\ + size is (N+1). Last value should be nnz. \n\ +\n\ +additional keyword arguments:\n\ +-----------------------------\n\ +options specifies additional options for SuperLU\n\ + (same keys and values as in superlu_options_t C structure,\n\ + and additionally 'Relax' and 'PanelSize')\n\ +\n\ +ilu whether to perform an incomplete LU decomposition\n\ + (default: false)\n\ +"; + + +/* + * Main SuperLU module + */ + +static PyMethodDef SuperLU_Methods[] = { + {"gssv", (PyCFunction)Py_gssv, METH_VARARGS|METH_KEYWORDS, gssv_doc}, + {"gstrf", (PyCFunction)Py_gstrf, METH_VARARGS|METH_KEYWORDS, gstrf_doc}, + {NULL, NULL} +}; + +PyMODINIT_FUNC +init_superlu(void) +{ + PyObject *m, *d; + + SciPySuperLUType.ob_type = &PyType_Type; + + m = Py_InitModule("_superlu", SuperLU_Methods); + d = PyModule_GetDict(m); + + PyDict_SetItemString(d, "SciPyLUType", (PyObject *)&SciPySuperLUType); + + import_array(); +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.c 2010-07-26 15:48:34.000000000 +0100 @@ -1,12 +1,20 @@ +/* -*-c-*- */ +/* + * _superlu object + * + * Python object representing SuperLU factorization + some utility functions. 
+ */ + +#include -#include "Python.h" -#include "SuperLU/SRC/zsp_defs.h" #define NO_IMPORT_ARRAY #include "_superluobject.h" #include +#include extern jmp_buf _superlu_py_jmpbuf; + /*********************************************************************** * SciPyLUObject methods */ @@ -22,7 +30,7 @@ x array, solution vector(s)\n\ trans 'N': solve A * x == b\n\ 'T': solve A^T * x == b\n\ - 'H': solve A^H * x == b (not yet implemented)\n\ + 'H': solve A^H * x == b\n\ (optional, default value 'N')\n\ "; @@ -37,12 +45,18 @@ static char *kwlist[] = {"rhs","trans",NULL}; + if (!CHECK_SLU_TYPE(self->type)) { + PyErr_SetString(PyExc_ValueError, "unsupported data type"); + return NULL; + } + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!|c", kwlist, &PyArray_Type, &b, &itrans)) return NULL; - /* solve transposed system: matrix was passed row-wise instead of column-wise */ + /* solve transposed system: matrix was passed row-wise instead of + * column-wise */ if (itrans == 'n' || itrans == 'N') trans = NOTRANS; else if (itrans == 't' || itrans == 'T') @@ -67,26 +81,13 @@ StatInit(&stat); /* Solve the system, overwriting vector x. */ - switch(self->type) { - case PyArray_FLOAT: - sgstrs(trans, &self->L, &self->U, self->perm_c, self->perm_r, &B, &stat, &info); - break; - case PyArray_DOUBLE: - dgstrs(trans, &self->L, &self->U, self->perm_c, self->perm_r, &B, &stat, &info); - break; - case PyArray_CFLOAT: - cgstrs(trans, &self->L, &self->U, self->perm_c, self->perm_r, &B, &stat, &info); - break; - case PyArray_CDOUBLE: - zgstrs(trans, &self->L, &self->U, self->perm_c, self->perm_r, &B, &stat, &info); - break; - default: - PyErr_SetString(PyExc_TypeError, "Invalid type for array."); - goto fail; - } + gstrs(self->type, + trans, &self->L, &self->U, self->perm_c, self->perm_r, &B, + &stat, &info); if (info) { - PyErr_SetString(PyExc_SystemError, "gstrs was called with invalid arguments"); + PyErr_SetString(PyExc_SystemError, + "gstrs was called with invalid arguments"); goto fail; } @@ -95,7 +96,7 @@ StatFree(&stat); return (PyObject *)x; - fail: +fail: Destroy_SuperMatrix_Store(&B); StatFree(&stat); Py_XDECREF(x); @@ -119,8 +120,12 @@ { SUPERLU_FREE(self->perm_r); SUPERLU_FREE(self->perm_c); - Destroy_SuperNode_Matrix(&self->L); - Destroy_CompCol_Matrix(&self->U); + if (self->L.Store != NULL) { + Destroy_SuperNode_Matrix(&self->L); + } + if (self->U.Store != NULL) { + Destroy_CompCol_Matrix(&self->U); + } PyObject_Del(self); } @@ -131,8 +136,22 @@ return Py_BuildValue("(i,i)", self->m, self->n); if (strcmp(name, "nnz") == 0) return Py_BuildValue("i", ((SCformat *)self->L.Store)->nnz + ((SCformat *)self->U.Store)->nnz); + if (strcmp(name, "perm_r") == 0) { + PyArrayObject* perm_r = PyArray_SimpleNewFromData(1, (npy_intp*) (&self->n), NPY_INT, (void*)self->perm_r); + /* For ref counting of the memory */ + PyArray_BASE(perm_r) = self; + Py_INCREF(self); + return perm_r ; + } + if (strcmp(name, "perm_c") == 0) { + PyArrayObject* perm_c = PyArray_SimpleNewFromData(1, (npy_intp*) (&self->n), NPY_INT, (void*)self->perm_c); + /* For ref counting of the memory */ + PyArray_BASE(perm_c) = self; + Py_INCREF(self); + return perm_c ; + } if (strcmp(name, "__members__") == 0) { - char *members[] = {"shape", "nnz"}; + char *members[] = {"shape", "nnz", "perm_r", "perm_c"}; int i; PyObject *list = PyList_New(sizeof(members)/sizeof(char *)); @@ -153,6 +172,27 @@ /*********************************************************************** * SciPySuperLUType structure */ +static char factored_lu_doc[] = "\ +Object resulting 
from a factorization of a sparse matrix\n\ +\n\ +Attributes\n\ +-----------\n\ +\n\ +shape : 2-tuple\n\ + the shape of the orginal matrix factored\n\ +nnz : int\n\ + the number of non-zero elements in the matrix\n\ +perm_c\n\ + the permutation applied to the colums of the matrix for the LU factorization\n\ +perm_r\n\ + the permutation applied to the rows of the matrix for the LU factorization\n\ +\n\ +Methods\n\ +-------\n\ +solve\n\ + solves the system for a given right hand side vector\n \ +\n\ +"; PyTypeObject SciPySuperLUType = { PyObject_HEAD_INIT(NULL) @@ -170,6 +210,13 @@ 0, /* tp_as_sequence*/ 0, /* tp_as_mapping*/ 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + 0, /* tp_flags */ + factored_lu_doc, /* tp_doc */ }; @@ -197,32 +244,25 @@ ldx = m; } - if (setjmp(_superlu_py_jmpbuf)) return -1; - else - switch (aX->descr->type_num) { - case PyArray_FLOAT: - sCreate_Dense_Matrix(X, m, n, (float *)aX->data, ldx, SLU_DN, SLU_S, SLU_GE); - break; - case PyArray_DOUBLE: - dCreate_Dense_Matrix(X, m, n, (double *)aX->data, ldx, SLU_DN, SLU_D, SLU_GE); - break; - case PyArray_CFLOAT: - cCreate_Dense_Matrix(X, m, n, (complex *)aX->data, ldx, SLU_DN, SLU_C, SLU_GE); - break; - case PyArray_CDOUBLE: - zCreate_Dense_Matrix(X, m, n, (doublecomplex *)aX->data, ldx, SLU_DN, SLU_Z, SLU_GE); - break; - default: - PyErr_SetString(PyExc_TypeError, "Invalid type for Numeric array."); - return -1; + if (setjmp(_superlu_py_jmpbuf)) + return -1; + else { + if (!CHECK_SLU_TYPE(aX->descr->type_num)) { + PyErr_SetString(PyExc_ValueError, "unsupported data type"); + return -1; } - + Create_Dense_Matrix(aX->descr->type_num, X, m, n, + aX->data, ldx, SLU_DN, + NPY_TYPECODE_TO_SLU(aX->descr->type_num), SLU_GE); + } return 0; } /* Natively handles Compressed Sparse Row and CSC */ -int NRFormat_from_spMatrix(SuperMatrix *A, int m, int n, int nnz, PyArrayObject *nzvals, PyArrayObject *colind, PyArrayObject *rowptr, int typenum) +int NRFormat_from_spMatrix(SuperMatrix *A, int m, int n, int nnz, + PyArrayObject *nzvals, PyArrayObject *colind, + PyArrayObject *rowptr, int typenum) { int err = 0; @@ -234,34 +274,26 @@ return -1; } - if (setjmp(_superlu_py_jmpbuf)) return -1; - else - switch (nzvals->descr->type_num) { - case PyArray_FLOAT: - sCreate_CompRow_Matrix(A, m, n, nnz, (float *)nzvals->data, (int *)colind->data, \ - (int *)rowptr->data, SLU_NR, SLU_S, SLU_GE); - break; - case PyArray_DOUBLE: - dCreate_CompRow_Matrix(A, m, n, nnz, (double *)nzvals->data, (int *)colind->data, \ - (int *)rowptr->data, SLU_NR, SLU_D, SLU_GE); - break; - case PyArray_CFLOAT: - cCreate_CompRow_Matrix(A, m, n, nnz, (complex *)nzvals->data, (int *)colind->data, \ - (int *)rowptr->data, SLU_NR, SLU_C, SLU_GE); - break; - case PyArray_CDOUBLE: - zCreate_CompRow_Matrix(A, m, n, nnz, (doublecomplex *)nzvals->data, (int *)colind->data, \ - (int *)rowptr->data, SLU_NR, SLU_Z, SLU_GE); - break; - default: + if (setjmp(_superlu_py_jmpbuf)) + return -1; + else { + if (!CHECK_SLU_TYPE(nzvals->descr->type_num)) { PyErr_SetString(PyExc_TypeError, "Invalid type for array."); return -1; } + Create_CompRow_Matrix(nzvals->descr->type_num, + A, m, n, nnz, nzvals->data, (int *)colind->data, + (int *)rowptr->data, SLU_NR, + NPY_TYPECODE_TO_SLU(nzvals->descr->type_num), + SLU_GE); + } return 0; } -int NCFormat_from_spMatrix(SuperMatrix *A, int m, int n, int nnz, PyArrayObject *nzvals, PyArrayObject *rowind, PyArrayObject *colptr, int typenum) +int 
NCFormat_from_spMatrix(SuperMatrix *A, int m, int n, int nnz, + PyArrayObject *nzvals, PyArrayObject *rowind, + PyArrayObject *colptr, int typenum) { int err=0; @@ -274,53 +306,25 @@ } - if (setjmp(_superlu_py_jmpbuf)) return -1; - else - switch (nzvals->descr->type_num) { - case PyArray_FLOAT: - sCreate_CompCol_Matrix(A, m, n, nnz, (float *)nzvals->data, (int *)rowind->data, \ - (int *)colptr->data, SLU_NC, SLU_S, SLU_GE); - break; - case PyArray_DOUBLE: - dCreate_CompCol_Matrix(A, m, n, nnz, (double *)nzvals->data, (int *)rowind->data, \ - (int *)colptr->data, SLU_NC, SLU_D, SLU_GE); - break; - case PyArray_CFLOAT: - cCreate_CompCol_Matrix(A, m, n, nnz, (complex *)nzvals->data, (int *)rowind->data, \ - (int *)colptr->data, SLU_NC, SLU_C, SLU_GE); - break; - case PyArray_CDOUBLE: - zCreate_CompCol_Matrix(A, m, n, nnz, (doublecomplex *)nzvals->data, (int *)rowind->data, \ - (int *)colptr->data, SLU_NC, SLU_Z, SLU_GE); - break; - default: + if (setjmp(_superlu_py_jmpbuf)) + return -1; + else { + if (!CHECK_SLU_TYPE(nzvals->descr->type_num)) { PyErr_SetString(PyExc_TypeError, "Invalid type for array."); return -1; } + Create_CompCol_Matrix(nzvals->descr->type_num, + A, m, n, nnz, nzvals->data, (int *)rowind->data, + (int *)colptr->data, SLU_NC, + NPY_TYPECODE_TO_SLU(nzvals->descr->type_num), + SLU_GE); + } return 0; } -colperm_t superlu_module_getpermc(int permc_spec) -{ - switch(permc_spec) { - case 0: - return NATURAL; - case 1: - return MMD_ATA; - case 2: - return MMD_AT_PLUS_A; - case 3: - return COLAMD; - } - ABORT("Invalid input for permc_spec."); - return NATURAL; /* compiler complains... */ -} - PyObject * -newSciPyLUObject(SuperMatrix *A, double diag_pivot_thresh, - double drop_tol, int relax, int panel_size, int permc_spec, - int intype) +newSciPyLUObject(SuperMatrix *A, PyObject *option_dict, int intype, int ilu) { /* A must be in SLU_NC format used by the factorization routine. 
*/ @@ -332,9 +336,16 @@ int n; superlu_options_t options; SuperLUStat_t stat; - + int panel_size, relax; + int trf_finished = 0; + n = A->ncol; + if (!set_superlu_options_from_dict(&options, ilu, option_dict, + &panel_size, &relax)) { + return NULL; + } + /* Create SciPyLUObject */ self = PyObject_New(SciPyLUObject, &SciPySuperLUType); if (self == NULL) @@ -351,46 +362,35 @@ etree = intMalloc(n); self->perm_r = intMalloc(n); self->perm_c = intMalloc(n); - - set_default_options(&options); - options.ColPerm=superlu_module_getpermc(permc_spec); - options.DiagPivotThresh = diag_pivot_thresh; StatInit(&stat); - - get_perm_c(permc_spec, A, self->perm_c); /* calc column permutation */ - sp_preorder(&options, A, self->perm_c, etree, &AC); /* apply column permutation */ - + + get_perm_c(options.ColPerm, A, self->perm_c); /* calc column permutation */ + sp_preorder(&options, A, self->perm_c, etree, &AC); /* apply column + * permutation */ /* Perform factorization */ - switch (A->Dtype) { - case SLU_S: - sgstrf(&options, &AC, (float) drop_tol, relax, panel_size, - etree, NULL, lwork, self->perm_c, self->perm_r, - &self->L, &self->U, &stat, &info); - break; - case SLU_D: - dgstrf(&options, &AC, drop_tol, relax, panel_size, - etree, NULL, lwork, self->perm_c, self->perm_r, - &self->L, &self->U, &stat, &info); - break; - case SLU_C: - cgstrf(&options, &AC, (float) drop_tol, relax, panel_size, - etree, NULL, lwork, self->perm_c, self->perm_r, - &self->L, &self->U, &stat, &info); - break; - case SLU_Z: - zgstrf(&options, &AC, drop_tol, relax, panel_size, - etree, NULL, lwork, self->perm_c, self->perm_r, - &self->L, &self->U, &stat, &info); - break; - default: + if (!CHECK_SLU_TYPE(SLU_TYPECODE_TO_NPY(A->Dtype))) { PyErr_SetString(PyExc_ValueError, "Invalid type in SuperMatrix."); goto fail; } - + if (ilu) { + gsitrf(SLU_TYPECODE_TO_NPY(A->Dtype), + &options, &AC, relax, panel_size, + etree, NULL, lwork, self->perm_c, self->perm_r, + &self->L, &self->U, &stat, &info); + } + else { + gstrf(SLU_TYPECODE_TO_NPY(A->Dtype), + &options, &AC, relax, panel_size, + etree, NULL, lwork, self->perm_c, self->perm_r, + &self->L, &self->U, &stat, &info); + } + trf_finished = 1; + if (info) { if (info < 0) - PyErr_SetString(PyExc_SystemError, "dgstrf was called with invalid arguments"); + PyErr_SetString(PyExc_SystemError, + "gstrf was called with invalid arguments"); else { if (info <= n) PyErr_SetString(PyExc_RuntimeError, "Factor is exactly singular"); @@ -399,7 +399,7 @@ } goto fail; } - + /* free memory */ SUPERLU_FREE(etree); Destroy_CompCol_Permuted(&AC); @@ -407,10 +407,299 @@ return (PyObject *)self; - fail: +fail: + if (!trf_finished) { + /* Avoid trying to free partially initialized matrices; + might leak some memory, but avoids a crash */ + self->L.Store = NULL; + self->U.Store = NULL; + } SUPERLU_FREE(etree); Destroy_CompCol_Permuted(&AC); StatFree(&stat); SciPyLU_dealloc(self); return NULL; } + + +/*********************************************************************** + * Preparing superlu_options_t + */ + +#define ENUM_CHECK_INIT \ + long i = -1; \ + char *s = ""; \ + if (input == Py_None) return 1; \ + if (PyString_Check(input)) { \ + s = PyString_AS_STRING(input); \ + } \ + if (PyInt_Check(input)) { \ + i = PyInt_AsLong(input); \ + } + +#define ENUM_CHECK_FINISH(message) \ + PyErr_SetString(PyExc_ValueError, message); \ + return 0; + +#define ENUM_CHECK(name) \ + if (my_strxcmp(s, #name) == 0 || i == (long)name) { *value = name; return 1; } + +/* + * Compare strings ignoring case, underscores and 
whitespace + */ +static int my_strxcmp(const char *a, const char *b) +{ + int c; + while (*a != '\0' && *b != '\0') { + while (*a == '_' || isspace(*a)) ++a; + while (*b == '_' || isspace(*b)) ++b; + c = (int)tolower(*a) - (int)tolower(*b); + if (c != 0) { + return c; + } + ++a; + ++b; + } + return (int)tolower(*a) - (int)tolower(*b); +} + +static int yes_no_cvt(PyObject *input, yes_no_t *value) +{ + if (input == Py_None) { + return 1; + } + else if (input == Py_True) { + *value = YES; + } else if (input == Py_False) { + *value = NO; + } else { + PyErr_SetString(PyExc_ValueError, "value not a boolean"); + return 0; + } + return 1; +} + +static int fact_cvt(PyObject *input, fact_t *value) +{ + ENUM_CHECK_INIT; + ENUM_CHECK(DOFACT); + ENUM_CHECK(SamePattern); + ENUM_CHECK(SamePattern_SameRowPerm); + ENUM_CHECK(FACTORED); + ENUM_CHECK_FINISH("invalid value for 'Fact' parameter"); +} + +static int rowperm_cvt(PyObject *input, rowperm_t *value) +{ + ENUM_CHECK_INIT; + ENUM_CHECK(NOROWPERM); + ENUM_CHECK(LargeDiag); + ENUM_CHECK(MY_PERMR); + ENUM_CHECK_FINISH("invalid value for 'RowPerm' parameter"); +} + +static int colperm_cvt(PyObject *input, colperm_t *value) +{ + ENUM_CHECK_INIT; + ENUM_CHECK(NATURAL); + ENUM_CHECK(MMD_ATA); + ENUM_CHECK(MMD_AT_PLUS_A); + ENUM_CHECK(COLAMD); + ENUM_CHECK(MY_PERMC); + ENUM_CHECK_FINISH("invalid value for 'ColPerm' parameter"); +} + +static int trans_cvt(PyObject *input, trans_t *value) +{ + ENUM_CHECK_INIT; + ENUM_CHECK(NOTRANS); + ENUM_CHECK(TRANS); + ENUM_CHECK(CONJ); + if (my_strxcmp(s, "N") == 0) { *value = NOTRANS; return 1; } + if (my_strxcmp(s, "T") == 0) { *value = TRANS; return 1; } + if (my_strxcmp(s, "H") == 0) { *value = CONJ; return 1; } + ENUM_CHECK_FINISH("invalid value for 'Trans' parameter"); +} + +static int iterrefine_cvt(PyObject *input, IterRefine_t *value) +{ + ENUM_CHECK_INIT; + ENUM_CHECK(NOREFINE); + ENUM_CHECK(SINGLE); + ENUM_CHECK(DOUBLE); + ENUM_CHECK(EXTRA); + ENUM_CHECK_FINISH("invalid value for 'IterRefine' parameter"); +} + +static int norm_cvt(PyObject *input, norm_t *value) +{ + ENUM_CHECK_INIT; + ENUM_CHECK(ONE_NORM); + ENUM_CHECK(TWO_NORM); + ENUM_CHECK(INF_NORM); + ENUM_CHECK_FINISH("invalid value for 'ILU_Norm' parameter"); +} + +static int milu_cvt(PyObject *input, milu_t *value) +{ + ENUM_CHECK_INIT; + ENUM_CHECK(SILU); + ENUM_CHECK(SMILU_1); + ENUM_CHECK(SMILU_2); + ENUM_CHECK(SMILU_3); + ENUM_CHECK_FINISH("invalid value for 'ILU_MILU' parameter"); +} + +static int droprule_one_cvt(PyObject *input, int *value) +{ + ENUM_CHECK_INIT; + if (my_strxcmp(s, "BASIC") == 0) { *value = DROP_BASIC; return 1; } + if (my_strxcmp(s, "PROWS") == 0) { *value = DROP_PROWS; return 1; } + if (my_strxcmp(s, "COLUMN") == 0) { *value = DROP_COLUMN; return 1; } + if (my_strxcmp(s, "AREA") == 0) { *value = DROP_AREA; return 1; } + if (my_strxcmp(s, "SECONDARY") == 0) { *value = DROP_SECONDARY; return 1; } + if (my_strxcmp(s, "DYNAMIC") == 0) { *value = DROP_DYNAMIC; return 1; } + if (my_strxcmp(s, "INTERP") == 0) { *value = DROP_INTERP; return 1; } + ENUM_CHECK_FINISH("invalid value for 'ILU_DropRule' parameter"); +} + +static int droprule_cvt(PyObject *input, int *value) +{ + PyObject *seq = NULL; + int i; + int rule = 0; + + if (input == Py_None) { + /* Leave as default */ + return 1; + } + else if (PyInt_Check(input)) { + *value = PyInt_AsLong(input); + return 1; + } + else if (PyString_Check(input)) { + /* Comma-separated string */ + seq = PyObject_CallMethod(input, "split", "s", ","); + if (seq == NULL || !PySequence_Check(seq)) + goto 
fail; + } + else if (PySequence_Check(input)) { + /* Sequence of strings or integers */ + seq = input; + Py_INCREF(seq); + } + else { + PyErr_SetString(PyExc_ValueError, "invalid value for drop rule"); + goto fail; + } + + /* OR multiple values together */ + for (i = 0; i < PySequence_Size(seq); ++i) { + PyObject *item; + int one_value; + item = PySequence_ITEM(seq, i); + if (item == NULL) { + goto fail; + } + if (!droprule_one_cvt(item, &one_value)) { + Py_DECREF(item); + goto fail; + } + Py_DECREF(item); + rule |= one_value; + } + Py_DECREF(seq); + + *value = rule; + return 1; + +fail: + Py_XDECREF(seq); + return 0; +} + +static int double_cvt(PyObject *input, double *value) +{ + if (input == Py_None) return 1; + *value = PyFloat_AsDouble(input); + if (PyErr_Occurred()) return 0; + return 1; +} + +static int int_cvt(PyObject *input, int *value) +{ + if (input == Py_None) return 1; + *value = PyInt_AsLong(input); + if (PyErr_Occurred()) return 0; + return 1; +} + +int set_superlu_options_from_dict(superlu_options_t *options, + int ilu, PyObject *option_dict, + int *panel_size, int *relax) +{ + PyObject *args; + int ret; + int _relax, _panel_size; + + static char *kwlist[] = { + "Fact", "Equil", "ColPerm", "Trans", "IterRefine", + "DiagPivotThresh", "PivotGrowth", "ConditionNumber", + "RowPerm", "SymmetricMode", "PrintStat", "ReplaceTinyPivot", + "SolveInitialized", "RefineInitialized", "ILU_Norm", + "ILU_MILU", "ILU_DropTol", "ILU_FillTol", "ILU_FillFactor", + "ILU_DropRule", "PanelSize", "Relax", NULL + }; + + if (ilu) { + ilu_set_default_options(options); + } + else { + set_default_options(options); + } + + _panel_size = sp_ienv(1); + _relax = sp_ienv(2); + + if (option_dict == NULL) { + return 0; + } + + args = PyTuple_New(0); + ret = PyArg_ParseTupleAndKeywords( + args, option_dict, + "|O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&O&", kwlist, + fact_cvt, &options->Fact, + yes_no_cvt, &options->Equil, + colperm_cvt, &options->ColPerm, + trans_cvt, &options->Trans, + iterrefine_cvt, &options->IterRefine, + double_cvt, &options->DiagPivotThresh, + yes_no_cvt, &options->PivotGrowth, + yes_no_cvt, &options->ConditionNumber, + rowperm_cvt, &options->RowPerm, + yes_no_cvt, &options->SymmetricMode, + yes_no_cvt, &options->PrintStat, + yes_no_cvt, &options->ReplaceTinyPivot, + yes_no_cvt, &options->SolveInitialized, + yes_no_cvt, &options->RefineInitialized, + norm_cvt, &options->ILU_Norm, + milu_cvt, &options->ILU_MILU, + double_cvt, &options->ILU_DropTol, + double_cvt, &options->ILU_FillTol, + double_cvt, &options->ILU_FillFactor, + droprule_cvt, &options->ILU_DropRule, + int_cvt, &_panel_size, + int_cvt, &_relax + ); + Py_DECREF(args); + + if (panel_size != NULL) { + *panel_size = _panel_size; + } + if (relax != NULL) { + *relax = _relax; + } + + return ret; +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.h python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.h 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superluobject.h 2010-07-26 15:48:34.000000000 +0100 @@ -1,105 +1,127 @@ -#ifndef __SUPERLU_OBJECT /* allow multiple inclusions */ +/* -*-c-*- */ +/* + * _superlu object + * + * Python object representing SuperLU factorization + some utility functions. 
+ */ + +#ifndef __SUPERLU_OBJECT #define __SUPERLU_OBJECT #include "Python.h" -#define PY_ARRAY_UNIQUE_SYMBOL scipy_superlu +#include "SuperLU/SRC/slu_zdefs.h" +#define PY_ARRAY_UNIQUE_SYMBOL _scipy_sparse_superlu_ARRAY_API #include "numpy/arrayobject.h" -#include "SuperLU/SRC/util.h" -#include "SuperLU/SRC/scomplex.h" -#include "SuperLU/SRC/dcomplex.h" +#include "SuperLU/SRC/slu_util.h" +#include "SuperLU/SRC/slu_dcomplex.h" +#include "SuperLU/SRC/slu_scomplex.h" #define _CHECK_INTEGER(x) (PyArray_ISINTEGER(x) && (x)->descr->elsize == sizeof(int)) -/*********************************************************************** +/* * SuperLUObject definition */ - -typedef struct SciPyLUObject { - PyObject_VAR_HEAD - int m,n; - SuperMatrix L; - SuperMatrix U; - int *perm_r; - int *perm_c; - int type; +typedef struct { + PyObject_VAR_HEAD + npy_intp m,n; + SuperMatrix L; + SuperMatrix U; + int *perm_r; + int *perm_c; + int type; } SciPyLUObject; extern PyTypeObject SciPySuperLUType; int DenseSuper_from_Numeric(SuperMatrix *, PyObject *); -int NRFormat_from_spMatrix(SuperMatrix *, int, int, int, PyArrayObject *, PyArrayObject *, PyArrayObject *, int); -int NCFormat_from_spMatrix(SuperMatrix *, int, int, int, PyArrayObject *, PyArrayObject *, PyArrayObject *, int); +int NRFormat_from_spMatrix(SuperMatrix *, int, int, int, PyArrayObject *, + PyArrayObject *, PyArrayObject *, int); +int NCFormat_from_spMatrix(SuperMatrix *, int, int, int, PyArrayObject *, + PyArrayObject *, PyArrayObject *, int); colperm_t superlu_module_getpermc(int); -PyObject *newSciPyLUObject(SuperMatrix *, double, double, int, int, int, int); +PyObject *newSciPyLUObject(SuperMatrix *, PyObject*, int, int); +int set_superlu_options_from_dict(superlu_options_t *options, + int ilu, PyObject *option_dict, + int *panel_size, int *relax); + +/* + * Definitions for other SuperLU data types than Z, + * and type-generic definitions. 
+ */ -void -dgstrf (superlu_options_t *, SuperMatrix *, double, - int, int, int *, void *, int, - int *, int *, SuperMatrix *, SuperMatrix *, - SuperLUStat_t *, int *); - -void -sgstrf (superlu_options_t *, SuperMatrix *, float, - int, int, int *, void *, int, - int *, int *, SuperMatrix *, SuperMatrix *, - SuperLUStat_t *, int *); - -void -cgstrf (superlu_options_t *, SuperMatrix *, float, - int, int, int *, void *, int, - int *, int *, SuperMatrix *, SuperMatrix *, - SuperLUStat_t *, int *); - -void -dgstrs (trans_t, SuperMatrix *, SuperMatrix *, - int *, int *, SuperMatrix *, - SuperLUStat_t *, int *); - -void -sgstrs (trans_t, SuperMatrix *, SuperMatrix *, - int *, int *, SuperMatrix *, - SuperLUStat_t *, int *); - -void -cgstrs (trans_t, SuperMatrix *, SuperMatrix *, - int *, int *, SuperMatrix *, - SuperLUStat_t *, int *); - -void -sCreate_Dense_Matrix(SuperMatrix *, int, int, float *, int, Stype_t, Dtype_t, Mtype_t); -void -dCreate_Dense_Matrix(SuperMatrix *, int, int, double *, int, Stype_t, Dtype_t, Mtype_t); -void -cCreate_Dense_Matrix(SuperMatrix *, int, int, complex *, int, Stype_t, Dtype_t, Mtype_t); - -void -sCreate_CompRow_Matrix(SuperMatrix *, int, int, int, - float *, int *, int *, - Stype_t, Dtype_t, Mtype_t); - -void -dCreate_CompRow_Matrix(SuperMatrix *, int, int, int, - double *, int *, int *, - Stype_t, Dtype_t, Mtype_t); - -void -cCreate_CompRow_Matrix(SuperMatrix *, int, int, int, - complex *, int *, int *, - Stype_t, Dtype_t, Mtype_t); - -void -sCreate_CompCol_Matrix(SuperMatrix *, int, int, int, - float *, int *, int *, - Stype_t, Dtype_t, Mtype_t); -void -dCreate_CompCol_Matrix(SuperMatrix *, int, int, int, - double *, int *, int *, - Stype_t, Dtype_t, Mtype_t); -void -cCreate_CompCol_Matrix(SuperMatrix *, int, int, int, - complex *, int *, int *, - Stype_t, Dtype_t, Mtype_t); +#define CHECK_SLU_TYPE(type) \ + (type == NPY_FLOAT || type == NPY_DOUBLE || type == NPY_CFLOAT || type == NPY_CDOUBLE) +#define TYPE_GENERIC_FUNC(name, returntype) \ + returntype s##name(name##_ARGS); \ + returntype d##name(name##_ARGS); \ + returntype c##name(name##_ARGS); \ + static returntype name(int type, name##_ARGS) \ + { \ + switch(type) { \ + case NPY_FLOAT: s##name(name##_ARGS_REF); break; \ + case NPY_DOUBLE: d##name(name##_ARGS_REF); break; \ + case NPY_CFLOAT: c##name(name##_ARGS_REF); break; \ + case NPY_CDOUBLE: z##name(name##_ARGS_REF); break; \ + default: return; \ + } \ + } + +#define SLU_TYPECODE_TO_NPY(s) \ + ( ((s) == SLU_S) ? NPY_FLOAT : \ + ((s) == SLU_D) ? NPY_DOUBLE : \ + ((s) == SLU_C) ? NPY_CFLOAT : \ + ((s) == SLU_Z) ? NPY_CDOUBLE : -1) + +#define NPY_TYPECODE_TO_SLU(s) \ + ( ((s) == NPY_FLOAT) ? SLU_S : \ + ((s) == NPY_DOUBLE) ? SLU_D : \ + ((s) == NPY_CFLOAT) ? SLU_C : \ + ((s) == NPY_CDOUBLE) ? 
SLU_Z : -1) + +#define gstrf_ARGS \ + superlu_options_t *a, SuperMatrix *b, \ + int c, int d, int *e, void *f, int g, \ + int *h, int *i, SuperMatrix *j, SuperMatrix *k, \ + SuperLUStat_t *l, int *m +#define gstrf_ARGS_REF a,b,c,d,e,f,g,h,i,j,k,l,m + +#define gsitrf_ARGS gstrf_ARGS +#define gsitrf_ARGS_REF gstrf_ARGS_REF + +#define gstrs_ARGS \ + trans_t a, SuperMatrix *b, SuperMatrix *c, \ + int *d, int *e, SuperMatrix *f, \ + SuperLUStat_t *g, int *h +#define gstrs_ARGS_REF a,b,c,d,e,f,g,h + +#define gssv_ARGS \ + superlu_options_t *a, SuperMatrix *b, int *c, int *d, \ + SuperMatrix *e, SuperMatrix *f, SuperMatrix *g, \ + SuperLUStat_t *h, int *i +#define gssv_ARGS_REF a,b,c,d,e,f,g,h,i + +#define Create_Dense_Matrix_ARGS \ + SuperMatrix *a, int b, int c, void *d, int e, \ + Stype_t f, Dtype_t g, Mtype_t h +#define Create_Dense_Matrix_ARGS_REF a,b,c,d,e,f,g,h + +#define Create_CompRow_Matrix_ARGS \ + SuperMatrix *a, int b, int c, int d, \ + void *e, int *f, int *g, \ + Stype_t h, Dtype_t i, Mtype_t j +#define Create_CompRow_Matrix_ARGS_REF a,b,c,d,e,f,g,h,i,j + +#define Create_CompCol_Matrix_ARGS Create_CompRow_Matrix_ARGS +#define Create_CompCol_Matrix_ARGS_REF Create_CompRow_Matrix_ARGS_REF + +TYPE_GENERIC_FUNC(gstrf, void); +TYPE_GENERIC_FUNC(gsitrf, void); +TYPE_GENERIC_FUNC(gstrs, void); +TYPE_GENERIC_FUNC(gssv, void); +TYPE_GENERIC_FUNC(Create_Dense_Matrix, void); +TYPE_GENERIC_FUNC(Create_CompRow_Matrix, void); +TYPE_GENERIC_FUNC(Create_CompCol_Matrix, void); #endif /* __SUPERLU_OBJECT */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superlu.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superlu.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superlu.py 2010-03-03 14:34:12.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superlu.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,4 +0,0 @@ -from _zsuperlu import * -from _ssuperlu import * -from _dsuperlu import * -from _csuperlu import * diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superlu_utils.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superlu_utils.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_superlu_utils.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_superlu_utils.c 2010-07-26 15:48:34.000000000 +0100 @@ -67,3 +67,18 @@ return; } +/* + * Stubs for Harwell Subroutine Library functions that SuperLU tries to call. 
+ */ + +void mc64id_(int *a) +{ + superlu_python_module_abort("chosen functionality not available"); +} + +void mc64ad_(int *a, int *b, int *c, int d[], int e[], double f[], + int *g, int h[], int *i, int j[], int *k, double l[], + int m[], int n[]) +{ + superlu_python_module_abort("chosen functionality not available"); +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/tests/test_linsolve.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/tests/test_linsolve.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/tests/test_linsolve.py 2010-03-03 14:34:12.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/tests/test_linsolve.py 2010-07-26 15:48:34.000000000 +0100 @@ -1,11 +1,12 @@ import warnings -from numpy import array, finfo +from numpy import array, finfo, arange, eye, all, unique, ones, dot +import numpy.random as random from numpy.testing import * from scipy.linalg import norm, inv -from scipy.sparse import spdiags, SparseEfficiencyWarning -from scipy.sparse.linalg.dsolve import spsolve, use_solver +from scipy.sparse import spdiags, SparseEfficiencyWarning, csc_matrix +from scipy.sparse.linalg.dsolve import spsolve, use_solver, splu, spilu warnings.simplefilter('ignore',SparseEfficiencyWarning) @@ -13,11 +14,10 @@ use_solver( useUmfpack = False ) class TestLinsolve(TestCase): - ## this crashes SuperLU - #def test_singular(self): - # A = csc_matrix( (5,5), dtype='d' ) - # b = array([1, 2, 3, 4, 5],dtype='d') - # x = spsolve(A,b) + def test_singular(self): + A = csc_matrix( (5,5), dtype='d' ) + b = array([1, 2, 3, 4, 5],dtype='d') + x = spsolve(A, b, use_umfpack=False) def test_twodiags(self): A = spdiags([[1, 2, 3, 4, 5], [6, 5, 8, 9, 10]], [0, 1], 5, 5) @@ -39,5 +39,97 @@ assert( norm(b - Asp*x) < 10 * cond_A * eps ) +class TestSplu(object): + def setUp(self): + n = 40 + d = arange(n) + 1 + self.n = n + self.A = spdiags((d, 2*d, d[::-1]), (-3, 0, 5), n, n) + random.seed(1234) + + def test_splu_smoketest(self): + # Check that splu works at all + x = random.rand(self.n) + lu = splu(self.A) + r = self.A*lu.solve(x) + assert abs(x - r).max() < 1e-13 + + def test_spilu_smoketest(self): + # Check that spilu works at all + x = random.rand(self.n) + lu = spilu(self.A, drop_tol=1e-2, fill_factor=5) + r = self.A*lu.solve(x) + assert abs(x - r).max() < 1e-2 + assert abs(x - r).max() > 1e-5 + + def test_splu_nnz0(self): + A = csc_matrix( (5,5), dtype='d' ) + assert_raises(RuntimeError, splu, A) + + def test_spilu_nnz0(self): + A = csc_matrix( (5,5), dtype='d' ) + assert_raises(RuntimeError, spilu, A) + + def test_splu_basic(self): + # Test basic splu functionality. + n = 30 + a = random.random((n, n)) + a[a < 0.95] = 0 + # First test with a singular matrix + a[:, 0] = 0 + a_ = csc_matrix(a) + # Matrix is exactly singular + assert_raises(RuntimeError, splu, a_) + + # Make a diagonal dominant, to make sure it is not singular + a += 4*eye(n) + a_ = csc_matrix(a) + lu = splu(a_) + b = ones(n) + x = lu.solve(b) + assert_almost_equal(dot(a, x), b) + + def test_splu_perm(self): + # Test the permutation vectors exposed by splu. + n = 30 + a = random.random((n, n)) + a[a < 0.95] = 0 + # Make a diagonal dominant, to make sure it is not singular + a += 4*eye(n) + a_ = csc_matrix(a) + lu = splu(a_) + # Check that the permutation indices do belong to [0, n-1]. 
+ for perm in (lu.perm_r, lu.perm_c): + assert_(all(perm > -1)) + assert_(all(perm < n)) + assert_equal(len(unique(perm)), len(perm)) + + # Now make a symmetric, and test that the two permutation vectors are + # the same + a += a.T + a_ = csc_matrix(a) + lu = splu(a_) + assert_array_equal(lu.perm_r, lu.perm_c) + + def test_lu_refcount(self): + # Test that we are keeping track of the reference count with splu. + n = 30 + a = random.random((n, n)) + a[a < 0.95] = 0 + # Make a diagonal dominant, to make sure it is not singular + a += 4*eye(n) + a_ = csc_matrix(a) + lu = splu(a_) + + # And now test that we don't have a refcount bug + import gc, sys + rc = sys.getrefcount(lu) + for attr in ('perm_r', 'perm_c'): + perm = getattr(lu, attr) + assert_equal(sys.getrefcount(lu), rc + 1) + del perm + assert_equal(sys.getrefcount(lu), rc) + + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/umfpack/umfpack.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/umfpack/umfpack.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/umfpack/umfpack.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/umfpack/umfpack.py 2010-07-26 15:48:34.000000000 +0100 @@ -268,6 +268,11 @@ maxCond .. if extimated condition number is greater than maxCond, a warning is printed (default: 1e12)""" + if _um is None: + raise ImportError('Scipy was built without UMFPACK support. ' + 'You need to install the UMFPACK library and ' + 'header files before building scipy.') + self.maxCond = 1e12 Struct.__init__( self, **kwargs ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_zsuperlumodule.c python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_zsuperlumodule.c --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/dsolve/_zsuperlumodule.c 2010-04-05 08:55:23.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/dsolve/_zsuperlumodule.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,210 +0,0 @@ - -/* Copyright 1999 Travis Oliphant - Permision to copy and modified this file is granted under the revised BSD license. - No warranty is expressed or IMPLIED - - Changes: 2004 converted to SuperLU_3.0 and added factor and solve routines for - more flexible handling. - - Also added NC (compressed sparse column handling -- best to use CSC) -*/ - -/* - This file implements glue between the SuperLU library for - sparse matrix inversion and Python. -*/ - - -/* We want a low-level interface to: - xGSSV - xgstrf -- factor - xgstrs -- solve - - These will be done in separate files due to the include structure of - SuperLU. 
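The TestSplu cases above exercise the factor-object interface provided by the reorganised _superlu extension: splu() and spilu() return an object whose solve(), perm_r and perm_c members are used directly, instead of the per-type gssv/gstrf wrappers removed below. A minimal usage sketch in the spirit of those tests (the matrix and the drop_tol/fill_factor values are illustrative, not taken from the patch):

    import numpy as np
    from scipy.sparse import spdiags
    from scipy.sparse.linalg.dsolve import splu, spilu

    n = 40
    d = np.arange(n) + 1.0
    A = spdiags((d, 2 * d, d[::-1]), (-3, 0, 5), n, n).tocsc()
    b = np.ones(n)

    lu = splu(A)                    # complete LU factorization
    x = lu.solve(b)
    print np.allclose(A * x, b)     # permutations available as lu.perm_r, lu.perm_c

    ilu = spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU, e.g. as a preconditioner
    x_approx = ilu.solve(b)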
- - Define a user abort and a user malloc and free (to keep pointers - that will be released on errors) -*/ - -#include "Python.h" -#include "SuperLU/SRC/zsp_defs.h" -#include "_superluobject.h" -#include - -extern jmp_buf _superlu_py_jmpbuf; - - -static char doc_zgssv[] = "Direct inversion of sparse matrix.\n\nX = zgssv(A,B) solves A*X = B for X."; - -static PyObject *Py_zgssv (PyObject *self, PyObject *args, PyObject *kwdict) -{ - PyObject *Py_B=NULL, *Py_X=NULL; - PyArrayObject *nzvals=NULL; - PyArrayObject *colind=NULL, *rowptr=NULL; - int N, nnz; - int csc=0, permc_spec=2; - int info; - int *perm_r=NULL, *perm_c=NULL; - SuperMatrix A, B, L, U; - superlu_options_t options; - SuperLUStat_t stat; - - static char *kwlist[] = {"N","nnz","nzvals","colind","rowptr","B", "csc", "permc_spec",NULL}; - - /* Get input arguments */ - if (!PyArg_ParseTupleAndKeywords(args, kwdict, "iiO!O!O!O|ii", kwlist, &N, &nnz, &PyArray_Type, &nzvals, &PyArray_Type, &colind, &PyArray_Type, &rowptr, &Py_B, &csc, &permc_spec)) - return NULL; - - if (!_CHECK_INTEGER(colind) || !_CHECK_INTEGER(rowptr)) { - PyErr_SetString(PyExc_TypeError, "colind and rowptr must be of type cint"); - return NULL; - } - - /* Create Space for output */ - Py_X = PyArray_CopyFromObject(Py_B,PyArray_CDOUBLE,1,2); - if (Py_X == NULL) return NULL; - if (csc) { - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_CDOUBLE)) { - Py_DECREF(Py_X); return NULL; - } - } - else { - if (NRFormat_from_spMatrix(&A, N, N, nnz, nzvals, colind, rowptr, PyArray_CDOUBLE)) { - Py_DECREF(Py_X); return NULL; - } - } - if (DenseSuper_from_Numeric(&B, Py_X)) { - Destroy_SuperMatrix_Store(&A); - Py_DECREF(Py_X); - return NULL; - } - - /* B and Py_X share same data now but Py_X "owns" it */ - - /* Setup options */ - - if (setjmp(_superlu_py_jmpbuf)) goto fail; - else { - perm_c = intMalloc(N); - perm_r = intMalloc(N); - set_default_options(&options); - options.ColPerm=superlu_module_getpermc(permc_spec); - StatInit(&stat); - - /* Compute direct inverse of sparse Matrix */ - zgssv(&options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info); - } - - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - - return Py_BuildValue("Ni", Py_X, info); - - fail: - SUPERLU_FREE(perm_r); - SUPERLU_FREE(perm_c); - Destroy_SuperMatrix_Store(&A); - Destroy_SuperMatrix_Store(&B); - Destroy_SuperNode_Matrix(&L); - Destroy_CompCol_Matrix(&U); - StatFree(&stat); - Py_XDECREF(Py_X); - return NULL; -} - -/*******************************Begin Code Adapted from PySparse *****************/ - -static char doc_zgstrf[] = "zgstrf(A, ...)\n\ -\n\ -performs a factorization of the sparse matrix A=*(N,nnz,nzvals,rowind,colptr) and \n\ -returns a factored_lu object.\n\ -\n\ -see dgstrf for more information."; - -static PyObject * -Py_zgstrf(PyObject *self, PyObject *args, PyObject *keywds) { - - /* default value for SuperLU parameters*/ - double diag_pivot_thresh = 1.0; - double drop_tol = 0.0; - int relax = 1; - int panel_size = 10; - int permc_spec = 2; - int N, nnz; - PyArrayObject *rowind, *colptr, *nzvals; - SuperMatrix A; - PyObject *result; - - static char *kwlist[] = {"N","nnz","nzvals","rowind","colptr","permc_spec","diag_pivot_thresh", "drop_tol", "relax", "panel_size", NULL}; - - int res = PyArg_ParseTupleAndKeywords(args, keywds, "iiO!O!O!|iddii", kwlist, - &N, &nnz, - &PyArray_Type, &nzvals, - &PyArray_Type, &rowind, - 
&PyArray_Type, &colptr, - &permc_spec, - &diag_pivot_thresh, - &drop_tol, - &relax, - &panel_size); - if (!res) - return NULL; - - - if (!_CHECK_INTEGER(colptr) || !_CHECK_INTEGER(rowind)) { - PyErr_SetString(PyExc_TypeError, "colptr and rowind must be of type cint"); - return NULL; - } - - if (NCFormat_from_spMatrix(&A, N, N, nnz, nzvals, rowind, colptr, PyArray_CDOUBLE)) goto fail; - - result = newSciPyLUObject(&A, diag_pivot_thresh, drop_tol, relax, panel_size,\ - permc_spec, PyArray_CDOUBLE); - if (result == NULL) goto fail; - - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - return result; - - fail: - Destroy_SuperMatrix_Store(&A); /* arrays of input matrix will not be freed */ - return NULL; -} - - -/*******************************End Code Adapted from PySparse *****************/ - - -static PyMethodDef zSuperLU_Methods[] = { - {"zgssv", (PyCFunction) Py_zgssv, METH_VARARGS|METH_KEYWORDS, doc_zgssv}, - {"zgstrf", (PyCFunction) Py_zgstrf, METH_VARARGS|METH_KEYWORDS, doc_zgstrf}, - /* {"zgstrs", (PyCFunction) Py_zgstrs, METH_VARARGS|METH_KEYWORDS, doc_zgstrs}, - {"_zgscon", Py_zgscon, METH_VARARGS, doc_zgscon}, - {"_zgsequ", Py_zgsequ, METH_VARARGS, doc_zgsequ}, - {"_zlaqgs", Py_zlaqgs, METH_VARARGS, doc_zlaqgs}, - {"_zgsrfs", Py_zgsrfs, METH_VARARGS, doc_zgsrfs}, */ - {NULL, NULL} -}; - - -/* This should be imported first */ -PyMODINIT_FUNC -init_zsuperlu(void) -{ - - Py_InitModule("_zsuperlu", zSuperLU_Methods); - - import_array(); - - if (PyErr_Occurred()) - Py_FatalError("can't initialize module zsuperlu"); -} - - - - diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/arpack/arpack.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/arpack/arpack.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/arpack/arpack.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/arpack/arpack.py 2010-07-26 15:48:34.000000000 +0100 @@ -39,17 +39,271 @@ __docformat__ = "restructuredtext en" -__all___=['eigen','eigen_symmetric'] +__all___=['eigen','eigen_symmetric', 'svd'] import warnings import _arpack import numpy as np from scipy.sparse.linalg.interface import aslinearoperator +from scipy.sparse import csc_matrix, csr_matrix _type_conv = {'f':'s', 'd':'d', 'F':'c', 'D':'z'} _ndigits = {'f':5, 'd':12, 'F':5, 'D':12} +class _ArpackParams(object): + def __init__(self, n, k, tp, matvec, sigma=None, + ncv=None, v0=None, maxiter=None, which="LM", tol=0): + if k <= 0: + raise ValueError("k must be positive, k=%d" % k) + if k == n: + raise ValueError("k must be less than rank(A), k=%d" % k) + + if maxiter is None: + maxiter = n * 10 + if maxiter <= 0: + raise ValueError("maxiter must be positive, maxiter=%d" % maxiter) + + if tp not in 'fdFD': + raise ValueError("matrix type must be 'f', 'd', 'F', or 'D'") + + if v0 is not None: + self.resid = v0 + info = 1 + else: + self.resid = np.zeros(n, tp) + info = 0 + + if sigma is not None: + raise NotImplementedError("shifted eigenproblem not supported yet") + + if ncv is None: + ncv = 2 * k + 1 + ncv = min(ncv, n) + + if ncv > n or ncv < k: + raise ValueError("ncv must be k<=ncv<=n, ncv=%s" % ncv) + + self.v = np.zeros((n, ncv), tp) # holds Ritz vectors + self.iparam = np.zeros(11, "int") + + # set solver mode and parameters + # only supported mode is 1: Ax=lx + ishfts = 1 + mode1 = 1 + self.iparam[0] = ishfts + self.iparam[2] = maxiter + self.iparam[6] = mode1 + + self.n = n + self.matvec = matvec + self.tol = tol + self.k = k + self.maxiter = maxiter + self.ncv = ncv + 
self.which = which + self.tp = tp + self.info = info + self.bmat = 'I' + + self.converged = False + self.ido = 0 + +class _SymmetricArpackParams(_ArpackParams): + def __init__(self, n, k, tp, matvec, sigma=None, + ncv=None, v0=None, maxiter=None, which="LM", tol=0): + if not which in ['LM', 'SM', 'LA', 'SA', 'BE']: + raise ValueError("which must be one of %s" % ' '.join(whiches)) + + _ArpackParams.__init__(self, n, k, tp, matvec, sigma, + ncv, v0, maxiter, which, tol) + + self.workd = np.zeros(3 * n, self.tp) + self.workl = np.zeros(self.ncv * (self.ncv + 8), self.tp) + + ltr = _type_conv[self.tp] + self._arpack_solver = _arpack.__dict__[ltr + 'saupd'] + self._arpack_extract = _arpack.__dict__[ltr + 'seupd'] + + self.ipntr = np.zeros(11, "int") + + def iterate(self): + self.ido, self.resid, self.v, self.iparam, self.ipntr, self.info = \ + self._arpack_solver(self.ido, self.bmat, self.which, self.k, self.tol, + self.resid, self.v, self.iparam, self.ipntr, + self.workd, self.workl, self.info) + + xslice = slice(self.ipntr[0]-1, self.ipntr[0]-1+self.n) + yslice = slice(self.ipntr[1]-1, self.ipntr[1]-1+self.n) + if self.ido == -1: + # initialization + self.workd[yslice] = self.matvec(self.workd[xslice]) + elif self.ido == 1: + # compute y=Ax + self.workd[yslice] = self.matvec(self.workd[xslice]) + else: + self.converged = True + + if self.info < -1 : + raise RuntimeError("Error info=%d in arpack" % self.info) + elif self.info == -1: + warnings.warn("Maximum number of iterations taken: %s" % self.iparam[2]) + + if self.iparam[4] < self.k: + warnings.warn("Only %d/%d eigenvectors converged" % (self.iparam[4], self.k)) + + def extract(self, return_eigenvectors): + rvec = return_eigenvectors + ierr = 0 + howmny = 'A' # return all eigenvectors + sselect = np.zeros(self.ncv, 'int') # unused + sigma = 0.0 # no shifts, not implemented + + d, z, info = self._arpack_extract(rvec, howmny, sselect, sigma, self.bmat, + self.which, self.k, self.tol, self.resid, self.v, + self.iparam[0:7], self.ipntr, self.workd[0:2*self.n], + self.workl,ierr) + + if ierr != 0: + raise RuntimeError("Error info=%d in arpack" % params.info) + + if return_eigenvectors: + return d, z + else: + return d + +class _UnsymmetricArpackParams(_ArpackParams): + def __init__(self, n, k, tp, matvec, sigma=None, + ncv=None, v0=None, maxiter=None, which="LM", tol=0): + if not which in ["LM", "SM", "LR", "SR", "LI", "SI"]: + raise ValueError("Parameter which must be one of %s" % ' '.join(whiches)) + + _ArpackParams.__init__(self, n, k, tp, matvec, sigma, + ncv, v0, maxiter, which, tol) + + self.workd = np.zeros(3 * n, self.tp) + self.workl = np.zeros(3 * self.ncv * self.ncv + 6 * self.ncv, self.tp) + + ltr = _type_conv[self.tp] + self._arpack_solver = _arpack.__dict__[ltr + 'naupd'] + self._arpack_extract = _arpack.__dict__[ltr + 'neupd'] + + self.ipntr = np.zeros(14, "int") + + if self.tp in 'FD': + self.rwork = np.zeros(self.ncv, self.tp.lower()) + else: + self.rwork = None + + def iterate(self): + if self.tp in 'fd': + self.ido, self.resid, self.v, self.iparam, self.ipntr, self.info = \ + self._arpack_solver(self.ido, self.bmat, self.which, self.k, self.tol, + self.resid, self.v, self.iparam, self.ipntr, + self.workd, self.workl, self.info) + else: + self.ido, self.resid, self.v, self.iparam, self.ipntr, self.info =\ + self._arpack_solver(self.ido, self.bmat, self.which, self.k, self.tol, + self.resid, self.v, self.iparam, self.ipntr, + self.workd, self.workl, self.rwork, self.info) + + xslice = slice(self.ipntr[0]-1, 
self.ipntr[0]-1+self.n) + yslice = slice(self.ipntr[1]-1, self.ipntr[1]-1+self.n) + if self.ido == -1: + # initialization + self.workd[yslice] = self.matvec(self.workd[xslice]) + elif self.ido == 1: + # compute y=Ax + self.workd[yslice] = self.matvec(self.workd[xslice]) + else: + self.converged = True + + if self.info < -1 : + raise RuntimeError("Error info=%d in arpack" % self.info) + elif self.info == -1: + warnings.warn("Maximum number of iterations taken: %s" % self.iparam[2]) + + def extract(self, return_eigenvectors): + k, n = self.k, self.n + + ierr = 0 + howmny = 'A' # return all eigenvectors + sselect = np.zeros(self.ncv, 'int') # unused + sigmai = 0.0 # no shifts, not implemented + sigmar = 0.0 # no shifts, not implemented + workev = np.zeros(3 * self.ncv, self.tp) + + if self.tp in 'fd': + dr = np.zeros(k+1, self.tp) + di = np.zeros(k+1, self.tp) + zr = np.zeros((n, k+1), self.tp) + dr, di, zr, self.info=\ + self._arpack_extract(return_eigenvectors, + howmny, sselect, sigmar, sigmai, workev, + self.bmat, self.which, k, self.tol, self.resid, + self.v, self.iparam, self.ipntr, + self.workd, self.workl, self.info) + + # The ARPACK nonsymmetric real and double interface (s,d)naupd return + # eigenvalues and eigenvectors in real (float,double) arrays. + + # Build complex eigenvalues from real and imaginary parts + d = dr + 1.0j * di + + # Arrange the eigenvectors: complex eigenvectors are stored as + # real,imaginary in consecutive columns + z = zr.astype(self.tp.upper()) + eps = np.finfo(self.tp).eps + i = 0 + while i<=k: + # check if complex + if abs(d[i].imag) > eps: + # assume this is a complex conjugate pair with eigenvalues + # in consecutive columns + z[:,i] = zr[:,i] + 1.0j * zr[:,i+1] + z[:,i+1] = z[:,i].conjugate() + i +=1 + i += 1 + + # Now we have k+1 possible eigenvalues and eigenvectors + # Return the ones specified by the keyword "which" + nreturned = self.iparam[4] # number of good eigenvalues returned + if nreturned == k: # we got exactly how many eigenvalues we wanted + d = d[:k] + z = z[:,:k] + else: # we got one extra eigenvalue (likely a cc pair, but which?) + # cut at approx precision for sorting + rd = np.round(d, decimals = _ndigits[self.tp]) + if self.which in ['LR','SR']: + ind = np.argsort(rd.real) + elif self.which in ['LI','SI']: + # for LI,SI ARPACK returns largest,smallest abs(imaginary) why? + ind = np.argsort(abs(rd.imag)) + else: + ind = np.argsort(abs(rd)) + if self.which in ['LR','LM','LI']: + d = d[ind[-k:]] + z = z[:,ind[-k:]] + if self.which in ['SR','SM','SI']: + d = d[ind[:k]] + z = z[:,ind[:k]] + + else: + # complex is so much simpler... 
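With iterate() and extract() in place, the refactored drivers further down reduce to building a params object and stepping it until the reverse-communication loop reports convergence. A condensed, illustrative sketch of that pattern using the symmetric solver state (the diagonal test operator is made up for the example; user code would normally call eigen_symmetric() rather than this internal class):

    import numpy as np
    from scipy.sparse import spdiags
    from scipy.sparse.linalg.interface import aslinearoperator
    from scipy.sparse.linalg.eigen.arpack.arpack import _SymmetricArpackParams

    n = 50
    A = aslinearoperator(spdiags([np.arange(1.0, n + 1.0)], [0], n, n))
    matvec = lambda x: A.matvec(x)

    params = _SymmetricArpackParams(n, 4, A.dtype.char, matvec, which='LA')
    while not params.converged:
        params.iterate()              # performs y = A*x whenever ARPACK asks for it
    w, v = params.extract(True)       # 4 largest eigenvalues and their Ritz vectors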
+ d, z, self.info =\ + self._arpack_extract(return_eigenvectors, + howmny, sselect, sigmar, workev, + self.bmat, self.which, k, self.tol, self.resid, + self.v, self.iparam, self.ipntr, + self.workd, self.workl, self.rwork, ierr) + + if ierr != 0: + raise RuntimeError("Error info=%d in arpack" % info) + + if return_eigenvectors: + return d, z + else: + return d def eigen(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, @@ -129,181 +383,20 @@ """ A = aslinearoperator(A) if A.shape[0] != A.shape[1]: - raise ValueError('expected square matrix (shape=%s)' % shape) + raise ValueError('expected square matrix (shape=%s)' % A.shape) n = A.shape[0] - # guess type - typ = A.dtype.char - if typ not in 'fdFD': - raise ValueError("matrix type must be 'f', 'd', 'F', or 'D'") + matvec = lambda x : A.matvec(x) + params = _UnsymmetricArpackParams(n, k, A.dtype.char, matvec, sigma, + ncv, v0, maxiter, which, tol) if M is not None: raise NotImplementedError("generalized eigenproblem not supported yet") - if sigma is not None: - raise NotImplementedError("shifted eigenproblem not supported yet") - - - # some defaults - if ncv is None: - ncv=2*k+1 - ncv=min(ncv,n) - if maxiter==None: - maxiter=n*10 - # assign starting vector - if v0 is not None: - resid=v0 - info=1 - else: - resid = np.zeros(n,typ) - info=0 - - - # some sanity checks - if k <= 0: - raise ValueError("k must be positive, k=%d"%k) - if k == n: - raise ValueError("k must be less than rank(A), k=%d"%k) - if maxiter <= 0: - raise ValueError("maxiter must be positive, maxiter=%d"%maxiter) - whiches=['LM','SM','LR','SR','LI','SI'] - if which not in whiches: - raise ValueError("which must be one of %s"%' '.join(whiches)) - if ncv > n or ncv < k: - raise ValueError("ncv must be k<=ncv<=n, ncv=%s"%ncv) - - # assign solver and postprocessor - ltr = _type_conv[typ] - eigsolver = _arpack.__dict__[ltr+'naupd'] - eigextract = _arpack.__dict__[ltr+'neupd'] - - v = np.zeros((n,ncv),typ) # holds Ritz vectors - workd = np.zeros(3*n,typ) # workspace - workl = np.zeros(3*ncv*ncv+6*ncv,typ) # workspace - iparam = np.zeros(11,'int') # problem parameters - ipntr = np.zeros(14,'int') # pointers into workspaces - ido = 0 - - if typ in 'FD': - rwork = np.zeros(ncv,typ.lower()) - - # set solver mode and parameters - # only supported mode is 1: Ax=lx - ishfts = 1 - mode1 = 1 - bmat = 'I' - iparam[0] = ishfts - iparam[2] = maxiter - iparam[6] = mode1 - - while True: - if typ in 'fd': - ido,resid,v,iparam,ipntr,info =\ - eigsolver(ido,bmat,which,k,tol,resid,v,iparam,ipntr, - workd,workl,info) - else: - ido,resid,v,iparam,ipntr,info =\ - eigsolver(ido,bmat,which,k,tol,resid,v,iparam,ipntr, - workd,workl,rwork,info) - - xslice = slice(ipntr[0]-1, ipntr[0]-1+n) - yslice = slice(ipntr[1]-1, ipntr[1]-1+n) - if ido == -1: - # initialization - workd[yslice]=A.matvec(workd[xslice]) - elif ido == 1: - # compute y=Ax - workd[yslice]=A.matvec(workd[xslice]) - else: - break - - if info < -1 : - raise RuntimeError("Error info=%d in arpack"%info) - return None - if info == -1: - warnings.warn("Maximum number of iterations taken: %s"%iparam[2]) -# if iparam[3] != k: -# warnings.warn("Only %s eigenvalues converged"%iparam[3]) - - - # now extract eigenvalues and (optionally) eigenvectors - rvec = return_eigenvectors - ierr = 0 - howmny = 'A' # return all eigenvectors - sselect = np.zeros(ncv,'int') # unused - sigmai = 0.0 # no shifts, not implemented - sigmar = 0.0 # no shifts, not implemented - workev = np.zeros(3*ncv,typ) - - if typ in 'fd': - 
dr=np.zeros(k+1,typ) - di=np.zeros(k+1,typ) - zr=np.zeros((n,k+1),typ) - dr,di,zr,info=\ - eigextract(rvec,howmny,sselect,sigmar,sigmai,workev, - bmat,which,k,tol,resid,v,iparam,ipntr, - workd,workl,info) - - # The ARPACK nonsymmetric real and double interface (s,d)naupd return - # eigenvalues and eigenvectors in real (float,double) arrays. - - # Build complex eigenvalues from real and imaginary parts - d=dr+1.0j*di - - # Arrange the eigenvectors: complex eigenvectors are stored as - # real,imaginary in consecutive columns - z=zr.astype(typ.upper()) - eps=np.finfo(typ).eps - i=0 - while i<=k: - # check if complex - if abs(d[i].imag)>eps: - # assume this is a complex conjugate pair with eigenvalues - # in consecutive columns - z[:,i]=zr[:,i]+1.0j*zr[:,i+1] - z[:,i+1]=z[:,i].conjugate() - i+=1 - i+=1 - - # Now we have k+1 possible eigenvalues and eigenvectors - # Return the ones specified by the keyword "which" - nreturned=iparam[4] # number of good eigenvalues returned - if nreturned==k: # we got exactly how many eigenvalues we wanted - d=d[:k] - z=z[:,:k] - else: # we got one extra eigenvalue (likely a cc pair, but which?) - # cut at approx precision for sorting - rd=np.round(d,decimals=_ndigits[typ]) - if which in ['LR','SR']: - ind=np.argsort(rd.real) - elif which in ['LI','SI']: - # for LI,SI ARPACK returns largest,smallest abs(imaginary) why? - ind=np.argsort(abs(rd.imag)) - else: - ind=np.argsort(abs(rd)) - if which in ['LR','LM','LI']: - d=d[ind[-k:]] - z=z[:,ind[-k:]] - if which in ['SR','SM','SI']: - d=d[ind[:k]] - z=z[:,ind[:k]] - - - else: - # complex is so much simpler... - d,z,info =\ - eigextract(rvec,howmny,sselect,sigmar,workev, - bmat,which,k,tol,resid,v,iparam,ipntr, - workd,workl,rwork,ierr) + while not params.converged: + params.iterate() - - if ierr != 0: - raise RuntimeError("Error info=%d in arpack"%info) - return None - if return_eigenvectors: - return d,z - return d - + return params.extract(return_eigenvectors) def eigen_symmetric(A, k=6, M=None, sigma=None, which='LM', v0=None, ncv=None, maxiter=None, tol=0, @@ -386,105 +479,82 @@ raise ValueError('expected square matrix (shape=%s)' % shape) n = A.shape[0] - # guess type - typ = A.dtype.char - if typ not in 'fd': - raise ValueError("matrix must be real valued (type must be 'f' or 'd')") - if M is not None: raise NotImplementedError("generalized eigenproblem not supported yet") - if sigma is not None: - raise NotImplementedError("shifted eigenproblem not supported yet") - if ncv is None: - ncv=2*k+1 - ncv=min(ncv,n) - if maxiter==None: - maxiter=n*10 - # assign starting vector - if v0 is not None: - resid=v0 - info=1 + matvec = lambda x : A.matvec(x) + params = _SymmetricArpackParams(n, k, A.dtype.char, matvec, sigma, + ncv, v0, maxiter, which, tol) + + while not params.converged: + params.iterate() + + return params.extract(return_eigenvectors) + +def svd(A, k=6): + """Compute a few singular values/vectors for a sparse matrix using ARPACK. + + Parameters + ---------- + A: sparse matrix + Array to compute the SVD on. + k: int + Number of singular values and vectors to compute. + + Note + ---- + This is a naive implementation using the symmetric eigensolver on A.T * A + or A * A.T, depending on which one is more efficient. 
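The svd() routine being introduced here (its body continues below) is a thin sparse-SVD front end over the symmetric eigensolver, and it is exercised by the TestSparseSvd cases later in this diff. A small real-valued sketch consistent with those tests (the input matrix is the one used there; complex input raises NotImplementedError):

    import numpy as np
    from scipy.sparse.linalg.eigen.arpack import svd
    from scipy.linalg import svd as dense_svd

    x = np.array([[1., 2., 3.],
                  [3., 4., 3.],
                  [1., 0., 2.],
                  [0., 0., 1.]])

    u, s, vh = svd(x, k=2)                      # two largest singular triplets via ARPACK
    rank2 = np.dot(u, np.dot(np.diag(s), vh))   # rank-2 reconstruction, as in svd_estimate()

    s_dense = dense_svd(x, compute_uv=False)    # dense reference singular values
    print np.allclose(np.sort(s), np.sort(s_dense)[-2:])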
+ + Complex support is not implemented yet + """ + # TODO: implement complex support once ARPACK-based eigen_hermitian is + # available + n, m = A.shape + + if np.iscomplexobj(A): + raise NotImplementedError("Complex support for sparse SVD not " \ + "implemented yet") + op = lambda x: x.T.conjugate() else: - resid = np.zeros(n,typ) - info=0 + op = lambda x: x.T - # some sanity checks - if k <= 0: - raise ValueError("k must be positive, k=%d"%k) - if k == n: - raise ValueError("k must be less than rank(A), k=%d"%k) - if maxiter <= 0: - raise ValueError("maxiter must be positive, maxiter=%d"%maxiter) - whiches=['LM','SM','LA','SA','BE'] - if which not in whiches: - raise ValueError("which must be one of %s"%' '.join(whiches)) - if ncv > n or ncv < k: - raise ValueError("ncv must be k<=ncv<=n, ncv=%s"%ncv) - - # assign solver and postprocessor - ltr = _type_conv[typ] - eigsolver = _arpack.__dict__[ltr+'saupd'] - eigextract = _arpack.__dict__[ltr+'seupd'] - - # set output arrays, parameters, and workspace - v = np.zeros((n,ncv),typ) - workd = np.zeros(3*n,typ) - workl = np.zeros(ncv*(ncv+8),typ) - iparam = np.zeros(11,'int') - ipntr = np.zeros(11,'int') - ido = 0 - - # set solver mode and parameters - # only supported mode is 1: Ax=lx - ishfts = 1 - mode1 = 1 - bmat='I' - iparam[0] = ishfts - iparam[2] = maxiter - iparam[6] = mode1 - - while True: - ido,resid,v,iparam,ipntr,info =\ - eigsolver(ido,bmat,which,k,tol,resid,v, - iparam,ipntr,workd,workl,info) - - xslice = slice(ipntr[0]-1, ipntr[0]-1+n) - yslice = slice(ipntr[1]-1, ipntr[1]-1+n) - if ido == -1: - # initialization - workd[yslice]=A.matvec(workd[xslice]) - elif ido == 1: - # compute y=Ax - workd[yslice]=A.matvec(workd[xslice]) - else: - break + tp = A.dtype.char + linear_at = aslinearoperator(op(A)) + linear_a = aslinearoperator(A) + + def _left(x, sz): + x = csc_matrix(x) + + matvec = lambda x: linear_at.matvec(linear_a.matvec(x)) + params = _SymmetricArpackParams(sz, k, tp, matvec) - if info < -1 : - raise RuntimeError("Error info=%d in arpack" % info) - return None - - if info == 1: - warnings.warn("Maximum number of iterations taken: %s" % iparam[2]) - - if iparam[4] < k: - warnings.warn("Only %d/%d eigenvectors converged" % (iparam[4], k)) - - # now extract eigenvalues and (optionally) eigenvectors - rvec = return_eigenvectors - ierr = 0 - howmny = 'A' # return all eigenvectors - sselect = np.zeros(ncv,'int') # unused - sigma = 0.0 # no shifts, not implemented - - d,z,info =\ - eigextract(rvec,howmny,sselect,sigma, - bmat,which, k,tol,resid,v,iparam[0:7],ipntr, - workd[0:2*n],workl,ierr) - - if ierr != 0: - raise RuntimeError("Error info=%d in arpack"%info) - return None - if return_eigenvectors: - return d,z - return d + while not params.converged: + params.iterate() + eigvals, eigvec = params.extract(True) + s = np.sqrt(eigvals) + + v = eigvec + u = (x * v) / s + return u, s, op(v) + + def _right(x, sz): + x = csr_matrix(x) + + matvec = lambda x: linear_a.matvec(linear_at.matvec(x)) + params = _SymmetricArpackParams(sz, k, tp, matvec) + + while not params.converged: + params.iterate() + eigvals, eigvec = params.extract(True) + + s = np.sqrt(eigvals) + + u = eigvec + vh = (op(u) * x) / s[:, None] + return u, s, vh + + if n > m: + return _left(A, m) + else: + return _right(A, n) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py 
2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py 2010-07-26 15:48:35.000000000 +0100 @@ -4,11 +4,16 @@ python tests/test_arpack.py [-l] [-v] """ +import sys, platform + +import numpy as np from numpy.testing import * from numpy import array, finfo, argsort, dot, round, conj, random -from scipy.sparse.linalg.eigen.arpack import eigen_symmetric, eigen +from scipy.sparse.linalg.eigen.arpack import eigen_symmetric, eigen, svd + +from scipy.linalg import svd as dsvd def assert_almost_equal_cc(actual,desired,decimal=7,err_msg='',verbose=True): # almost equal or complex conjugates almost equal @@ -27,6 +32,11 @@ assert_array_almost_equal(actual,conj(desired),decimal,err_msg,verbose) +# check if we're on 64-bit OS X, there these tests fail. +if sys.platform == 'darwin' and platform.architecture()[0] == '64bit': + _osx64bit = True +else: + _osx64bit = False # precision for tests _ndigits = {'f':4, 'd':12, 'F':4, 'D':12} @@ -99,12 +109,14 @@ eval[i]*evec[:,i], decimal=_ndigits[typ]) + @dec.knownfailureif(_osx64bit, "Currently fails on 64-bit OS X 10.6") def test_symmetric_modes(self): k=2 for typ in 'fd': for which in ['LM','SM','BE']: self.eval_evec(self.symmetric[0],typ,k,which) + @dec.knownfailureif(_osx64bit, "Currently fails on 64-bit OS X 10.6") def test_starting_vector(self): k=2 for typ in 'fd': @@ -146,6 +158,7 @@ eval[i]*evec[:,i], decimal=_ndigits[typ]) + @dec.knownfailureif(_osx64bit, "Currently fails on 64-bit OS X 10.6") def test_complex_symmetric_modes(self): k=2 for typ in 'FD': @@ -193,6 +206,7 @@ decimal=_ndigits[typ]) + @dec.knownfailureif(_osx64bit, "Currently fails on 64-bit OS X 10.6") def test_nonsymmetric_modes(self): k=2 for typ in 'fd': @@ -202,6 +216,7 @@ + @dec.knownfailureif(_osx64bit, "Currently fails on 64-bit OS X 10.6") def test_starting_vector(self): k=2 for typ in 'fd': @@ -256,6 +271,7 @@ eval[i]*evec[:,i], decimal=_ndigits[typ]) + @dec.knownfailureif(_osx64bit, "Currently fails on 64-bit OS X 10.6") def test_complex_nonsymmetric_modes(self): k=2 for typ in 'FD': @@ -263,5 +279,51 @@ for m in self.nonsymmetric: self.eval_evec(m,typ,k,which) +def sorted_svd(m, k): + """Compute svd of a dense matrix m, and return singular vectors/values + sorted.""" + u, s, vh = dsvd(m) + ii = np.argsort(s)[-k:] + + return u[:, ii], s[ii], vh[ii] + +def svd_estimate(u, s, vh): + return np.dot(u, np.dot(np.diag(s), vh)) + +class TestSparseSvd(TestCase): + def test_simple_real(self): + x = np.array([[1, 2, 3], + [3, 4, 3], + [1, 0, 2], + [0, 0, 1]], np.float) + + for m in [x.T, x]: + for k in range(1, 3): + u, s, vh = sorted_svd(m, k) + su, ss, svh = svd(m, k) + + m_hat = svd_estimate(u, s, vh) + sm_hat = svd_estimate(su, ss, svh) + + assert_array_almost_equal_nulp(m_hat, sm_hat, nulp=1000) + + @dec.knownfailureif(True, "Complex sparse SVD not implemented (depends on " + "Hermitian support in eigen_symmetric") + def test_simple_complex(self): + x = np.array([[1, 2, 3], + [3, 4, 3], + [1+1j, 0, 2], + [0, 0, 1]], np.complex) + + for m in [x, x.T.conjugate()]: + for k in range(1, 3): + u, s, vh = sorted_svd(m, k) + su, ss, svh = svd(m, k) + + m_hat = svd_estimate(u, s, vh) + sm_hat = svd_estimate(su, ss, svh) + + assert_array_almost_equal_nulp(m_hat, sm_hat, nulp=1000) + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/__init__.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/__init__.py 
2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/__init__.py 2010-07-26 15:48:34.000000000 +0100 @@ -2,6 +2,7 @@ from info import __doc__ +from arpack import * from lobpcg import * __all__ = filter(lambda s:not s.startswith('_'),dir()) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/eigen/lobpcg/lobpcg.py 2010-07-26 15:48:35.000000000 +0100 @@ -64,8 +64,8 @@ raw_input() def save( ar, fileName ): - from scipy.io import write_array - write_array( fileName, ar, precision = 8 ) + from numpy import savetxt + savetxt( fileName, ar, precision = 8 ) ## # 21.05.2007, c diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/__init__.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/__init__.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/__init__.py 2010-07-26 15:48:35.000000000 +0100 @@ -3,6 +3,8 @@ #from info import __doc__ from iterative import * from minres import minres +from lgmres import lgmres +from lsqr import lsqr __all__ = filter(lambda s:not s.startswith('_'),dir()) from numpy.testing import Tester diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/iterative.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/iterative.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/iterative.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/iterative.py 2010-07-26 15:48:35.000000000 +0100 @@ -166,8 +166,6 @@ callback(x) break elif (ijob == 1): - if matvec is None: - matvec = get_matvec(A) work[slice2] *= sclr2 work[slice2] += sclr1*matvec(work[slice1]) elif (ijob == 2): @@ -175,8 +173,6 @@ psolve = get_psolve(A) work[slice1] = psolve(work[slice2]) elif (ijob == 3): - if matvec is None: - matvec = get_matvec(A) work[slice2] *= sclr2 work[slice2] += sclr1*matvec(x) elif (ijob == 4): @@ -307,7 +303,7 @@ return postprocess(x), info -def gmres(A, b, x0=None, tol=1e-5, restrt=20, maxiter=None, xtype=None, M=None, callback=None): +def gmres(A, b, x0=None, tol=1e-5, restart=None, maxiter=None, xtype=None, M=None, callback=None, restrt=None): """Use Generalized Minimal RESidual iteration to solve A x = b Parameters @@ -323,10 +319,11 @@ Starting guess for the solution. tol : float Relative tolerance to achieve before terminating. - restrt : integer + restart : integer, optional Number of iterations between restarts. Larger values increase iteration cost, but may be necessary for convergence. - maxiter : integer + (Default: 20) + maxiter : integer, optional Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved. M : {sparse matrix, dense matrix, LinearOperator} @@ -363,12 +360,22 @@ This parameter has been superceeded by LinearOperator. """ + + # Change 'restrt' keyword to 'restart' + if restrt is None: + restrt = restart + elif restart is not None: + raise ValueError("Cannot specify both restart and restrt keywords. 
" + "Preferably use 'restart' only.") + A,M,x,b,postprocess = make_system(A,M,x0,b,xtype) n = len(b) if maxiter is None: maxiter = n*10 + if restrt is None: + restrt = 20 restrt = min(restrt, n) matvec = A.matvec diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/lgmres.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/lgmres.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/lgmres.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/lgmres.py 2010-07-26 15:48:35.000000000 +0100 @@ -0,0 +1,277 @@ +# Copyright (C) 2009, Pauli Virtanen +# Distributed under the same license as Scipy. + +import numpy as np +import scipy.lib.blas as blas +from iterative import set_docstring +from utils import make_system + +__all__ = ['lgmres'] + +def norm2(q): + q = np.asarray(q) + nrm2, = blas.get_blas_funcs(['nrm2'], [q]) + return nrm2(q) + +def lgmres(A, b, x0=None, tol=1e-5, maxiter=1000, M=None, callback=None, + inner_m=30, outer_k=3, outer_v=None, store_outer_Av=True): + """ + Solve a matrix equation using the LGMRES algorithm. + + The LGMRES algorithm [BJM]_ [BPh]_ is designed to avoid some problems + in the convergence in restarted GMRES, and often converges in fewer + iterations. + + Parameters + ---------- + A : {sparse matrix, dense matrix, LinearOperator} + The N-by-N matrix of the linear system. + b : {array, matrix} + Right hand side of the linear system. Has shape (N,) or (N,1). + x0 : {array, matrix} + Starting guess for the solution. + tol : float + Tolerance to achieve. The algorithm terminates when either the relative + or the absolute residual is below `tol`. + maxiter : integer + Maximum number of iterations. Iteration will stop after maxiter + steps even if the specified tolerance has not been achieved. + M : {sparse matrix, dense matrix, LinearOperator} + Preconditioner for A. The preconditioner should approximate the + inverse of A. Effective preconditioning dramatically improves the + rate of convergence, which implies that fewer iterations are needed + to reach a given error tolerance. + callback : function + User-supplied function to call after each iteration. It is called + as callback(xk), where xk is the current solution vector. + + Additional parameters + --------------------- + inner_m : int, optional + Number of inner GMRES iterations per each outer iteration. + outer_k : int, optional + Number of vectors to carry between inner GMRES iterations. + According to [BJM]_, good values are in the range of 1...3. + However, note that if you want to use the additional vectors to + accelerate solving multiple similar problems, larger values may + be beneficial. + outer_v : list of tuples, optional + List containing tuples ``(v, Av)`` of vectors and corresponding + matrix-vector products, used to augment the Krylov subspace, and + carried between inner GMRES iterations. The element ``Av`` can + be `None` if the matrix-vector product should be re-evaluated. + This parameter is modified in-place by `lgmres`, and can be used + to pass "guess" vectors in and out of the algorithm when solving + similar problems. + store_outer_Av : bool, optional + Whether LGMRES should store also A*v in addition to vectors `v` + in the `outer_v` list. Default is True. + + Returns + ------- + x : array or matrix + The converged solution. 
+ info : integer + Provides convergence information: + 0 : successful exit + >0 : convergence to tolerance not achieved, number of iterations + <0 : illegal input or breakdown + + Notes + ----- + The LGMRES algorithm [BJM]_ [BPh]_ is designed to avoid the + slowing of convergence in restarted GMRES, due to alternating + residual vectors. Typically, it often outperforms GMRES(m) of + comparable memory requirements by some measure, or at least is not + much worse. + + Another advantage in this algorithm is that you can supply it with + 'guess' vectors in the `outer_v` argument that augment the Krylov + subspace. If the solution lies close to the span of these vectors, + the algorithm converges faster. This can be useful if several very + similar matrices need to be inverted one after another, such as in + Newton-Krylov iteration where the Jacobian matrix often changes + little in the nonlinear steps. + + References + ---------- + .. [BJM] A.H. Baker and E.R. Jessup and T. Manteuffel, + SIAM J. Matrix Anal. Appl. 26, 962 (2005). + .. [BPh] A.H. Baker, PhD thesis, University of Colorado (2003). + http://amath.colorado.edu/activities/thesis/allisonb/Thesis.ps + + """ + from scipy.linalg.basic import lstsq + A,M,x,b,postprocess = make_system(A,M,x0,b) + + if not np.isfinite(b).all(): + raise ValueError("RHS must contain only finite numbers") + + matvec = A.matvec + psolve = M.matvec + + if outer_v is None: + outer_v = [] + + axpy, dotc, scal = None, None, None + + b_norm = norm2(b) + if b_norm == 0: + b_norm = 1 + + for k_outer in xrange(maxiter): + r_outer = matvec(x) - b + + # -- callback + if callback is not None: + callback(x) + + # -- determine input type routines + if axpy is None: + if np.iscomplexobj(r_outer) and not np.iscomplexobj(x): + x = x.astype(r_outer.dtype) + axpy, dotc, scal = blas.get_blas_funcs(['axpy', 'dotc', 'scal'], + (x, r_outer)) + + # -- check stopping condition + r_norm = norm2(r_outer) + if r_norm < tol * b_norm or r_norm < tol: + break + + # -- inner LGMRES iteration + vs0 = -psolve(r_outer) + inner_res_0 = norm2(vs0) + + if inner_res_0 == 0: + rnorm = norm2(r_outer) + raise RuntimeError("Preconditioner returned a zero vector; " + "|v| ~ %.1g, |M v| = 0" % rnorm) + + vs0 = scal(1.0/inner_res_0, vs0) + hs = [] + vs = [vs0] + ws = [] + y = None + + for j in xrange(1, 1 + inner_m + len(outer_v)): + # -- Arnoldi process: + # + # Build an orthonormal basis V and matrices W and H such that + # A W = V H + # Columns of W, V, and H are stored in `ws`, `vs` and `hs`. + # + # The first column of V is always the residual vector, `vs0`; + # V has *one more column* than the other of the three matrices. + # + # The other columns in V are built by feeding in, one + # by one, some vectors `z` and orthonormalizing them + # against the basis so far. The trick here is to + # feed in first some augmentation vectors, before + # starting to construct the Krylov basis on `v0`. + # + # It was shown in [BJM]_ that a good choice (the LGMRES choice) + # for these augmentation vectors are the `dx` vectors obtained + # from a couple of the previous restart cycles. + # + # Note especially that while `vs0` is always the first + # column in V, there is no reason why it should also be + # the first column in W. (In fact, below `vs0` comes in + # W only after the augmentation vectors.) + # + # The rest of the algorithm then goes as in GMRES, one + # solves a minimization problem in the smaller subspace + # spanned by W (range) and V (image). 
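This augmentation is what the outer_v keyword exposes to callers: the (dx, A*dx) pairs accumulated in one call can be fed back in when solving a nearby system, which is the main practical advantage over plain restarted GMRES. A short illustrative sketch (the tridiagonal matrix and tolerances are made up for the example, not part of the patch):

    import numpy as np
    from scipy.sparse import spdiags
    from scipy.sparse.linalg import lgmres

    n = 100
    data = np.ones(n)
    A = spdiags([4.0 * data, -data, -data], [0, -1, 1], n, n).tocsr()
    b = np.ones(n)

    outer_v = []                                   # augmentation vectors, updated in place
    x1, info1 = lgmres(A, b, tol=1e-10, outer_v=outer_v)
    # A second, similar system can reuse the vectors gathered above:
    x2, info2 = lgmres(A, 2.0 * b, tol=1e-10, outer_v=outer_v)
    print info1, info2                             # 0 means converged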
+ # + # XXX: Below, I'm lazy and use `lstsq` to solve the + # small least squares problem. Performance-wise, this + # is in practice acceptable, but it could be nice to do + # it on the fly with Givens etc. + # + + # ++ evaluate + v_new = None + if j < len(outer_v) + 1: + z, v_new = outer_v[j-1] + elif j == len(outer_v) + 1: + z = vs0 + else: + z = vs[-1] + + if v_new is None: + v_new = psolve(matvec(z)) + else: + # Note: v_new is modified in-place below. Must make a + # copy to ensure that the outer_v vectors are not + # clobbered. + v_new = v_new.copy() + + # ++ orthogonalize + hcur = [] + for v in vs: + alpha = dotc(v, v_new) + hcur.append(alpha) + v_new = axpy(v, v_new, v.shape[0], -alpha) # v_new -= alpha*v + hcur.append(norm2(v_new)) + + if hcur[-1] == 0: + # Exact solution found; bail out. + # Zero basis vector (v_new) in the least-squares problem + # does no harm, so we can just use the same code as usually; + # it will give zero (inner) residual as a result. + bailout = True + else: + bailout = False + v_new = scal(1.0/hcur[-1], v_new) + + vs.append(v_new) + hs.append(hcur) + ws.append(z) + + # XXX: Ugly: should implement the GMRES iteration properly, + # with Givens rotations and not using lstsq. Instead, we + # spare some work by solving the LSQ problem only every 5 + # iterations. + if not bailout and j % 5 != 1 and j < inner_m + len(outer_v) - 1: + continue + + # -- GMRES optimization problem + hess = np.zeros((j+1, j), x.dtype) + e1 = np.zeros((j+1,), x.dtype) + e1[0] = inner_res_0 + for q in xrange(j): + hess[:(q+2),q] = hs[q] + + y, resids, rank, s = lstsq(hess, e1) + inner_res = norm2(np.dot(hess, y) - e1) + + # -- check for termination + if inner_res < tol * inner_res_0: + break + + # -- GMRES terminated: eval solution + dx = ws[0]*y[0] + for w, yc in zip(ws[1:], y[1:]): + dx = axpy(w, dx, dx.shape[0], yc) # dx += w*yc + + # -- Store LGMRES augmentation vectors + nx = norm2(dx) + if store_outer_Av: + q = np.dot(hess, y) + ax = vs[0]*q[0] + for v, qc in zip(vs[1:], q[1:]): + ax = axpy(v, ax, ax.shape[0], qc) + outer_v.append((dx/nx, ax/nx)) + else: + outer_v.append((dx/nx, None)) + + # -- Retain only a finite number of augmentation vectors + while len(outer_v) > outer_k: + del outer_v[0] + + # -- Apply step + x += dx + else: + # didn't converge ... + return postprocess(x), maxiter + + return postprocess(x), 0 diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/lsqr.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/lsqr.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/lsqr.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/lsqr.py 2010-07-26 15:48:35.000000000 +0100 @@ -0,0 +1,495 @@ +"""Sparse Equations and Least Squares. + +The original Fortran code was written by C. C. Paige and M. A. Saunders as +described in + +C. C. Paige and M. A. Saunders, LSQR: An algorithm for sparse linear +equations and sparse least squares, TOMS 8(1), 43--71 (1982). + +C. C. Paige and M. A. Saunders, Algorithm 583; LSQR: Sparse linear +equations and least-squares problems, TOMS 8(2), 195--209 (1982). + +It is licensed under the following BSD license: + +Copyright (c) 2006, Systems Optimization Laboratory +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. 
+ + * Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials provided + with the distribution. + + * Neither the name of Stanford University nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +The Fortran code was translated to Python for use in CVXOPT by Jeffery +Kline with contributions by Mridul Aanjaneya and Bob Myhill. + +Adapted for SciPy by Stefan van der Walt. + +""" + +__all__ = ['lsqr'] + +import numpy as np +from math import sqrt +from scipy.sparse.linalg.interface import aslinearoperator + +def _sym_ortho(a,b): + """ + Jeffery Kline noted: I added the routine 'SymOrtho' for numerical + stability. This is recommended by S.-C. Choi in [1]_. It removes + the unpleasant potential of ``1/eps`` in some important places + (see, for example text following "Compute the next + plane rotation Qk" in minres_py). + + References + ---------- + .. [1] S.-C. Choi, "Iterative Methods for Singular Linear Equations + and Least-Squares Problems", Dissertation, + http://www.stanford.edu/group/SOL/dissertations/sou-cheng-choi-thesis.pdf + + """ + aa = abs(a) + ab = abs(b) + if b == 0.: + s = 0. + r = aa + if aa == 0.: + c = 1. + else: + c = a/aa + elif a == 0.: + c = 0. + s = b / ab + r = ab + elif ab >= aa: + sb = 1 + if b < 0: sb=-1 + tau = a/b + s = sb * (1 + tau**2)**-0.5 + c = s * tau + r = b / s + elif aa > ab: + sa = 1 + if a < 0: sa = -1 + tau = b / a + c = sa * (1 + tau**2)**-0.5 + s = c * tau + r = a / c + + return c, s, r + + +def lsqr(A, b, damp=0.0, atol=1e-8, btol=1e-8, conlim=1e8, + iter_lim=None, show=False, calc_var=False): + """Find the least-squares solution to a large, sparse, linear system + of equations. + + The function solves ``Ax = b`` or ``min ||b - Ax||^2`` or + ``min ||Ax - b||^2 + d^2 ||x||^2. + + The matrix A may be square or rectangular (over-determined or + under-determined), and may have any rank. + + :: + + 1. Unsymmetric equations -- solve A*x = b + + 2. Linear least squares -- solve A*x = b + in the least-squares sense + + 3. Damped least squares -- solve ( A )*x = ( b ) + ( damp*I ) ( 0 ) + in the least-squares sense + + Parameters + ---------- + A : {sparse matrix, ndarray, LinearOperatorLinear} + Representation of an mxn matrix. It is required that + the linear operator can produce ``Ax`` and ``A^T x``. + b : (m,) ndarray + Right-hand side vector ``b``. + damp : float + Damping coefficient. + atol, btol : float + Stopping tolerances. If both are 1.0e-9 (say), the final + residual norm should be accurate to about 9 digits. 
(The + final x will usually have fewer correct digits, depending on + cond(A) and the size of damp.) + conlim : float + Another stopping tolerance. lsqr terminates if an estimate of + ``cond(A)`` exceeds `conlim`. For compatible systems ``Ax = + b``, `conlim` could be as large as 1.0e+12 (say). For + least-squares problems, conlim should be less than 1.0e+8. + Maximum precision can be obtained by setting ``atol = btol = + conlim = zero``, but the number of iterations may then be + excessive. + iter_lim : int + Explicit limitation on number of iterations (for safety). + show : bool + Display an iteration log. + calc_var : bool + Whether to estimate diagonals of ``(A'A + damp^2*I)^{-1}``. + + Returns + ------- + x : ndarray of float + The final solution. + istop : int + Gives the reason for termination. + 1 means x is an approximate solution to Ax = b. + 2 means x approximately solves the least-squares problem. + itn : int + Iteration number upon termination. + r1norm : float + ``norm(r)``, where ``r = b - Ax``. + r2norm : float + ``sqrt( norm(r)^2 + damp^2 * norm(x)^2 )``. Equal to `r1norm` if + ``damp == 0``. + anorm : float + Estimate of Frobenius norm of ``Abar = [[A]; [damp*I]]``. + acond : float + Estimate of ``cond(Abar)``. + arnorm : float + Estimate of ``norm(A'*r - damp^2*x)``. + xnorm : float + ``norm(x)`` + var : ndarray of float + If ``calc_var`` is True, estimates all diagonals of + ``(A'A)^{-1}`` (if ``damp == 0``) or more generally ``(A'A + + damp^2*I)^{-1}``. This is well defined if A has full column + rank or ``damp > 0``. (Not sure what var means if ``rank(A) + < n`` and ``damp = 0.``) + + Notes + ----- + LSQR uses an iterative method to approximate the solution. The + number of iterations required to reach a certain accuracy depends + strongly on the scaling of the problem. Poor scaling of the rows + or columns of A should therefore be avoided where possible. + + For example, in problem 1 the solution is unaltered by + row-scaling. If a row of A is very small or large compared to + the other rows of A, the corresponding row of ( A b ) should be + scaled up or down. + + In problems 1 and 2, the solution x is easily recovered + following column-scaling. Unless better information is known, + the nonzero columns of A should be scaled so that they all have + the same Euclidean norm (e.g., 1.0). + + In problem 3, there is no freedom to re-scale if damp is + nonzero. However, the value of damp should be assigned only + after attention has been paid to the scaling of A. + + The parameter damp is intended to help regularize + ill-conditioned systems, by preventing the true solution from + being very large. Another aid to regularization is provided by + the parameter acond, which may be used to terminate iterations + before the computed solution becomes very large. + + If some initial estimate ``x0`` is known and if ``damp == 0``, + one could proceed as follows: + + 1. Compute a residual vector ``r0 = b - A*x0``. + 2. Use LSQR to solve the system ``A*dx = r0``. + 3. Add the correction dx to obtain a final solution ``x = x0 + dx``. + + This requires that ``x0`` be available before and after the call + to LSQR. To judge the benefits, suppose LSQR takes k1 iterations + to solve A*x = b and k2 iterations to solve A*dx = r0. + If x0 is "good", norm(r0) will be smaller than norm(b). + If the same stopping tolerances atol and btol are used for each + system, k1 and k2 will be similar, but the final solution x0 + dx + should be more accurate. 
The only way to reduce the total work + is to use a larger stopping tolerance for the second system. + If some value btol is suitable for A*x = b, the larger value + btol*norm(b)/norm(r0) should be suitable for A*dx = r0. + + Preconditioning is another way to reduce the number of iterations. + If it is possible to solve a related system ``M*x = b`` + efficiently, where M approximates A in some helpful way (e.g. M - + A has low rank or its elements are small relative to those of A), + LSQR may converge more rapidly on the system ``A*M(inverse)*z = + b``, after which x can be recovered by solving M*x = z. + + If A is symmetric, LSQR should not be used! + + Alternatives are the symmetric conjugate-gradient method (cg) + and/or SYMMLQ. SYMMLQ is an implementation of symmetric cg that + applies to any symmetric A and will converge more rapidly than + LSQR. If A is positive definite, there are other implementations + of symmetric cg that require slightly less work per iteration than + SYMMLQ (but will take the same number of iterations). + + References + ---------- + .. [1] C. C. Paige and M. A. Saunders (1982a). + "LSQR: An algorithm for sparse linear equations and + sparse least squares", ACM TOMS 8(1), 43-71. + .. [2] C. C. Paige and M. A. Saunders (1982b). + "Algorithm 583. LSQR: Sparse linear equations and least + squares problems", ACM TOMS 8(2), 195-209. + .. [3] M. A. Saunders (1995). "Solution of sparse rectangular + systems using LSQR and CRAIG", BIT 35, 588-604. + + """ + A = aslinearoperator(A) + b = b.squeeze() + + m, n = A.shape + if iter_lim is None: iter_lim = 2 * n + var = np.zeros(n) + + msg=('The exact solution is x = 0 ', + 'Ax - b is small enough, given atol, btol ', + 'The least-squares solution is good enough, given atol ', + 'The estimate of cond(Abar) has exceeded conlim ', + 'Ax - b is small enough for this machine ', + 'The least-squares solution is good enough for this machine', + 'Cond(Abar) seems to be too large for this machine ', + 'The iteration limit has been reached '); + + if show: + print ' ' + print 'LSQR Least-squares solution of Ax = b' + str1 = 'The matrix A has %8g rows and %8g cols' % (m, n) + str2 = 'damp = %20.14e calc_var = %8g' % (damp, calc_var) + str3 = 'atol = %8.2e conlim = %8.2e'%( atol, conlim) + str4 = 'btol = %8.2e iter_lim = %8g' %( btol, iter_lim) + print str1 + print str2 + print str3 + print str4 + + itn = 0 + istop = 0 + nstop = 0 + ctol = 0 + if conlim > 0: ctol = 1/conlim + anorm = 0 + acond = 0 + dampsq = damp**2 + ddnorm = 0 + res2 = 0 + xnorm = 0 + xxnorm = 0 + z = 0 + cs2 = -1 + sn2 = 0 + + """ + Set up the first vectors u and v for the bidiagonalization. + These satisfy beta*u = b, alfa*v = A'u. 
+ """ + __xm = np.zeros(m) # a matrix for temporary holding + __xn = np.zeros(n) # a matrix for temporary holding + v = np.zeros(n) + u = b + x = np.zeros(n) + alfa = 0 + beta = np.linalg.norm(u) + w = np.zeros(n) + + if beta > 0: + u = (1/beta) * u + v = A.rmatvec(u) + alfa = np.linalg.norm(v) + + if alfa > 0: + v = (1/alfa) * v + w = v.copy() + + rhobar = alfa + phibar = beta + bnorm = beta + rnorm = beta + r1norm = rnorm + r2norm = rnorm + + # Reverse the order here from the original matlab code because + # there was an error on return when arnorm==0 + arnorm = alfa * beta + if arnorm == 0: + print msg[0]; + return x, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var + + head1 = ' Itn x[0] r1norm r2norm '; + head2 = ' Compatible LS Norm A Cond A'; + + if show: + print ' ' + print head1, head2 + test1 = 1; test2 = alfa / beta; + str1 = '%6g %12.5e' %( itn, x[0] ); + str2 = ' %10.3e %10.3e'%( r1norm, r2norm ); + str3 = ' %8.1e %8.1e' %( test1, test2 ); + print str1, str2, str3 + + # Main iteration loop. + while itn < iter_lim: + itn = itn + 1 + """ + % Perform the next step of the bidiagonalization to obtain the + % next beta, u, alfa, v. These satisfy the relations + % beta*u = a*v - alfa*u, + % alfa*v = A'*u - beta*v. + """ + u = A.matvec(v) - alfa * u + beta = np.linalg.norm(u) + + if beta > 0: + u = (1/beta) * u + anorm = sqrt(anorm**2 + alfa**2 + beta**2 + damp**2) + v = A.rmatvec(u) - beta * v + alfa = np.linalg.norm(v) + if alfa > 0: + v = (1 / alfa) * v + + # Use a plane rotation to eliminate the damping parameter. + # This alters the diagonal (rhobar) of the lower-bidiagonal matrix. + rhobar1 = sqrt(rhobar**2 + damp**2) + cs1 = rhobar / rhobar1 + sn1 = damp / rhobar1 + psi = sn1 * phibar + phibar = cs1 * phibar + + # Use a plane rotation to eliminate the subdiagonal element (beta) + # of the lower-bidiagonal matrix, giving an upper-bidiagonal matrix. + cs, sn, rho = _sym_ortho(rhobar1, beta) + + theta = sn * alfa + rhobar = -cs * alfa + phi = cs * phibar + phibar = sn * phibar + tau = sn * phi + + # Update x and w. + t1 = phi / rho + t2 = -theta / rho + dk = (1 / rho) * w + + x = x + t1 * w + w = v + t2 * w + ddnorm = ddnorm + np.linalg.norm(dk)**2 + + if calc_var: + var = var + dk**2 + + # Use a plane rotation on the right to eliminate the + # super-diagonal element (theta) of the upper-bidiagonal matrix. + # Then use the result to estimate norm(x). + delta = sn2 * rho + gambar = -cs2 * rho + rhs = phi - delta * z + zbar = rhs / gambar + xnorm = sqrt(xxnorm + zbar**2) + gamma = sqrt(gambar**2 +theta**2) + cs2 = gambar / gamma + sn2 = theta / gamma + z = rhs / gamma + xxnorm = xxnorm + z**2 + + # Test for convergence. + # First, estimate the condition of the matrix Abar, + # and the norms of rbar and Abar'rbar. + acond = anorm * sqrt(ddnorm) + res1 = phibar**2 + res2 = res2 + psi**2 + rnorm = sqrt(res1 + res2) + arnorm = alfa * abs(tau) + + # Distinguish between + # r1norm = ||b - Ax|| and + # r2norm = rnorm in current code + # = sqrt(r1norm^2 + damp^2*||x||^2). + # Estimate r1norm from + # r1norm = sqrt(r2norm^2 - damp^2*||x||^2). + # Although there is cancellation, it might be accurate enough. + r1sq = rnorm**2 - dampsq * xxnorm + r1norm = sqrt(abs(r1sq)) + if r1sq < 0: + r1norm = -r1norm + r2norm = rnorm + + # Now use these norms to estimate certain other quantities, + # some of which will be small near a solution. 
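+        # (Editorial comments, not part of the upstream patch.)  The three
+        # quantities computed below drive the stopping rules further down:
+        # test1 (normalized residual) triggers istop = 1 when <= rtol,
+        # test2 (normal-equations residual test) triggers istop = 2 when <= atol,
+        # and test3 = 1/acond triggers istop = 3 when <= ctol.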
+ test1 = rnorm / bnorm + test2 = arnorm / (anorm * rnorm) + test3 = 1 / acond + t1 = test1 / (1 + anorm * xnorm / bnorm) + rtol = btol + atol * anorm * xnorm / bnorm + + # The following tests guard against extremely small values of + # atol, btol or ctol. (The user may have set any or all of + # the parameters atol, btol, conlim to 0.) + # The effect is equivalent to the normal tests using + # atol = eps, btol = eps, conlim = 1/eps. + if itn >= iter_lim: istop = 7 + if 1 + test3 <= 1: istop = 6 + if 1 + test2 <= 1: istop = 5 + if 1 + t1 <= 1: istop = 4 + + # Allow for tolerances set by the user. + if test3 <= ctol: istop = 3 + if test2 <= atol: istop = 2 + if test1 <= rtol: istop = 1 + + # See if it is time to print something. + prnt = False; + if n <= 40: prnt = True + if itn <= 10: prnt = True + if itn >= iter_lim-10: prnt = True + # if itn%10 == 0: prnt = True + if test3 <= 2*ctol: prnt = True + if test2 <= 10*atol: prnt = True + if test1 <= 10*rtol: prnt = True + if istop != 0: prnt = True + + if prnt: + if show: + str1 = '%6g %12.5e' % (itn, x[0]) + str2 = ' %10.3e %10.3e' % (r1norm, r2norm) + str3 = ' %8.1e %8.1e' % (test1, test2) + str4 = ' %8.1e %8.1e' % (anorm, acond) + print str1, str2, str3, str4 + + if istop != 0: break + + # End of iteration loop. + # Print the stopping condition. + if show: + print ' ' + print 'LSQR finished' + print msg[istop] + print ' ' + str1 = 'istop =%8g r1norm =%8.1e' % (istop, r1norm) + str2 = 'anorm =%8.1e arnorm =%8.1e' % (anorm, arnorm) + str3 = 'itn =%8g r2norm =%8.1e' % (itn, r2norm) + str4 = 'acond =%8.1e xnorm =%8.1e' % (acond, xnorm) + print str1+ ' ' + str2 + print str3+ ' ' + str4 + print ' ' + + return x, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/setup.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/setup.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/setup.py 2010-03-03 14:34:12.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/setup.py 2010-07-26 15:48:35.000000000 +0100 @@ -1,5 +1,4 @@ #!/usr/bin/env python -## Automatically adapted for scipy Oct 18, 2005 by import os import sys diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/demo_lgmres.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/demo_lgmres.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/demo_lgmres.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/demo_lgmres.py 2010-07-26 15:48:35.000000000 +0100 @@ -0,0 +1,57 @@ +import scipy.sparse.linalg as la +import scipy.sparse as sp +import scipy.io as io +import numpy as np +import sys + +#problem = "SPARSKIT/drivcav/e05r0100" +problem = "SPARSKIT/drivcav/e05r0200" +#problem = "Harwell-Boeing/sherman/sherman1" +#problem = "misc/hamm/add32" + +mm = np.lib._datasource.Repository('ftp://math.nist.gov/pub/MatrixMarket2/') +f = mm.open('%s.mtx.gz' % problem) +Am = io.mmread(f).tocsr() +f.close() + +f = mm.open('%s_rhs1.mtx.gz' % problem) +b = np.array(io.mmread(f)).ravel() +f.close() + +count = [0] +def matvec(v): + count[0] += 1 + sys.stderr.write('%d\r' % count[0]) + return Am*v +A = la.LinearOperator(matvec=matvec, shape=Am.shape, dtype=Am.dtype) + +M = 100 + +print "MatrixMarket problem %s" % problem +print "Invert %d x %d matrix; nnz = %d" % (Am.shape[0], Am.shape[1], Am.nnz) + +count[0] = 0 +x0, info = la.gmres(A, b, restrt=M, tol=1e-14) +count_0 = count[0] +err0 = np.linalg.norm(Am*x0 - b) / np.linalg.norm(b) +print 
"GMRES(%d):" % M, count_0, "matvecs, residual", err0 +if info != 0: + print "Didn't converge" + +count[0] = 0 +x1, info = la.lgmres(A, b, inner_m=M-6*2, outer_k=6, tol=1e-14) +count_1 = count[0] +err1 = np.linalg.norm(Am*x1 - b) / np.linalg.norm(b) +print "LGMRES(%d,6) [same memory req.]:" % (M-2*6), count_1, \ + "matvecs, residual:", err1 +if info != 0: + print "Didn't converge" + +count[0] = 0 +x2, info = la.lgmres(A, b, inner_m=M-6, outer_k=6, tol=1e-14) +count_2 = count[0] +err2 = np.linalg.norm(Am*x2 - b) / np.linalg.norm(b) +print "LGMRES(%d,6) [same subspace size]:" % (M-6), count_2, \ + "matvecs, residual:", err2 +if info != 0: + print "Didn't converge" diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/test_iterative.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/test_iterative.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/test_iterative.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/test_iterative.py 2010-07-26 15:48:35.000000000 +0100 @@ -1,7 +1,6 @@ #!/usr/bin/env python """ Test functions for the sparse.linalg.isolve module """ -import sys from numpy.testing import * @@ -10,7 +9,7 @@ from scipy.sparse import spdiags, csr_matrix from scipy.sparse.linalg.interface import LinearOperator -from scipy.sparse.linalg.isolve import cg, cgs, bicg, bicgstab, gmres, qmr, minres +from scipy.sparse.linalg.isolve import cg, cgs, bicg, bicgstab, gmres, qmr, minres, lgmres #def callback(x): # global A, b @@ -43,6 +42,7 @@ self.solvers.append( (gmres, False, False) ) self.solvers.append( (qmr, False, False) ) self.solvers.append( (minres, True, False) ) + self.solvers.append( (lgmres, False, False) ) # list of tuples (A, symmetric, positive_definite ) self.cases = [] diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/test_lgmres.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/test_lgmres.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/test_lgmres.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/test_lgmres.py 2010-07-26 15:48:35.000000000 +0100 @@ -0,0 +1,76 @@ +#!/usr/bin/env python +"""Tests for the linalg.isolve.lgmres module +""" + +from numpy.testing import * + +from numpy import zeros, array, allclose +from scipy.linalg import norm +from scipy.sparse import csr_matrix + +from scipy.sparse.linalg.interface import LinearOperator +from scipy.sparse.linalg import splu +from scipy.sparse.linalg.isolve import lgmres + +Am = csr_matrix(array([[-2,1,0,0,0,9], + [1,-2,1,0,5,0], + [0,1,-2,1,0,0], + [0,0,1,-2,1,0], + [0,3,0,1,-2,1], + [1,0,0,0,1,-2]])) +b = array([1,2,3,4,5,6]) +count = [0] +def matvec(v): + count[0] += 1 + return Am*v +A = LinearOperator(matvec=matvec, shape=Am.shape, dtype=Am.dtype) +def do_solve(**kw): + count[0] = 0 + x0, flag = lgmres(A, b, x0=zeros(A.shape[0]), inner_m=6, tol=1e-14, **kw) + count_0 = count[0] + assert allclose(A*x0, b, rtol=1e-12, atol=1e-12), norm(A*x0-b) + return x0, count_0 + + +class TestLGMRES(TestCase): + def test_preconditioner(self): + # Check that preconditioning works + pc = splu(Am.tocsc()) + M = LinearOperator(matvec=pc.solve, shape=A.shape, dtype=A.dtype) + + x0, count_0 = do_solve() + x1, count_1 = do_solve(M=M) + + assert count_1 == 3 + assert count_1 < count_0/2 + assert allclose(x1, x0, rtol=1e-14) + + def test_outer_v(self): + # Check that the augmentation vectors behave as expected + + outer_v = [] + x0, count_0 = do_solve(outer_k=6, 
outer_v=outer_v) + assert len(outer_v) > 0 + assert len(outer_v) <= 6 + + x1, count_1 = do_solve(outer_k=6, outer_v=outer_v) + assert count_1 == 2, count_1 + assert count_1 < count_0/2 + assert allclose(x1, x0, rtol=1e-14) + + # --- + + outer_v = [] + x0, count_0 = do_solve(outer_k=6, outer_v=outer_v, store_outer_Av=False) + assert array([v[1] is None for v in outer_v]).all() + assert len(outer_v) > 0 + assert len(outer_v) <= 6 + + x1, count_1 = do_solve(outer_k=6, outer_v=outer_v) + assert count_1 == 3, count_1 + assert count_1 < count_0/2 + assert allclose(x1, x0, rtol=1e-14) + +if __name__ == "__main__": + import nose + nose.run(argv=['', __file__]) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/test_lsqr.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/test_lsqr.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/isolve/tests/test_lsqr.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/isolve/tests/test_lsqr.py 2010-07-26 15:48:35.000000000 +0100 @@ -0,0 +1,59 @@ +import numpy as np + +from scipy.sparse.linalg import lsqr +from time import time + +# Set up a test problem +n = 35 +G = np.eye(n) +normal = np.random.normal +norm = np.linalg.norm + +for jj in range(5): + gg = normal(size=n) + hh = gg * gg.T + G += (hh + hh.T) * 0.5 + G += normal(size=n) * normal(size=n) + +b = normal(size=n) + +tol = 1e-10 +show = False +maxit = None + +def test_basic(): + svx = np.linalg.solve(G, b) + X = lsqr(G, b, show=show, atol=tol, btol=tol, iter_lim=maxit) + xo = X[0] + assert norm(svx - xo) < 1e-5 + +if __name__ == "__main__": + svx = np.linalg.solve(G, b) + + tic = time() + X = lsqr(G, b, show=show, atol=tol, btol=tol, iter_lim=maxit) + xo = X[0] + phio = X[3] + psio = X[7] + k = X[2] + chio = X[8] + mg = np.amax(G - G.T) + if mg > 1e-14: + sym='No' + else: + sym='Yes' + + print 'LSQR' + print "Is linear operator symmetric? 
" + sym + print "n: %3g iterations: %3g" % (n, k) + print "Norms computed in %.2fs by LSQR" % (time() - tic) + print " ||x|| %9.4e ||r|| %9.4e ||Ar|| %9.4e " %( chio, phio, psio) + print "Residual norms computed directly:" + print " ||x|| %9.4e ||r|| %9.4e ||Ar|| %9.4e" % (norm(xo), + norm(G*xo - b), + norm(G.T*(G*xo-b))) + print "Direct solution norms:" + print " ||x|| %9.4e ||r|| %9.4e " % (norm(svx), norm(G*svx -b)) + print "" + print " || x_{direct} - x_{LSQR}|| %9.4e " % norm(svx-xo) + print "" diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/tests/test_iterative.py python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/tests/test_iterative.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/linalg/tests/test_iterative.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/linalg/tests/test_iterative.py 2010-07-26 15:48:35.000000000 +0100 @@ -0,0 +1,18 @@ +import numpy as np +from numpy.testing import run_module_suite, assert_almost_equal + +import scipy.sparse as sp +import scipy.sparse.linalg as spla + +def test_gmres_basic(): + A = np.vander(np.arange(10) + 1)[:, ::-1] + b = np.zeros(10) + b[0] = 1 + x = np.linalg.solve(A, b) + + x_gm, err = spla.gmres(A, b, restart=5, maxiter=1) + + assert_almost_equal(x_gm[0], 0.359, decimal=2) + +if __name__ == "__main__": + run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/bsr.h python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/bsr.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/bsr.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/bsr.h 2010-07-26 15:48:35.000000000 +0100 @@ -330,21 +330,123 @@ +/* + * Compute C = A (binary_op) B for BSR matrices that are not + * necessarily canonical BSR format. Specifically, this method + * works even when the input matrices have duplicate and/or + * unsorted column indices within a given row. + * + * Refer to bsr_binop_bsr() for additional information + * + * Note: + * Output arrays Cp, Cj, and Cx must be preallocated + * If nnz(C) is not known a priori, a conservative bound is: + * nnz(C) <= nnz(A) + nnz(B) + * + * Note: + * Input: A and B column indices are not assumed to be in sorted order + * Output: C column indices are not generally in sorted order + * C will not contain any duplicate entries or explicit zeros. 
+ * + */ template -void bsr_binop_bsr(const I n_brow, const I n_bcol, - const I R, const I C, - const I Ap[], const I Aj[], const T Ax[], - const I Bp[], const I Bj[], const T Bx[], - I Cp[], I Cj[], T Cx[], - const bin_op& op) +void bsr_binop_bsr_general(const I n_brow, const I n_bcol, + const I R, const I C, + const I Ap[], const I Aj[], const T Ax[], + const I Bp[], const I Bj[], const T Bx[], + I Cp[], I Cj[], T Cx[], + const bin_op& op) { - assert( R > 0 && C > 0); - - if( R == 1 && C == 1 ){ - csr_binop_csr(n_brow, n_bcol, Ap, Aj, Ax, Bp, Bj, Bx, Cp, Cj, Cx, op); //use CSR for 1x1 blocksize - return; + //Method that works for duplicate and/or unsorted indices + const I RC = R*C; + + Cp[0] = 0; + I nnz = 0; + + std::vector next(n_bcol, -1); + std::vector A_row(n_bcol * RC, 0); // this approach can be problematic for large R + std::vector B_row(n_bcol * RC, 0); + + for(I i = 0; i < n_brow; i++){ + I head = -2; + I length = 0; + + //add a row of A to A_row + for(I jj = Ap[i]; jj < Ap[i+1]; jj++){ + I j = Aj[jj]; + + for(I n = 0; n < RC; n++) + A_row[RC*j + n] += Ax[RC*jj + n]; + + if(next[j] == -1){ + next[j] = head; + head = j; + length++; + } + } + + //add a row of B to B_row + for(I jj = Bp[i]; jj < Bp[i+1]; jj++){ + I j = Bj[jj]; + + for(I n = 0; n < RC; n++) + B_row[RC*j + n] += Bx[RC*jj + n]; + + if(next[j] == -1){ + next[j] = head; + head = j; + length++; + } + } + + + for(I jj = 0; jj < length; jj++){ + // compute op(block_A, block_B) + for(I n = 0; n < RC; n++) + Cx[RC * nnz + n] = op(A_row[RC*head + n], B_row[RC*head + n]); + + // advance counter if block is nonzero + if( is_nonzero_block(Cx + (RC * nnz), RC) ) + Cj[nnz++] = head; + + // clear block_A and block_B values + for(I n = 0; n < RC; n++){ + A_row[RC*head + n] = 0; + B_row[RC*head + n] = 0; + } + + I temp = head; + head = next[head]; + next[temp] = -1; + } + + Cp[i + 1] = nnz; } +} + +/* + * Compute C = A (binary_op) B for BSR matrices that are in the + * canonical BSR format. Specifically, this method requires that + * the rows of the input matrices are free of duplicate column indices + * and that the column indices are in sorted order. 
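+ *
+ * (Editorial note, not part of the upstream patch: this is the fast path --
+ * it walks the sorted rows of A and B in lockstep, merge-style, so no
+ * scratch arrays are needed.  bsr_binop_bsr() below selects it whenever
+ * both inputs are in canonical form.)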
+ * + * Refer to bsr_binop_bsr() for additional information + * + * Note: + * Input: A and B column indices are assumed to be in sorted order + * Output: C column indices will be in sorted order + * Cx will not contain any zero entries + * + */ +template +void bsr_binop_bsr_canonical(const I n_brow, const I n_bcol, + const I R, const I C, + const I Ap[], const I Aj[], const T Ax[], + const I Bp[], const I Bj[], const T Bx[], + I Cp[], I Cj[], T Cx[], + const bin_op& op) +{ const I RC = R*C; T * result = Cx; @@ -364,7 +466,7 @@ if(A_j == B_j){ for(I n = 0; n < RC; n++){ - result[n] = op(Ax[RC*A_pos + n],Bx[RC*B_pos + n]); + result[n] = op(Ax[RC*A_pos + n], Bx[RC*B_pos + n]); } if( is_nonzero_block(result,RC) ){ @@ -377,7 +479,7 @@ B_pos++; } else if (A_j < B_j) { for(I n = 0; n < RC; n++){ - result[n] = op(Ax[RC*A_pos + n],0); + result[n] = op(Ax[RC*A_pos + n], 0); } if(is_nonzero_block(result,RC)){ @@ -390,7 +492,7 @@ } else { //B_j < A_j for(I n = 0; n < RC; n++){ - result[n] = op(0,Bx[RC*B_pos + n]); + result[n] = op(0, Bx[RC*B_pos + n]); } if(is_nonzero_block(result,RC)){ Cj[nnz] = B_j; @@ -405,10 +507,10 @@ //tail while(A_pos < A_end){ for(I n = 0; n < RC; n++){ - result[n] = op(Ax[RC*A_pos + n],0); + result[n] = op(Ax[RC*A_pos + n], 0); } - if(is_nonzero_block(result,RC)){ + if(is_nonzero_block(result, RC)){ Cj[nnz] = Aj[A_pos]; result += RC; nnz++; @@ -421,7 +523,7 @@ result[n] = op(0,Bx[RC*B_pos + n]); } - if(is_nonzero_block(result,RC)){ + if(is_nonzero_block(result, RC)){ Cj[nnz] = Bj[B_pos]; result += RC; nnz++; @@ -434,6 +536,62 @@ } } + +/* + * Compute C = A (binary_op) B for CSR matrices A,B where the column + * indices with the rows of A and B are known to be sorted. + * + * binary_op(x,y) - binary operator to apply elementwise + * + * Input Arguments: + * I n_row - number of rows in A (and B) + * I n_col - number of columns in A (and B) + * I Ap[n_row+1] - row pointer + * I Aj[nnz(A)] - column indices + * T Ax[nnz(A)] - nonzeros + * I Bp[n_row+1] - row pointer + * I Bj[nnz(B)] - column indices + * T Bx[nnz(B)] - nonzeros + * Output Arguments: + * I Cp[n_row+1] - row pointer + * I Cj[nnz(C)] - column indices + * T Cx[nnz(C)] - nonzeros + * + * Note: + * Output arrays Cp, Cj, and Cx must be preallocated + * If nnz(C) is not known a priori, a conservative bound is: + * nnz(C) <= nnz(A) + nnz(B) + * + * Note: + * Input: A and B column indices are not assumed to be in sorted order. + * Output: C column indices will be in sorted if both A and B have sorted indices. 
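+ * (Editorial note, not part of the upstream patch: bsr_binop_bsr() below
+ * only dispatches.  It falls back to csr_binop_csr() for a 1x1 blocksize,
+ * calls the canonical routine when both inputs have sorted, duplicate-free
+ * indices, and otherwise uses the slower general routine.  The bound
+ * nnz(C) <= nnz(A) + nnz(B) holds for either path because every block
+ * column index written to a row of C also occurs in that row of A or B.)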
+ * Cx will not contain any zero entries + * + */ +template +void bsr_binop_bsr(const I n_brow, const I n_bcol, + const I R, const I C, + const I Ap[], const I Aj[], const T Ax[], + const I Bp[], const I Bj[], const T Bx[], + I Cp[], I Cj[], T Cx[], + const bin_op& op) +{ + assert( R > 0 && C > 0); + + if( R == 1 && C == 1 ){ + //use CSR for 1x1 blocksize + csr_binop_csr(n_brow, n_bcol, Ap, Aj, Ax, Bp, Bj, Bx, Cp, Cj, Cx, op); + } + else if ( csr_has_canonical_format(n_brow, Ap, Aj) && csr_has_canonical_format(n_brow, Bp, Bj) ){ + // prefer faster implementation + bsr_binop_bsr_canonical(n_brow, n_bcol, R, C, Ap, Aj, Ax, Bp, Bj, Bx, Cp, Cj, Cx, op); + } + else { + // slower fallback method + bsr_binop_bsr_general(n_brow, n_bcol, R, C, Ap, Aj, Ax, Bp, Bj, Bx, Cp, Cj, Cx, op); + } +} + /* element-wise binary operations */ template void bsr_elmul_bsr(const I n_row, const I n_col, const I R, const I C, @@ -529,6 +687,8 @@ } } } + + /* * Compute Y += A*X for BSR matrix A and dense block vectors X,Y * diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/bsr.py python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/bsr.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/bsr.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/bsr.py 2010-07-26 15:48:35.000000000 +0100 @@ -1,5 +1,5 @@ # This file was automatically generated by SWIG (http://www.swig.org). -# Version 1.3.34 +# Version 1.3.36 # # Don't modify this file, modify the SWIG interface instead. # This file is compatible with both classic and new-style classes. @@ -51,483 +51,484 @@ def bsr_diagonal(*args): + """ + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + signed char Ax, signed char Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned char Ax, unsigned char Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + short Ax, short Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned short Ax, unsigned short Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + int Ax, int Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned int Ax, unsigned int Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long long Ax, long long Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned long long Ax, unsigned long long Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + float Ax, float Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + double Ax, double Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long double Ax, long double Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, npy_cfloat_wrapper Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, npy_cdouble_wrapper Yx) + bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Yx) """ - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - signed char Ax, signed char Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned char Ax, unsigned char Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - short Ax, short Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned short Ax, unsigned short Yx) - bsr_diagonal(int n_brow, int 
n_bcol, int R, int C, int Ap, int Aj, - int Ax, int Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned int Ax, unsigned int Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long long Ax, long long Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned long long Ax, unsigned long long Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - float Ax, float Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - double Ax, double Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long double Ax, long double Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, npy_cfloat_wrapper Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, npy_cdouble_wrapper Yx) - bsr_diagonal(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Yx) - """ - return _bsr.bsr_diagonal(*args) + return _bsr.bsr_diagonal(*args) def bsr_scale_rows(*args): + """ + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + signed char Ax, signed char Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned char Ax, unsigned char Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + short Ax, short Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned short Ax, unsigned short Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + int Ax, int Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned int Ax, unsigned int Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long long Ax, long long Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned long long Ax, unsigned long long Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + float Ax, float Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + double Ax, double Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long double Ax, long double Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx) + bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx) """ - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - signed char Ax, signed char Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned char Ax, unsigned char Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - short Ax, short Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned short Ax, unsigned short Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - int Ax, int Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned int Ax, unsigned int Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long long Ax, long long Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned long long Ax, unsigned long long Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - float Ax, 
float Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - double Ax, double Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long double Ax, long double Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx) - bsr_scale_rows(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx) - """ - return _bsr.bsr_scale_rows(*args) + return _bsr.bsr_scale_rows(*args) def bsr_scale_columns(*args): + """ + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + signed char Ax, signed char Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned char Ax, unsigned char Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + short Ax, short Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned short Ax, unsigned short Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + int Ax, int Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned int Ax, unsigned int Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long long Ax, long long Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned long long Ax, unsigned long long Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + float Ax, float Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + double Ax, double Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long double Ax, long double Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx) + bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx) """ - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - signed char Ax, signed char Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned char Ax, unsigned char Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - short Ax, short Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned short Ax, unsigned short Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - int Ax, int Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned int Ax, unsigned int Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long long Ax, long long Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned long long Ax, unsigned long long Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - float Ax, float Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - double Ax, double Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long double Ax, long double Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx) - bsr_scale_columns(int n_brow, int 
n_bcol, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx) - bsr_scale_columns(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx) - """ - return _bsr.bsr_scale_columns(*args) + return _bsr.bsr_scale_columns(*args) def bsr_transpose(*args): + """ + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + signed char Ax, int Bp, int Bj, signed char Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned char Ax, int Bp, int Bj, unsigned char Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + short Ax, int Bp, int Bj, short Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned short Ax, int Bp, int Bj, unsigned short Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + int Ax, int Bp, int Bj, int Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned int Ax, int Bp, int Bj, unsigned int Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long long Ax, int Bp, int Bj, long long Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned long long Ax, int Bp, int Bj, unsigned long long Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + float Ax, int Bp, int Bj, float Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + double Ax, int Bp, int Bj, double Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long double Ax, int Bp, int Bj, long double Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx) + bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, int Bp, int Bj, + npy_clongdouble_wrapper Bx) """ - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - signed char Ax, int Bp, int Bj, signed char Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned char Ax, int Bp, int Bj, unsigned char Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - short Ax, int Bp, int Bj, short Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned short Ax, int Bp, int Bj, unsigned short Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - int Ax, int Bp, int Bj, int Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned int Ax, int Bp, int Bj, unsigned int Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long long Ax, int Bp, int Bj, long long Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned long long Ax, int Bp, int Bj, unsigned long long Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - float Ax, int Bp, int Bj, float Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - double Ax, int Bp, int Bj, double Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long double Ax, int Bp, int Bj, long double Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - 
npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx) - bsr_transpose(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, int Bp, int Bj, - npy_clongdouble_wrapper Bx) - """ - return _bsr.bsr_transpose(*args) + return _bsr.bsr_transpose(*args) def bsr_matmat_pass2(*args): + """ + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, signed char Ax, int Bp, int Bj, signed char Bx, + int Cp, int Cj, signed char Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, unsigned char Ax, int Bp, int Bj, unsigned char Bx, + int Cp, int Cj, unsigned char Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, short Ax, int Bp, int Bj, short Bx, + int Cp, int Cj, short Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, unsigned short Ax, int Bp, int Bj, + unsigned short Bx, int Cp, int Cj, unsigned short Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, int Ax, int Bp, int Bj, int Bx, int Cp, + int Cj, int Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, unsigned int Ax, int Bp, int Bj, unsigned int Bx, + int Cp, int Cj, unsigned int Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, long long Ax, int Bp, int Bj, long long Bx, + int Cp, int Cj, long long Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, unsigned long long Ax, int Bp, int Bj, + unsigned long long Bx, int Cp, int Cj, unsigned long long Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, float Ax, int Bp, int Bj, float Bx, + int Cp, int Cj, float Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, double Ax, int Bp, int Bj, double Bx, + int Cp, int Cj, double Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, long double Ax, int Bp, int Bj, long double Bx, + int Cp, int Cj, long double Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, npy_cfloat_wrapper Ax, int Bp, int Bj, + npy_cfloat_wrapper Bx, int Cp, int Cj, npy_cfloat_wrapper Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, npy_cdouble_wrapper Ax, int Bp, int Bj, + npy_cdouble_wrapper Bx, int Cp, int Cj, + npy_cdouble_wrapper Cx) + bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, + int Aj, npy_clongdouble_wrapper Ax, int Bp, + int Bj, npy_clongdouble_wrapper Bx, int Cp, + int Cj, npy_clongdouble_wrapper Cx) """ - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, signed char Ax, int Bp, int Bj, signed char Bx, - int Cp, int Cj, signed char Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, unsigned char Ax, int Bp, int Bj, unsigned char Bx, - int Cp, int Cj, unsigned char Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, short Ax, int Bp, int Bj, short Bx, - int Cp, int Cj, short Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, unsigned short Ax, int Bp, int Bj, - unsigned short Bx, int Cp, int Cj, unsigned short Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, int Ax, int Bp, int Bj, int Bx, int Cp, - int Cj, int Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, unsigned int 
Ax, int Bp, int Bj, unsigned int Bx, - int Cp, int Cj, unsigned int Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, long long Ax, int Bp, int Bj, long long Bx, - int Cp, int Cj, long long Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, unsigned long long Ax, int Bp, int Bj, - unsigned long long Bx, int Cp, int Cj, unsigned long long Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, float Ax, int Bp, int Bj, float Bx, - int Cp, int Cj, float Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, double Ax, int Bp, int Bj, double Bx, - int Cp, int Cj, double Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, long double Ax, int Bp, int Bj, long double Bx, - int Cp, int Cj, long double Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, npy_cfloat_wrapper Ax, int Bp, int Bj, - npy_cfloat_wrapper Bx, int Cp, int Cj, npy_cfloat_wrapper Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, npy_cdouble_wrapper Ax, int Bp, int Bj, - npy_cdouble_wrapper Bx, int Cp, int Cj, - npy_cdouble_wrapper Cx) - bsr_matmat_pass2(int n_brow, int n_bcol, int R, int C, int N, int Ap, - int Aj, npy_clongdouble_wrapper Ax, int Bp, - int Bj, npy_clongdouble_wrapper Bx, int Cp, - int Cj, npy_clongdouble_wrapper Cx) - """ - return _bsr.bsr_matmat_pass2(*args) + return _bsr.bsr_matmat_pass2(*args) def bsr_matvec(*args): + """ + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + signed char Ax, signed char Xx, signed char Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned char Ax, unsigned char Xx, unsigned char Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + short Ax, short Xx, short Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned short Ax, unsigned short Xx, unsigned short Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + int Ax, int Xx, int Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned int Ax, unsigned int Xx, unsigned int Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long long Ax, long long Xx, long long Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned long long Ax, unsigned long long Xx, + unsigned long long Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + float Ax, float Xx, float Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + double Ax, double Xx, double Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long double Ax, long double Xx, long double Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx, + npy_cfloat_wrapper Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx, + npy_cdouble_wrapper Yx) + bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx, + npy_clongdouble_wrapper Yx) """ - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - signed char Ax, signed char Xx, signed char Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned char Ax, unsigned char Xx, unsigned char Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int 
Aj, - short Ax, short Xx, short Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned short Ax, unsigned short Xx, unsigned short Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - int Ax, int Xx, int Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned int Ax, unsigned int Xx, unsigned int Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long long Ax, long long Xx, long long Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned long long Ax, unsigned long long Xx, - unsigned long long Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - float Ax, float Xx, float Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - double Ax, double Xx, double Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long double Ax, long double Xx, long double Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx, - npy_cfloat_wrapper Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx, - npy_cdouble_wrapper Yx) - bsr_matvec(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx, - npy_clongdouble_wrapper Yx) - """ - return _bsr.bsr_matvec(*args) + return _bsr.bsr_matvec(*args) def bsr_matvecs(*args): + """ + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, signed char Ax, signed char Xx, + signed char Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, unsigned char Ax, unsigned char Xx, + unsigned char Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, short Ax, short Xx, short Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, unsigned short Ax, unsigned short Xx, + unsigned short Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, int Ax, int Xx, int Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, unsigned int Ax, unsigned int Xx, + unsigned int Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, long long Ax, long long Xx, long long Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, unsigned long long Ax, unsigned long long Xx, + unsigned long long Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, float Ax, float Xx, float Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, double Ax, double Xx, double Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, long double Ax, long double Xx, + long double Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx, + npy_cfloat_wrapper Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx, + npy_cdouble_wrapper Yx) + bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, + int Aj, npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx, + npy_clongdouble_wrapper Yx) """ - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, signed char Ax, signed char Xx, - signed char Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, 
int C, int Ap, - int Aj, unsigned char Ax, unsigned char Xx, - unsigned char Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, short Ax, short Xx, short Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, unsigned short Ax, unsigned short Xx, - unsigned short Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, int Ax, int Xx, int Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, unsigned int Ax, unsigned int Xx, - unsigned int Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, long long Ax, long long Xx, long long Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, unsigned long long Ax, unsigned long long Xx, - unsigned long long Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, float Ax, float Xx, float Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, double Ax, double Xx, double Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, long double Ax, long double Xx, - long double Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx, - npy_cfloat_wrapper Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx, - npy_cdouble_wrapper Yx) - bsr_matvecs(int n_brow, int n_bcol, int n_vecs, int R, int C, int Ap, - int Aj, npy_clongdouble_wrapper Ax, npy_clongdouble_wrapper Xx, - npy_clongdouble_wrapper Yx) - """ - return _bsr.bsr_matvecs(*args) + return _bsr.bsr_matvecs(*args) def bsr_elmul_bsr(*args): + """ + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + signed char Ax, int Bp, int Bj, signed char Bx, + int Cp, int Cj, signed char Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned char Ax, int Bp, int Bj, unsigned char Bx, + int Cp, int Cj, unsigned char Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + short Ax, int Bp, int Bj, short Bx, int Cp, + int Cj, short Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned short Ax, int Bp, int Bj, unsigned short Bx, + int Cp, int Cj, unsigned short Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, + int Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned int Ax, int Bp, int Bj, unsigned int Bx, + int Cp, int Cj, unsigned int Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long long Ax, int Bp, int Bj, long long Bx, + int Cp, int Cj, long long Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + float Ax, int Bp, int Bj, float Bx, int Cp, + int Cj, float Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + double Ax, int Bp, int Bj, double Bx, int Cp, + int Cj, double Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long double Ax, int Bp, int Bj, long double Bx, + int Cp, int Cj, long double Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, 
+ int Cp, int Cj, npy_cfloat_wrapper Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, int Bp, int Bj, + npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - signed char Ax, int Bp, int Bj, signed char Bx, - int Cp, int Cj, signed char Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned char Ax, int Bp, int Bj, unsigned char Bx, - int Cp, int Cj, unsigned char Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - short Ax, int Bp, int Bj, short Bx, int Cp, - int Cj, short Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned short Ax, int Bp, int Bj, unsigned short Bx, - int Cp, int Cj, unsigned short Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, - int Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned int Ax, int Bp, int Bj, unsigned int Bx, - int Cp, int Cj, unsigned int Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long long Ax, int Bp, int Bj, long long Bx, - int Cp, int Cj, long long Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - float Ax, int Bp, int Bj, float Bx, int Cp, - int Cj, float Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - double Ax, int Bp, int Bj, double Bx, int Cp, - int Cj, double Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long double Ax, int Bp, int Bj, long double Bx, - int Cp, int Cj, long double Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - bsr_elmul_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, int Bp, int Bj, - npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _bsr.bsr_elmul_bsr(*args) + return _bsr.bsr_elmul_bsr(*args) def bsr_eldiv_bsr(*args): + """ + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + signed char Ax, int Bp, int Bj, signed char Bx, + int Cp, int Cj, signed char Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned char Ax, int Bp, int Bj, unsigned char Bx, + int Cp, int Cj, unsigned char Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + short Ax, int Bp, int Bj, short Bx, int Cp, + int Cj, short Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned short Ax, int Bp, int Bj, unsigned short Bx, + int Cp, int Cj, unsigned short Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, + int Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned int Ax, int Bp, int Bj, unsigned int Bx, + int Cp, int Cj, 
unsigned int Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long long Ax, int Bp, int Bj, long long Bx, + int Cp, int Cj, long long Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + float Ax, int Bp, int Bj, float Bx, int Cp, + int Cj, float Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + double Ax, int Bp, int Bj, double Bx, int Cp, + int Cj, double Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long double Ax, int Bp, int Bj, long double Bx, + int Cp, int Cj, long double Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, int Bp, int Bj, + npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - signed char Ax, int Bp, int Bj, signed char Bx, - int Cp, int Cj, signed char Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned char Ax, int Bp, int Bj, unsigned char Bx, - int Cp, int Cj, unsigned char Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - short Ax, int Bp, int Bj, short Bx, int Cp, - int Cj, short Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned short Ax, int Bp, int Bj, unsigned short Bx, - int Cp, int Cj, unsigned short Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, - int Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned int Ax, int Bp, int Bj, unsigned int Bx, - int Cp, int Cj, unsigned int Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long long Ax, int Bp, int Bj, long long Bx, - int Cp, int Cj, long long Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - float Ax, int Bp, int Bj, float Bx, int Cp, - int Cj, float Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - double Ax, int Bp, int Bj, double Bx, int Cp, - int Cj, double Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long double Ax, int Bp, int Bj, long double Bx, - int Cp, int Cj, long double Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - bsr_eldiv_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, int Bp, int Bj, - npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _bsr.bsr_eldiv_bsr(*args) + return _bsr.bsr_eldiv_bsr(*args) def 
bsr_plus_bsr(*args): + """ + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + signed char Ax, int Bp, int Bj, signed char Bx, + int Cp, int Cj, signed char Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned char Ax, int Bp, int Bj, unsigned char Bx, + int Cp, int Cj, unsigned char Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + short Ax, int Bp, int Bj, short Bx, int Cp, + int Cj, short Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned short Ax, int Bp, int Bj, unsigned short Bx, + int Cp, int Cj, unsigned short Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, + int Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned int Ax, int Bp, int Bj, unsigned int Bx, + int Cp, int Cj, unsigned int Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long long Ax, int Bp, int Bj, long long Bx, + int Cp, int Cj, long long Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + float Ax, int Bp, int Bj, float Bx, int Cp, + int Cj, float Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + double Ax, int Bp, int Bj, double Bx, int Cp, + int Cj, double Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long double Ax, int Bp, int Bj, long double Bx, + int Cp, int Cj, long double Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, int Bp, int Bj, + npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - signed char Ax, int Bp, int Bj, signed char Bx, - int Cp, int Cj, signed char Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned char Ax, int Bp, int Bj, unsigned char Bx, - int Cp, int Cj, unsigned char Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - short Ax, int Bp, int Bj, short Bx, int Cp, - int Cj, short Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned short Ax, int Bp, int Bj, unsigned short Bx, - int Cp, int Cj, unsigned short Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, - int Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned int Ax, int Bp, int Bj, unsigned int Bx, - int Cp, int Cj, unsigned int Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long long Ax, int Bp, int Bj, long long Bx, - int Cp, int Cj, long long Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - float Ax, int Bp, int Bj, float Bx, int Cp, - int Cj, float Cx) - bsr_plus_bsr(int n_row, int n_col, 
int R, int C, int Ap, int Aj, - double Ax, int Bp, int Bj, double Bx, int Cp, - int Cj, double Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long double Ax, int Bp, int Bj, long double Bx, - int Cp, int Cj, long double Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - bsr_plus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, int Bp, int Bj, - npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _bsr.bsr_plus_bsr(*args) + return _bsr.bsr_plus_bsr(*args) def bsr_minus_bsr(*args): + """ + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + signed char Ax, int Bp, int Bj, signed char Bx, + int Cp, int Cj, signed char Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned char Ax, int Bp, int Bj, unsigned char Bx, + int Cp, int Cj, unsigned char Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + short Ax, int Bp, int Bj, short Bx, int Cp, + int Cj, short Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned short Ax, int Bp, int Bj, unsigned short Bx, + int Cp, int Cj, unsigned short Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, + int Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned int Ax, int Bp, int Bj, unsigned int Bx, + int Cp, int Cj, unsigned int Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long long Ax, int Bp, int Bj, long long Bx, + int Cp, int Cj, long long Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + float Ax, int Bp, int Bj, float Bx, int Cp, + int Cj, float Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + double Ax, int Bp, int Bj, double Bx, int Cp, + int Cj, double Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long double Ax, int Bp, int Bj, long double Bx, + int Cp, int Cj, long double Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, int Bp, int Bj, + npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - signed char Ax, int Bp, int Bj, signed char Bx, - int Cp, int Cj, signed char Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned char Ax, int Bp, int Bj, unsigned char Bx, - int Cp, int Cj, unsigned char Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - short Ax, int Bp, int Bj, short Bx, int Cp, - int Cj, short Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int 
C, int Ap, int Aj, - unsigned short Ax, int Bp, int Bj, unsigned short Bx, - int Cp, int Cj, unsigned short Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - int Ax, int Bp, int Bj, int Bx, int Cp, int Cj, - int Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned int Ax, int Bp, int Bj, unsigned int Bx, - int Cp, int Cj, unsigned int Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long long Ax, int Bp, int Bj, long long Bx, - int Cp, int Cj, long long Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned long long Ax, int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - float Ax, int Bp, int Bj, float Bx, int Cp, - int Cj, float Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - double Ax, int Bp, int Bj, double Bx, int Cp, - int Cj, double Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long double Ax, int Bp, int Bj, long double Bx, - int Cp, int Cj, long double Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - bsr_minus_bsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, int Bp, int Bj, - npy_clongdouble_wrapper Bx, int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _bsr.bsr_minus_bsr(*args) + return _bsr.bsr_minus_bsr(*args) def bsr_sort_indices(*args): + """ + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + signed char Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned char Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + short Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned short Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + int Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned int Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long long Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + unsigned long long Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + float Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + double Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + long double Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax) + bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax) """ - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - signed char Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned char Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - short Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned short Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - int Ax) - bsr_sort_indices(int n_brow, 
int n_bcol, int R, int C, int Ap, int Aj, - unsigned int Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long long Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - unsigned long long Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - float Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - double Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - long double Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax) - bsr_sort_indices(int n_brow, int n_bcol, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax) - """ - return _bsr.bsr_sort_indices(*args) + return _bsr.bsr_sort_indices(*args) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/bsr_wrap.cxx python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/bsr_wrap.cxx --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/bsr_wrap.cxx 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/bsr_wrap.cxx 2010-07-26 15:48:35.000000000 +0100 @@ -1,6 +1,6 @@ /* ---------------------------------------------------------------------------- * This file was automatically generated by SWIG (http://www.swig.org). - * Version 1.3.34 + * Version 1.3.36 * * This file is not intended to be easily readable and contains a number of * coding conventions designed to improve portability and efficiency. Do not make @@ -73,6 +73,12 @@ # endif #endif +#ifndef SWIG_MSC_UNSUPPRESS_4505 +# if defined(_MSC_VER) +# pragma warning(disable : 4505) /* unreferenced local function has been removed */ +# endif +#endif + #ifndef SWIGUNUSEDPARM # ifdef __cplusplus # define SWIGUNUSEDPARM(p) @@ -2516,7 +2522,7 @@ #define SWIG_name "_bsr" -#define SWIGVERSION 0x010334 +#define SWIGVERSION 0x010336 #define SWIG_VERSION SWIGVERSION @@ -2544,7 +2550,9 @@ PyObject_ptr(PyObject *obj, bool initial_ref = true) :_obj(obj) { - if (initial_ref) Py_XINCREF(_obj); + if (initial_ref) { + Py_XINCREF(_obj); + } } PyObject_ptr & operator=(const PyObject_ptr& item) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/coo.py python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/coo.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/coo.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/coo.py 2010-07-26 15:48:35.000000000 +0100 @@ -50,134 +50,135 @@ def coo_count_diagonals(*args): - """coo_count_diagonals(int nnz, int Ai, int Aj) -> int""" - return _coo.coo_count_diagonals(*args) + """coo_count_diagonals(int nnz, int Ai, int Aj) -> int""" + return _coo.coo_count_diagonals(*args) def coo_tocsr(*args): + """ + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, signed char Ax, + int Bp, int Bj, signed char Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned char Ax, + int Bp, int Bj, unsigned char Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, short Ax, + int Bp, int Bj, short Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned short Ax, + int Bp, int Bj, unsigned short Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, int Ax, + int Bp, int Bj, int Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned int Ax, + int Bp, int Bj, unsigned int Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, long long Ax, + int Bp, int 
Bj, long long Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned long long Ax, + int Bp, int Bj, unsigned long long Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, float Ax, + int Bp, int Bj, float Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, double Ax, + int Bp, int Bj, double Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, long double Ax, + int Bp, int Bj, long double Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, + int Bp, int Bj, npy_cfloat_wrapper Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bj, npy_cdouble_wrapper Bx) + coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, + int Bp, int Bj, npy_clongdouble_wrapper Bx) """ - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, signed char Ax, - int Bp, int Bj, signed char Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned char Ax, - int Bp, int Bj, unsigned char Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, short Ax, - int Bp, int Bj, short Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned short Ax, - int Bp, int Bj, unsigned short Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, int Ax, - int Bp, int Bj, int Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned int Ax, - int Bp, int Bj, unsigned int Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, long long Ax, - int Bp, int Bj, long long Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned long long Ax, - int Bp, int Bj, unsigned long long Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, float Ax, - int Bp, int Bj, float Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, double Ax, - int Bp, int Bj, double Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, long double Ax, - int Bp, int Bj, long double Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bj, npy_cfloat_wrapper Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bj, npy_cdouble_wrapper Bx) - coo_tocsr(int n_row, int n_col, int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bj, npy_clongdouble_wrapper Bx) - """ - return _coo.coo_tocsr(*args) + return _coo.coo_tocsr(*args) def coo_tocsc(*args): + """ + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, signed char Ax, + int Bp, int Bi, signed char Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned char Ax, + int Bp, int Bi, unsigned char Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, short Ax, + int Bp, int Bi, short Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned short Ax, + int Bp, int Bi, unsigned short Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, int Ax, + int Bp, int Bi, int Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned int Ax, + int Bp, int Bi, unsigned int Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, long long Ax, + int Bp, int Bi, long long Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned long long Ax, + int Bp, int Bi, unsigned long long Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, float Ax, + int Bp, int Bi, float Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, double Ax, + int Bp, int Bi, double Bx) + 
coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, long double Ax, + int Bp, int Bi, long double Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, + int Bp, int Bi, npy_cfloat_wrapper Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bi, npy_cdouble_wrapper Bx) + coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, + int Bp, int Bi, npy_clongdouble_wrapper Bx) """ - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, signed char Ax, - int Bp, int Bi, signed char Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned char Ax, - int Bp, int Bi, unsigned char Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, short Ax, - int Bp, int Bi, short Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned short Ax, - int Bp, int Bi, unsigned short Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, int Ax, - int Bp, int Bi, int Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned int Ax, - int Bp, int Bi, unsigned int Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, long long Ax, - int Bp, int Bi, long long Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned long long Ax, - int Bp, int Bi, unsigned long long Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, float Ax, - int Bp, int Bi, float Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, double Ax, - int Bp, int Bi, double Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, long double Ax, - int Bp, int Bi, long double Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bi, npy_cfloat_wrapper Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bi, npy_cdouble_wrapper Bx) - coo_tocsc(int n_row, int n_col, int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bi, npy_clongdouble_wrapper Bx) - """ - return _coo.coo_tocsc(*args) + return _coo.coo_tocsc(*args) def coo_todense(*args): + """ + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, signed char Ax, + signed char Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned char Ax, + unsigned char Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, short Ax, + short Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned short Ax, + unsigned short Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, int Ax, + int Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned int Ax, + unsigned int Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, long long Ax, + long long Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned long long Ax, + unsigned long long Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, float Ax, + float Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, double Ax, + double Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, long double Ax, + long double Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Bx) + coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Bx) """ - coo_todense(int n_row, int n_col, int nnz, int Ai, 
int Aj, signed char Ax, - signed char Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned char Ax, - unsigned char Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, short Ax, - short Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned short Ax, - unsigned short Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, int Ax, - int Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned int Ax, - unsigned int Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, long long Ax, - long long Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, unsigned long long Ax, - unsigned long long Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, float Ax, - float Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, double Ax, - double Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, long double Ax, - long double Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Bx) - coo_todense(int n_row, int n_col, int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Bx) - """ - return _coo.coo_todense(*args) + return _coo.coo_todense(*args) def coo_matvec(*args): + """ + coo_matvec(int nnz, int Ai, int Aj, signed char Ax, signed char Xx, + signed char Yx) + coo_matvec(int nnz, int Ai, int Aj, unsigned char Ax, unsigned char Xx, + unsigned char Yx) + coo_matvec(int nnz, int Ai, int Aj, short Ax, short Xx, short Yx) + coo_matvec(int nnz, int Ai, int Aj, unsigned short Ax, unsigned short Xx, + unsigned short Yx) + coo_matvec(int nnz, int Ai, int Aj, int Ax, int Xx, int Yx) + coo_matvec(int nnz, int Ai, int Aj, unsigned int Ax, unsigned int Xx, + unsigned int Yx) + coo_matvec(int nnz, int Ai, int Aj, long long Ax, long long Xx, + long long Yx) + coo_matvec(int nnz, int Ai, int Aj, unsigned long long Ax, unsigned long long Xx, + unsigned long long Yx) + coo_matvec(int nnz, int Ai, int Aj, float Ax, float Xx, float Yx) + coo_matvec(int nnz, int Ai, int Aj, double Ax, double Xx, double Yx) + coo_matvec(int nnz, int Ai, int Aj, long double Ax, long double Xx, + long double Yx) + coo_matvec(int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx, + npy_cfloat_wrapper Yx) + coo_matvec(int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx, + npy_cdouble_wrapper Yx) + coo_matvec(int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Xx, npy_clongdouble_wrapper Yx) """ - coo_matvec(int nnz, int Ai, int Aj, signed char Ax, signed char Xx, - signed char Yx) - coo_matvec(int nnz, int Ai, int Aj, unsigned char Ax, unsigned char Xx, - unsigned char Yx) - coo_matvec(int nnz, int Ai, int Aj, short Ax, short Xx, short Yx) - coo_matvec(int nnz, int Ai, int Aj, unsigned short Ax, unsigned short Xx, - unsigned short Yx) - coo_matvec(int nnz, int Ai, int Aj, int Ax, int Xx, int Yx) - coo_matvec(int nnz, int Ai, int Aj, unsigned int Ax, unsigned int Xx, - unsigned int Yx) - coo_matvec(int nnz, int Ai, int Aj, long long Ax, long long Xx, - long long Yx) - coo_matvec(int nnz, int Ai, int Aj, unsigned long long Ax, unsigned long long Xx, - unsigned long long Yx) - coo_matvec(int nnz, int Ai, int Aj, float Ax, float Xx, float Yx) - coo_matvec(int nnz, int Ai, int Aj, double Ax, double Xx, double Yx) - coo_matvec(int nnz, int Ai, int Aj, long double Ax, 
long double Xx, - long double Yx) - coo_matvec(int nnz, int Ai, int Aj, npy_cfloat_wrapper Ax, npy_cfloat_wrapper Xx, - npy_cfloat_wrapper Yx) - coo_matvec(int nnz, int Ai, int Aj, npy_cdouble_wrapper Ax, npy_cdouble_wrapper Xx, - npy_cdouble_wrapper Yx) - coo_matvec(int nnz, int Ai, int Aj, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Xx, npy_clongdouble_wrapper Yx) - """ - return _coo.coo_matvec(*args) + return _coo.coo_matvec(*args) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csc.py python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csc.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csc.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csc.py 2010-07-26 15:48:35.000000000 +0100 @@ -50,356 +50,357 @@ def csc_matmat_pass1(*args): + """ + csc_matmat_pass1(int n_row, int n_col, int Ap, int Ai, int Bp, int Bi, + int Cp) """ - csc_matmat_pass1(int n_row, int n_col, int Ap, int Ai, int Bp, int Bi, - int Cp) - """ - return _csc.csc_matmat_pass1(*args) + return _csc.csc_matmat_pass1(*args) def csc_diagonal(*args): + """ + csc_diagonal(int n_row, int n_col, int Ap, int Aj, signed char Ax, + signed char Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + unsigned char Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, short Ax, short Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + unsigned short Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, int Ax, int Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + unsigned int Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, long long Ax, + long long Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + unsigned long long Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, float Ax, float Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, double Ax, double Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, long double Ax, + long double Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Yx) + csc_diagonal(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Yx) """ - csc_diagonal(int n_row, int n_col, int Ap, int Aj, signed char Ax, - signed char Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - unsigned char Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, short Ax, short Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - unsigned short Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, int Ax, int Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - unsigned int Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, long long Ax, - long long Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - unsigned long long Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, float Ax, float Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, double Ax, double Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, long double Ax, - long double Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Yx) - csc_diagonal(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, 
- npy_clongdouble_wrapper Yx) - """ - return _csc.csc_diagonal(*args) + return _csc.csc_diagonal(*args) def csc_tocsr(*args): + """ + csc_tocsr(int n_row, int n_col, int Ap, int Ai, signed char Ax, + int Bp, int Bj, signed char Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, + int Bp, int Bj, unsigned char Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, + int Bj, short Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, + int Bp, int Bj, unsigned short Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, + int Bj, int Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, + int Bp, int Bj, unsigned int Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, long long Ax, + int Bp, int Bj, long long Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, + int Bp, int Bj, unsigned long long Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, + int Bj, float Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, + int Bj, double Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, long double Ax, + int Bp, int Bj, long double Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, + int Bp, int Bj, npy_cfloat_wrapper Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, + int Bp, int Bj, npy_cdouble_wrapper Bx) + csc_tocsr(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, + int Bp, int Bj, npy_clongdouble_wrapper Bx) """ - csc_tocsr(int n_row, int n_col, int Ap, int Ai, signed char Ax, - int Bp, int Bj, signed char Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, - int Bp, int Bj, unsigned char Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, - int Bj, short Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, - int Bp, int Bj, unsigned short Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, - int Bj, int Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, - int Bp, int Bj, unsigned int Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, long long Ax, - int Bp, int Bj, long long Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, - int Bp, int Bj, unsigned long long Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, - int Bj, float Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, - int Bj, double Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, long double Ax, - int Bp, int Bj, long double Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, - int Bp, int Bj, npy_cfloat_wrapper Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, - int Bp, int Bj, npy_cdouble_wrapper Bx) - csc_tocsr(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, - int Bp, int Bj, npy_clongdouble_wrapper Bx) - """ - return _csc.csc_tocsr(*args) + return _csc.csc_tocsr(*args) def csc_matmat_pass2(*args): + """ + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, signed char Ax, + int Bp, int Bi, signed char Bx, int Cp, int Ci, + signed char Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, + int Bp, int Bi, unsigned char Bx, int Cp, + int Ci, unsigned char Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, + int Bi, short Bx, int Cp, int Ci, short Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int 
Ai, unsigned short Ax, + int Bp, int Bi, unsigned short Bx, int Cp, + int Ci, unsigned short Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, + int Bi, int Bx, int Cp, int Ci, int Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, + int Bp, int Bi, unsigned int Bx, int Cp, + int Ci, unsigned int Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, long long Ax, + int Bp, int Bi, long long Bx, int Cp, int Ci, + long long Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, + int Bp, int Bi, unsigned long long Bx, + int Cp, int Ci, unsigned long long Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, + int Bi, float Bx, int Cp, int Ci, float Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, + int Bi, double Bx, int Cp, int Ci, double Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, long double Ax, + int Bp, int Bi, long double Bx, int Cp, int Ci, + long double Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, + int Bp, int Bi, npy_cfloat_wrapper Bx, + int Cp, int Ci, npy_cfloat_wrapper Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, + int Bp, int Bi, npy_cdouble_wrapper Bx, + int Cp, int Ci, npy_cdouble_wrapper Cx) + csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, + int Bp, int Bi, npy_clongdouble_wrapper Bx, + int Cp, int Ci, npy_clongdouble_wrapper Cx) """ - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, signed char Ax, - int Bp, int Bi, signed char Bx, int Cp, int Ci, - signed char Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, - int Bp, int Bi, unsigned char Bx, int Cp, - int Ci, unsigned char Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, - int Bi, short Bx, int Cp, int Ci, short Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, - int Bp, int Bi, unsigned short Bx, int Cp, - int Ci, unsigned short Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, - int Bi, int Bx, int Cp, int Ci, int Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, - int Bp, int Bi, unsigned int Bx, int Cp, - int Ci, unsigned int Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, long long Ax, - int Bp, int Bi, long long Bx, int Cp, int Ci, - long long Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, - int Bp, int Bi, unsigned long long Bx, - int Cp, int Ci, unsigned long long Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, - int Bi, float Bx, int Cp, int Ci, float Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, - int Bi, double Bx, int Cp, int Ci, double Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, long double Ax, - int Bp, int Bi, long double Bx, int Cp, int Ci, - long double Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, - int Bp, int Bi, npy_cfloat_wrapper Bx, - int Cp, int Ci, npy_cfloat_wrapper Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, - int Bp, int Bi, npy_cdouble_wrapper Bx, - int Cp, int Ci, npy_cdouble_wrapper Cx) - csc_matmat_pass2(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, - int Bp, int Bi, npy_clongdouble_wrapper Bx, - int Cp, int Ci, npy_clongdouble_wrapper Cx) - 
""" - return _csc.csc_matmat_pass2(*args) + return _csc.csc_matmat_pass2(*args) def csc_matvec(*args): + """ + csc_matvec(int n_row, int n_col, int Ap, int Ai, signed char Ax, + signed char Xx, signed char Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, + unsigned char Xx, unsigned char Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, short Ax, short Xx, + short Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, + unsigned short Xx, unsigned short Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, int Ax, int Xx, + int Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, + unsigned int Xx, unsigned int Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, long long Ax, + long long Xx, long long Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, + unsigned long long Xx, unsigned long long Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, float Ax, float Xx, + float Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, double Ax, double Xx, + double Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, long double Ax, + long double Xx, long double Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Xx, npy_cfloat_wrapper Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Xx, npy_cdouble_wrapper Yx) + csc_matvec(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Xx, npy_clongdouble_wrapper Yx) """ - csc_matvec(int n_row, int n_col, int Ap, int Ai, signed char Ax, - signed char Xx, signed char Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, - unsigned char Xx, unsigned char Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, short Ax, short Xx, - short Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, - unsigned short Xx, unsigned short Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, int Ax, int Xx, - int Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, - unsigned int Xx, unsigned int Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, long long Ax, - long long Xx, long long Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, - unsigned long long Xx, unsigned long long Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, float Ax, float Xx, - float Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, double Ax, double Xx, - double Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, long double Ax, - long double Xx, long double Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Xx, npy_cfloat_wrapper Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Xx, npy_cdouble_wrapper Yx) - csc_matvec(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Xx, npy_clongdouble_wrapper Yx) - """ - return _csc.csc_matvec(*args) + return _csc.csc_matvec(*args) def csc_matvecs(*args): + """ + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, signed char Ax, + signed char Xx, signed char Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned char Ax, + unsigned char Xx, unsigned char Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, short Ax, + short Xx, short Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned short Ax, + unsigned short Xx, 
unsigned short Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, int Ax, + int Xx, int Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned int Ax, + unsigned int Xx, unsigned int Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, long long Ax, + long long Xx, long long Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned long long Ax, + unsigned long long Xx, + unsigned long long Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, float Ax, + float Xx, float Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, double Ax, + double Xx, double Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, long double Ax, + long double Xx, long double Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Xx, + npy_cfloat_wrapper Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Xx, + npy_cdouble_wrapper Yx) + csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Xx, + npy_clongdouble_wrapper Yx) """ - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, signed char Ax, - signed char Xx, signed char Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned char Ax, - unsigned char Xx, unsigned char Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, short Ax, - short Xx, short Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned short Ax, - unsigned short Xx, unsigned short Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, int Ax, - int Xx, int Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned int Ax, - unsigned int Xx, unsigned int Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, long long Ax, - long long Xx, long long Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, unsigned long long Ax, - unsigned long long Xx, - unsigned long long Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, float Ax, - float Xx, float Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, double Ax, - double Xx, double Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, long double Ax, - long double Xx, long double Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Xx, - npy_cfloat_wrapper Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Xx, - npy_cdouble_wrapper Yx) - csc_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Ai, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Xx, - npy_clongdouble_wrapper Yx) - """ - return _csc.csc_matvecs(*args) + return _csc.csc_matvecs(*args) def csc_elmul_csc(*args): + """ + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, + int Bp, int Bi, signed char Bx, int Cp, int Ci, + signed char Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, + int Bp, int Bi, unsigned char Bx, int Cp, + int Ci, unsigned char Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, + int Bi, short Bx, int Cp, int Ci, short Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, + int Bp, int Bi, unsigned short Bx, int Cp, + int Ci, unsigned short Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, 
int Ai, int Ax, int Bp, + int Bi, int Bx, int Cp, int Ci, int Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, + int Bp, int Bi, unsigned int Bx, int Cp, + int Ci, unsigned int Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, + int Bp, int Bi, long long Bx, int Cp, int Ci, + long long Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, + int Bp, int Bi, unsigned long long Bx, + int Cp, int Ci, unsigned long long Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, + int Bi, float Bx, int Cp, int Ci, float Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, + int Bi, double Bx, int Cp, int Ci, double Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, + int Bp, int Bi, long double Bx, int Cp, int Ci, + long double Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, + int Bp, int Bi, npy_cfloat_wrapper Bx, + int Cp, int Ci, npy_cfloat_wrapper Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, + int Bp, int Bi, npy_cdouble_wrapper Bx, + int Cp, int Ci, npy_cdouble_wrapper Cx) + csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, + int Bp, int Bi, npy_clongdouble_wrapper Bx, + int Cp, int Ci, npy_clongdouble_wrapper Cx) """ - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, - int Bp, int Bi, signed char Bx, int Cp, int Ci, - signed char Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, - int Bp, int Bi, unsigned char Bx, int Cp, - int Ci, unsigned char Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, - int Bi, short Bx, int Cp, int Ci, short Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, - int Bp, int Bi, unsigned short Bx, int Cp, - int Ci, unsigned short Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, - int Bi, int Bx, int Cp, int Ci, int Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, - int Bp, int Bi, unsigned int Bx, int Cp, - int Ci, unsigned int Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, - int Bp, int Bi, long long Bx, int Cp, int Ci, - long long Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, - int Bp, int Bi, unsigned long long Bx, - int Cp, int Ci, unsigned long long Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, - int Bi, float Bx, int Cp, int Ci, float Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, - int Bi, double Bx, int Cp, int Ci, double Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, - int Bp, int Bi, long double Bx, int Cp, int Ci, - long double Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, - int Bp, int Bi, npy_cfloat_wrapper Bx, - int Cp, int Ci, npy_cfloat_wrapper Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, - int Bp, int Bi, npy_cdouble_wrapper Bx, - int Cp, int Ci, npy_cdouble_wrapper Cx) - csc_elmul_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, - int Bp, int Bi, npy_clongdouble_wrapper Bx, - int Cp, int Ci, npy_clongdouble_wrapper Cx) - """ - return _csc.csc_elmul_csc(*args) + return _csc.csc_elmul_csc(*args) def csc_eldiv_csc(*args): + """ + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, + int Bp, int Bi, signed char Bx, int 
Cp, int Ci, + signed char Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, + int Bp, int Bi, unsigned char Bx, int Cp, + int Ci, unsigned char Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, + int Bi, short Bx, int Cp, int Ci, short Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, + int Bp, int Bi, unsigned short Bx, int Cp, + int Ci, unsigned short Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, + int Bi, int Bx, int Cp, int Ci, int Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, + int Bp, int Bi, unsigned int Bx, int Cp, + int Ci, unsigned int Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, + int Bp, int Bi, long long Bx, int Cp, int Ci, + long long Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, + int Bp, int Bi, unsigned long long Bx, + int Cp, int Ci, unsigned long long Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, + int Bi, float Bx, int Cp, int Ci, float Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, + int Bi, double Bx, int Cp, int Ci, double Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, + int Bp, int Bi, long double Bx, int Cp, int Ci, + long double Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, + int Bp, int Bi, npy_cfloat_wrapper Bx, + int Cp, int Ci, npy_cfloat_wrapper Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, + int Bp, int Bi, npy_cdouble_wrapper Bx, + int Cp, int Ci, npy_cdouble_wrapper Cx) + csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, + int Bp, int Bi, npy_clongdouble_wrapper Bx, + int Cp, int Ci, npy_clongdouble_wrapper Cx) """ - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, - int Bp, int Bi, signed char Bx, int Cp, int Ci, - signed char Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, - int Bp, int Bi, unsigned char Bx, int Cp, - int Ci, unsigned char Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, - int Bi, short Bx, int Cp, int Ci, short Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, - int Bp, int Bi, unsigned short Bx, int Cp, - int Ci, unsigned short Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, - int Bi, int Bx, int Cp, int Ci, int Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, - int Bp, int Bi, unsigned int Bx, int Cp, - int Ci, unsigned int Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, - int Bp, int Bi, long long Bx, int Cp, int Ci, - long long Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, - int Bp, int Bi, unsigned long long Bx, - int Cp, int Ci, unsigned long long Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, - int Bi, float Bx, int Cp, int Ci, float Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, - int Bi, double Bx, int Cp, int Ci, double Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, - int Bp, int Bi, long double Bx, int Cp, int Ci, - long double Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, - int Bp, int Bi, npy_cfloat_wrapper Bx, - int Cp, int Ci, npy_cfloat_wrapper Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, 
npy_cdouble_wrapper Ax, - int Bp, int Bi, npy_cdouble_wrapper Bx, - int Cp, int Ci, npy_cdouble_wrapper Cx) - csc_eldiv_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, - int Bp, int Bi, npy_clongdouble_wrapper Bx, - int Cp, int Ci, npy_clongdouble_wrapper Cx) - """ - return _csc.csc_eldiv_csc(*args) + return _csc.csc_eldiv_csc(*args) def csc_plus_csc(*args): + """ + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, + int Bp, int Bi, signed char Bx, int Cp, int Ci, + signed char Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, + int Bp, int Bi, unsigned char Bx, int Cp, + int Ci, unsigned char Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, + int Bi, short Bx, int Cp, int Ci, short Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, + int Bp, int Bi, unsigned short Bx, int Cp, + int Ci, unsigned short Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, + int Bi, int Bx, int Cp, int Ci, int Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, + int Bp, int Bi, unsigned int Bx, int Cp, + int Ci, unsigned int Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, + int Bp, int Bi, long long Bx, int Cp, int Ci, + long long Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, + int Bp, int Bi, unsigned long long Bx, + int Cp, int Ci, unsigned long long Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, + int Bi, float Bx, int Cp, int Ci, float Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, + int Bi, double Bx, int Cp, int Ci, double Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, + int Bp, int Bi, long double Bx, int Cp, int Ci, + long double Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, + int Bp, int Bi, npy_cfloat_wrapper Bx, + int Cp, int Ci, npy_cfloat_wrapper Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, + int Bp, int Bi, npy_cdouble_wrapper Bx, + int Cp, int Ci, npy_cdouble_wrapper Cx) + csc_plus_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, + int Bp, int Bi, npy_clongdouble_wrapper Bx, + int Cp, int Ci, npy_clongdouble_wrapper Cx) """ - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, - int Bp, int Bi, signed char Bx, int Cp, int Ci, - signed char Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, - int Bp, int Bi, unsigned char Bx, int Cp, - int Ci, unsigned char Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, - int Bi, short Bx, int Cp, int Ci, short Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, - int Bp, int Bi, unsigned short Bx, int Cp, - int Ci, unsigned short Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, - int Bi, int Bx, int Cp, int Ci, int Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, - int Bp, int Bi, unsigned int Bx, int Cp, - int Ci, unsigned int Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, - int Bp, int Bi, long long Bx, int Cp, int Ci, - long long Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, - int Bp, int Bi, unsigned long long Bx, - int Cp, int Ci, unsigned long long Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, - int Bi, float Bx, int Cp, int Ci, float Cx) - 
csc_plus_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, - int Bi, double Bx, int Cp, int Ci, double Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, - int Bp, int Bi, long double Bx, int Cp, int Ci, - long double Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, - int Bp, int Bi, npy_cfloat_wrapper Bx, - int Cp, int Ci, npy_cfloat_wrapper Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, - int Bp, int Bi, npy_cdouble_wrapper Bx, - int Cp, int Ci, npy_cdouble_wrapper Cx) - csc_plus_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, - int Bp, int Bi, npy_clongdouble_wrapper Bx, - int Cp, int Ci, npy_clongdouble_wrapper Cx) - """ - return _csc.csc_plus_csc(*args) + return _csc.csc_plus_csc(*args) def csc_minus_csc(*args): + """ + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, + int Bp, int Bi, signed char Bx, int Cp, int Ci, + signed char Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, + int Bp, int Bi, unsigned char Bx, int Cp, + int Ci, unsigned char Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, + int Bi, short Bx, int Cp, int Ci, short Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, + int Bp, int Bi, unsigned short Bx, int Cp, + int Ci, unsigned short Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, + int Bi, int Bx, int Cp, int Ci, int Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, + int Bp, int Bi, unsigned int Bx, int Cp, + int Ci, unsigned int Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, + int Bp, int Bi, long long Bx, int Cp, int Ci, + long long Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, + int Bp, int Bi, unsigned long long Bx, + int Cp, int Ci, unsigned long long Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, + int Bi, float Bx, int Cp, int Ci, float Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, + int Bi, double Bx, int Cp, int Ci, double Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, + int Bp, int Bi, long double Bx, int Cp, int Ci, + long double Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, + int Bp, int Bi, npy_cfloat_wrapper Bx, + int Cp, int Ci, npy_cfloat_wrapper Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, + int Bp, int Bi, npy_cdouble_wrapper Bx, + int Cp, int Ci, npy_cdouble_wrapper Cx) + csc_minus_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, + int Bp, int Bi, npy_clongdouble_wrapper Bx, + int Cp, int Ci, npy_clongdouble_wrapper Cx) """ - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, signed char Ax, - int Bp, int Bi, signed char Bx, int Cp, int Ci, - signed char Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned char Ax, - int Bp, int Bi, unsigned char Bx, int Cp, - int Ci, unsigned char Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, short Ax, int Bp, - int Bi, short Bx, int Cp, int Ci, short Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned short Ax, - int Bp, int Bi, unsigned short Bx, int Cp, - int Ci, unsigned short Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, int Ax, int Bp, - int Bi, int Bx, int Cp, int Ci, int Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned int Ax, - 
int Bp, int Bi, unsigned int Bx, int Cp, - int Ci, unsigned int Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, long long Ax, - int Bp, int Bi, long long Bx, int Cp, int Ci, - long long Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, unsigned long long Ax, - int Bp, int Bi, unsigned long long Bx, - int Cp, int Ci, unsigned long long Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, float Ax, int Bp, - int Bi, float Bx, int Cp, int Ci, float Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, double Ax, int Bp, - int Bi, double Bx, int Cp, int Ci, double Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, long double Ax, - int Bp, int Bi, long double Bx, int Cp, int Ci, - long double Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, npy_cfloat_wrapper Ax, - int Bp, int Bi, npy_cfloat_wrapper Bx, - int Cp, int Ci, npy_cfloat_wrapper Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, npy_cdouble_wrapper Ax, - int Bp, int Bi, npy_cdouble_wrapper Bx, - int Cp, int Ci, npy_cdouble_wrapper Cx) - csc_minus_csc(int n_row, int n_col, int Ap, int Ai, npy_clongdouble_wrapper Ax, - int Bp, int Bi, npy_clongdouble_wrapper Bx, - int Cp, int Ci, npy_clongdouble_wrapper Cx) - """ - return _csc.csc_minus_csc(*args) + return _csc.csc_minus_csc(*args) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csr.h python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csr.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csr.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csr.h 2010-07-26 15:48:35.000000000 +0100 @@ -83,6 +83,7 @@ } } + /* * Scale the rows of a CSR matrix *in place* * @@ -104,6 +105,7 @@ } } + /* * Scale the columns of a CSR matrix *in place* * @@ -189,7 +191,6 @@ * * */ - template void csr_tobsr(const I n_row, const I n_col, @@ -243,15 +244,13 @@ } - /* - * Sort CSR column indices inplace + * Determine whether the CSR column indices are in sorted order. * * Input Arguments: * I n_row - number of rows in A * I Ap[n_row+1] - row pointer * I Aj[nnz(A)] - column indices - * T Ax[nnz(A)] - nonzeros * */ template @@ -268,11 +267,54 @@ } return true; } + + + +/* + * Determine whether the matrix structure is canonical CSR. + * Canonical CSR implies that column indices within each row + * are (1) sorted and (2) unique. Matrices that meet these + * conditions facilitate faster matrix computations. 
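The canonical-format test that csr_has_canonical_format() performs (within every row, column indices sorted and free of duplicates) can be modeled in a few lines of Python; a minimal sketch, assuming 0-based CSR arrays Ap (row pointer) and Aj (column indices) held as NumPy integer arrays, with an illustrative helper name:

import numpy as np

def has_canonical_format(n_row, Ap, Aj):
    # Canonical CSR: within every row the column indices are strictly
    # increasing, i.e. sorted and unique, mirroring the C++ template above.
    for i in range(n_row):
        if Ap[i] > Ap[i + 1]:
            return False
        row = Aj[Ap[i]:Ap[i + 1]]
        if np.any(row[1:] <= row[:-1]):
            return False
    return True

Later SciPy releases expose the same notion on csr_matrix as the has_canonical_format attribute, with sort_indices() and sum_duplicates() available to restore it.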
+ * + * Input Arguments: + * I n_row - number of rows in A + * I Ap[n_row+1] - row pointer + * I Aj[nnz(A)] - column indices + * + */ +template +bool csr_has_canonical_format(const I n_row, + const I Ap[], + const I Aj[]) +{ + for(I i = 0; i < n_row; i++){ + if (Ap[i] > Ap[i+1]) + return false; + for(I jj = Ap[i] + 1; jj < Ap[i+1]; jj++){ + if( !(Aj[jj-1] < Aj[jj]) ){ + return false; + } + } + } + return true; +} + + template< class T1, class T2 > bool kv_pair_less(const std::pair& x, const std::pair& y){ return x.first < y.first; } +/* + * Sort CSR column indices inplace + * + * Input Arguments: + * I n_row - number of rows in A + * I Ap[n_row+1] - row pointer + * I Aj[nnz(A)] - column indices + * T Ax[nnz(A)] - nonzeros + * + */ template void csr_sort_indices(const I n_row, const I Ap[], @@ -486,52 +528,8 @@ const I Bj[], I Cp[]) { -// // method that uses O(1) temp storage -// const I hash_size = 1 << 5; -// I vals[hash_size]; -// I mask[hash_size]; -// -// std::set spill; -// -// for(I i = 0; i < hash_size; i++){ -// vals[i] = -1; -// mask[i] = -1; -// } -// -// Cp[0] = 0; -// -// I slow_inserts = 0; -// I total_inserts = 0; -// I nnz = 0; -// for(I i = 0; i < n_row; i++){ -// spill.clear(); -// for(I jj = Ap[i]; jj < Ap[i+1]; jj++){ -// I j = Aj[jj]; -// for(I kk = Bp[j]; kk < Bp[j+1]; kk++){ -// I k = Bj[kk]; -// // I hash = k & (hash_size - 1); -// I hash = ((I)2654435761 * k) & (hash_size -1 ); -// total_inserts++; -// if(mask[hash] != i){ -// mask[hash] = i; -// vals[hash] = k; -// nnz++; -// } else { -// if (vals[hash] != k){ -// slow_inserts++; -// spill.insert(k); -// } -// } -// } -// } -// nnz += spill.size(); -// Cp[i+1] = nnz; -// } -// -// std::cout << "slow fraction " << ((float) slow_inserts)/ ((float) total_inserts) << std::endl; - // method that uses O(n) temp storage - std::vector mask(n_col,-1); + std::vector mask(n_col, -1); Cp[0] = 0; I nnz = 0; @@ -594,7 +592,7 @@ if(next[k] == -1){ next[k] = head; - head = k; + head = k; length++; } } @@ -620,28 +618,13 @@ } - - - /* - * Compute C = A (bin_op) B for CSR matrices A,B - * - * bin_op(x,y) - binary operator to apply elementwise + * Compute C = A (binary_op) B for CSR matrices that are not + * necessarily canonical CSR format. Specifically, this method + * works even when the input matrices have duplicate and/or + * unsorted column indices within a given row. * - * - * Input Arguments: - * I n_row - number of rows in A (and B) - * I n_col - number of columns in A (and B) - * I Ap[n_row+1] - row pointer - * I Aj[nnz(A)] - column indices - * T Ax[nnz(A)] - nonzeros - * I Bp[?] - row pointer - * I Bj[nnz(B)] - column indices - * T Bx[nnz(B)] - nonzeros - * Output Arguments: - * I Cp[n_row+1] - row pointer - * I Cj[nnz(C)] - column indices - * T Cx[nnz(C)] - nonzeros + * Refer to csr_binop_csr() for additional information * * Note: * Output arrays Cp, Cj, and Cx must be preallocated @@ -649,28 +632,109 @@ * nnz(C) <= nnz(A) + nnz(B) * * Note: + * Input: A and B column indices are not assumed to be in sorted order + * Output: C column indices are not generally in sorted order + * C will not contain any duplicate entries or explicit zeros. 
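The general path documented above, and implemented just below as csr_binop_csr_general(), can be modeled with dense per-row accumulators; a minimal Python sketch, assuming 0-based CSR arrays and a binary callable op, with a plain set standing in for the linked-list bookkeeping used in the C++ code:

import numpy as np

def csr_binop_general(n_row, n_col, Ap, Aj, Ax, Bp, Bj, Bx, op):
    # C = A (op) B where rows of A and B may contain duplicate and/or
    # unsorted column indices; duplicates are summed into the accumulators.
    A_row = np.zeros(n_col, dtype=np.result_type(Ax, Bx))
    B_row = np.zeros(n_col, dtype=np.result_type(Ax, Bx))
    Cp, Cj, Cx = [0], [], []
    for i in range(n_row):
        touched = set()
        for jj in range(Ap[i], Ap[i + 1]):
            A_row[Aj[jj]] += Ax[jj]
            touched.add(Aj[jj])
        for jj in range(Bp[i], Bp[i + 1]):
            B_row[Bj[jj]] += Bx[jj]
            touched.add(Bj[jj])
        for j in touched:
            result = op(A_row[j], B_row[j])
            if result != 0:          # C carries no explicit zeros
                Cj.append(j)
                Cx.append(result)
            A_row[j] = 0             # reset the accumulators for the next row
            B_row[j] = 0
        Cp.append(len(Cj))
    return np.asarray(Cp), np.asarray(Cj), np.asarray(Cx)

As in the C++ version, the column indices emitted for each row are not guaranteed to be sorted, which is why the canonical-format fast path below is preferred whenever it applies.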
+ * + */ +template +void csr_binop_csr_general(const I n_row, const I n_col, + const I Ap[], const I Aj[], const T Ax[], + const I Bp[], const I Bj[], const T Bx[], + I Cp[], I Cj[], T Cx[], + const binary_op& op) +{ + //Method that works for duplicate and/or unsorted indices + + std::vector next(n_col,-1); + std::vector A_row(n_col, 0); + std::vector B_row(n_col, 0); + + I nnz = 0; + Cp[0] = 0; + + for(I i = 0; i < n_row; i++){ + I head = -2; + I length = 0; + + //add a row of A to A_row + I i_start = Ap[i]; + I i_end = Ap[i+1]; + for(I jj = i_start; jj < i_end; jj++){ + I j = Aj[jj]; + + A_row[j] += Ax[jj]; + + if(next[j] == -1){ + next[j] = head; + head = j; + length++; + } + } + + //add a row of B to B_row + i_start = Bp[i]; + i_end = Bp[i+1]; + for(I jj = i_start; jj < i_end; jj++){ + I j = Bj[jj]; + + B_row[j] += Bx[jj]; + + if(next[j] == -1){ + next[j] = head; + head = j; + length++; + } + } + + + // scan through columns where A or B has + // contributed a non-zero entry + for(I jj = 0; jj < length; jj++){ + T result = op(A_row[head], B_row[head]); + + if(result != 0){ + Cj[nnz] = head; + Cx[nnz] = result; + nnz++; + } + + I temp = head; + head = next[head]; + + next[temp] = -1; + A_row[temp] = 0; + B_row[temp] = 0; + } + + Cp[i + 1] = nnz; + } +} + + + +/* + * Compute C = A (binary_op) B for CSR matrices that are in the + * canonical CSR format. Specifically, this method requires that + * the rows of the input matrices are free of duplicate column indices + * and that the column indices are in sorted order. + * + * Refer to csr_binop_csr() for additional information + * + * Note: * Input: A and B column indices are assumed to be in sorted order - * Output: C column indices are assumed to be in sorted order + * Output: C column indices will be in sorted order * Cx will not contain any zero entries * */ -template -void csr_binop_csr(const I n_row, - const I n_col, - const I Ap[], - const I Aj[], - const T Ax[], - const I Bp[], - const I Bj[], - const T Bx[], - I Cp[], - I Cj[], - T Cx[], - const bin_op& op) +template +void csr_binop_csr_canonical(const I n_row, const I n_col, + const I Ap[], const I Aj[], const T Ax[], + const I Bp[], const I Bj[], const T Bx[], + I Cp[], I Cj[], T Cx[], + const binary_op& op) { - //Method that works for sorted indices - // assert( csr_has_sorted_indices(n_row,Ap,Aj) ); - // assert( csr_has_sorted_indices(n_row,Bp,Bj) ); + //Method that works for canonical CSR matrices Cp[0] = 0; I nnz = 0; @@ -734,10 +798,65 @@ } B_pos++; } + Cp[i+1] = nnz; } } + +/* + * Compute C = A (binary_op) B for CSR matrices A,B where the column + * indices with the rows of A and B are known to be sorted. + * + * binary_op(x,y) - binary operator to apply elementwise + * + * Input Arguments: + * I n_row - number of rows in A (and B) + * I n_col - number of columns in A (and B) + * I Ap[n_row+1] - row pointer + * I Aj[nnz(A)] - column indices + * T Ax[nnz(A)] - nonzeros + * I Bp[n_row+1] - row pointer + * I Bj[nnz(B)] - column indices + * T Bx[nnz(B)] - nonzeros + * Output Arguments: + * I Cp[n_row+1] - row pointer + * I Cj[nnz(C)] - column indices + * T Cx[nnz(C)] - nonzeros + * + * Note: + * Output arrays Cp, Cj, and Cx must be preallocated + * If nnz(C) is not known a priori, a conservative bound is: + * nnz(C) <= nnz(A) + nnz(B) + * + * Note: + * Input: A and B column indices are not assumed to be in sorted order. + * Output: C column indices will be in sorted if both A and B have sorted indices. 
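When both operands do pass the canonical-format test, the canonical variant named above can instead walk the two sorted, duplicate-free index lists of each row with a single two-pointer merge: the output indices come out sorted and zero-valued results are dropped, which is what the surrounding notes promise. A per-row Python sketch of that merge (the helper merge_row and the sample rows are illustrative only):

def merge_row(Aj, Ax, Bj, Bx, op):
    """Merge one canonical CSR row of A with one of B, applying op elementwise."""
    Cj, Cx = [], []
    ia = ib = 0
    while ia < len(Aj) and ib < len(Bj):
        if Aj[ia] == Bj[ib]:                  # column present in both rows
            col, val = Aj[ia], op(Ax[ia], Bx[ib])
            ia += 1
            ib += 1
        elif Aj[ia] < Bj[ib]:                 # column only in A
            col, val = Aj[ia], op(Ax[ia], 0)
            ia += 1
        else:                                 # column only in B
            col, val = Bj[ib], op(0, Bx[ib])
            ib += 1
        if val != 0:                          # skip explicit zeros
            Cj.append(col)
            Cx.append(val)
    for k in range(ia, len(Aj)):              # remaining tail of A
        val = op(Ax[k], 0)
        if val != 0:
            Cj.append(Aj[k])
            Cx.append(val)
    for k in range(ib, len(Bj)):              # remaining tail of B
        val = op(0, Bx[k])
        if val != 0:
            Cj.append(Bj[k])
            Cx.append(val)
    return Cj, Cx

# A row holds columns [0, 2], B row holds [2, 5]; subtracting cancels the shared column 2.
print(merge_row([0, 2], [1.0, 2.0], [2, 5], [2.0, 3.0], lambda a, b: a - b))
# -> ([0, 5], [1.0, -3.0])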
+ * Cx will not contain any zero entries + * + */ +template +void csr_binop_csr(const I n_row, + const I n_col, + const I Ap[], + const I Aj[], + const T Ax[], + const I Bp[], + const I Bj[], + const T Bx[], + I Cp[], + I Cj[], + T Cx[], + const binary_op& op) +{ + if (csr_has_canonical_format(n_row,Ap,Aj) && csr_has_canonical_format(n_row,Bp,Bj)) + csr_binop_csr_canonical(n_row, n_col, Ap, Aj, Ax, Bp, Bj, Bx, Cp, Cj, Cx, op); + else + csr_binop_csr_general(n_row, n_col, Ap, Aj, Ax, Bp, Bj, Bx, Cp, Cj, Cx, op); +} + + + /* element-wise binary operations*/ template void csr_elmul_csr(const I n_row, const I n_col, @@ -1026,4 +1145,108 @@ } +/* + * Sample the matrix at specific locations + * + * Determine the matrix value for each row,col pair + * Bx[n] = A(Bi[n],Bj[n]) + * + * Input Arguments: + * I n_row - number of rows in A + * I n_col - number of columns in A + * I Ap[n_row+1] - row pointer + * I Aj[nnz(A)] - column indices + * T Ax[nnz(A)] - nonzeros + * I n_samples - number of samples + * I Bi[N] - sample rows + * I Bj[N] - sample columns + * + * Output Arguments: + * T Bx[N] - sample values + * + * Note: + * Output array Yx must be preallocated + * + * Complexity: varies + * + * TODO handle other cases with asymptotically optimal method + * + */ +template +void csr_sample_values(const I n_row, + const I n_col, + const I Ap[], + const I Aj[], + const T Ax[], + const I n_samples, + const I Bi[], + const I Bj[], + T Bx[]) +{ + // ideally we'd do the following + // Case 1: A is canonical and B is sorted by row and column + // -> special purpose csr_binop_csr() (optimized form) + // Case 2: A is canonical and B is unsorted and max(log(Ap[i+1] - Ap[i])) > log(num_samples) + // -> do binary searches for each sample + // Case 3: A is canonical and B is unsorted and max(log(Ap[i+1] - Ap[i])) < log(num_samples) + // -> sort B by row and column and use Case 1 + // Case 4: A is not canonical and num_samples ~ nnz + // -> special purpose csr_binop_csr() (general form) + // Case 5: A is not canonical and num_samples << nnz + // -> do linear searches for each sample + + const I nnz = Ap[n_row]; + + const I threshold = nnz / 10; // constant is arbitrary + + if (n_samples > threshold && csr_has_canonical_format(n_row, Ap, Aj)) + { + for(I n = 0; n < n_samples; n++) + { + const I i = Bi[n] < 0 ? Bi[n] + n_row : Bi[n]; // sample row + const I j = Bj[n] < 0 ? Bj[n] + n_col : Bj[n]; // sample column + + const I row_start = Ap[i]; + const I row_end = Ap[i+1]; + + if (row_start < row_end) + { + const I offset = std::lower_bound(Aj + row_start, Aj + row_end, j) - Aj; + + if (offset < row_end && Aj[offset] == j) + Bx[n] = Ax[offset]; + else + Bx[n] = 0; + } + else + { + Bx[n] = 0; + } + + } + } + else + { + for(I n = 0; n < n_samples; n++) + { + const I i = Bi[n] < 0 ? Bi[n] + n_row : Bi[n]; // sample row + const I j = Bj[n] < 0 ? 
Bj[n] + n_col : Bj[n]; // sample column + + const I row_start = Ap[i]; + const I row_end = Ap[i+1]; + + T x = 0; + + for(I jj = row_start; jj < row_end; jj++) + { + if (Aj[jj] == j) + x += Ax[jj]; + } + + Bx[n] = x; + } + + } +} + #endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csr.py python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csr.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csr.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csr.py 2010-07-26 15:48:35.000000000 +0100 @@ -50,568 +50,603 @@ def expandptr(*args): - """expandptr(int n_row, int Ap, int Bi)""" - return _csr.expandptr(*args) + """expandptr(int n_row, int Ap, int Bi)""" + return _csr.expandptr(*args) def csr_matmat_pass1(*args): + """ + csr_matmat_pass1(int n_row, int n_col, int Ap, int Aj, int Bp, int Bj, + int Cp) """ - csr_matmat_pass1(int n_row, int n_col, int Ap, int Aj, int Bp, int Bj, - int Cp) - """ - return _csr.csr_matmat_pass1(*args) + return _csr.csr_matmat_pass1(*args) def csr_count_blocks(*args): - """csr_count_blocks(int n_row, int n_col, int R, int C, int Ap, int Aj) -> int""" - return _csr.csr_count_blocks(*args) + """csr_count_blocks(int n_row, int n_col, int R, int C, int Ap, int Aj) -> int""" + return _csr.csr_count_blocks(*args) def csr_has_sorted_indices(*args): - """csr_has_sorted_indices(int n_row, int Ap, int Aj) -> bool""" - return _csr.csr_has_sorted_indices(*args) + """csr_has_sorted_indices(int n_row, int Ap, int Aj) -> bool""" + return _csr.csr_has_sorted_indices(*args) def csr_diagonal(*args): + """ + csr_diagonal(int n_row, int n_col, int Ap, int Aj, signed char Ax, + signed char Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + unsigned char Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, short Ax, short Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + unsigned short Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, int Ax, int Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + unsigned int Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, long long Ax, + long long Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + unsigned long long Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, float Ax, float Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, double Ax, double Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, long double Ax, + long double Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Yx) + csr_diagonal(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Yx) """ - csr_diagonal(int n_row, int n_col, int Ap, int Aj, signed char Ax, - signed char Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - unsigned char Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, short Ax, short Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - unsigned short Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, int Ax, int Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - unsigned int Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, long long Ax, - long long Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - unsigned long long Yx) - csr_diagonal(int n_row, int 
n_col, int Ap, int Aj, float Ax, float Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, double Ax, double Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, long double Ax, - long double Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Yx) - csr_diagonal(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Yx) - """ - return _csr.csr_diagonal(*args) + return _csr.csr_diagonal(*args) def csr_scale_rows(*args): + """ + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, signed char Ax, + signed char Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + unsigned char Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, short Ax, short Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + unsigned short Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, int Ax, int Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + unsigned int Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, long long Ax, + long long Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + unsigned long long Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, float Ax, float Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, double Ax, double Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, long double Ax, + long double Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Xx) + csr_scale_rows(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Xx) """ - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, signed char Ax, - signed char Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - unsigned char Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, short Ax, short Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - unsigned short Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, int Ax, int Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - unsigned int Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, long long Ax, - long long Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - unsigned long long Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, float Ax, float Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, double Ax, double Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, long double Ax, - long double Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Xx) - csr_scale_rows(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Xx) - """ - return _csr.csr_scale_rows(*args) + return _csr.csr_scale_rows(*args) def csr_scale_columns(*args): + """ + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, signed char Ax, + signed char Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + unsigned char Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, short Ax, short Xx) + 
csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + unsigned short Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, int Ax, int Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + unsigned int Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, long long Ax, + long long Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + unsigned long long Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, float Ax, float Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, double Ax, double Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, long double Ax, + long double Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Xx) + csr_scale_columns(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Xx) """ - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, signed char Ax, - signed char Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - unsigned char Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, short Ax, short Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - unsigned short Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, int Ax, int Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - unsigned int Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, long long Ax, - long long Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - unsigned long long Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, float Ax, float Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, double Ax, double Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, long double Ax, - long double Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Xx) - csr_scale_columns(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Xx) - """ - return _csr.csr_scale_columns(*args) + return _csr.csr_scale_columns(*args) def csr_tocsc(*args): + """ + csr_tocsc(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int Bp, int Bi, signed char Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int Bp, int Bi, unsigned char Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, + int Bi, short Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int Bp, int Bi, unsigned short Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, + int Bi, int Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int Bp, int Bi, unsigned int Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, long long Ax, + int Bp, int Bi, long long Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int Bp, int Bi, unsigned long long Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, + int Bi, float Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, + int Bi, double Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, long double Ax, + int Bp, int Bi, long double Bx) + csr_tocsc(int n_row, 
int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + int Bp, int Bi, npy_cfloat_wrapper Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bi, npy_cdouble_wrapper Bx) + csr_tocsc(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + int Bp, int Bi, npy_clongdouble_wrapper Bx) """ - csr_tocsc(int n_row, int n_col, int Ap, int Aj, signed char Ax, - int Bp, int Bi, signed char Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - int Bp, int Bi, unsigned char Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, - int Bi, short Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - int Bp, int Bi, unsigned short Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, - int Bi, int Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - int Bp, int Bi, unsigned int Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, long long Ax, - int Bp, int Bi, long long Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - int Bp, int Bi, unsigned long long Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, - int Bi, float Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, - int Bi, double Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, long double Ax, - int Bp, int Bi, long double Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bi, npy_cfloat_wrapper Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bi, npy_cdouble_wrapper Bx) - csr_tocsc(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bi, npy_clongdouble_wrapper Bx) - """ - return _csr.csr_tocsc(*args) + return _csr.csr_tocsc(*args) def csr_tobsr(*args): + """ + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + signed char Ax, int Bp, int Bj, signed char Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned char Ax, int Bp, int Bj, unsigned char Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + short Ax, int Bp, int Bj, short Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned short Ax, int Bp, int Bj, unsigned short Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + int Ax, int Bp, int Bj, int Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned int Ax, int Bp, int Bj, unsigned int Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long long Ax, int Bp, int Bj, long long Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + unsigned long long Ax, int Bp, int Bj, unsigned long long Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + float Ax, int Bp, int Bj, float Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + double Ax, int Bp, int Bj, double Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + long double Ax, int Bp, int Bj, long double Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx) + csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, + npy_clongdouble_wrapper Ax, int Bp, int Bj, + npy_clongdouble_wrapper Bx) """ - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - 
signed char Ax, int Bp, int Bj, signed char Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned char Ax, int Bp, int Bj, unsigned char Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - short Ax, int Bp, int Bj, short Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned short Ax, int Bp, int Bj, unsigned short Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - int Ax, int Bp, int Bj, int Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned int Ax, int Bp, int Bj, unsigned int Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long long Ax, int Bp, int Bj, long long Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - unsigned long long Ax, int Bp, int Bj, unsigned long long Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - float Ax, int Bp, int Bj, float Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - double Ax, int Bp, int Bj, double Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - long double Ax, int Bp, int Bj, long double Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cfloat_wrapper Ax, int Bp, int Bj, npy_cfloat_wrapper Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_cdouble_wrapper Ax, int Bp, int Bj, npy_cdouble_wrapper Bx) - csr_tobsr(int n_row, int n_col, int R, int C, int Ap, int Aj, - npy_clongdouble_wrapper Ax, int Bp, int Bj, - npy_clongdouble_wrapper Bx) - """ - return _csr.csr_tobsr(*args) + return _csr.csr_tobsr(*args) def csr_matmat_pass2(*args): + """ + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int Bp, int Bj, signed char Bx, int Cp, int Cj, + signed char Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int Bp, int Bj, unsigned char Bx, int Cp, + int Cj, unsigned char Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, + int Bj, short Bx, int Cp, int Cj, short Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int Bp, int Bj, unsigned short Bx, int Cp, + int Cj, unsigned short Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, + int Bj, int Bx, int Cp, int Cj, int Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int Bp, int Bj, unsigned int Bx, int Cp, + int Cj, unsigned int Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, long long Ax, + int Bp, int Bj, long long Bx, int Cp, int Cj, + long long Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, + int Bj, float Bx, int Cp, int Cj, float Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, + int Bj, double Bx, int Cp, int Cj, double Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, long double Ax, + int Bp, int Bj, long double Bx, int Cp, int Cj, + long double Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, 
npy_clongdouble_wrapper Ax, + int Bp, int Bj, npy_clongdouble_wrapper Bx, + int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, signed char Ax, - int Bp, int Bj, signed char Bx, int Cp, int Cj, - signed char Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - int Bp, int Bj, unsigned char Bx, int Cp, - int Cj, unsigned char Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, - int Bj, short Bx, int Cp, int Cj, short Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - int Bp, int Bj, unsigned short Bx, int Cp, - int Cj, unsigned short Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, - int Bj, int Bx, int Cp, int Cj, int Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - int Bp, int Bj, unsigned int Bx, int Cp, - int Cj, unsigned int Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, long long Ax, - int Bp, int Bj, long long Bx, int Cp, int Cj, - long long Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, - int Bj, float Bx, int Cp, int Cj, float Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, - int Bj, double Bx, int Cp, int Cj, double Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, long double Ax, - int Bp, int Bj, long double Bx, int Cp, int Cj, - long double Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - csr_matmat_pass2(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bj, npy_clongdouble_wrapper Bx, - int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _csr.csr_matmat_pass2(*args) + return _csr.csr_matmat_pass2(*args) def csr_matvec(*args): + """ + csr_matvec(int n_row, int n_col, int Ap, int Aj, signed char Ax, + signed char Xx, signed char Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + unsigned char Xx, unsigned char Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, short Ax, short Xx, + short Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + unsigned short Xx, unsigned short Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, int Ax, int Xx, + int Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + unsigned int Xx, unsigned int Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, long long Ax, + long long Xx, long long Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + unsigned long long Xx, unsigned long long Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, float Ax, float Xx, + float Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, double Ax, double Xx, + double Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, long double Ax, + long double Xx, long double Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Xx, npy_cfloat_wrapper Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Xx, npy_cdouble_wrapper 
Yx) + csr_matvec(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Xx, npy_clongdouble_wrapper Yx) """ - csr_matvec(int n_row, int n_col, int Ap, int Aj, signed char Ax, - signed char Xx, signed char Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - unsigned char Xx, unsigned char Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, short Ax, short Xx, - short Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - unsigned short Xx, unsigned short Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, int Ax, int Xx, - int Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - unsigned int Xx, unsigned int Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, long long Ax, - long long Xx, long long Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - unsigned long long Xx, unsigned long long Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, float Ax, float Xx, - float Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, double Ax, double Xx, - double Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, long double Ax, - long double Xx, long double Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Xx, npy_cfloat_wrapper Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Xx, npy_cdouble_wrapper Yx) - csr_matvec(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Xx, npy_clongdouble_wrapper Yx) - """ - return _csr.csr_matvec(*args) + return _csr.csr_matvec(*args) def csr_matvecs(*args): + """ + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, signed char Ax, + signed char Xx, signed char Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned char Ax, + unsigned char Xx, unsigned char Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, short Ax, + short Xx, short Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned short Ax, + unsigned short Xx, unsigned short Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, int Ax, + int Xx, int Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned int Ax, + unsigned int Xx, unsigned int Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, long long Ax, + long long Xx, long long Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned long long Ax, + unsigned long long Xx, + unsigned long long Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, float Ax, + float Xx, float Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, double Ax, + double Xx, double Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, long double Ax, + long double Xx, long double Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, npy_cfloat_wrapper Ax, + npy_cfloat_wrapper Xx, + npy_cfloat_wrapper Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, npy_cdouble_wrapper Ax, + npy_cdouble_wrapper Xx, + npy_cdouble_wrapper Yx) + csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, npy_clongdouble_wrapper Ax, + npy_clongdouble_wrapper Xx, + npy_clongdouble_wrapper Yx) """ - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, signed char Ax, - signed char Xx, signed char Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned char Ax, - unsigned 
char Xx, unsigned char Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, short Ax, - short Xx, short Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned short Ax, - unsigned short Xx, unsigned short Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, int Ax, - int Xx, int Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned int Ax, - unsigned int Xx, unsigned int Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, long long Ax, - long long Xx, long long Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, unsigned long long Ax, - unsigned long long Xx, - unsigned long long Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, float Ax, - float Xx, float Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, double Ax, - double Xx, double Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, long double Ax, - long double Xx, long double Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, npy_cfloat_wrapper Ax, - npy_cfloat_wrapper Xx, - npy_cfloat_wrapper Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, npy_cdouble_wrapper Ax, - npy_cdouble_wrapper Xx, - npy_cdouble_wrapper Yx) - csr_matvecs(int n_row, int n_col, int n_vecs, int Ap, int Aj, npy_clongdouble_wrapper Ax, - npy_clongdouble_wrapper Xx, - npy_clongdouble_wrapper Yx) - """ - return _csr.csr_matvecs(*args) + return _csr.csr_matvecs(*args) def csr_elmul_csr(*args): + """ + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int Bp, int Bj, signed char Bx, int Cp, int Cj, + signed char Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int Bp, int Bj, unsigned char Bx, int Cp, + int Cj, unsigned char Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, + int Bj, short Bx, int Cp, int Cj, short Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int Bp, int Bj, unsigned short Bx, int Cp, + int Cj, unsigned short Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, + int Bj, int Bx, int Cp, int Cj, int Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int Bp, int Bj, unsigned int Bx, int Cp, + int Cj, unsigned int Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, + int Bp, int Bj, long long Bx, int Cp, int Cj, + long long Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, + int Bj, float Bx, int Cp, int Cj, float Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, + int Bj, double Bx, int Cp, int Cj, double Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, + int Bp, int Bj, long double Bx, int Cp, int Cj, + long double Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + int Bp, int Bj, npy_clongdouble_wrapper Bx, + int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - csr_elmul_csr(int n_row, int n_col, int Ap, 
int Aj, signed char Ax, - int Bp, int Bj, signed char Bx, int Cp, int Cj, - signed char Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - int Bp, int Bj, unsigned char Bx, int Cp, - int Cj, unsigned char Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, - int Bj, short Bx, int Cp, int Cj, short Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - int Bp, int Bj, unsigned short Bx, int Cp, - int Cj, unsigned short Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, - int Bj, int Bx, int Cp, int Cj, int Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - int Bp, int Bj, unsigned int Bx, int Cp, - int Cj, unsigned int Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, - int Bp, int Bj, long long Bx, int Cp, int Cj, - long long Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, - int Bj, float Bx, int Cp, int Cj, float Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, - int Bj, double Bx, int Cp, int Cj, double Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, - int Bp, int Bj, long double Bx, int Cp, int Cj, - long double Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - csr_elmul_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bj, npy_clongdouble_wrapper Bx, - int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _csr.csr_elmul_csr(*args) + return _csr.csr_elmul_csr(*args) def csr_eldiv_csr(*args): + """ + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int Bp, int Bj, signed char Bx, int Cp, int Cj, + signed char Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int Bp, int Bj, unsigned char Bx, int Cp, + int Cj, unsigned char Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, + int Bj, short Bx, int Cp, int Cj, short Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int Bp, int Bj, unsigned short Bx, int Cp, + int Cj, unsigned short Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, + int Bj, int Bx, int Cp, int Cj, int Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int Bp, int Bj, unsigned int Bx, int Cp, + int Cj, unsigned int Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, + int Bp, int Bj, long long Bx, int Cp, int Cj, + long long Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, + int Bj, float Bx, int Cp, int Cj, float Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, + int Bj, double Bx, int Cp, int Cj, double Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, + int Bp, int Bj, long double Bx, int Cp, int Cj, + long double Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, 
npy_cfloat_wrapper Ax, + int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + int Bp, int Bj, npy_clongdouble_wrapper Bx, + int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, signed char Ax, - int Bp, int Bj, signed char Bx, int Cp, int Cj, - signed char Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - int Bp, int Bj, unsigned char Bx, int Cp, - int Cj, unsigned char Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, - int Bj, short Bx, int Cp, int Cj, short Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - int Bp, int Bj, unsigned short Bx, int Cp, - int Cj, unsigned short Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, - int Bj, int Bx, int Cp, int Cj, int Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - int Bp, int Bj, unsigned int Bx, int Cp, - int Cj, unsigned int Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, - int Bp, int Bj, long long Bx, int Cp, int Cj, - long long Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, - int Bj, float Bx, int Cp, int Cj, float Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, - int Bj, double Bx, int Cp, int Cj, double Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, - int Bp, int Bj, long double Bx, int Cp, int Cj, - long double Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - csr_eldiv_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bj, npy_clongdouble_wrapper Bx, - int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _csr.csr_eldiv_csr(*args) + return _csr.csr_eldiv_csr(*args) def csr_plus_csr(*args): + """ + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int Bp, int Bj, signed char Bx, int Cp, int Cj, + signed char Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int Bp, int Bj, unsigned char Bx, int Cp, + int Cj, unsigned char Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, + int Bj, short Bx, int Cp, int Cj, short Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int Bp, int Bj, unsigned short Bx, int Cp, + int Cj, unsigned short Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, + int Bj, int Bx, int Cp, int Cj, int Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int Bp, int Bj, unsigned int Bx, int Cp, + int Cj, unsigned int Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, + int Bp, int Bj, long long Bx, int Cp, int Cj, + long long Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int Bp, int Bj, unsigned long long 
Bx, + int Cp, int Cj, unsigned long long Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, + int Bj, float Bx, int Cp, int Cj, float Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, + int Bj, double Bx, int Cp, int Cj, double Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, + int Bp, int Bj, long double Bx, int Cp, int Cj, + long double Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + csr_plus_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + int Bp, int Bj, npy_clongdouble_wrapper Bx, + int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, signed char Ax, - int Bp, int Bj, signed char Bx, int Cp, int Cj, - signed char Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - int Bp, int Bj, unsigned char Bx, int Cp, - int Cj, unsigned char Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, - int Bj, short Bx, int Cp, int Cj, short Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - int Bp, int Bj, unsigned short Bx, int Cp, - int Cj, unsigned short Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, - int Bj, int Bx, int Cp, int Cj, int Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - int Bp, int Bj, unsigned int Bx, int Cp, - int Cj, unsigned int Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, - int Bp, int Bj, long long Bx, int Cp, int Cj, - long long Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, - int Bj, float Bx, int Cp, int Cj, float Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, - int Bj, double Bx, int Cp, int Cj, double Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, - int Bp, int Bj, long double Bx, int Cp, int Cj, - long double Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - csr_plus_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bj, npy_clongdouble_wrapper Bx, - int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _csr.csr_plus_csr(*args) + return _csr.csr_plus_csr(*args) def csr_minus_csr(*args): + """ + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int Bp, int Bj, signed char Bx, int Cp, int Cj, + signed char Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int Bp, int Bj, unsigned char Bx, int Cp, + int Cj, unsigned char Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, + int Bj, short Bx, int Cp, int Cj, short Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int Bp, int Bj, unsigned short Bx, int Cp, + int Cj, unsigned short Cx) + csr_minus_csr(int n_row, int n_col, 
int Ap, int Aj, int Ax, int Bp, + int Bj, int Bx, int Cp, int Cj, int Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int Bp, int Bj, unsigned int Bx, int Cp, + int Cj, unsigned int Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, + int Bp, int Bj, long long Bx, int Cp, int Cj, + long long Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int Bp, int Bj, unsigned long long Bx, + int Cp, int Cj, unsigned long long Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, + int Bj, float Bx, int Cp, int Cj, float Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, + int Bj, double Bx, int Cp, int Cj, double Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, + int Bp, int Bj, long double Bx, int Cp, int Cj, + long double Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + int Bp, int Bj, npy_cfloat_wrapper Bx, + int Cp, int Cj, npy_cfloat_wrapper Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + int Bp, int Bj, npy_cdouble_wrapper Bx, + int Cp, int Cj, npy_cdouble_wrapper Cx) + csr_minus_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + int Bp, int Bj, npy_clongdouble_wrapper Bx, + int Cp, int Cj, npy_clongdouble_wrapper Cx) """ - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, signed char Ax, - int Bp, int Bj, signed char Bx, int Cp, int Cj, - signed char Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - int Bp, int Bj, unsigned char Bx, int Cp, - int Cj, unsigned char Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, short Ax, int Bp, - int Bj, short Bx, int Cp, int Cj, short Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - int Bp, int Bj, unsigned short Bx, int Cp, - int Cj, unsigned short Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, int Ax, int Bp, - int Bj, int Bx, int Cp, int Cj, int Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - int Bp, int Bj, unsigned int Bx, int Cp, - int Cj, unsigned int Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, long long Ax, - int Bp, int Bj, long long Bx, int Cp, int Cj, - long long Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - int Bp, int Bj, unsigned long long Bx, - int Cp, int Cj, unsigned long long Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, float Ax, int Bp, - int Bj, float Bx, int Cp, int Cj, float Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, double Ax, int Bp, - int Bj, double Bx, int Cp, int Cj, double Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, long double Ax, - int Bp, int Bj, long double Bx, int Cp, int Cj, - long double Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - int Bp, int Bj, npy_cfloat_wrapper Bx, - int Cp, int Cj, npy_cfloat_wrapper Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - int Bp, int Bj, npy_cdouble_wrapper Bx, - int Cp, int Cj, npy_cdouble_wrapper Cx) - csr_minus_csr(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - int Bp, int Bj, npy_clongdouble_wrapper Bx, - int Cp, int Cj, npy_clongdouble_wrapper Cx) - """ - return _csr.csr_minus_csr(*args) + return _csr.csr_minus_csr(*args) def csr_sort_indices(*args): + """ + csr_sort_indices(int n_row, int Ap, int Aj, signed char Ax) + csr_sort_indices(int n_row, int 
Ap, int Aj, unsigned char Ax) + csr_sort_indices(int n_row, int Ap, int Aj, short Ax) + csr_sort_indices(int n_row, int Ap, int Aj, unsigned short Ax) + csr_sort_indices(int n_row, int Ap, int Aj, int Ax) + csr_sort_indices(int n_row, int Ap, int Aj, unsigned int Ax) + csr_sort_indices(int n_row, int Ap, int Aj, long long Ax) + csr_sort_indices(int n_row, int Ap, int Aj, unsigned long long Ax) + csr_sort_indices(int n_row, int Ap, int Aj, float Ax) + csr_sort_indices(int n_row, int Ap, int Aj, double Ax) + csr_sort_indices(int n_row, int Ap, int Aj, long double Ax) + csr_sort_indices(int n_row, int Ap, int Aj, npy_cfloat_wrapper Ax) + csr_sort_indices(int n_row, int Ap, int Aj, npy_cdouble_wrapper Ax) + csr_sort_indices(int n_row, int Ap, int Aj, npy_clongdouble_wrapper Ax) """ - csr_sort_indices(int n_row, int Ap, int Aj, signed char Ax) - csr_sort_indices(int n_row, int Ap, int Aj, unsigned char Ax) - csr_sort_indices(int n_row, int Ap, int Aj, short Ax) - csr_sort_indices(int n_row, int Ap, int Aj, unsigned short Ax) - csr_sort_indices(int n_row, int Ap, int Aj, int Ax) - csr_sort_indices(int n_row, int Ap, int Aj, unsigned int Ax) - csr_sort_indices(int n_row, int Ap, int Aj, long long Ax) - csr_sort_indices(int n_row, int Ap, int Aj, unsigned long long Ax) - csr_sort_indices(int n_row, int Ap, int Aj, float Ax) - csr_sort_indices(int n_row, int Ap, int Aj, double Ax) - csr_sort_indices(int n_row, int Ap, int Aj, long double Ax) - csr_sort_indices(int n_row, int Ap, int Aj, npy_cfloat_wrapper Ax) - csr_sort_indices(int n_row, int Ap, int Aj, npy_cdouble_wrapper Ax) - csr_sort_indices(int n_row, int Ap, int Aj, npy_clongdouble_wrapper Ax) - """ - return _csr.csr_sort_indices(*args) + return _csr.csr_sort_indices(*args) def csr_eliminate_zeros(*args): + """ + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, signed char Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned char Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, short Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned short Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, int Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned int Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, long long Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, float Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, double Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, long double Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax) + csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax) """ - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, signed char Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned char Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, short Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned short Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, int Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned int Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, long long Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, float Ax) - 
csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, double Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, long double Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax) - csr_eliminate_zeros(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax) - """ - return _csr.csr_eliminate_zeros(*args) + return _csr.csr_eliminate_zeros(*args) def csr_sum_duplicates(*args): + """ + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, signed char Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned char Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, short Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned short Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, int Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned int Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, long long Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, float Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, double Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, long double Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax) + csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax) """ - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, signed char Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned char Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, short Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned short Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, int Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned int Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, long long Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, float Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, double Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, long double Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax) - csr_sum_duplicates(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax) - """ - return _csr.csr_sum_duplicates(*args) + return _csr.csr_sum_duplicates(*args) def get_csr_submatrix(*args): + """ + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(signed char)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(unsigned char)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, short Ax, int ir0, + int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(short)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(unsigned short)> Bx) + 
get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, int Ax, int ir0, + int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(int)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(unsigned int)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, long long Ax, + int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(long long)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int ir0, int ir1, int ic0, int ic1, + std::vector<(int)> Bp, std::vector<(int)> Bj, + std::vector<(unsigned long long)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, float Ax, int ir0, + int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(float)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, double Ax, int ir0, + int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(double)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, long double Ax, + int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, + std::vector<(int)> Bj, std::vector<(long double)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + int ir0, int ir1, int ic0, int ic1, + std::vector<(int)> Bp, std::vector<(int)> Bj, + std::vector<(npy_cfloat_wrapper)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, + int ir0, int ir1, int ic0, int ic1, + std::vector<(int)> Bp, std::vector<(int)> Bj, + std::vector<(npy_cdouble_wrapper)> Bx) + get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + int ir0, int ir1, int ic0, int ic1, + std::vector<(int)> Bp, std::vector<(int)> Bj, + std::vector<(npy_clongdouble_wrapper)> Bx) + """ + return _csr.get_csr_submatrix(*args) + +def csr_sample_values(*args): + """ + csr_sample_values(int n_row, int n_col, int Ap, int Aj, signed char Ax, + int n_samples, int Bi, int Bj, signed char Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, + int n_samples, int Bi, int Bj, unsigned char Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, short Ax, int n_samples, + int Bi, int Bj, short Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, + int n_samples, int Bi, int Bj, unsigned short Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, int Ax, int n_samples, + int Bi, int Bj, int Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, + int n_samples, int Bi, int Bj, unsigned int Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, long long Ax, + int n_samples, int Bi, int Bj, long long Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, + int n_samples, int Bi, int Bj, unsigned long long Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, float Ax, int n_samples, + int Bi, int Bj, float Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, double Ax, int n_samples, + int Bi, int Bj, double Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, long double Ax, + int n_samples, int Bi, int Bj, long double Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, + int n_samples, int Bi, int Bj, npy_cfloat_wrapper Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, 
npy_cdouble_wrapper Ax, + int n_samples, int Bi, int Bj, npy_cdouble_wrapper Bx) + csr_sample_values(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, + int n_samples, int Bi, int Bj, + npy_clongdouble_wrapper Bx) """ - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, signed char Ax, - int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(signed char)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, - int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(unsigned char)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, short Ax, int ir0, - int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(short)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, - int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(unsigned short)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, int Ax, int ir0, - int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(int)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, - int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(unsigned int)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, long long Ax, - int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(long long)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, - int ir0, int ir1, int ic0, int ic1, - std::vector<(int)> Bp, std::vector<(int)> Bj, - std::vector<(unsigned long long)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, float Ax, int ir0, - int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(float)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, double Ax, int ir0, - int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(double)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, long double Ax, - int ir0, int ir1, int ic0, int ic1, std::vector<(int)> Bp, - std::vector<(int)> Bj, std::vector<(long double)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, - int ir0, int ir1, int ic0, int ic1, - std::vector<(int)> Bp, std::vector<(int)> Bj, - std::vector<(npy_cfloat_wrapper)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, - int ir0, int ir1, int ic0, int ic1, - std::vector<(int)> Bp, std::vector<(int)> Bj, - std::vector<(npy_cdouble_wrapper)> Bx) - get_csr_submatrix(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, - int ir0, int ir1, int ic0, int ic1, - std::vector<(int)> Bp, std::vector<(int)> Bj, - std::vector<(npy_clongdouble_wrapper)> Bx) - """ - return _csr.get_csr_submatrix(*args) + return _csr.csr_sample_values(*args) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csr_wrap.cxx python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csr_wrap.cxx --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/csr_wrap.cxx 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/csr_wrap.cxx 2010-07-26 15:48:35.000000000 +0100 @@ -45437,6 +45437,3104 @@ } +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_1(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 
; + int *arg3 ; + int *arg4 ; + signed char *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + signed char *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_BYTE, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (signed char*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_BYTE); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (signed char*) array_data(temp9); + } + csr_sample_values< int,signed char >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(signed char const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && 
array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_2(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + unsigned char *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + unsigned char *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_UBYTE, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (unsigned char*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || 
!require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_UBYTE); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (unsigned char*) array_data(temp9); + } + csr_sample_values< int,unsigned char >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(unsigned char const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_3(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + short *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + short *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || 
!require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_SHORT, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (short*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_SHORT); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (short*) array_data(temp9); + } + csr_sample_values< int,short >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(short const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_4(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + unsigned short *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + unsigned short *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in 
method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_USHORT, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (unsigned short*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_USHORT); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (unsigned short*) array_data(temp9); + } + csr_sample_values< int,unsigned short >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(unsigned short const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_5(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + int *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + int *arg9 ; + int val1 ; + int ecode1 = 0 ; + 
int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_INT, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (int*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_INT); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (int*) array_data(temp9); + } + csr_sample_values< int,int >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(int const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if 
(is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_6(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + unsigned int *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + unsigned int *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_UINT, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (unsigned int*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = 
obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_UINT); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (unsigned int*) array_data(temp9); + } + csr_sample_values< int,unsigned int >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(unsigned int const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_7(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + long long *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + long long *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = 
obj_to_array_contiguous_allow_conversion(obj4, PyArray_LONGLONG, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (long long*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_LONGLONG); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (long long*) array_data(temp9); + } + csr_sample_values< int,long long >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(long long const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_8(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + unsigned long long *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + unsigned long long *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if 
(!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_ULONGLONG, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (unsigned long long*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_ULONGLONG); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (unsigned long long*) array_data(temp9); + } + csr_sample_values< int,unsigned long long >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(unsigned long long const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_9(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + float *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + float *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int 
is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_FLOAT, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (float*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_FLOAT); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (float*) array_data(temp9); + } + csr_sample_values< int,float >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(float const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + 
} + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_10(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + double *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + double *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_DOUBLE, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (double*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || 
!require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_DOUBLE); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (double*) array_data(temp9); + } + csr_sample_values< int,double >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(double const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_11(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + long double *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + long double *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_LONGDOUBLE, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || 
!require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (long double*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_LONGDOUBLE); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (long double*) array_data(temp9); + } + csr_sample_values< int,long double >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(long double const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_12(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + npy_cfloat_wrapper *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + npy_cfloat_wrapper *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of 
type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_CFLOAT, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (npy_cfloat_wrapper*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_CFLOAT); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (npy_cfloat_wrapper*) array_data(temp9); + } + csr_sample_values< int,npy_cfloat_wrapper >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(npy_cfloat_wrapper const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_13(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + npy_cdouble_wrapper *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + npy_cdouble_wrapper *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + 
PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_CDOUBLE, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (npy_cdouble_wrapper*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || !require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_CDOUBLE); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (npy_cdouble_wrapper*) array_data(temp9); + } + csr_sample_values< int,npy_cdouble_wrapper >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(npy_cdouble_wrapper const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if 
(is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values__SWIG_14(PyObject *SWIGUNUSEDPARM(self), PyObject *args) { + PyObject *resultobj = 0; + int arg1 ; + int arg2 ; + int *arg3 ; + int *arg4 ; + npy_clongdouble_wrapper *arg5 ; + int arg6 ; + int *arg7 ; + int *arg8 ; + npy_clongdouble_wrapper *arg9 ; + int val1 ; + int ecode1 = 0 ; + int val2 ; + int ecode2 = 0 ; + PyArrayObject *array3 = NULL ; + int is_new_object3 ; + PyArrayObject *array4 = NULL ; + int is_new_object4 ; + PyArrayObject *array5 = NULL ; + int is_new_object5 ; + int val6 ; + int ecode6 = 0 ; + PyArrayObject *array7 = NULL ; + int is_new_object7 ; + PyArrayObject *array8 = NULL ; + int is_new_object8 ; + PyArrayObject *temp9 = NULL ; + PyObject * obj0 = 0 ; + PyObject * obj1 = 0 ; + PyObject * obj2 = 0 ; + PyObject * obj3 = 0 ; + PyObject * obj4 = 0 ; + PyObject * obj5 = 0 ; + PyObject * obj6 = 0 ; + PyObject * obj7 = 0 ; + PyObject * obj8 = 0 ; + + if (!PyArg_ParseTuple(args,(char *)"OOOOOOOOO:csr_sample_values",&obj0,&obj1,&obj2,&obj3,&obj4,&obj5,&obj6,&obj7,&obj8)) SWIG_fail; + ecode1 = SWIG_AsVal_int(obj0, &val1); + if (!SWIG_IsOK(ecode1)) { + SWIG_exception_fail(SWIG_ArgError(ecode1), "in method '" "csr_sample_values" "', argument " "1"" of type '" "int""'"); + } + arg1 = static_cast< int >(val1); + ecode2 = SWIG_AsVal_int(obj1, &val2); + if (!SWIG_IsOK(ecode2)) { + SWIG_exception_fail(SWIG_ArgError(ecode2), "in method '" "csr_sample_values" "', argument " "2"" of type '" "int""'"); + } + arg2 = static_cast< int >(val2); + { + npy_intp size[1] = { + -1 + }; + array3 = obj_to_array_contiguous_allow_conversion(obj2, PyArray_INT, &is_new_object3); + if (!array3 || !require_dimensions(array3,1) || !require_size(array3,size,1) + || !require_contiguous(array3) || !require_native(array3)) SWIG_fail; + + arg3 = (int*) array3->data; + } + { + npy_intp size[1] = { + -1 + }; + array4 = obj_to_array_contiguous_allow_conversion(obj3, PyArray_INT, &is_new_object4); + if (!array4 || !require_dimensions(array4,1) || !require_size(array4,size,1) + || !require_contiguous(array4) || !require_native(array4)) SWIG_fail; + + arg4 = (int*) array4->data; + } + { + npy_intp size[1] = { + -1 + }; + array5 = obj_to_array_contiguous_allow_conversion(obj4, PyArray_CLONGDOUBLE, &is_new_object5); + if (!array5 || !require_dimensions(array5,1) || !require_size(array5,size,1) + || !require_contiguous(array5) || !require_native(array5)) SWIG_fail; + + arg5 = (npy_clongdouble_wrapper*) array5->data; + } + ecode6 = SWIG_AsVal_int(obj5, &val6); + if (!SWIG_IsOK(ecode6)) { + SWIG_exception_fail(SWIG_ArgError(ecode6), "in method '" "csr_sample_values" "', argument " "6"" of type '" "int""'"); + } + arg6 = static_cast< int >(val6); + { + npy_intp size[1] = { + -1 + }; + array7 = obj_to_array_contiguous_allow_conversion(obj6, PyArray_INT, &is_new_object7); + if (!array7 || !require_dimensions(array7,1) || !require_size(array7,size,1) + || !require_contiguous(array7) || !require_native(array7)) SWIG_fail; + + arg7 = (int*) array7->data; + } + { + npy_intp size[1] = { + -1 + }; + array8 = obj_to_array_contiguous_allow_conversion(obj7, PyArray_INT, &is_new_object8); + if (!array8 || !require_dimensions(array8,1) || 
!require_size(array8,size,1) + || !require_contiguous(array8) || !require_native(array8)) SWIG_fail; + + arg8 = (int*) array8->data; + } + { + temp9 = obj_to_array_no_conversion(obj8,PyArray_CLONGDOUBLE); + if (!temp9 || !require_contiguous(temp9) || !require_native(temp9)) SWIG_fail; + arg9 = (npy_clongdouble_wrapper*) array_data(temp9); + } + csr_sample_values< int,npy_clongdouble_wrapper >(arg1,arg2,(int const (*))arg3,(int const (*))arg4,(npy_clongdouble_wrapper const (*))arg5,arg6,(int const (*))arg7,(int const (*))arg8,arg9); + resultobj = SWIG_Py_Void(); + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return resultobj; +fail: + { + if (is_new_object3 && array3) { + Py_DECREF(array3); + } + } + { + if (is_new_object4 && array4) { + Py_DECREF(array4); + } + } + { + if (is_new_object5 && array5) { + Py_DECREF(array5); + } + } + { + if (is_new_object7 && array7) { + Py_DECREF(array7); + } + } + { + if (is_new_object8 && array8) { + Py_DECREF(array8); + } + } + return NULL; +} + + +SWIGINTERN PyObject *_wrap_csr_sample_values(PyObject *self, PyObject *args) { + int argc; + PyObject *argv[10]; + int ii; + + if (!PyTuple_Check(args)) SWIG_fail; + argc = (int)PyObject_Length(args); + for (ii = 0; (ii < argc) && (ii < 9); ii++) { + argv[ii] = PyTuple_GET_ITEM(args,ii); + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_BYTE)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_BYTE)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_1(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_UBYTE)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 
1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_UBYTE)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_2(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_SHORT)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_SHORT)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_3(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_USHORT)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_USHORT)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_4(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_INT)) ? 
1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_5(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_UINT)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_UINT)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_6(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_LONGLONG)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_LONGLONG)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_7(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_ULONGLONG)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_ULONGLONG)) ? 
1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_8(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_FLOAT)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_FLOAT)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_9(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_DOUBLE)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_DOUBLE)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_10(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_LONGDOUBLE)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_LONGDOUBLE)) ? 
1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_11(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_CFLOAT)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_CFLOAT)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_12(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_CDOUBLE)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_CDOUBLE)) ? 1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_13(self, args); + } + } + } + } + } + } + } + } + } + } + if (argc == 9) { + int _v; + { + int res = SWIG_AsVal_int(argv[0], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[1], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[2]) && PyArray_CanCastSafely(PyArray_TYPE(argv[2]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[3]) && PyArray_CanCastSafely(PyArray_TYPE(argv[3]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[4]) && PyArray_CanCastSafely(PyArray_TYPE(argv[4]),PyArray_CLONGDOUBLE)) ? 1 : 0; + } + if (_v) { + { + int res = SWIG_AsVal_int(argv[5], NULL); + _v = SWIG_CheckState(res); + } + if (_v) { + { + _v = (is_array(argv[6]) && PyArray_CanCastSafely(PyArray_TYPE(argv[6]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[7]) && PyArray_CanCastSafely(PyArray_TYPE(argv[7]),PyArray_INT)) ? 1 : 0; + } + if (_v) { + { + _v = (is_array(argv[8]) && PyArray_CanCastSafely(PyArray_TYPE(argv[8]),PyArray_CLONGDOUBLE)) ? 
1 : 0; + } + if (_v) { + return _wrap_csr_sample_values__SWIG_14(self, args); + } + } + } + } + } + } + } + } + } + } + +fail: + SWIG_SetErrorMsg(PyExc_NotImplementedError,"Wrong number of arguments for overloaded function 'csr_sample_values'.\n" + " Possible C/C++ prototypes are:\n" + " csr_sample_values< int,signed char >(int const,int const,int const [],int const [],signed char const [],int const,int const [],int const [],signed char [])\n" + " csr_sample_values< int,unsigned char >(int const,int const,int const [],int const [],unsigned char const [],int const,int const [],int const [],unsigned char [])\n" + " csr_sample_values< int,short >(int const,int const,int const [],int const [],short const [],int const,int const [],int const [],short [])\n" + " csr_sample_values< int,unsigned short >(int const,int const,int const [],int const [],unsigned short const [],int const,int const [],int const [],unsigned short [])\n" + " csr_sample_values< int,int >(int const,int const,int const [],int const [],int const [],int const,int const [],int const [],int [])\n" + " csr_sample_values< int,unsigned int >(int const,int const,int const [],int const [],unsigned int const [],int const,int const [],int const [],unsigned int [])\n" + " csr_sample_values< int,long long >(int const,int const,int const [],int const [],long long const [],int const,int const [],int const [],long long [])\n" + " csr_sample_values< int,unsigned long long >(int const,int const,int const [],int const [],unsigned long long const [],int const,int const [],int const [],unsigned long long [])\n" + " csr_sample_values< int,float >(int const,int const,int const [],int const [],float const [],int const,int const [],int const [],float [])\n" + " csr_sample_values< int,double >(int const,int const,int const [],int const [],double const [],int const,int const [],int const [],double [])\n" + " csr_sample_values< int,long double >(int const,int const,int const [],int const [],long double const [],int const,int const [],int const [],long double [])\n" + " csr_sample_values< int,npy_cfloat_wrapper >(int const,int const,int const [],int const [],npy_cfloat_wrapper const [],int const,int const [],int const [],npy_cfloat_wrapper [])\n" + " csr_sample_values< int,npy_cdouble_wrapper >(int const,int const,int const [],int const [],npy_cdouble_wrapper const [],int const,int const [],int const [],npy_cdouble_wrapper [])\n" + " csr_sample_values< int,npy_clongdouble_wrapper >(int const,int const,int const [],int const [],npy_clongdouble_wrapper const [],int const,int const [],int const [],npy_clongdouble_wrapper [])\n"); + return NULL; +} + + static PyMethodDef SwigMethods[] = { { (char *)"expandptr", _wrap_expandptr, METH_VARARGS, (char *)"expandptr(int n_row, int Ap, int Bi)"}, { (char *)"csr_matmat_pass1", _wrap_csr_matmat_pass1, METH_VARARGS, (char *)"\n" @@ -45944,6 +49042,37 @@ " std::vector<(int)> Bp, std::vector<(int)> Bj, \n" " std::vector<(npy_clongdouble_wrapper)> Bx)\n" ""}, + { (char *)"csr_sample_values", _wrap_csr_sample_values, METH_VARARGS, (char *)"\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, signed char Ax, \n" + " int n_samples, int Bi, int Bj, signed char Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned char Ax, \n" + " int n_samples, int Bi, int Bj, unsigned char Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, short Ax, int n_samples, \n" + " int Bi, int Bj, short Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned short Ax, \n" + " int 
n_samples, int Bi, int Bj, unsigned short Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, int Ax, int n_samples, \n" + " int Bi, int Bj, int Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned int Ax, \n" + " int n_samples, int Bi, int Bj, unsigned int Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, long long Ax, \n" + " int n_samples, int Bi, int Bj, long long Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, unsigned long long Ax, \n" + " int n_samples, int Bi, int Bj, unsigned long long Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, float Ax, int n_samples, \n" + " int Bi, int Bj, float Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, double Ax, int n_samples, \n" + " int Bi, int Bj, double Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, long double Ax, \n" + " int n_samples, int Bi, int Bj, long double Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, npy_cfloat_wrapper Ax, \n" + " int n_samples, int Bi, int Bj, npy_cfloat_wrapper Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, npy_cdouble_wrapper Ax, \n" + " int n_samples, int Bi, int Bj, npy_cdouble_wrapper Bx)\n" + "csr_sample_values(int n_row, int n_col, int Ap, int Aj, npy_clongdouble_wrapper Ax, \n" + " int n_samples, int Bi, int Bj, \n" + " npy_clongdouble_wrapper Bx)\n" + ""}, { NULL, NULL, 0, NULL } }; diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/dia.py python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/dia.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/dia.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/dia.py 2010-07-26 15:48:35.000000000 +0100 @@ -51,39 +51,40 @@ def dia_matvec(*args): + """ + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + signed char diags, signed char Xx, signed char Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + unsigned char diags, unsigned char Xx, unsigned char Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + short diags, short Xx, short Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + unsigned short diags, unsigned short Xx, + unsigned short Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + int diags, int Xx, int Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + unsigned int diags, unsigned int Xx, unsigned int Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + long long diags, long long Xx, long long Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + unsigned long long diags, unsigned long long Xx, + unsigned long long Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + float diags, float Xx, float Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + double diags, double Xx, double Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + long double diags, long double Xx, long double Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + npy_cfloat_wrapper diags, npy_cfloat_wrapper Xx, + npy_cfloat_wrapper Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + npy_cdouble_wrapper diags, npy_cdouble_wrapper Xx, + npy_cdouble_wrapper Yx) + dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, + npy_clongdouble_wrapper diags, npy_clongdouble_wrapper Xx, + npy_clongdouble_wrapper 
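The fourteen csr_sample_values overloads generated above all perform the same operation for different value types: given a CSR matrix (Ap, Aj, Ax) of shape n_row x n_col and n_samples coordinate pairs (Bi, Bj), they store A[Bi[k], Bj[k]] in Bx[k]. As a reading aid, here is a minimal pure-Python sketch of those semantics, assuming negative indices wrap around and duplicate entries at the same coordinate are summed; the shipped C++ template also optimises the per-row lookup, which is omitted here.

# Pure-Python sketch of the csr_sample_values semantics (illustration only,
# not the generated wrapper): Bx[k] receives A[Bi[k], Bj[k]].
def csr_sample_values_py(n_row, n_col, Ap, Aj, Ax, n_samples, Bi, Bj, Bx):
    for k in range(n_samples):
        i = Bi[k] + n_row if Bi[k] < 0 else Bi[k]   # sample row (wrap negatives)
        j = Bj[k] + n_col if Bj[k] < 0 else Bj[k]   # sample column
        total = 0
        for jj in range(Ap[i], Ap[i + 1]):          # scan the stored entries of row i
            if Aj[jj] == j:
                total += Ax[jj]                     # sum duplicates, if any
        Bx[k] = total

The test_fancy_indexing_randomized test added further down in this diff exercises exactly this kind of coordinate-wise lookup through S[I, J].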
Yx) """ - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - signed char diags, signed char Xx, signed char Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - unsigned char diags, unsigned char Xx, unsigned char Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - short diags, short Xx, short Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - unsigned short diags, unsigned short Xx, - unsigned short Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - int diags, int Xx, int Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - unsigned int diags, unsigned int Xx, unsigned int Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - long long diags, long long Xx, long long Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - unsigned long long diags, unsigned long long Xx, - unsigned long long Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - float diags, float Xx, float Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - double diags, double Xx, double Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - long double diags, long double Xx, long double Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - npy_cfloat_wrapper diags, npy_cfloat_wrapper Xx, - npy_cfloat_wrapper Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - npy_cdouble_wrapper diags, npy_cdouble_wrapper Xx, - npy_cdouble_wrapper Yx) - dia_matvec(int n_row, int n_col, int n_diags, int L, int offsets, - npy_clongdouble_wrapper diags, npy_clongdouble_wrapper Xx, - npy_clongdouble_wrapper Yx) - """ - return _dia.dia_matvec(*args) + return _dia.dia_matvec(*args) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/scratch.h python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/scratch.h --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/scratch.h 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/scratch.h 2010-07-26 15:48:35.000000000 +0100 @@ -329,3 +329,63 @@ } } + +/* + * Pass 1 computes CSR row pointer for the matrix product C = A * B + * + */ +template +void csr_matmat_pass1(const I n_row, + const I n_col, + const I Ap[], + const I Aj[], + const I Bp[], + const I Bj[], + I Cp[]) +{ + // method that uses O(1) temp storage + const I hash_size = 1 << 5; + I vals[hash_size]; + I mask[hash_size]; + + std::set spill; + + for(I i = 0; i < hash_size; i++){ + vals[i] = -1; + mask[i] = -1; + } + + Cp[0] = 0; + + I slow_inserts = 0; + I total_inserts = 0; + I nnz = 0; + for(I i = 0; i < n_row; i++){ + spill.clear(); + for(I jj = Ap[i]; jj < Ap[i+1]; jj++){ + I j = Aj[jj]; + for(I kk = Bp[j]; kk < Bp[j+1]; kk++){ + I k = Bj[kk]; + // I hash = k & (hash_size - 1); + I hash = ((I)2654435761 * k) & (hash_size -1 ); + total_inserts++; + if(mask[hash] != i){ + mask[hash] = i; + vals[hash] = k; + nnz++; + } else { + if (vals[hash] != k){ + slow_inserts++; + spill.insert(k); + } + } + } + } + nnz += spill.size(); + Cp[i+1] = nnz; + } + + std::cout << "slow fraction " << ((float) slow_inserts)/ ((float) total_inserts) << std::endl; +} + + diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/setup.py python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/setup.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/sparsetools/setup.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/sparsetools/setup.py 
2010-07-26 15:48:35.000000000 +0100 @@ -8,7 +8,8 @@ for fmt in ['csr','csc','coo','bsr','dia']: sources = [ fmt + '_wrap.cxx' ] - config.add_extension('_' + fmt, sources=sources) + depends = [ fmt + '.h' ] + config.add_extension('_' + fmt, sources=sources, depends=depends) return config diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/tests/test_base.py python-scipy-0.8.0+dfsg1/scipy/sparse/tests/test_base.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/tests/test_base.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/tests/test_base.py 2010-07-26 15:48:35.000000000 +0100 @@ -550,8 +550,18 @@ assert_equal(toself(copy=False).todense(), A.todense()) - #TODO how can we check whether the data is copied? - pass + # check whether the data is copied? + # TODO: deal with non-indexable types somehow + B = A.copy() + try: + B[0,0] += 1 + assert B[0,0]!=A[0,0] + except NotImplementedError: + # not all sparse matrices can be indexed + pass + except TypeError: + # not all sparse matrices can be indexed + pass # Eventually we'd like to allow matrix products between dense # and sparse matrices using the normal dot() function: @@ -592,6 +602,7 @@ class _TestGetSet: def test_setelement(self): A = self.spmatrix((3,4)) + A[ 0, 0] = 0 # bug 870 A[ 1, 2] = 4.0 A[ 0, 1] = 3 A[ 2, 0] = 2.0 @@ -738,6 +749,23 @@ """Tests fancy indexing features. The tests for any matrix formats that implement these features should derive from this class. """ + def test_fancy_indexing_set(self): + n, m = (5, 10) + def _test_set(i, j, nitems): + A = self.spmatrix((n, m)) + A[i, j] = 1 + assert_almost_equal(A.sum(), nitems) + assert_almost_equal(A[i, j], 1) + + # [i,j] + for i, j in [(2, 3), (-1, 8), (-1, -2), (array(-1), -2), (-1, array(-2)), + (array(-1), array(-2))]: + _test_set(i, j, 1) + + # [i,1:2] + for i, j in [(2, slice(m)), (2, slice(5, -2)), (array(2), slice(5, -2))]: + _test_set(i, j, 3) + def test_fancy_indexing(self): B = asmatrix(arange(50).reshape(5,10)) A = self.spmatrix( B ) @@ -837,6 +865,30 @@ s = slice(int8(2),int8(4),None) assert_equal(A[s,:].todense(), B[2:4,:]) assert_equal(A[:,s].todense(), B[:,2:4]) + + def test_fancy_indexing_randomized(self): + random.seed(0) # make runs repeatable + + NUM_SAMPLES = 50 + M = 6 + N = 4 + + D = np.asmatrix(np.random.rand(M,N)) + D = np.multiply(D, D > 0.5) + + I = np.random.random_integers(-M + 1, M - 1, size=NUM_SAMPLES) + J = np.random.random_integers(-N + 1, N - 1, size=NUM_SAMPLES) + + S = self.spmatrix(D) + + assert_equal(S[I,J], D[I,J]) + + I_bad = I + M + J_bad = J - N + + assert_raises(IndexError, S.__getitem__, (I_bad,J)) + assert_raises(IndexError, S.__getitem__, (I,J_bad)) + class _TestArithmetic: """ @@ -925,6 +977,11 @@ _TestFancyIndexing, TestCase): spmatrix = csr_matrix + @dec.knownfailureif(True, "Fancy indexing is known to be broken for CSR" \ + " matrices") + def test_fancy_indexing_set(self): + _TestFancyIndexing.test_fancy_indexing_set(self) + def test_constructor1(self): b = matrix([[0,4,0], [3,0,0], @@ -992,7 +1049,6 @@ csr = csr_matrix((data, indices, indptr)) assert_array_equal(csr.shape,(3,6)) - def test_sort_indices(self): data = arange( 5 ) indices = array( [7, 2, 1, 5, 4] ) @@ -1014,6 +1070,18 @@ assert_array_equal(asp.data,[1, 2, 3]) assert_array_equal(asp.todense(),bsp.todense()) + def test_unsorted_arithmetic(self): + data = arange( 5 ) + indices = array( [7, 2, 1, 5, 4] ) + indptr = array( [0, 3, 5] ) + asp = csr_matrix( (data, indices, indptr), shape=(2,10) ) + data = arange( 6 ) + indices = array( [8, 1, 5, 7, 2, 
4] ) + indptr = array( [0, 2, 6] ) + bsp = csr_matrix( (data, indices, indptr), shape=(2,10) ) + assert_equal((asp + bsp).todense(), asp.todense() + bsp.todense()) + + class TestCSC(_TestCommon, _TestGetSet, _TestSolve, @@ -1022,6 +1090,11 @@ _TestFancyIndexing, TestCase): spmatrix = csc_matrix + @dec.knownfailureif(True, "Fancy indexing is known to be broken for CSC" \ + " matrices") + def test_fancy_indexing_set(self): + _TestFancyIndexing.test_fancy_indexing_set(self) + def test_constructor1(self): b = matrix([[1,0,0,0],[0,0,1,0],[0,2,0,3]],'d') bsp = csc_matrix(b) @@ -1087,6 +1160,16 @@ assert_array_equal(asp.indices,[1, 2, 7, 4, 5]) assert_array_equal(asp.todense(),bsp.todense()) + def test_unsorted_arithmetic(self): + data = arange( 5 ) + indices = array( [7, 2, 1, 5, 4] ) + indptr = array( [0, 3, 5] ) + asp = csc_matrix( (data, indices, indptr), shape=(10,2) ) + data = arange( 6 ) + indices = array( [8, 1, 5, 7, 2, 4] ) + indptr = array( [0, 2, 6] ) + bsp = csc_matrix( (data, indices, indptr), shape=(10,2) ) + assert_equal((asp + bsp).todense(), asp.todense() + bsp.todense()) class TestDOK(_TestCommon, _TestGetSet, _TestSolve, TestCase): spmatrix = dok_matrix @@ -1216,7 +1299,7 @@ class TestLIL( _TestCommon, _TestHorizSlicing, _TestVertSlicing, _TestBothSlicing, _TestGetSet, _TestSolve, - _TestArithmetic, _TestInplaceArithmetic, + _TestArithmetic, _TestInplaceArithmetic, _TestFancyIndexing, TestCase): spmatrix = lil_matrix @@ -1226,6 +1309,17 @@ B[2,1] = 3 B[3,0] = 10 + + @dec.knownfailureif(True, "Fancy indexing is known to be broken for LIL" \ + " matrices") + def test_fancy_indexing_set(self): + _TestFancyIndexing.test_fancy_indexing_set(self) + + @dec.knownfailureif(True, "Fancy indexing is known to be broken for LIL" \ + " matrices") + def test_fancy_indexing_randomized(self): + _TestFancyIndexing.test_fancy_indexing_randomized(self) + def test_dot(self): A = matrix(zeros((10,10))) A[0,3] = 10 @@ -1301,7 +1395,7 @@ B[:2,:2] = csc_matrix(array(block)) assert_array_equal(B.todense()[:2,:2],block) - def test_lil_sequence_assignement(self): + def test_lil_sequence_assignment(self): A = lil_matrix((4,3)) B = eye(3,4,format='lil') @@ -1314,6 +1408,16 @@ A[2,i2] = B[i2,2] assert_array_equal(A.todense(),B.T.todense()) + # column slice + A = lil_matrix((2,3)) + A[1,1:3] = [10,20] + assert_array_equal(A.todense(), [[0,0,0],[0,10,20]]) + + # column slice + A = lil_matrix((3,2)) + A[1:3,1] = [[10],[20]] + assert_array_equal(A.todense(), [[0,0],[0,10],[0,20]]) + def test_lil_iteration(self): row_data = [[1,2,3],[4,5,6]] B = lil_matrix(array(row_data)) diff -Nru python-scipy-0.7.2+dfsg1/scipy/sparse/tests/test_construct.py python-scipy-0.8.0+dfsg1/scipy/sparse/tests/test_construct.py --- python-scipy-0.7.2+dfsg1/scipy/sparse/tests/test_construct.py 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/sparse/tests/test_construct.py 2010-07-26 15:48:35.000000000 +0100 @@ -8,6 +8,7 @@ from scipy.sparse import csr_matrix, coo_matrix from scipy.sparse.construct import * +from scipy.sparse.construct import rand as sprand sparse_formats = ['csr','csc','coo','bsr','dia','lil','dok'] @@ -204,5 +205,23 @@ [4,0,0], [6,5,0]]) + def test_rand(self): + # Simple sanity checks for sparse.rand + for t in [np.float32, np.float64, np.longdouble]: + x = sprand(5, 10, density=0.1, dtype=t) + assert_equal(x.dtype, t) + assert_equal(x.shape, (5, 10)) + assert_equal(x.nonzero()[0].size, 5) + + x = sprand(5, 10, density=0.1) + assert_equal(x.dtype, np.double) + + for fmt in ['coo', 'csc', 'csr', 
'lil']: + x = sprand(5, 10, format=fmt) + assert_equal(x.format, fmt) + + assert_raises(ValueError, lambda: sprand(5, 10, 1.1)) + assert_raises(ValueError, lambda: sprand(5, 10, -0.1)) + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/spatial/ckdtree.c python-scipy-0.8.0+dfsg1/scipy/spatial/ckdtree.c --- python-scipy-0.7.2+dfsg1/scipy/spatial/ckdtree.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/spatial/ckdtree.c 2010-07-26 15:48:35.000000000 +0100 @@ -1,4 +1,4 @@ -/* Generated by Cython 0.12.1 on Mon Feb 8 18:11:01 2010 */ +/* Generated by Cython 0.12.1 on Wed Jun 16 17:42:36 2010 */ #define PY_SSIZE_T_CLEAN #include "Python.h" @@ -375,7 +375,7 @@ typedef npy_cdouble __pyx_t_5numpy_complex_t; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":15 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":15 * * # priority queue * cdef union heapcontents: # <<<<<<<<<<<<<< @@ -388,7 +388,7 @@ char *ptrdata; }; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":19 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":19 * char* ptrdata * * cdef struct heapitem: # <<<<<<<<<<<<<< @@ -401,7 +401,7 @@ union __pyx_t_5scipy_7spatial_7ckdtree_heapcontents contents; }; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":23 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":23 * heapcontents contents * * cdef struct heap: # <<<<<<<<<<<<<< @@ -415,7 +415,7 @@ int space; }; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":139 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":139 * * # Tree structure * cdef struct innernode: # <<<<<<<<<<<<<< @@ -431,7 +431,7 @@ struct __pyx_t_5scipy_7spatial_7ckdtree_innernode *greater; }; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":145 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":145 * innernode* less * innernode* greater * cdef struct leafnode: # <<<<<<<<<<<<<< @@ -446,7 +446,7 @@ int end_idx; }; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":153 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":153 * # this is the standard trick for variable-size arrays: * # malloc sizeof(nodeinfo)+self.m*sizeof(double) bytes. 
* cdef struct nodeinfo: # <<<<<<<<<<<<<< @@ -459,7 +459,7 @@ double side_distances[0]; }; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":157 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":157 * double side_distances[0] * * cdef class cKDTree: # <<<<<<<<<<<<<< @@ -1053,7 +1053,7 @@ static PyObject *__pyx_int_15; static double __pyx_k_4; -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":28 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":28 * int space * * cdef inline heapcreate(heap* self,int initial_size): # <<<<<<<<<<<<<< @@ -1065,7 +1065,7 @@ PyObject *__pyx_r = NULL; __Pyx_RefNannySetupContext("heapcreate"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":29 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":29 * * cdef inline heapcreate(heap* self,int initial_size): * self.space = initial_size # <<<<<<<<<<<<<< @@ -1074,7 +1074,7 @@ */ __pyx_v_self->space = __pyx_v_initial_size; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":30 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":30 * cdef inline heapcreate(heap* self,int initial_size): * self.space = initial_size * self.heap = stdlib.malloc(sizeof(heapitem)*self.space) # <<<<<<<<<<<<<< @@ -1083,7 +1083,7 @@ */ __pyx_v_self->heap = ((struct __pyx_t_5scipy_7spatial_7ckdtree_heapitem *)malloc(((sizeof(struct __pyx_t_5scipy_7spatial_7ckdtree_heapitem)) * __pyx_v_self->space))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":31 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":31 * self.space = initial_size * self.heap = stdlib.malloc(sizeof(heapitem)*self.space) * self.n=0 # <<<<<<<<<<<<<< @@ -1098,7 +1098,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":33 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":33 * self.n=0 * * cdef inline heapdestroy(heap* self): # <<<<<<<<<<<<<< @@ -1110,7 +1110,7 @@ PyObject *__pyx_r = NULL; __Pyx_RefNannySetupContext("heapdestroy"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":34 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":34 * * cdef inline heapdestroy(heap* self): * stdlib.free(self.heap) # <<<<<<<<<<<<<< @@ -1125,7 +1125,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":36 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":36 * stdlib.free(self.heap) * * cdef inline heapresize(heap* self, int new_space): # <<<<<<<<<<<<<< @@ -1141,7 +1141,7 @@ PyObject *__pyx_t_4 = NULL; __Pyx_RefNannySetupContext("heapresize"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":37 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":37 * * cdef inline heapresize(heap* self, int new_space): * if new_spacen); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":38 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":38 * cdef inline heapresize(heap* self, int new_space): * if new_spacespace = __pyx_v_new_space; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":40 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":40 * raise ValueError("Heap containing %d items cannot be resized to %d" % (self.n, new_space)) * self.space = new_space * self.heap = stdlib.realloc(self.heap,new_space*sizeof(heapitem)) # <<<<<<<<<<<<<< @@ -1220,7 
+1220,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":42 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":42 * self.heap = stdlib.realloc(self.heap,new_space*sizeof(heapitem)) * * cdef inline heappush(heap* self, heapitem item): # <<<<<<<<<<<<<< @@ -1238,7 +1238,7 @@ int __pyx_t_4; __Pyx_RefNannySetupContext("heappush"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":46 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":46 * cdef heapitem t * * self.n += 1 # <<<<<<<<<<<<<< @@ -1247,7 +1247,7 @@ */ __pyx_v_self->n += 1; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":47 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":47 * * self.n += 1 * if self.n>self.space: # <<<<<<<<<<<<<< @@ -1257,7 +1257,7 @@ __pyx_t_1 = (__pyx_v_self->n > __pyx_v_self->space); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":48 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":48 * self.n += 1 * if self.n>self.space: * heapresize(self,2*self.space+1) # <<<<<<<<<<<<<< @@ -1271,7 +1271,7 @@ } __pyx_L3:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":50 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":50 * heapresize(self,2*self.space+1) * * i = self.n-1 # <<<<<<<<<<<<<< @@ -1280,7 +1280,7 @@ */ __pyx_v_i = (__pyx_v_self->n - 1); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":51 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":51 * * i = self.n-1 * self.heap[i] = item # <<<<<<<<<<<<<< @@ -1289,7 +1289,7 @@ */ (__pyx_v_self->heap[__pyx_v_i]) = __pyx_v_item; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":52 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":52 * i = self.n-1 * self.heap[i] = item * while i>0 and self.heap[i].priority0 and self.heap[i].priorityheap[__Pyx_div_long((__pyx_v_i - 1), 2)]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":54 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":54 * while i>0 and self.heap[i].priorityheap[__Pyx_div_long((__pyx_v_i - 1), 2)]) = (__pyx_v_self->heap[__pyx_v_i]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":55 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":55 * t = self.heap[(i-1)//2] * self.heap[(i-1)//2] = self.heap[i] * self.heap[i] = t # <<<<<<<<<<<<<< @@ -1333,7 +1333,7 @@ */ (__pyx_v_self->heap[__pyx_v_i]) = __pyx_v_t; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":56 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":56 * self.heap[(i-1)//2] = self.heap[i] * self.heap[i] = t * i = (i-1)//2 # <<<<<<<<<<<<<< @@ -1355,7 +1355,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":58 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":58 * i = (i-1)//2 * * cdef heapitem heappeek(heap* self): # <<<<<<<<<<<<<< @@ -1367,7 +1367,7 @@ struct __pyx_t_5scipy_7spatial_7ckdtree_heapitem __pyx_r; __Pyx_RefNannySetupContext("heappeek"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":59 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":59 * * cdef heapitem heappeek(heap* self): * return self.heap[0] # <<<<<<<<<<<<<< @@ -1382,7 +1382,7 @@ return __pyx_r; } -/* 
"/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":61 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":61 * return self.heap[0] * * cdef heapremove(heap* self): # <<<<<<<<<<<<<< @@ -1404,7 +1404,7 @@ int __pyx_t_5; __Pyx_RefNannySetupContext("heapremove"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":65 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":65 * cdef int i, j, k, l * * self.heap[0] = self.heap[self.n-1] # <<<<<<<<<<<<<< @@ -1413,7 +1413,7 @@ */ (__pyx_v_self->heap[0]) = (__pyx_v_self->heap[(__pyx_v_self->n - 1)]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":66 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":66 * * self.heap[0] = self.heap[self.n-1] * self.n -= 1 # <<<<<<<<<<<<<< @@ -1422,7 +1422,7 @@ */ __pyx_v_self->n -= 1; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":67 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":67 * self.heap[0] = self.heap[self.n-1] * self.n -= 1 * if self.n < self.space//4 and self.space>40: #FIXME: magic number # <<<<<<<<<<<<<< @@ -1438,7 +1438,7 @@ } if (__pyx_t_3) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":68 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":68 * self.n -= 1 * if self.n < self.space//4 and self.space>40: #FIXME: magic number * heapresize(self,self.space//2+1) # <<<<<<<<<<<<<< @@ -1452,7 +1452,7 @@ } __pyx_L3:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":70 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":70 * heapresize(self,self.space//2+1) * * i=0 # <<<<<<<<<<<<<< @@ -1461,7 +1461,7 @@ */ __pyx_v_i = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":71 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":71 * * i=0 * j=1 # <<<<<<<<<<<<<< @@ -1470,7 +1470,7 @@ */ __pyx_v_j = 1; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":72 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":72 * i=0 * j=1 * k=2 # <<<<<<<<<<<<<< @@ -1479,7 +1479,7 @@ */ __pyx_v_k = 2; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":73 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":73 * j=1 * k=2 * while ((j self.heap[j].priority or # <<<<<<<<<<<<<< @@ -1504,7 +1504,7 @@ } if (!__pyx_t_2) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":75 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":75 * while ((j self.heap[j].priority or * kn); if (__pyx_t_3) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":76 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":76 * self.heap[i].priority > self.heap[j].priority or * k self.heap[k].priority)): # <<<<<<<<<<<<<< @@ -1532,7 +1532,7 @@ } if (!__pyx_t_3) break; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":77 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":77 * k self.heap[k].priority)): * if kself.heap[k].priority: # <<<<<<<<<<<<<< @@ -1548,7 +1548,7 @@ } if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":78 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":78 * self.heap[i].priority > self.heap[k].priority)): * if kself.heap[k].priority: * l = k # <<<<<<<<<<<<<< @@ -1560,7 +1560,7 @@ } /*else*/ { - /* 
"/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":80 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":80 * l = k * else: * l = j # <<<<<<<<<<<<<< @@ -1571,7 +1571,7 @@ } __pyx_L6:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":81 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":81 * else: * l = j * t = self.heap[l] # <<<<<<<<<<<<<< @@ -1580,7 +1580,7 @@ */ __pyx_v_t = (__pyx_v_self->heap[__pyx_v_l]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":82 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":82 * l = j * t = self.heap[l] * self.heap[l] = self.heap[i] # <<<<<<<<<<<<<< @@ -1589,7 +1589,7 @@ */ (__pyx_v_self->heap[__pyx_v_l]) = (__pyx_v_self->heap[__pyx_v_i]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":83 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":83 * t = self.heap[l] * self.heap[l] = self.heap[i] * self.heap[i] = t # <<<<<<<<<<<<<< @@ -1598,7 +1598,7 @@ */ (__pyx_v_self->heap[__pyx_v_i]) = __pyx_v_t; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":84 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":84 * self.heap[l] = self.heap[i] * self.heap[i] = t * i = l # <<<<<<<<<<<<<< @@ -1607,7 +1607,7 @@ */ __pyx_v_i = __pyx_v_l; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":85 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":85 * self.heap[i] = t * i = l * j = 2*i+1 # <<<<<<<<<<<<<< @@ -1616,7 +1616,7 @@ */ __pyx_v_j = ((2 * __pyx_v_i) + 1); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":86 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":86 * i = l * j = 2*i+1 * k = 2*i+2 # <<<<<<<<<<<<<< @@ -1638,7 +1638,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":88 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":88 * k = 2*i+2 * * cdef heapitem heappop(heap* self): # <<<<<<<<<<<<<< @@ -1652,7 +1652,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("heappop"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":90 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":90 * cdef heapitem heappop(heap* self): * cdef heapitem it * it = heappeek(self) # <<<<<<<<<<<<<< @@ -1661,7 +1661,7 @@ */ __pyx_v_it = __pyx_f_5scipy_7spatial_7ckdtree_heappeek(__pyx_v_self); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":91 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":91 * cdef heapitem it * it = heappeek(self) * heapremove(self) # <<<<<<<<<<<<<< @@ -1672,7 +1672,7 @@ __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":92 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":92 * it = heappeek(self) * heapremove(self) * return it # <<<<<<<<<<<<<< @@ -1691,7 +1691,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":99 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":99 * * # utility functions * cdef inline double dmax(double x, double y): # <<<<<<<<<<<<<< @@ -1704,7 +1704,7 @@ int __pyx_t_1; __Pyx_RefNannySetupContext("dmax"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":100 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":100 * # utility functions * cdef inline 
double dmax(double x, double y): * if x>y: # <<<<<<<<<<<<<< @@ -1714,7 +1714,7 @@ __pyx_t_1 = (__pyx_v_x > __pyx_v_y); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":101 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":101 * cdef inline double dmax(double x, double y): * if x>y: * return x # <<<<<<<<<<<<<< @@ -1727,7 +1727,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":103 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":103 * return x * else: * return y # <<<<<<<<<<<<<< @@ -1745,7 +1745,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":104 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":104 * else: * return y * cdef inline double dabs(double x): # <<<<<<<<<<<<<< @@ -1758,7 +1758,7 @@ int __pyx_t_1; __Pyx_RefNannySetupContext("dabs"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":105 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":105 * return y * cdef inline double dabs(double x): * if x>0: # <<<<<<<<<<<<<< @@ -1768,7 +1768,7 @@ __pyx_t_1 = (__pyx_v_x > 0); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":106 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":106 * cdef inline double dabs(double x): * if x>0: * return x # <<<<<<<<<<<<<< @@ -1781,7 +1781,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":108 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":108 * return x * else: * return -x # <<<<<<<<<<<<<< @@ -1799,7 +1799,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":109 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":109 * else: * return -x * cdef inline double _distance_p(double*x,double*y,double p,int k,double upperbound): # <<<<<<<<<<<<<< @@ -1816,7 +1816,7 @@ int __pyx_t_3; __Pyx_RefNannySetupContext("_distance_p"); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":118 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":118 * cdef int i * cdef double r * r = 0 # <<<<<<<<<<<<<< @@ -1825,7 +1825,7 @@ */ __pyx_v_r = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":119 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":119 * cdef double r * r = 0 * if p==infinity: # <<<<<<<<<<<<<< @@ -1835,7 +1835,7 @@ __pyx_t_1 = (__pyx_v_p == __pyx_v_5scipy_7spatial_7ckdtree_infinity); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":120 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":120 * r = 0 * if p==infinity: * for i in range(k): # <<<<<<<<<<<<<< @@ -1846,7 +1846,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":121 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":121 * if p==infinity: * for i in range(k): * r = dmax(r,dabs(x[i]-y[i])) # <<<<<<<<<<<<<< @@ -1855,7 +1855,7 @@ */ __pyx_v_r = __pyx_f_5scipy_7spatial_7ckdtree_dmax(__pyx_v_r, __pyx_f_5scipy_7spatial_7ckdtree_dabs(((__pyx_v_x[__pyx_v_i]) - (__pyx_v_y[__pyx_v_i])))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":122 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":122 * for i in range(k): * r = dmax(r,dabs(x[i]-y[i])) * if 
r>upperbound: # <<<<<<<<<<<<<< @@ -1865,7 +1865,7 @@ __pyx_t_1 = (__pyx_v_r > __pyx_v_upperbound); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":123 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":123 * r = dmax(r,dabs(x[i]-y[i])) * if r>upperbound: * return r # <<<<<<<<<<<<<< @@ -1881,7 +1881,7 @@ goto __pyx_L3; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":124 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":124 * if r>upperbound: * return r * elif p==1: # <<<<<<<<<<<<<< @@ -1891,7 +1891,7 @@ __pyx_t_1 = (__pyx_v_p == 1); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":125 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":125 * return r * elif p==1: * for i in range(k): # <<<<<<<<<<<<<< @@ -1902,7 +1902,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":126 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":126 * elif p==1: * for i in range(k): * r += dabs(x[i]-y[i]) # <<<<<<<<<<<<<< @@ -1911,7 +1911,7 @@ */ __pyx_v_r += __pyx_f_5scipy_7spatial_7ckdtree_dabs(((__pyx_v_x[__pyx_v_i]) - (__pyx_v_y[__pyx_v_i]))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":127 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":127 * for i in range(k): * r += dabs(x[i]-y[i]) * if r>upperbound: # <<<<<<<<<<<<<< @@ -1921,7 +1921,7 @@ __pyx_t_1 = (__pyx_v_r > __pyx_v_upperbound); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":128 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":128 * r += dabs(x[i]-y[i]) * if r>upperbound: * return r # <<<<<<<<<<<<<< @@ -1938,7 +1938,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":130 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":130 * return r * else: * for i in range(k): # <<<<<<<<<<<<<< @@ -1949,7 +1949,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":131 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":131 * else: * for i in range(k): * r += dabs(x[i]-y[i])**p # <<<<<<<<<<<<<< @@ -1958,7 +1958,7 @@ */ __pyx_v_r += pow(__pyx_f_5scipy_7spatial_7ckdtree_dabs(((__pyx_v_x[__pyx_v_i]) - (__pyx_v_y[__pyx_v_i]))), __pyx_v_p); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":132 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":132 * for i in range(k): * r += dabs(x[i]-y[i])**p * if r>upperbound: # <<<<<<<<<<<<<< @@ -1968,7 +1968,7 @@ __pyx_t_1 = (__pyx_v_r > __pyx_v_upperbound); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":133 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":133 * r += dabs(x[i]-y[i])**p * if r>upperbound: * return r # <<<<<<<<<<<<<< @@ -1984,7 +1984,7 @@ } __pyx_L3:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":134 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":134 * if r>upperbound: * return r * return r # <<<<<<<<<<<<<< @@ -2000,7 +2000,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":195 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":195 * cdef object indices * cdef 
np.int32_t* raw_indices * def __init__(cKDTree self, data, int leafsize=10): # <<<<<<<<<<<<<< @@ -2106,7 +2106,7 @@ __pyx_bstruct_inner_mins.buf = NULL; __pyx_bstruct_inner_indices.buf = NULL; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":214 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":214 * cdef np.ndarray[double, ndim=1] inner_mins * cdef np.ndarray[np.int32_t, ndim=1] inner_indices * self.data = np.ascontiguousarray(data,dtype=np.float) # <<<<<<<<<<<<<< @@ -2143,7 +2143,7 @@ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->data = __pyx_t_5; __pyx_t_5 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":215 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":215 * cdef np.ndarray[np.int32_t, ndim=1] inner_indices * self.data = np.ascontiguousarray(data,dtype=np.float) * self.n, self.m = np.shape(self.data) # <<<<<<<<<<<<<< @@ -2193,7 +2193,7 @@ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->m = __pyx_t_6; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":216 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":216 * self.data = np.ascontiguousarray(data,dtype=np.float) * self.n, self.m = np.shape(self.data) * self.leafsize = leafsize # <<<<<<<<<<<<<< @@ -2202,7 +2202,7 @@ */ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->leafsize = __pyx_v_leafsize; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":217 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":217 * self.n, self.m = np.shape(self.data) * self.leafsize = leafsize * if self.leafsize<1: # <<<<<<<<<<<<<< @@ -2212,7 +2212,7 @@ __pyx_t_8 = (((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->leafsize < 1); if (__pyx_t_8) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":218 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":218 * self.leafsize = leafsize * if self.leafsize<1: * raise ValueError("leafsize must be at least 1") # <<<<<<<<<<<<<< @@ -2234,7 +2234,7 @@ } __pyx_L6:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":219 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":219 * if self.leafsize<1: * raise ValueError("leafsize must be at least 1") * self.maxes = np.ascontiguousarray(np.amax(self.data,axis=0)) # <<<<<<<<<<<<<< @@ -2279,7 +2279,7 @@ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->maxes = __pyx_t_4; __pyx_t_4 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":220 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":220 * raise ValueError("leafsize must be at least 1") * self.maxes = np.ascontiguousarray(np.amax(self.data,axis=0)) * self.mins = np.ascontiguousarray(np.amin(self.data,axis=0)) # <<<<<<<<<<<<<< @@ -2324,7 +2324,7 @@ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->mins = __pyx_t_5; __pyx_t_5 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":221 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":221 * self.maxes = np.ascontiguousarray(np.amax(self.data,axis=0)) * self.mins = np.ascontiguousarray(np.amin(self.data,axis=0)) * self.indices = np.ascontiguousarray(np.arange(self.n,dtype=np.int32)) # <<<<<<<<<<<<<< @@ -2377,7 +2377,7 @@ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->indices = __pyx_t_9; __pyx_t_9 = 0; - /* 
"/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":223 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":223 * self.indices = np.ascontiguousarray(np.arange(self.n,dtype=np.int32)) * * inner_data = self.data # <<<<<<<<<<<<<< @@ -2408,7 +2408,7 @@ __Pyx_DECREF(((PyObject *)__pyx_v_inner_data)); __pyx_v_inner_data = ((PyArrayObject *)((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->data); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":224 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":224 * * inner_data = self.data * self.raw_data = inner_data.data # <<<<<<<<<<<<<< @@ -2417,7 +2417,7 @@ */ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->raw_data = ((double *)__pyx_v_inner_data->data); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":225 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":225 * inner_data = self.data * self.raw_data = inner_data.data * inner_maxes = self.maxes # <<<<<<<<<<<<<< @@ -2448,7 +2448,7 @@ __Pyx_DECREF(((PyObject *)__pyx_v_inner_maxes)); __pyx_v_inner_maxes = ((PyArrayObject *)((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->maxes); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":226 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":226 * self.raw_data = inner_data.data * inner_maxes = self.maxes * self.raw_maxes = inner_maxes.data # <<<<<<<<<<<<<< @@ -2457,7 +2457,7 @@ */ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->raw_maxes = ((double *)__pyx_v_inner_maxes->data); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":227 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":227 * inner_maxes = self.maxes * self.raw_maxes = inner_maxes.data * inner_mins = self.mins # <<<<<<<<<<<<<< @@ -2488,7 +2488,7 @@ __Pyx_DECREF(((PyObject *)__pyx_v_inner_mins)); __pyx_v_inner_mins = ((PyArrayObject *)((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->mins); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":228 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":228 * self.raw_maxes = inner_maxes.data * inner_mins = self.mins * self.raw_mins = inner_mins.data # <<<<<<<<<<<<<< @@ -2497,7 +2497,7 @@ */ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->raw_mins = ((double *)__pyx_v_inner_mins->data); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":229 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":229 * inner_mins = self.mins * self.raw_mins = inner_mins.data * inner_indices = self.indices # <<<<<<<<<<<<<< @@ -2528,7 +2528,7 @@ __Pyx_DECREF(((PyObject *)__pyx_v_inner_indices)); __pyx_v_inner_indices = ((PyArrayObject *)((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->indices); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":230 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":230 * self.raw_mins = inner_mins.data * inner_indices = self.indices * self.raw_indices = inner_indices.data # <<<<<<<<<<<<<< @@ -2537,7 +2537,7 @@ */ ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->raw_indices = ((__pyx_t_5numpy_int32_t *)__pyx_v_inner_indices->data); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":232 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":232 * self.raw_indices = 
inner_indices.data * * self.tree = self.__build(0, self.n, self.raw_maxes, self.raw_mins) # <<<<<<<<<<<<<< @@ -2581,7 +2581,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":234 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":234 * self.tree = self.__build(0, self.n, self.raw_maxes, self.raw_mins) * * cdef innernode* __build(cKDTree self, int start_idx, int end_idx, double* maxes, double* mins): # <<<<<<<<<<<<<< @@ -2611,7 +2611,7 @@ __Pyx_RefNannySetupContext("__build"); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":240 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":240 * cdef double size, split, minval, maxval * cdef double*mids * if end_idx-start_idx<=self.leafsize: # <<<<<<<<<<<<<< @@ -2621,7 +2621,7 @@ __pyx_t_1 = ((__pyx_v_end_idx - __pyx_v_start_idx) <= __pyx_v_self->leafsize); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":241 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":241 * cdef double*mids * if end_idx-start_idx<=self.leafsize: * n = stdlib.malloc(sizeof(leafnode)) # <<<<<<<<<<<<<< @@ -2630,7 +2630,7 @@ */ __pyx_v_n = ((struct __pyx_t_5scipy_7spatial_7ckdtree_leafnode *)malloc((sizeof(struct __pyx_t_5scipy_7spatial_7ckdtree_leafnode)))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":242 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":242 * if end_idx-start_idx<=self.leafsize: * n = stdlib.malloc(sizeof(leafnode)) * n.split_dim = -1 # <<<<<<<<<<<<<< @@ -2639,7 +2639,7 @@ */ __pyx_v_n->split_dim = -1; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":243 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":243 * n = stdlib.malloc(sizeof(leafnode)) * n.split_dim = -1 * n.start_idx = start_idx # <<<<<<<<<<<<<< @@ -2648,7 +2648,7 @@ */ __pyx_v_n->start_idx = __pyx_v_start_idx; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":244 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":244 * n.split_dim = -1 * n.start_idx = start_idx * n.end_idx = end_idx # <<<<<<<<<<<<<< @@ -2657,7 +2657,7 @@ */ __pyx_v_n->end_idx = __pyx_v_end_idx; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":245 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":245 * n.start_idx = start_idx * n.end_idx = end_idx * return n # <<<<<<<<<<<<<< @@ -2670,7 +2670,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":247 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":247 * return n * else: * d = 0 # <<<<<<<<<<<<<< @@ -2679,7 +2679,7 @@ */ __pyx_v_d = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":248 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":248 * else: * d = 0 * size = 0 # <<<<<<<<<<<<<< @@ -2688,7 +2688,7 @@ */ __pyx_v_size = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":249 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":249 * d = 0 * size = 0 * for i in range(self.m): # <<<<<<<<<<<<<< @@ -2699,7 +2699,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":250 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":250 * size = 0 * for i in range(self.m): * if maxes[i]-mins[i] > 
size: # <<<<<<<<<<<<<< @@ -2709,7 +2709,7 @@ __pyx_t_1 = (((__pyx_v_maxes[__pyx_v_i]) - (__pyx_v_mins[__pyx_v_i])) > __pyx_v_size); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":251 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":251 * for i in range(self.m): * if maxes[i]-mins[i] > size: * d = i # <<<<<<<<<<<<<< @@ -2718,7 +2718,7 @@ */ __pyx_v_d = __pyx_v_i; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":252 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":252 * if maxes[i]-mins[i] > size: * d = i * size = maxes[i]-mins[i] # <<<<<<<<<<<<<< @@ -2731,7 +2731,7 @@ __pyx_L6:; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":253 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":253 * d = i * size = maxes[i]-mins[i] * maxval = maxes[d] # <<<<<<<<<<<<<< @@ -2740,7 +2740,7 @@ */ __pyx_v_maxval = (__pyx_v_maxes[__pyx_v_d]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":254 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":254 * size = maxes[i]-mins[i] * maxval = maxes[d] * minval = mins[d] # <<<<<<<<<<<<<< @@ -2749,7 +2749,7 @@ */ __pyx_v_minval = (__pyx_v_mins[__pyx_v_d]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":255 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":255 * maxval = maxes[d] * minval = mins[d] * if maxval==minval: # <<<<<<<<<<<<<< @@ -2759,7 +2759,7 @@ __pyx_t_1 = (__pyx_v_maxval == __pyx_v_minval); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":257 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":257 * if maxval==minval: * # all points are identical; warn user? * n = stdlib.malloc(sizeof(leafnode)) # <<<<<<<<<<<<<< @@ -2768,7 +2768,7 @@ */ __pyx_v_n = ((struct __pyx_t_5scipy_7spatial_7ckdtree_leafnode *)malloc((sizeof(struct __pyx_t_5scipy_7spatial_7ckdtree_leafnode)))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":258 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":258 * # all points are identical; warn user? 
* n = stdlib.malloc(sizeof(leafnode)) * n.split_dim = -1 # <<<<<<<<<<<<<< @@ -2777,7 +2777,7 @@ */ __pyx_v_n->split_dim = -1; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":259 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":259 * n = stdlib.malloc(sizeof(leafnode)) * n.split_dim = -1 * n.start_idx = start_idx # <<<<<<<<<<<<<< @@ -2786,7 +2786,7 @@ */ __pyx_v_n->start_idx = __pyx_v_start_idx; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":260 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":260 * n.split_dim = -1 * n.start_idx = start_idx * n.end_idx = end_idx # <<<<<<<<<<<<<< @@ -2795,7 +2795,7 @@ */ __pyx_v_n->end_idx = __pyx_v_end_idx; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":261 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":261 * n.start_idx = start_idx * n.end_idx = end_idx * return n # <<<<<<<<<<<<<< @@ -2808,7 +2808,7 @@ } __pyx_L7:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":263 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":263 * return n * * split = (maxval+minval)/2 # <<<<<<<<<<<<<< @@ -2817,7 +2817,7 @@ */ __pyx_v_split = ((__pyx_v_maxval + __pyx_v_minval) / 2); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":265 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":265 * split = (maxval+minval)/2 * * p = start_idx # <<<<<<<<<<<<<< @@ -2826,7 +2826,7 @@ */ __pyx_v_p = __pyx_v_start_idx; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":266 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":266 * * p = start_idx * q = end_idx-1 # <<<<<<<<<<<<<< @@ -2835,7 +2835,7 @@ */ __pyx_v_q = (__pyx_v_end_idx - 1); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":267 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":267 * p = start_idx * q = end_idx-1 * while p<=q: # <<<<<<<<<<<<<< @@ -2846,7 +2846,7 @@ __pyx_t_1 = (__pyx_v_p <= __pyx_v_q); if (!__pyx_t_1) break; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":268 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":268 * q = end_idx-1 * while p<=q: * if self.raw_data[self.raw_indices[p]*self.m+d]raw_data[(((__pyx_v_self->raw_indices[__pyx_v_p]) * __pyx_v_self->m) + __pyx_v_d)]) < __pyx_v_split); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":269 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":269 * while p<=q: * if self.raw_data[self.raw_indices[p]*self.m+d]=split: # <<<<<<<<<<<<<< @@ -2877,7 +2877,7 @@ __pyx_t_1 = ((__pyx_v_self->raw_data[(((__pyx_v_self->raw_indices[__pyx_v_q]) * __pyx_v_self->m) + __pyx_v_d)]) >= __pyx_v_split); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":271 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":271 * p+=1 * elif self.raw_data[self.raw_indices[q]*self.m+d]>=split: * q-=1 # <<<<<<<<<<<<<< @@ -2889,7 +2889,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":273 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":273 * q-=1 * else: * t = self.raw_indices[p] # <<<<<<<<<<<<<< @@ -2898,7 +2898,7 @@ */ __pyx_v_t = (__pyx_v_self->raw_indices[__pyx_v_p]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":274 + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":274 * else: * t = self.raw_indices[p] * self.raw_indices[p] = self.raw_indices[q] # <<<<<<<<<<<<<< @@ -2907,7 +2907,7 @@ */ (__pyx_v_self->raw_indices[__pyx_v_p]) = (__pyx_v_self->raw_indices[__pyx_v_q]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":275 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":275 * t = self.raw_indices[p] * self.raw_indices[p] = self.raw_indices[q] * self.raw_indices[q] = t # <<<<<<<<<<<<<< @@ -2916,7 +2916,7 @@ */ (__pyx_v_self->raw_indices[__pyx_v_q]) = __pyx_v_t; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":276 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":276 * self.raw_indices[p] = self.raw_indices[q] * self.raw_indices[q] = t * p+=1 # <<<<<<<<<<<<<< @@ -2925,7 +2925,7 @@ */ __pyx_v_p += 1; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":277 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":277 * self.raw_indices[q] = t * p+=1 * q-=1 # <<<<<<<<<<<<<< @@ -2937,7 +2937,7 @@ __pyx_L10:; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":280 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":280 * * # slide midpoint if necessary * if p==start_idx: # <<<<<<<<<<<<<< @@ -2947,7 +2947,7 @@ __pyx_t_1 = (__pyx_v_p == __pyx_v_start_idx); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":282 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":282 * if p==start_idx: * # no points less than split * j = start_idx # <<<<<<<<<<<<<< @@ -2956,7 +2956,7 @@ */ __pyx_v_j = __pyx_v_start_idx; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":283 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":283 * # no points less than split * j = start_idx * split = self.raw_data[self.raw_indices[j]*self.m+d] # <<<<<<<<<<<<<< @@ -2965,7 +2965,7 @@ */ __pyx_v_split = (__pyx_v_self->raw_data[(((__pyx_v_self->raw_indices[__pyx_v_j]) * __pyx_v_self->m) + __pyx_v_d)]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":284 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":284 * j = start_idx * split = self.raw_data[self.raw_indices[j]*self.m+d] * for i in range(start_idx+1, end_idx): # <<<<<<<<<<<<<< @@ -2976,7 +2976,7 @@ for (__pyx_t_3 = (__pyx_v_start_idx + 1); __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":285 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":285 * split = self.raw_data[self.raw_indices[j]*self.m+d] * for i in range(start_idx+1, end_idx): * if self.raw_data[self.raw_indices[i]*self.m+d]raw_data[(((__pyx_v_self->raw_indices[__pyx_v_i]) * __pyx_v_self->m) + __pyx_v_d)]) < __pyx_v_split); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":286 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":286 * for i in range(start_idx+1, end_idx): * if self.raw_data[self.raw_indices[i]*self.m+d]raw_indices[__pyx_v_start_idx]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":289 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":289 * split = self.raw_data[self.raw_indices[j]*self.m+d] * t = self.raw_indices[start_idx] * self.raw_indices[start_idx] = self.raw_indices[j] # <<<<<<<<<<<<<< @@ -3026,7 +3026,7 @@ */ 
(__pyx_v_self->raw_indices[__pyx_v_start_idx]) = (__pyx_v_self->raw_indices[__pyx_v_j]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":290 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":290 * t = self.raw_indices[start_idx] * self.raw_indices[start_idx] = self.raw_indices[j] * self.raw_indices[j] = t # <<<<<<<<<<<<<< @@ -3035,7 +3035,7 @@ */ (__pyx_v_self->raw_indices[__pyx_v_j]) = __pyx_v_t; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":291 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":291 * self.raw_indices[start_idx] = self.raw_indices[j] * self.raw_indices[j] = t * p = start_idx+1 # <<<<<<<<<<<<<< @@ -3044,7 +3044,7 @@ */ __pyx_v_p = (__pyx_v_start_idx + 1); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":292 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":292 * self.raw_indices[j] = t * p = start_idx+1 * q = start_idx # <<<<<<<<<<<<<< @@ -3055,7 +3055,7 @@ goto __pyx_L11; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":293 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":293 * p = start_idx+1 * q = start_idx * elif p==end_idx: # <<<<<<<<<<<<<< @@ -3065,7 +3065,7 @@ __pyx_t_1 = (__pyx_v_p == __pyx_v_end_idx); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":295 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":295 * elif p==end_idx: * # no points greater than split * j = end_idx-1 # <<<<<<<<<<<<<< @@ -3074,7 +3074,7 @@ */ __pyx_v_j = (__pyx_v_end_idx - 1); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":296 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":296 * # no points greater than split * j = end_idx-1 * split = self.raw_data[self.raw_indices[j]*self.m+d] # <<<<<<<<<<<<<< @@ -3083,7 +3083,7 @@ */ __pyx_v_split = (__pyx_v_self->raw_data[(((__pyx_v_self->raw_indices[__pyx_v_j]) * __pyx_v_self->m) + __pyx_v_d)]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":297 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":297 * j = end_idx-1 * split = self.raw_data[self.raw_indices[j]*self.m+d] * for i in range(start_idx, end_idx-1): # <<<<<<<<<<<<<< @@ -3094,7 +3094,7 @@ for (__pyx_t_2 = __pyx_v_start_idx; __pyx_t_2 < __pyx_t_4; __pyx_t_2+=1) { __pyx_v_i = __pyx_t_2; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":298 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":298 * split = self.raw_data[self.raw_indices[j]*self.m+d] * for i in range(start_idx, end_idx-1): * if self.raw_data[self.raw_indices[i]*self.m+d]>split: # <<<<<<<<<<<<<< @@ -3104,7 +3104,7 @@ __pyx_t_1 = ((__pyx_v_self->raw_data[(((__pyx_v_self->raw_indices[__pyx_v_i]) * __pyx_v_self->m) + __pyx_v_d)]) > __pyx_v_split); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":299 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":299 * for i in range(start_idx, end_idx-1): * if self.raw_data[self.raw_indices[i]*self.m+d]>split: * j = i # <<<<<<<<<<<<<< @@ -3113,7 +3113,7 @@ */ __pyx_v_j = __pyx_v_i; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":300 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":300 * if self.raw_data[self.raw_indices[i]*self.m+d]>split: * j = i * split = self.raw_data[self.raw_indices[j]*self.m+d] # <<<<<<<<<<<<<< @@ -3126,7 +3126,7 @@ __pyx_L17:; } - /* 
"/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":301 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":301 * j = i * split = self.raw_data[self.raw_indices[j]*self.m+d] * t = self.raw_indices[end_idx-1] # <<<<<<<<<<<<<< @@ -3135,7 +3135,7 @@ */ __pyx_v_t = (__pyx_v_self->raw_indices[(__pyx_v_end_idx - 1)]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":302 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":302 * split = self.raw_data[self.raw_indices[j]*self.m+d] * t = self.raw_indices[end_idx-1] * self.raw_indices[end_idx-1] = self.raw_indices[j] # <<<<<<<<<<<<<< @@ -3144,7 +3144,7 @@ */ (__pyx_v_self->raw_indices[(__pyx_v_end_idx - 1)]) = (__pyx_v_self->raw_indices[__pyx_v_j]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":303 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":303 * t = self.raw_indices[end_idx-1] * self.raw_indices[end_idx-1] = self.raw_indices[j] * self.raw_indices[j] = t # <<<<<<<<<<<<<< @@ -3153,7 +3153,7 @@ */ (__pyx_v_self->raw_indices[__pyx_v_j]) = __pyx_v_t; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":304 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":304 * self.raw_indices[end_idx-1] = self.raw_indices[j] * self.raw_indices[j] = t * p = end_idx-1 # <<<<<<<<<<<<<< @@ -3162,7 +3162,7 @@ */ __pyx_v_p = (__pyx_v_end_idx - 1); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":305 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":305 * self.raw_indices[j] = t * p = end_idx-1 * q = end_idx-2 # <<<<<<<<<<<<<< @@ -3174,7 +3174,7 @@ } __pyx_L11:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":308 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":308 * * # construct new node representation * ni = stdlib.malloc(sizeof(innernode)) # <<<<<<<<<<<<<< @@ -3183,7 +3183,7 @@ */ __pyx_v_ni = ((struct __pyx_t_5scipy_7spatial_7ckdtree_innernode *)malloc((sizeof(struct __pyx_t_5scipy_7spatial_7ckdtree_innernode)))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":310 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":310 * ni = stdlib.malloc(sizeof(innernode)) * * mids = stdlib.malloc(sizeof(double)*self.m) # <<<<<<<<<<<<<< @@ -3192,7 +3192,7 @@ */ __pyx_v_mids = ((double *)malloc(((sizeof(double)) * __pyx_v_self->m))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":311 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":311 * * mids = stdlib.malloc(sizeof(double)*self.m) * for i in range(self.m): # <<<<<<<<<<<<<< @@ -3203,7 +3203,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":312 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":312 * mids = stdlib.malloc(sizeof(double)*self.m) * for i in range(self.m): * mids[i] = maxes[i] # <<<<<<<<<<<<<< @@ -3213,7 +3213,7 @@ (__pyx_v_mids[__pyx_v_i]) = (__pyx_v_maxes[__pyx_v_i]); } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":313 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":313 * for i in range(self.m): * mids[i] = maxes[i] * mids[d] = split # <<<<<<<<<<<<<< @@ -3222,7 +3222,7 @@ */ (__pyx_v_mids[__pyx_v_d]) = __pyx_v_split; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":314 + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":314 * mids[i] = maxes[i] * mids[d] = split * ni.less = self.__build(start_idx,p,mids,mins) # <<<<<<<<<<<<<< @@ -3231,7 +3231,7 @@ */ __pyx_v_ni->less = ((struct __pyx_vtabstruct_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self->__pyx_vtab)->__build(__pyx_v_self, __pyx_v_start_idx, __pyx_v_p, __pyx_v_mids, __pyx_v_mins); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":316 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":316 * ni.less = self.__build(start_idx,p,mids,mins) * * for i in range(self.m): # <<<<<<<<<<<<<< @@ -3242,7 +3242,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":317 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":317 * * for i in range(self.m): * mids[i] = mins[i] # <<<<<<<<<<<<<< @@ -3252,7 +3252,7 @@ (__pyx_v_mids[__pyx_v_i]) = (__pyx_v_mins[__pyx_v_i]); } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":318 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":318 * for i in range(self.m): * mids[i] = mins[i] * mids[d] = split # <<<<<<<<<<<<<< @@ -3261,7 +3261,7 @@ */ (__pyx_v_mids[__pyx_v_d]) = __pyx_v_split; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":319 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":319 * mids[i] = mins[i] * mids[d] = split * ni.greater = self.__build(p,end_idx,maxes,mids) # <<<<<<<<<<<<<< @@ -3270,7 +3270,7 @@ */ __pyx_v_ni->greater = ((struct __pyx_vtabstruct_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self->__pyx_vtab)->__build(__pyx_v_self, __pyx_v_p, __pyx_v_end_idx, __pyx_v_maxes, __pyx_v_mids); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":321 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":321 * ni.greater = self.__build(p,end_idx,maxes,mids) * * stdlib.free(mids) # <<<<<<<<<<<<<< @@ -3279,7 +3279,7 @@ */ free(__pyx_v_mids); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":323 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":323 * stdlib.free(mids) * * ni.split_dim = d # <<<<<<<<<<<<<< @@ -3288,7 +3288,7 @@ */ __pyx_v_ni->split_dim = __pyx_v_d; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":324 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":324 * * ni.split_dim = d * ni.split = split # <<<<<<<<<<<<<< @@ -3297,7 +3297,7 @@ */ __pyx_v_ni->split = __pyx_v_split; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":326 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":326 * ni.split = split * * return ni # <<<<<<<<<<<<<< @@ -3316,7 +3316,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":328 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":328 * return ni * * cdef __free_tree(cKDTree self, innernode* node): # <<<<<<<<<<<<<< @@ -3331,7 +3331,7 @@ __Pyx_RefNannySetupContext("__free_tree"); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":329 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":329 * * cdef __free_tree(cKDTree self, innernode* node): * if node.split_dim!=-1: # <<<<<<<<<<<<<< @@ -3341,7 +3341,7 @@ __pyx_t_1 = (__pyx_v_node->split_dim != -1); if (__pyx_t_1) { - /* 
"/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":330 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":330 * cdef __free_tree(cKDTree self, innernode* node): * if node.split_dim!=-1: * self.__free_tree(node.less) # <<<<<<<<<<<<<< @@ -3352,7 +3352,7 @@ __Pyx_GOTREF(__pyx_t_2); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":331 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":331 * if node.split_dim!=-1: * self.__free_tree(node.less) * self.__free_tree(node.greater) # <<<<<<<<<<<<<< @@ -3366,7 +3366,7 @@ } __pyx_L3:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":332 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":332 * self.__free_tree(node.less) * self.__free_tree(node.greater) * stdlib.free(node) # <<<<<<<<<<<<<< @@ -3388,7 +3388,7 @@ return __pyx_r; } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":334 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":334 * stdlib.free(node) * * def __dealloc__(cKDTree self): # <<<<<<<<<<<<<< @@ -3403,7 +3403,7 @@ __Pyx_RefNannySetupContext("__dealloc__"); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":335 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":335 * * def __dealloc__(cKDTree self): * if (self.tree) == 0: # <<<<<<<<<<<<<< @@ -3413,7 +3413,7 @@ __pyx_t_1 = (((int)((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->tree) == 0); if (__pyx_t_1) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":337 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":337 * if (self.tree) == 0: * # should happen only if __init__ was never called * return # <<<<<<<<<<<<<< @@ -3425,7 +3425,7 @@ } __pyx_L5:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":338 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":338 * # should happen only if __init__ was never called * return * self.__free_tree(self.tree) # <<<<<<<<<<<<<< @@ -3445,7 +3445,7 @@ __Pyx_RefNannyFinishContext(); } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":340 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":340 * self.__free_tree(self.tree) * * cdef void __query(cKDTree self, # <<<<<<<<<<<<<< @@ -3483,7 +3483,7 @@ __Pyx_RefNannySetupContext("__query"); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":371 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":371 * # distances between the nearest side of the cell and the target * # the head node of the cell * heapcreate(&q,12) # <<<<<<<<<<<<<< @@ -3494,7 +3494,7 @@ __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":376 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":376 * # furthest known neighbor first * # entries are (-distance**p, i) * heapcreate(&neighbors,k) # <<<<<<<<<<<<<< @@ -3505,7 +3505,7 @@ __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":379 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":379 * * # set up first nodeinfo * inf = stdlib.malloc(sizeof(nodeinfo)+self.m*sizeof(double)) # <<<<<<<<<<<<<< @@ -3514,7 +3514,7 @@ */ __pyx_v_inf = 
((struct __pyx_t_5scipy_7spatial_7ckdtree_nodeinfo *)malloc(((sizeof(struct __pyx_t_5scipy_7spatial_7ckdtree_nodeinfo)) + (__pyx_v_self->m * (sizeof(double)))))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":380 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":380 * # set up first nodeinfo * inf = stdlib.malloc(sizeof(nodeinfo)+self.m*sizeof(double)) * inf.node = self.tree # <<<<<<<<<<<<<< @@ -3523,7 +3523,7 @@ */ __pyx_v_inf->node = __pyx_v_self->tree; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":381 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":381 * inf = stdlib.malloc(sizeof(nodeinfo)+self.m*sizeof(double)) * inf.node = self.tree * for i in range(self.m): # <<<<<<<<<<<<<< @@ -3534,7 +3534,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":382 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":382 * inf.node = self.tree * for i in range(self.m): * inf.side_distances[i] = 0 # <<<<<<<<<<<<<< @@ -3543,7 +3543,7 @@ */ (__pyx_v_inf->side_distances[__pyx_v_i]) = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":383 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":383 * for i in range(self.m): * inf.side_distances[i] = 0 * t = x[i]-self.raw_maxes[i] # <<<<<<<<<<<<<< @@ -3552,7 +3552,7 @@ */ __pyx_v_t = ((__pyx_v_x[__pyx_v_i]) - (__pyx_v_self->raw_maxes[__pyx_v_i])); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":384 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":384 * inf.side_distances[i] = 0 * t = x[i]-self.raw_maxes[i] * if t>inf.side_distances[i]: # <<<<<<<<<<<<<< @@ -3562,7 +3562,7 @@ __pyx_t_4 = (__pyx_v_t > (__pyx_v_inf->side_distances[__pyx_v_i])); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":385 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":385 * t = x[i]-self.raw_maxes[i] * if t>inf.side_distances[i]: * inf.side_distances[i] = t # <<<<<<<<<<<<<< @@ -3574,7 +3574,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":387 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":387 * inf.side_distances[i] = t * else: * t = self.raw_mins[i]-x[i] # <<<<<<<<<<<<<< @@ -3583,7 +3583,7 @@ */ __pyx_v_t = ((__pyx_v_self->raw_mins[__pyx_v_i]) - (__pyx_v_x[__pyx_v_i])); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":388 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":388 * else: * t = self.raw_mins[i]-x[i] * if t>inf.side_distances[i]: # <<<<<<<<<<<<<< @@ -3593,7 +3593,7 @@ __pyx_t_4 = (__pyx_v_t > (__pyx_v_inf->side_distances[__pyx_v_i])); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":389 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":389 * t = self.raw_mins[i]-x[i] * if t>inf.side_distances[i]: * inf.side_distances[i] = t # <<<<<<<<<<<<<< @@ -3607,7 +3607,7 @@ } __pyx_L5:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":390 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":390 * if t>inf.side_distances[i]: * inf.side_distances[i] = t * if p!=1 and p!=infinity: # <<<<<<<<<<<<<< @@ -3623,7 +3623,7 @@ } if (__pyx_t_6) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":391 + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":391 * inf.side_distances[i] = t * if p!=1 and p!=infinity: * inf.side_distances[i]=inf.side_distances[i]**p # <<<<<<<<<<<<<< @@ -3636,7 +3636,7 @@ __pyx_L7:; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":394 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":394 * * # compute first distance * min_distance = 0. # <<<<<<<<<<<<<< @@ -3645,7 +3645,7 @@ */ __pyx_v_min_distance = 0.0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":395 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":395 * # compute first distance * min_distance = 0. * for i in range(self.m): # <<<<<<<<<<<<<< @@ -3656,7 +3656,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":396 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":396 * min_distance = 0. * for i in range(self.m): * if p==infinity: # <<<<<<<<<<<<<< @@ -3666,7 +3666,7 @@ __pyx_t_6 = (__pyx_v_p == __pyx_v_5scipy_7spatial_7ckdtree_infinity); if (__pyx_t_6) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":397 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":397 * for i in range(self.m): * if p==infinity: * min_distance = dmax(min_distance,inf.side_distances[i]) # <<<<<<<<<<<<<< @@ -3678,7 +3678,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":399 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":399 * min_distance = dmax(min_distance,inf.side_distances[i]) * else: * min_distance += inf.side_distances[i] # <<<<<<<<<<<<<< @@ -3690,7 +3690,7 @@ __pyx_L10:; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":402 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":402 * * # fiddle approximation factor * if eps==0: # <<<<<<<<<<<<<< @@ -3700,7 +3700,7 @@ __pyx_t_6 = (__pyx_v_eps == 0); if (__pyx_t_6) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":403 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":403 * # fiddle approximation factor * if eps==0: * epsfac=1 # <<<<<<<<<<<<<< @@ -3711,7 +3711,7 @@ goto __pyx_L11; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":404 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":404 * if eps==0: * epsfac=1 * elif p==infinity: # <<<<<<<<<<<<<< @@ -3721,7 +3721,7 @@ __pyx_t_6 = (__pyx_v_p == __pyx_v_5scipy_7spatial_7ckdtree_infinity); if (__pyx_t_6) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":405 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":405 * epsfac=1 * elif p==infinity: * epsfac = 1/(1+eps) # <<<<<<<<<<<<<< @@ -3738,7 +3738,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":407 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":407 * epsfac = 1/(1+eps) * else: * epsfac = 1/(1+eps)**p # <<<<<<<<<<<<<< @@ -3754,7 +3754,7 @@ } __pyx_L11:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":410 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":410 * * # internally we represent all distances as distance**p * if p!=infinity and distance_upper_bound!=infinity: # <<<<<<<<<<<<<< @@ -3770,7 +3770,7 @@ } if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":411 + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":411 * # internally we represent all distances as distance**p * if p!=infinity and distance_upper_bound!=infinity: * distance_upper_bound = distance_upper_bound**p # <<<<<<<<<<<<<< @@ -3782,7 +3782,7 @@ } __pyx_L12:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":413 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":413 * distance_upper_bound = distance_upper_bound**p * * while True: # <<<<<<<<<<<<<< @@ -3793,7 +3793,7 @@ __pyx_t_5 = 1; if (!__pyx_t_5) break; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":414 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":414 * * while True: * if inf.node.split_dim==-1: # <<<<<<<<<<<<<< @@ -3803,7 +3803,7 @@ __pyx_t_5 = (__pyx_v_inf->node->split_dim == -1); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":415 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":415 * while True: * if inf.node.split_dim==-1: * node = inf.node # <<<<<<<<<<<<<< @@ -3812,7 +3812,7 @@ */ __pyx_v_node = ((struct __pyx_t_5scipy_7spatial_7ckdtree_leafnode *)__pyx_v_inf->node); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":418 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":418 * * # brute-force * for i in range(node.start_idx,node.end_idx): # <<<<<<<<<<<<<< @@ -3823,7 +3823,7 @@ for (__pyx_t_3 = __pyx_v_node->start_idx; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":421 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":421 * d = _distance_p( * self.raw_data+self.raw_indices[i]*self.m, * x,p,self.m,distance_upper_bound) # <<<<<<<<<<<<<< @@ -3832,7 +3832,7 @@ */ __pyx_v_d = __pyx_f_5scipy_7spatial_7ckdtree__distance_p((__pyx_v_self->raw_data + ((__pyx_v_self->raw_indices[__pyx_v_i]) * __pyx_v_self->m)), __pyx_v_x, __pyx_v_p, __pyx_v_self->m, __pyx_v_distance_upper_bound); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":423 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":423 * x,p,self.m,distance_upper_bound) * * if draw_indices[__pyx_v_i]); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":429 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":429 * neighbor.priority = -d * neighbor.contents.intdata = self.raw_indices[i] * heappush(&neighbors,neighbor) # <<<<<<<<<<<<<< @@ -3895,7 +3895,7 @@ __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":432 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":432 * * # adjust upper bound for efficiency * if neighbors.n==k: # <<<<<<<<<<<<<< @@ -3905,7 +3905,7 @@ __pyx_t_5 = (__pyx_v_neighbors.n == __pyx_v_k); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":433 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":433 * # adjust upper bound for efficiency * if neighbors.n==k: * distance_upper_bound = -heappeek(&neighbors).priority # <<<<<<<<<<<<<< @@ -3921,7 +3921,7 @@ __pyx_L18:; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":435 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":435 * distance_upper_bound = -heappeek(&neighbors).priority * # done with this node, get another * stdlib.free(inf) # <<<<<<<<<<<<<< @@ 
-3930,7 +3930,7 @@ */ free(__pyx_v_inf); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":436 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":436 * # done with this node, get another * stdlib.free(inf) * if q.n==0: # <<<<<<<<<<<<<< @@ -3940,7 +3940,7 @@ __pyx_t_5 = (__pyx_v_q.n == 0); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":438 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":438 * if q.n==0: * # no more nodes to visit * break # <<<<<<<<<<<<<< @@ -3952,7 +3952,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":440 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":440 * break * else: * it = heappop(&q) # <<<<<<<<<<<<<< @@ -3961,7 +3961,7 @@ */ __pyx_v_it = __pyx_f_5scipy_7spatial_7ckdtree_heappop((&__pyx_v_q)); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":441 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":441 * else: * it = heappop(&q) * inf = it.contents.ptrdata # <<<<<<<<<<<<<< @@ -3970,7 +3970,7 @@ */ __pyx_v_inf = ((struct __pyx_t_5scipy_7spatial_7ckdtree_nodeinfo *)__pyx_v_it.contents.ptrdata); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":442 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":442 * it = heappop(&q) * inf = it.contents.ptrdata * min_distance = it.priority # <<<<<<<<<<<<<< @@ -3984,7 +3984,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":444 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":444 * min_distance = it.priority * else: * inode = inf.node # <<<<<<<<<<<<<< @@ -3993,7 +3993,7 @@ */ __pyx_v_inode = ((struct __pyx_t_5scipy_7spatial_7ckdtree_innernode *)__pyx_v_inf->node); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":449 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":449 * # but since the distance_upper_bound decreases, we might get * # here even if the cell's too far * if min_distance>distance_upper_bound*epsfac: # <<<<<<<<<<<<<< @@ -4003,7 +4003,7 @@ __pyx_t_5 = (__pyx_v_min_distance > (__pyx_v_distance_upper_bound * __pyx_v_epsfac)); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":451 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":451 * if min_distance>distance_upper_bound*epsfac: * # since this is the nearest cell, we're done, bail out * stdlib.free(inf) # <<<<<<<<<<<<<< @@ -4012,7 +4012,7 @@ */ free(__pyx_v_inf); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":453 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":453 * stdlib.free(inf) * # free all the nodes still on the heap * for i in range(q.n): # <<<<<<<<<<<<<< @@ -4023,7 +4023,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":454 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":454 * # free all the nodes still on the heap * for i in range(q.n): * stdlib.free(q.heap[i].contents.ptrdata) # <<<<<<<<<<<<<< @@ -4033,7 +4033,7 @@ free((__pyx_v_q.heap[__pyx_v_i]).contents.ptrdata); } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":455 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":455 * for i in range(q.n): * stdlib.free(q.heap[i].contents.ptrdata) * break # <<<<<<<<<<<<<< @@ 
-4045,7 +4045,7 @@ } __pyx_L22:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":458 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":458 * * # set up children for searching * if x[inode.split_dim]split_dim]) < __pyx_v_inode->split); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":459 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":459 * # set up children for searching * if x[inode.split_dim]less; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":460 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":460 * if x[inode.split_dim]greater; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":463 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":463 * else: * near = inode.greater * far = inode.less # <<<<<<<<<<<<<< @@ -4096,7 +4096,7 @@ } __pyx_L25:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":468 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":468 * # we're going here next, so no point pushing it on the queue * # no need to recompute the distance or the side_distances * inf.node = near # <<<<<<<<<<<<<< @@ -4105,7 +4105,7 @@ */ __pyx_v_inf->node = __pyx_v_near; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":473 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":473 * # on the split value; compute its distance and side_distances * # and push it on the queue if it's near enough * inf2 = stdlib.malloc(sizeof(nodeinfo)+self.m*sizeof(double)) # <<<<<<<<<<<<<< @@ -4114,7 +4114,7 @@ */ __pyx_v_inf2 = ((struct __pyx_t_5scipy_7spatial_7ckdtree_nodeinfo *)malloc(((sizeof(struct __pyx_t_5scipy_7spatial_7ckdtree_nodeinfo)) + (__pyx_v_self->m * (sizeof(double)))))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":474 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":474 * # and push it on the queue if it's near enough * inf2 = stdlib.malloc(sizeof(nodeinfo)+self.m*sizeof(double)) * it2.contents.ptrdata = inf2 # <<<<<<<<<<<<<< @@ -4123,7 +4123,7 @@ */ __pyx_v_it2.contents.ptrdata = ((char *)__pyx_v_inf2); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":475 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":475 * inf2 = stdlib.malloc(sizeof(nodeinfo)+self.m*sizeof(double)) * it2.contents.ptrdata = inf2 * inf2.node = far # <<<<<<<<<<<<<< @@ -4132,7 +4132,7 @@ */ __pyx_v_inf2->node = __pyx_v_far; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":477 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":477 * inf2.node = far * # most side distances unchanged * for i in range(self.m): # <<<<<<<<<<<<<< @@ -4143,7 +4143,7 @@ for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { __pyx_v_i = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":478 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":478 * # most side distances unchanged * for i in range(self.m): * inf2.side_distances[i] = inf.side_distances[i] # <<<<<<<<<<<<<< @@ -4153,7 +4153,7 @@ (__pyx_v_inf2->side_distances[__pyx_v_i]) = (__pyx_v_inf->side_distances[__pyx_v_i]); } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":482 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":482 * # one side distance changes * # we can adjust the minimum distance without recomputing * if p == infinity: 
# <<<<<<<<<<<<<< @@ -4163,7 +4163,7 @@ __pyx_t_5 = (__pyx_v_p == __pyx_v_5scipy_7spatial_7ckdtree_infinity); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":485 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":485 * # we never use side_distances in the l_infinity case * # inf2.side_distances[inode.split_dim] = dabs(inode.split-x[inode.split_dim]) * far_min_distance = dmax(min_distance, dabs(inode.split-x[inode.split_dim])) # <<<<<<<<<<<<<< @@ -4174,7 +4174,7 @@ goto __pyx_L28; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":486 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":486 * # inf2.side_distances[inode.split_dim] = dabs(inode.split-x[inode.split_dim]) * far_min_distance = dmax(min_distance, dabs(inode.split-x[inode.split_dim])) * elif p == 1: # <<<<<<<<<<<<<< @@ -4184,7 +4184,7 @@ __pyx_t_5 = (__pyx_v_p == 1); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":487 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":487 * far_min_distance = dmax(min_distance, dabs(inode.split-x[inode.split_dim])) * elif p == 1: * inf2.side_distances[inode.split_dim] = dabs(inode.split-x[inode.split_dim]) # <<<<<<<<<<<<<< @@ -4193,7 +4193,7 @@ */ (__pyx_v_inf2->side_distances[__pyx_v_inode->split_dim]) = __pyx_f_5scipy_7spatial_7ckdtree_dabs((__pyx_v_inode->split - (__pyx_v_x[__pyx_v_inode->split_dim]))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":488 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":488 * elif p == 1: * inf2.side_distances[inode.split_dim] = dabs(inode.split-x[inode.split_dim]) * far_min_distance = min_distance - inf.side_distances[inode.split_dim] + inf2.side_distances[inode.split_dim] # <<<<<<<<<<<<<< @@ -4205,7 +4205,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":490 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":490 * far_min_distance = min_distance - inf.side_distances[inode.split_dim] + inf2.side_distances[inode.split_dim] * else: * inf2.side_distances[inode.split_dim] = dabs(inode.split-x[inode.split_dim])**p # <<<<<<<<<<<<<< @@ -4214,7 +4214,7 @@ */ (__pyx_v_inf2->side_distances[__pyx_v_inode->split_dim]) = pow(__pyx_f_5scipy_7spatial_7ckdtree_dabs((__pyx_v_inode->split - (__pyx_v_x[__pyx_v_inode->split_dim]))), __pyx_v_p); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":491 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":491 * else: * inf2.side_distances[inode.split_dim] = dabs(inode.split-x[inode.split_dim])**p * far_min_distance = min_distance - inf.side_distances[inode.split_dim] + inf2.side_distances[inode.split_dim] # <<<<<<<<<<<<<< @@ -4225,7 +4225,7 @@ } __pyx_L28:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":493 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":493 * far_min_distance = min_distance - inf.side_distances[inode.split_dim] + inf2.side_distances[inode.split_dim] * * it2.priority = far_min_distance # <<<<<<<<<<<<<< @@ -4234,7 +4234,7 @@ */ __pyx_v_it2.priority = __pyx_v_far_min_distance; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":497 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":497 * * # far child might be too far, if so, don't bother pushing it * if far_min_distance<=distance_upper_bound*epsfac: # <<<<<<<<<<<<<< @@ -4244,7 +4244,7 @@ __pyx_t_5 = 
(__pyx_v_far_min_distance <= (__pyx_v_distance_upper_bound * __pyx_v_epsfac)); if (__pyx_t_5) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":498 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":498 * # far child might be too far, if so, don't bother pushing it * if far_min_distance<=distance_upper_bound*epsfac: * heappush(&q,it2) # <<<<<<<<<<<<<< @@ -4258,7 +4258,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":500 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":500 * heappush(&q,it2) * else: * stdlib.free(inf2) # <<<<<<<<<<<<<< @@ -4267,7 +4267,7 @@ */ free(__pyx_v_inf2); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":502 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":502 * stdlib.free(inf2) * # just in case * it2.contents.ptrdata = 0 # <<<<<<<<<<<<<< @@ -4282,7 +4282,7 @@ } __pyx_L14_break:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":505 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":505 * * # fill output arrays with sorted neighbors * for i in range(neighbors.n-1,-1,-1): # <<<<<<<<<<<<<< @@ -4331,7 +4331,7 @@ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_v_i = __pyx_t_2; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":506 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":506 * # fill output arrays with sorted neighbors * for i in range(neighbors.n-1,-1,-1): * neighbor = heappop(&neighbors) # FIXME: neighbors may be realloced # <<<<<<<<<<<<<< @@ -4340,7 +4340,7 @@ */ __pyx_v_neighbor = __pyx_f_5scipy_7spatial_7ckdtree_heappop((&__pyx_v_neighbors)); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":507 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":507 * for i in range(neighbors.n-1,-1,-1): * neighbor = heappop(&neighbors) # FIXME: neighbors may be realloced * result_indices[i] = neighbor.contents.intdata # <<<<<<<<<<<<<< @@ -4349,7 +4349,7 @@ */ (__pyx_v_result_indices[__pyx_v_i]) = __pyx_v_neighbor.contents.intdata; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":508 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":508 * neighbor = heappop(&neighbors) # FIXME: neighbors may be realloced * result_indices[i] = neighbor.contents.intdata * if p==1 or p==infinity: # <<<<<<<<<<<<<< @@ -4365,7 +4365,7 @@ } if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":509 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":509 * result_indices[i] = neighbor.contents.intdata * if p==1 or p==infinity: * result_distances[i] = -neighbor.priority # <<<<<<<<<<<<<< @@ -4377,7 +4377,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":511 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":511 * result_distances[i] = -neighbor.priority * else: * result_distances[i] = (-neighbor.priority)**(1./p) # <<<<<<<<<<<<<< @@ -4394,7 +4394,7 @@ } __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":513 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":513 * result_distances[i] = (-neighbor.priority)**(1./p) * * heapdestroy(&q) # <<<<<<<<<<<<<< @@ -4405,7 +4405,7 @@ __Pyx_GOTREF(__pyx_t_9); __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":514 + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":514 * * heapdestroy(&q) * heapdestroy(&neighbors) # <<<<<<<<<<<<<< @@ -4426,7 +4426,7 @@ __Pyx_RefNannyFinishContext(); } -/* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":516 +/* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":516 * heapdestroy(&neighbors) * * def query(cKDTree self, object x, int k=1, double eps=0, double p=2, # <<<<<<<<<<<<<< @@ -4582,7 +4582,7 @@ __pyx_bstruct_dd.buf = NULL; __pyx_bstruct_xx.buf = NULL; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":558 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":558 * cdef np.ndarray[double, ndim=2] xx * cdef int c * x = np.asarray(x).astype(np.float) # <<<<<<<<<<<<<< @@ -4624,7 +4624,7 @@ __pyx_v_x = __pyx_t_2; __pyx_t_2 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":559 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":559 * cdef int c * x = np.asarray(x).astype(np.float) * if np.shape(x)[-1] != self.m: # <<<<<<<<<<<<<< @@ -4658,7 +4658,7 @@ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":560 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":560 * x = np.asarray(x).astype(np.float) * if np.shape(x)[-1] != self.m: * raise ValueError("x must consist of vectors of length %d but has shape %s" % (self.m, np.shape(x))) # <<<<<<<<<<<<<< @@ -4707,7 +4707,7 @@ } __pyx_L6:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":561 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":561 * if np.shape(x)[-1] != self.m: * raise ValueError("x must consist of vectors of length %d but has shape %s" % (self.m, np.shape(x))) * if p<1: # <<<<<<<<<<<<<< @@ -4717,7 +4717,7 @@ __pyx_t_4 = (__pyx_v_p < 1); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":562 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":562 * raise ValueError("x must consist of vectors of length %d but has shape %s" % (self.m, np.shape(x))) * if p<1: * raise ValueError("Only p-norms with 1<=p<=infinity permitted") # <<<<<<<<<<<<<< @@ -4739,7 +4739,7 @@ } __pyx_L7:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":563 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":563 * if p<1: * raise ValueError("Only p-norms with 1<=p<=infinity permitted") * if len(x.shape)==1: # <<<<<<<<<<<<<< @@ -4753,7 +4753,7 @@ __pyx_t_4 = (__pyx_t_6 == 1); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":564 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":564 * raise ValueError("Only p-norms with 1<=p<=infinity permitted") * if len(x.shape)==1: * single = True # <<<<<<<<<<<<<< @@ -4766,7 +4766,7 @@ __pyx_v_single = __pyx_t_1; __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":565 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":565 * if len(x.shape)==1: * single = True * x = x[np.newaxis,:] # <<<<<<<<<<<<<< @@ -4798,7 +4798,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":567 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":567 * x = x[np.newaxis,:] * else: * single = False # <<<<<<<<<<<<<< @@ -4813,7 +4813,7 @@ } __pyx_L8:; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":568 + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":568 * else: * single = False * retshape = np.shape(x)[:-1] # <<<<<<<<<<<<<< @@ -4841,12 +4841,12 @@ __pyx_v_retshape = __pyx_t_1; __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":569 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":569 * single = False * retshape = np.shape(x)[:-1] * n = np.prod(retshape) # <<<<<<<<<<<<<< * xx = np.reshape(x,(n,self.m)) - * dd = np.empty((n,k),dtype=np.float) + * xx = np.ascontiguousarray(xx) */ __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 569; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_1); @@ -4866,12 +4866,12 @@ __pyx_v_n = __pyx_t_3; __pyx_t_3 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":570 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":570 * retshape = np.shape(x)[:-1] * n = np.prod(retshape) * xx = np.reshape(x,(n,self.m)) # <<<<<<<<<<<<<< + * xx = np.ascontiguousarray(xx) * dd = np.empty((n,k),dtype=np.float) - * dd.fill(infinity) */ __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 570; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_3); @@ -4924,186 +4924,231 @@ __pyx_v_xx = ((PyArrayObject *)__pyx_t_5); __pyx_t_5 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":571 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":571 * n = np.prod(retshape) * xx = np.reshape(x,(n,self.m)) - * dd = np.empty((n,k),dtype=np.float) # <<<<<<<<<<<<<< + * xx = np.ascontiguousarray(xx) # <<<<<<<<<<<<<< + * dd = np.empty((n,k),dtype=np.float) * dd.fill(infinity) - * ii = np.empty((n,k),dtype='i') */ __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__empty); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__ascontiguousarray); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_INCREF(((PyObject *)__pyx_v_xx)); + PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_xx)); + __Pyx_GIVEREF(((PyObject *)__pyx_v_xx)); + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_7 = ((PyArrayObject *)__pyx_t_1); + { + __Pyx_BufFmt_StackElem __pyx_stack[1]; + __Pyx_SafeReleaseBuffer(&__pyx_bstruct_xx); + __pyx_t_8 = __Pyx_GetBufferAndValidate(&__pyx_bstruct_xx, (PyObject*)__pyx_t_7, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack); + if (unlikely(__pyx_t_8 < 0)) { + PyErr_Fetch(&__pyx_t_11, &__pyx_t_10, &__pyx_t_9); + if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_bstruct_xx, (PyObject*)__pyx_v_xx, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { + Py_XDECREF(__pyx_t_11); Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_9); + __Pyx_RaiseBufferFallbackError(); + } else { + PyErr_Restore(__pyx_t_11, __pyx_t_10, __pyx_t_9); + } + } + __pyx_bstride_0_xx = __pyx_bstruct_xx.strides[0]; __pyx_bstride_1_xx = __pyx_bstruct_xx.strides[1]; + __pyx_bshape_0_xx = __pyx_bstruct_xx.shape[0]; __pyx_bshape_1_xx = __pyx_bstruct_xx.shape[1]; + if (unlikely(__pyx_t_8 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + } + __pyx_t_7 = 0; + __Pyx_DECREF(((PyObject *)__pyx_v_xx)); + __pyx_v_xx = ((PyArrayObject *)__pyx_t_1); + __pyx_t_1 = 0; + + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":572 + * xx = np.reshape(x,(n,self.m)) + * xx = np.ascontiguousarray(xx) + * dd = np.empty((n,k),dtype=np.float) # <<<<<<<<<<<<<< + * dd.fill(infinity) + * ii = np.empty((n,k),dtype='i') + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_5 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__empty); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); __Pyx_INCREF(__pyx_v_n); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_n); __Pyx_GIVEREF(__pyx_v_n); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); __Pyx_GIVEREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(((PyObject *)__pyx_t_1)); - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + 
__pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_2); - __pyx_t_12 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_n_s__dtype), __pyx_t_12) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__dtype), __pyx_t_12) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyEval_CallObjectWithKeywords(__pyx_t_3, __pyx_t_5, ((PyObject *)__pyx_t_1)); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyEval_CallObjectWithKeywords(__pyx_t_5, __pyx_t_1, ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; - if (!(likely(((__pyx_t_12) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_12, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + if (!(likely(((__pyx_t_12) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_12, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __pyx_t_13 = ((PyArrayObject *)__pyx_t_12); { __Pyx_BufFmt_StackElem __pyx_stack[1]; __Pyx_SafeReleaseBuffer(&__pyx_bstruct_dd); __pyx_t_8 = __Pyx_GetBufferAndValidate(&__pyx_bstruct_dd, (PyObject*)__pyx_t_13, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack); if (unlikely(__pyx_t_8 < 0)) { - PyErr_Fetch(&__pyx_t_11, &__pyx_t_10, &__pyx_t_9); + PyErr_Fetch(&__pyx_t_9, &__pyx_t_10, &__pyx_t_11); if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_bstruct_dd, (PyObject*)__pyx_v_dd, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { - Py_XDECREF(__pyx_t_11); Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_9); + Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_11); __Pyx_RaiseBufferFallbackError(); } else { - PyErr_Restore(__pyx_t_11, __pyx_t_10, __pyx_t_9); + PyErr_Restore(__pyx_t_9, __pyx_t_10, __pyx_t_11); } } __pyx_bstride_0_dd = __pyx_bstruct_dd.strides[0]; __pyx_bstride_1_dd = __pyx_bstruct_dd.strides[1]; __pyx_bshape_0_dd = __pyx_bstruct_dd.shape[0]; __pyx_bshape_1_dd = __pyx_bstruct_dd.shape[1]; - if (unlikely(__pyx_t_8 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 571; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + 
if (unlikely(__pyx_t_8 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} } __pyx_t_13 = 0; __Pyx_DECREF(((PyObject *)__pyx_v_dd)); __pyx_v_dd = ((PyArrayObject *)__pyx_t_12); __pyx_t_12 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":572 - * xx = np.reshape(x,(n,self.m)) + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":573 + * xx = np.ascontiguousarray(xx) * dd = np.empty((n,k),dtype=np.float) * dd.fill(infinity) # <<<<<<<<<<<<<< * ii = np.empty((n,k),dtype='i') * ii.fill(self.n) */ - __pyx_t_12 = PyObject_GetAttr(((PyObject *)__pyx_v_dd), __pyx_n_s__fill); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyObject_GetAttr(((PyObject *)__pyx_v_dd), __pyx_n_s__fill); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __pyx_t_1 = PyFloat_FromDouble(__pyx_v_5scipy_7spatial_7ckdtree_infinity); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyObject_Call(__pyx_t_12, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 572; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyFloat_FromDouble(__pyx_v_5scipy_7spatial_7ckdtree_infinity); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_12, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":573 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":574 * dd = np.empty((n,k),dtype=np.float) * dd.fill(infinity) * ii = np.empty((n,k),dtype='i') # <<<<<<<<<<<<<< * ii.fill(self.n) * for c in range(n): */ - __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__empty); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if 
(unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__empty); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); __Pyx_INCREF(__pyx_v_n); PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_v_n); __Pyx_GIVEREF(__pyx_v_n); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_12); + PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_12); __Pyx_GIVEREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyDict_New(); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyDict_New(); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(((PyObject *)__pyx_t_12)); - if (PyDict_SetItem(__pyx_t_12, ((PyObject *)__pyx_n_s__dtype), ((PyObject *)__pyx_n_s__i)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_t_3 = PyEval_CallObjectWithKeywords(__pyx_t_5, __pyx_t_1, ((PyObject *)__pyx_t_12)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + if (PyDict_SetItem(__pyx_t_12, ((PyObject *)__pyx_n_s__dtype), ((PyObject *)__pyx_n_s__i)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_3, ((PyObject *)__pyx_t_12)); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __Pyx_DECREF(((PyObject *)__pyx_t_12)); __pyx_t_12 = 0; - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __pyx_t_14 = ((PyArrayObject *)__pyx_t_3); + if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} + __pyx_t_14 = ((PyArrayObject *)__pyx_t_5); { __Pyx_BufFmt_StackElem __pyx_stack[1]; __Pyx_SafeReleaseBuffer(&__pyx_bstruct_ii); __pyx_t_8 = __Pyx_GetBufferAndValidate(&__pyx_bstruct_ii, (PyObject*)__pyx_t_14, &__Pyx_TypeInfo_int, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack); if (unlikely(__pyx_t_8 < 0)) { - PyErr_Fetch(&__pyx_t_9, &__pyx_t_10, &__pyx_t_11); + PyErr_Fetch(&__pyx_t_11, &__pyx_t_10, &__pyx_t_9); if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_bstruct_ii, (PyObject*)__pyx_v_ii, &__Pyx_TypeInfo_int, PyBUF_FORMAT| PyBUF_STRIDES, 2, 0, __pyx_stack) == -1)) { - Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_11); + Py_XDECREF(__pyx_t_11); Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_9); __Pyx_RaiseBufferFallbackError(); } else { - PyErr_Restore(__pyx_t_9, __pyx_t_10, __pyx_t_11); + PyErr_Restore(__pyx_t_11, __pyx_t_10, __pyx_t_9); } } __pyx_bstride_0_ii = __pyx_bstruct_ii.strides[0]; __pyx_bstride_1_ii = __pyx_bstruct_ii.strides[1]; __pyx_bshape_0_ii = __pyx_bstruct_ii.shape[0]; __pyx_bshape_1_ii = __pyx_bstruct_ii.shape[1]; - if (unlikely(__pyx_t_8 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 573; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (unlikely(__pyx_t_8 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} } __pyx_t_14 = 0; __Pyx_DECREF(((PyObject *)__pyx_v_ii)); - __pyx_v_ii = ((PyArrayObject *)__pyx_t_3); - __pyx_t_3 = 0; + __pyx_v_ii = ((PyArrayObject *)__pyx_t_5); + __pyx_t_5 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":574 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":575 * dd.fill(infinity) * ii = np.empty((n,k),dtype='i') * ii.fill(self.n) # <<<<<<<<<<<<<< * for c in range(n): * self.__query( */ - __pyx_t_3 = PyObject_GetAttr(((PyObject *)__pyx_v_ii), __pyx_n_s__fill); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_12 = PyInt_FromLong(((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->n); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = PyObject_GetAttr(((PyObject *)__pyx_v_ii), __pyx_n_s__fill); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 575; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_12 = PyInt_FromLong(((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->n); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 575; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_12); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 575; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_12); __Pyx_GIVEREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyObject_Call(__pyx_t_3, __pyx_t_1, NULL); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 574; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 575; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":575 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":576 * ii = np.empty((n,k),dtype='i') * ii.fill(self.n) * for c in range(n): # <<<<<<<<<<<<<< * self.__query( * (dd.data)+c*k, */ - __pyx_t_15 = __Pyx_PyInt_AsLong(__pyx_v_n); if (unlikely((__pyx_t_15 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 575; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_15 = __Pyx_PyInt_AsLong(__pyx_v_n); if (unlikely((__pyx_t_15 == (long)-1) && PyErr_Occurred())) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 576; __pyx_clineno = __LINE__; goto __pyx_L1_error;} for (__pyx_t_8 = 0; __pyx_t_8 < __pyx_t_15; __pyx_t_8+=1) { __pyx_v_c = __pyx_t_8; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":583 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":584 * eps, * p, * distance_upper_bound) # <<<<<<<<<<<<<< @@ -5113,17 +5158,17 @@ ((struct __pyx_vtabstruct_5scipy_7spatial_7ckdtree_cKDTree *)((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->__pyx_vtab)->__query(((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self), (((double *)__pyx_v_dd->data) + (__pyx_v_c * __pyx_v_k)), (((int *)__pyx_v_ii->data) + (__pyx_v_c * __pyx_v_k)), (((double *)__pyx_v_xx->data) + (__pyx_v_c * ((struct __pyx_obj_5scipy_7spatial_7ckdtree_cKDTree *)__pyx_v_self)->m)), __pyx_v_k, __pyx_v_eps, __pyx_v_p, __pyx_v_distance_upper_bound); } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":584 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":585 * p, * distance_upper_bound) * if single: # <<<<<<<<<<<<<< * if k==1: * return dd[0,0], ii[0,0] */ - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_single); if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 584; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_single); if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 585; __pyx_clineno = __LINE__; goto __pyx_L1_error;} if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":585 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":586 * distance_upper_bound) * if single: * if k==1: # <<<<<<<<<<<<<< @@ -5133,7 +5178,7 @@ __pyx_t_4 = (__pyx_v_k == 1); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":586 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":587 * if single: * if k==1: * return dd[0,0], ii[0,0] # <<<<<<<<<<<<<< @@ -5154,9 +5199,9 @@ } else if (unlikely(__pyx_t_17 >= __pyx_bshape_1_dd)) __pyx_t_8 = 1; if (unlikely(__pyx_t_8 != -1)) { __Pyx_RaiseBufferIndexError(__pyx_t_8); - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 586; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} } - __pyx_t_12 = PyFloat_FromDouble((*__Pyx_BufPtrStrided2d(double *, __pyx_bstruct_dd.buf, __pyx_t_16, __pyx_bstride_0_dd, __pyx_t_17, __pyx_bstride_1_dd))); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 586; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = 
PyFloat_FromDouble((*__Pyx_BufPtrStrided2d(double *, __pyx_bstruct_dd.buf, __pyx_t_16, __pyx_bstride_0_dd, __pyx_t_17, __pyx_bstride_1_dd))); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); __pyx_t_18 = 0; __pyx_t_19 = 0; @@ -5171,26 +5216,26 @@ } else if (unlikely(__pyx_t_19 >= __pyx_bshape_1_ii)) __pyx_t_8 = 1; if (unlikely(__pyx_t_8 != -1)) { __Pyx_RaiseBufferIndexError(__pyx_t_8); - {__pyx_filename = __pyx_f[0]; __pyx_lineno = 586; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} } - __pyx_t_1 = PyInt_FromLong((*__Pyx_BufPtrStrided2d(int *, __pyx_bstruct_ii.buf, __pyx_t_18, __pyx_bstride_0_ii, __pyx_t_19, __pyx_bstride_1_ii))); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 586; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 586; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyInt_FromLong((*__Pyx_BufPtrStrided2d(int *, __pyx_bstruct_ii.buf, __pyx_t_18, __pyx_bstride_0_ii, __pyx_t_19, __pyx_bstride_1_ii))); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_12); + __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 587; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_12); __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); __pyx_t_12 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_3; __pyx_t_3 = 0; + __pyx_r = __pyx_t_5; + __pyx_t_5 = 0; goto __pyx_L0; goto __pyx_L12; } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":588 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":589 * return dd[0,0], ii[0,0] * else: * return dd[0], ii[0] # <<<<<<<<<<<<<< @@ -5198,18 +5243,18 @@ * if k==1: */ __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = __Pyx_GetItemInt(((PyObject *)__pyx_v_dd), 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 588; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = __Pyx_GetItemInt(((PyObject *)__pyx_v_dd), 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = __Pyx_GetItemInt(((PyObject *)__pyx_v_ii), 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __Pyx_GetItemInt(((PyObject *)__pyx_v_ii), 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 588; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 588; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 589; __pyx_clineno = __LINE__; goto __pyx_L1_error;} 
__Pyx_GOTREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_5); + __Pyx_GIVEREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_3); __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_5 = 0; __pyx_t_3 = 0; - __pyx_t_1 = 0; __pyx_r = __pyx_t_12; __pyx_t_12 = 0; goto __pyx_L0; @@ -5219,7 +5264,7 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":590 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":591 * return dd[0], ii[0] * else: * if k==1: # <<<<<<<<<<<<<< @@ -5229,7 +5274,7 @@ __pyx_t_4 = (__pyx_v_k == 1); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":591 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":592 * else: * if k==1: * return np.reshape(dd[...,0],retshape), np.reshape(ii[...,0],retshape) # <<<<<<<<<<<<<< @@ -5237,12 +5282,12 @@ * return np.reshape(dd,retshape+(k,)), np.reshape(ii,retshape+(k,)) */ __Pyx_XDECREF(__pyx_r); - __pyx_t_12 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_12, __pyx_n_s__reshape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_12, __pyx_n_s__reshape); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); __Pyx_INCREF(Py_Ellipsis); PyTuple_SET_ITEM(__pyx_t_12, 0, Py_Ellipsis); @@ -5250,27 +5295,27 @@ __Pyx_INCREF(__pyx_int_0); PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_int_0); __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_3 = PyObject_GetItem(((PyObject *)__pyx_v_dd), __pyx_t_12); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); + __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_dd), __pyx_t_12); if (!__pyx_t_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_5); + __Pyx_GIVEREF(__pyx_t_5); __Pyx_INCREF(__pyx_v_retshape); PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_v_retshape); __Pyx_GIVEREF(__pyx_v_retshape); - __pyx_t_3 = 0; 
- __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_12, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_5 = 0; + __pyx_t_5 = PyObject_Call(__pyx_t_3, __pyx_t_12, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_12, __pyx_n_s__reshape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_GetAttr(__pyx_t_12, __pyx_n_s__reshape); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); __Pyx_INCREF(Py_Ellipsis); PyTuple_SET_ITEM(__pyx_t_12, 0, Py_Ellipsis); @@ -5278,29 +5323,29 @@ __Pyx_INCREF(__pyx_int_0); PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_int_0); __Pyx_GIVEREF(__pyx_int_0); - __pyx_t_5 = PyObject_GetItem(((PyObject *)__pyx_v_ii), __pyx_t_12); if (!__pyx_t_5) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); + __pyx_t_1 = PyObject_GetItem(((PyObject *)__pyx_v_ii), __pyx_t_12); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); __Pyx_INCREF(__pyx_v_retshape); PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_v_retshape); __Pyx_GIVEREF(__pyx_v_retshape); - __pyx_t_5 = 0; - __pyx_t_5 = PyObject_Call(__pyx_t_1, __pyx_t_12, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_12, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; 
__Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 591; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 592; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_t_5); __Pyx_GIVEREF(__pyx_t_5); - __pyx_t_3 = 0; + PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); __pyx_t_5 = 0; + __pyx_t_1 = 0; __pyx_r = __pyx_t_12; __pyx_t_12 = 0; goto __pyx_L0; @@ -5308,77 +5353,77 @@ } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":593 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":594 * return np.reshape(dd[...,0],retshape), np.reshape(ii[...,0],retshape) * else: * return np.reshape(dd,retshape+(k,)), np.reshape(ii,retshape+(k,)) # <<<<<<<<<<<<<< * */ __Pyx_XDECREF(__pyx_r); - __pyx_t_12 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_12, __pyx_n_s__reshape); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_5); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_12, __pyx_n_s__reshape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_12); + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_12); __Pyx_GIVEREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyNumber_Add(__pyx_v_retshape, __pyx_t_3); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyNumber_Add(__pyx_v_retshape, __pyx_t_5); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyTuple_New(2); if (unlikely(!__pyx_t_5)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); __Pyx_INCREF(((PyObject *)__pyx_v_dd)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_dd)); + PyTuple_SET_ITEM(__pyx_t_5, 0, ((PyObject *)__pyx_v_dd)); __Pyx_GIVEREF(((PyObject *)__pyx_v_dd)); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_12); + PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_12); __Pyx_GIVEREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyObject_Call(__pyx_t_5, __pyx_t_3, NULL); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_12 = PyObject_Call(__pyx_t_1, __pyx_t_5, NULL); if (unlikely(!__pyx_t_12)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_12); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_5 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_1 = PyObject_GetAttr(__pyx_t_5, __pyx_n_s__reshape); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__reshape); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_5); + __Pyx_GIVEREF(__pyx_t_5); + __pyx_t_5 = 0; + __pyx_t_5 = PyNumber_Add(__pyx_v_retshape, __pyx_t_3); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = PyInt_FromLong(__pyx_v_k); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyNumber_Add(__pyx_v_retshape, __pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); 
__Pyx_INCREF(((PyObject *)__pyx_v_ii)); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_v_ii)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_ii)); __Pyx_GIVEREF(((PyObject *)__pyx_v_ii)); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __pyx_t_3 = 0; - __pyx_t_3 = PyObject_Call(__pyx_t_5, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_5); + __Pyx_GIVEREF(__pyx_t_5); + __pyx_t_5 = 0; + __pyx_t_5 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 593; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_12); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 594; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_12); __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_5); + __Pyx_GIVEREF(__pyx_t_5); __pyx_t_12 = 0; + __pyx_t_5 = 0; + __pyx_r = __pyx_t_3; __pyx_t_3 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; goto __pyx_L0; } __pyx_L13:; @@ -5420,7 +5465,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":187 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":187 * # experimental exception made for __getbuffer__ and __releasebuffer__ * # -- the details of this may change. 
* def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< @@ -5456,7 +5501,7 @@ __Pyx_GIVEREF(__pyx_v_info->obj); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":193 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":193 * # of flags * cdef int copy_shape, i, ndim * cdef int endian_detector = 1 # <<<<<<<<<<<<<< @@ -5465,7 +5510,7 @@ */ __pyx_v_endian_detector = 1; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":194 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":194 * cdef int copy_shape, i, ndim * cdef int endian_detector = 1 * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< @@ -5474,7 +5519,7 @@ */ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":196 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":196 * cdef bint little_endian = ((&endian_detector)[0] != 0) * * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<< @@ -5483,7 +5528,7 @@ */ __pyx_v_ndim = PyArray_NDIM(((PyArrayObject *)__pyx_v_self)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":198 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":198 * ndim = PyArray_NDIM(self) * * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< @@ -5493,7 +5538,7 @@ __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":199 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":199 * * if sizeof(npy_intp) != sizeof(Py_ssize_t): * copy_shape = 1 # <<<<<<<<<<<<<< @@ -5505,7 +5550,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":201 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":201 * copy_shape = 1 * else: * copy_shape = 0 # <<<<<<<<<<<<<< @@ -5516,7 +5561,7 @@ } __pyx_L5:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":203 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":203 * copy_shape = 0 * * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< @@ -5526,7 +5571,7 @@ __pyx_t_1 = ((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":204 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":204 * * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<< @@ -5540,7 +5585,7 @@ } if (__pyx_t_3) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":205 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":205 * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< @@ -5562,7 +5607,7 @@ } __pyx_L6:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":207 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":207 * raise ValueError(u"ndarray is not C contiguous") * * if ((flags & 
pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< @@ -5572,7 +5617,7 @@ __pyx_t_3 = ((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS); if (__pyx_t_3) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":208 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":208 * * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<< @@ -5586,7 +5631,7 @@ } if (__pyx_t_2) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":209 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":209 * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< @@ -5608,7 +5653,7 @@ } __pyx_L7:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":211 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":211 * raise ValueError(u"ndarray is not Fortran contiguous") * * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<< @@ -5617,7 +5662,7 @@ */ __pyx_v_info->buf = PyArray_DATA(((PyArrayObject *)__pyx_v_self)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":212 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":212 * * info.buf = PyArray_DATA(self) * info.ndim = ndim # <<<<<<<<<<<<<< @@ -5626,7 +5671,7 @@ */ __pyx_v_info->ndim = __pyx_v_ndim; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":213 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":213 * info.buf = PyArray_DATA(self) * info.ndim = ndim * if copy_shape: # <<<<<<<<<<<<<< @@ -5636,7 +5681,7 @@ __pyx_t_6 = __pyx_v_copy_shape; if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":216 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":216 * # Allocate new buffer for strides and shape info. This is allocated * # as one block, strides first. * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) # <<<<<<<<<<<<<< @@ -5645,7 +5690,7 @@ */ __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * __pyx_v_ndim) * 2))); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":217 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":217 * # as one block, strides first. 
* info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) * info.shape = info.strides + ndim # <<<<<<<<<<<<<< @@ -5654,7 +5699,7 @@ */ __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":218 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":218 * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) * info.shape = info.strides + ndim * for i in range(ndim): # <<<<<<<<<<<<<< @@ -5665,7 +5710,7 @@ for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) { __pyx_v_i = __pyx_t_7; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":219 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":219 * info.shape = info.strides + ndim * for i in range(ndim): * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<< @@ -5674,7 +5719,7 @@ */ (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))[__pyx_v_i]); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":220 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":220 * for i in range(ndim): * info.strides[i] = PyArray_STRIDES(self)[i] * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<< @@ -5687,7 +5732,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":222 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":222 * info.shape[i] = PyArray_DIMS(self)[i] * else: * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<< @@ -5696,7 +5741,7 @@ */ __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":223 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":223 * else: * info.strides = PyArray_STRIDES(self) * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<< @@ -5707,7 +5752,7 @@ } __pyx_L8:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":224 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":224 * info.strides = PyArray_STRIDES(self) * info.shape = PyArray_DIMS(self) * info.suboffsets = NULL # <<<<<<<<<<<<<< @@ -5716,7 +5761,7 @@ */ __pyx_v_info->suboffsets = NULL; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":225 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":225 * info.shape = PyArray_DIMS(self) * info.suboffsets = NULL * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<< @@ -5725,7 +5770,7 @@ */ __pyx_v_info->itemsize = PyArray_ITEMSIZE(((PyArrayObject *)__pyx_v_self)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":226 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":226 * info.suboffsets = NULL * info.itemsize = PyArray_ITEMSIZE(self) * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<< @@ -5734,7 +5779,7 @@ */ __pyx_v_info->readonly = (!PyArray_ISWRITEABLE(((PyArrayObject *)__pyx_v_self))); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":229 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":229 * * cdef int t * cdef char* f = NULL # <<<<<<<<<<<<<< @@ -5743,7 +5788,7 @@ */ __pyx_v_f = NULL; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":230 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":230 * cdef int t * cdef char* f = NULL * cdef dtype descr = self.descr # <<<<<<<<<<<<<< @@ -5753,7 +5798,7 @@ __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_v_self)->descr)); __pyx_v_descr = ((PyArrayObject *)__pyx_v_self)->descr; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":234 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":234 * cdef int offset * * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<< @@ -5762,7 +5807,7 @@ */ __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":236 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":236 * cdef bint hasfields = PyDataType_HASFIELDS(descr) * * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< @@ -5778,7 +5823,7 @@ } if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":238 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":238 * if not hasfields and not copy_shape: * # do not call releasebuffer * info.obj = None # <<<<<<<<<<<<<< @@ -5794,7 +5839,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":241 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":241 * else: * # need to call releasebuffer * info.obj = self # <<<<<<<<<<<<<< @@ -5809,7 +5854,7 @@ } __pyx_L11:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":243 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":243 * info.obj = self * * if not hasfields: # <<<<<<<<<<<<<< @@ -5819,7 +5864,7 @@ __pyx_t_1 = (!__pyx_v_hasfields); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":244 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":244 * * if not hasfields: * t = descr.type_num # <<<<<<<<<<<<<< @@ -5828,7 +5873,7 @@ */ __pyx_v_t = __pyx_v_descr->type_num; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":245 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":245 * if not hasfields: * t = descr.type_num * if ((descr.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< @@ -5843,7 +5888,7 @@ } if (!__pyx_t_2) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":246 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":246 * t = descr.type_num * if ((descr.byteorder == '>' and little_endian) or * (descr.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< @@ -5863,7 +5908,7 @@ } if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":247 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":247 * if ((descr.byteorder == '>' and little_endian) or * (descr.byteorder == '<' and not little_endian)): * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< @@ -5885,7 +5930,7 @@ } __pyx_L13:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":248 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":248 * (descr.byteorder == '<' and not little_endian)): * raise ValueError(u"Non-native byte order not supported") * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<< @@ 
-5898,7 +5943,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":249 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":249 * raise ValueError(u"Non-native byte order not supported") * if t == NPY_BYTE: f = "b" * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<< @@ -5911,7 +5956,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":250 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":250 * if t == NPY_BYTE: f = "b" * elif t == NPY_UBYTE: f = "B" * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<< @@ -5924,7 +5969,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":251 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":251 * elif t == NPY_UBYTE: f = "B" * elif t == NPY_SHORT: f = "h" * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<< @@ -5937,7 +5982,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":252 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":252 * elif t == NPY_SHORT: f = "h" * elif t == NPY_USHORT: f = "H" * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<< @@ -5950,7 +5995,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":253 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":253 * elif t == NPY_USHORT: f = "H" * elif t == NPY_INT: f = "i" * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<< @@ -5963,7 +6008,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":254 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":254 * elif t == NPY_INT: f = "i" * elif t == NPY_UINT: f = "I" * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<< @@ -5976,7 +6021,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":255 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":255 * elif t == NPY_UINT: f = "I" * elif t == NPY_LONG: f = "l" * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<< @@ -5989,7 +6034,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":256 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":256 * elif t == NPY_LONG: f = "l" * elif t == NPY_ULONG: f = "L" * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<< @@ -6002,7 +6047,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":257 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":257 * elif t == NPY_ULONG: f = "L" * elif t == NPY_LONGLONG: f = "q" * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<< @@ -6015,7 +6060,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":258 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":258 * elif t == NPY_LONGLONG: f = "q" * elif t == NPY_ULONGLONG: f = "Q" * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<< @@ -6028,7 +6073,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":259 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":259 * elif t == NPY_ULONGLONG: f = "Q" * elif t == NPY_FLOAT: f = "f" * elif t == NPY_DOUBLE: f = "d" # 
<<<<<<<<<<<<<< @@ -6041,7 +6086,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":260 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":260 * elif t == NPY_FLOAT: f = "f" * elif t == NPY_DOUBLE: f = "d" * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<< @@ -6054,7 +6099,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":261 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":261 * elif t == NPY_DOUBLE: f = "d" * elif t == NPY_LONGDOUBLE: f = "g" * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<< @@ -6067,7 +6112,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":262 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":262 * elif t == NPY_LONGDOUBLE: f = "g" * elif t == NPY_CFLOAT: f = "Zf" * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<< @@ -6080,7 +6125,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":263 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":263 * elif t == NPY_CFLOAT: f = "Zf" * elif t == NPY_CDOUBLE: f = "Zd" * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<< @@ -6093,7 +6138,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":264 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":264 * elif t == NPY_CDOUBLE: f = "Zd" * elif t == NPY_CLONGDOUBLE: f = "Zg" * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<< @@ -6107,7 +6152,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":266 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":266 * elif t == NPY_OBJECT: f = "O" * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< @@ -6133,7 +6178,7 @@ } __pyx_L14:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":267 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":267 * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) * info.format = f # <<<<<<<<<<<<<< @@ -6142,7 +6187,7 @@ */ __pyx_v_info->format = __pyx_v_f; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":268 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":268 * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) * info.format = f * return # <<<<<<<<<<<<<< @@ -6155,7 +6200,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":270 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":270 * return * else: * info.format = stdlib.malloc(_buffer_format_string_len) # <<<<<<<<<<<<<< @@ -6164,7 +6209,7 @@ */ __pyx_v_info->format = ((char *)malloc(255)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":271 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":271 * else: * info.format = stdlib.malloc(_buffer_format_string_len) * info.format[0] = '^' # Native data types, manual alignment # <<<<<<<<<<<<<< @@ -6173,7 +6218,7 @@ */ (__pyx_v_info->format[0]) = '^'; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":272 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":272 * info.format = stdlib.malloc(_buffer_format_string_len) * info.format[0] = '^' # Native data types, manual alignment * offset = 0 # <<<<<<<<<<<<<< @@ -6182,7 +6227,7 @@ */ __pyx_v_offset = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":275 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":275 * f = _util_dtypestring(descr, info.format + 1, * info.format + _buffer_format_string_len, * &offset) # <<<<<<<<<<<<<< @@ -6192,7 +6237,7 @@ __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 255), (&__pyx_v_offset)); if (unlikely(__pyx_t_9 == NULL)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __pyx_v_f = __pyx_t_9; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":276 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":276 * info.format + _buffer_format_string_len, * &offset) * f[0] = 0 # Terminate format string # <<<<<<<<<<<<<< @@ -6225,7 +6270,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":278 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":278 * f[0] = 0 # Terminate format string * * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< @@ -6239,7 +6284,7 @@ __Pyx_RefNannySetupContext("__releasebuffer__"); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":279 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":279 * * def __releasebuffer__(ndarray self, Py_buffer* info): * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< @@ -6249,7 +6294,7 @@ __pyx_t_1 = PyArray_HASFIELDS(((PyArrayObject *)__pyx_v_self)); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":280 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":280 * def __releasebuffer__(ndarray self, Py_buffer* info): * if PyArray_HASFIELDS(self): * stdlib.free(info.format) # <<<<<<<<<<<<<< @@ -6261,7 +6306,7 @@ } __pyx_L5:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":281 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":281 * if PyArray_HASFIELDS(self): * stdlib.free(info.format) * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< @@ -6271,7 +6316,7 @@ __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":282 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":282 * stdlib.free(info.format) * if sizeof(npy_intp) != sizeof(Py_ssize_t): * stdlib.free(info.strides) # <<<<<<<<<<<<<< @@ -6287,7 +6332,7 @@ __Pyx_RefNannyFinishContext(); } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":755 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":755 * ctypedef npy_cdouble complex_t * * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< @@ -6300,7 +6345,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew1"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":756 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":756 * * cdef inline object PyArray_MultiIterNew1(a): * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< @@ -6326,7 +6371,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":758 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":758 * return PyArray_MultiIterNew(1, a) * * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< @@ -6339,7 +6384,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew2"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":759 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":759 * * cdef inline object PyArray_MultiIterNew2(a, b): * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< @@ -6365,7 +6410,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":761 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":761 * return PyArray_MultiIterNew(2, a, b) * * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< @@ -6378,7 +6423,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew3"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":762 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":762 * * cdef inline object PyArray_MultiIterNew3(a, b, c): * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< @@ -6404,7 +6449,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":764 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":764 * return PyArray_MultiIterNew(3, a, b, c) * * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< @@ -6417,7 +6462,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew4"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":765 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":765 * * cdef inline object PyArray_MultiIterNew4(a, b, c, d): * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< @@ -6443,7 +6488,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":767 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":767 * return PyArray_MultiIterNew(4, a, b, c, d) * * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< @@ -6456,7 +6501,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew5"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":768 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":768 * * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< @@ -6482,7 +6527,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":770 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":770 * return PyArray_MultiIterNew(5, a, b, c, d, e) * * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< @@ -6517,7 +6562,7 @@ __pyx_v_new_offset = Py_None; __Pyx_INCREF(Py_None); __pyx_v_t = Py_None; __Pyx_INCREF(Py_None); - /* 
"/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":777 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":777 * cdef int delta_offset * cdef tuple i * cdef int endian_detector = 1 # <<<<<<<<<<<<<< @@ -6526,7 +6571,7 @@ */ __pyx_v_endian_detector = 1; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":778 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":778 * cdef tuple i * cdef int endian_detector = 1 * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< @@ -6535,7 +6580,7 @@ */ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":781 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":781 * cdef tuple fields * * for childname in descr.names: # <<<<<<<<<<<<<< @@ -6554,7 +6599,7 @@ __pyx_v_childname = __pyx_t_3; __pyx_t_3 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":782 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":782 * * for childname in descr.names: * fields = descr.fields[childname] # <<<<<<<<<<<<<< @@ -6568,7 +6613,7 @@ __pyx_v_fields = ((PyObject *)__pyx_t_3); __pyx_t_3 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":783 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":783 * for childname in descr.names: * fields = descr.fields[childname] * child, new_offset = fields # <<<<<<<<<<<<<< @@ -6591,7 +6636,7 @@ {__pyx_filename = __pyx_f[1]; __pyx_lineno = 783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":785 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":785 * child, new_offset = fields * * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< @@ -6616,7 +6661,7 @@ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":786 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":786 * * if (end - f) - (new_offset - offset[0]) < 15: * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< @@ -6638,7 +6683,7 @@ } __pyx_L5:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":788 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":788 * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") * * if ((child.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< @@ -6653,7 +6698,7 @@ } if (!__pyx_t_7) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":789 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":789 * * if ((child.byteorder == '>' and little_endian) or * (child.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< @@ -6673,7 +6718,7 @@ } if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":790 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":790 * if ((child.byteorder == '>' and little_endian) or * (child.byteorder == '<' and not little_endian)): * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< @@ -6695,7 +6740,7 @@ } __pyx_L6:; - /* 
"/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":800 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":800 * * # Output padding bytes * while offset[0] < new_offset: # <<<<<<<<<<<<<< @@ -6712,7 +6757,7 @@ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; if (!__pyx_t_6) break; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":801 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":801 * # Output padding bytes * while offset[0] < new_offset: * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<< @@ -6721,7 +6766,7 @@ */ (__pyx_v_f[0]) = 120; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":802 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":802 * while offset[0] < new_offset: * f[0] = 120 # "x"; pad byte * f += 1 # <<<<<<<<<<<<<< @@ -6730,7 +6775,7 @@ */ __pyx_v_f += 1; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":803 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":803 * f[0] = 120 # "x"; pad byte * f += 1 * offset[0] += 1 # <<<<<<<<<<<<<< @@ -6740,7 +6785,7 @@ (__pyx_v_offset[0]) += 1; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":805 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":805 * offset[0] += 1 * * offset[0] += child.itemsize # <<<<<<<<<<<<<< @@ -6749,7 +6794,7 @@ */ (__pyx_v_offset[0]) += __pyx_v_child->elsize; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":807 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":807 * offset[0] += child.itemsize * * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< @@ -6759,7 +6804,7 @@ __pyx_t_6 = (!PyDataType_HASFIELDS(__pyx_v_child)); if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":808 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":808 * * if not PyDataType_HASFIELDS(child): * t = child.type_num # <<<<<<<<<<<<<< @@ -6772,7 +6817,7 @@ __pyx_v_t = __pyx_t_3; __pyx_t_3 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":809 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":809 * if not PyDataType_HASFIELDS(child): * t = child.type_num * if end - f < 5: # <<<<<<<<<<<<<< @@ -6782,7 +6827,7 @@ __pyx_t_6 = ((__pyx_v_end - __pyx_v_f) < 5); if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":810 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":810 * t = child.type_num * if end - f < 5: * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< @@ -6804,7 +6849,7 @@ } __pyx_L10:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":813 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":813 * * # Until ticket #99 is fixed, use integers to avoid warnings * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<< @@ -6823,7 +6868,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":814 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":814 * # Until ticket #99 is fixed, use integers to avoid warnings * if t == NPY_BYTE: f[0] = 98 #"b" * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<< @@ -6842,7 +6887,7 @@ goto 
__pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":815 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":815 * if t == NPY_BYTE: f[0] = 98 #"b" * elif t == NPY_UBYTE: f[0] = 66 #"B" * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<< @@ -6861,7 +6906,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":816 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":816 * elif t == NPY_UBYTE: f[0] = 66 #"B" * elif t == NPY_SHORT: f[0] = 104 #"h" * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<< @@ -6880,7 +6925,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":817 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":817 * elif t == NPY_SHORT: f[0] = 104 #"h" * elif t == NPY_USHORT: f[0] = 72 #"H" * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<< @@ -6899,7 +6944,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":818 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":818 * elif t == NPY_USHORT: f[0] = 72 #"H" * elif t == NPY_INT: f[0] = 105 #"i" * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<< @@ -6918,7 +6963,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":819 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":819 * elif t == NPY_INT: f[0] = 105 #"i" * elif t == NPY_UINT: f[0] = 73 #"I" * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<< @@ -6937,7 +6982,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":820 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":820 * elif t == NPY_UINT: f[0] = 73 #"I" * elif t == NPY_LONG: f[0] = 108 #"l" * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<< @@ -6956,7 +7001,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":821 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":821 * elif t == NPY_LONG: f[0] = 108 #"l" * elif t == NPY_ULONG: f[0] = 76 #"L" * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<< @@ -6975,7 +7020,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":822 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":822 * elif t == NPY_ULONG: f[0] = 76 #"L" * elif t == NPY_LONGLONG: f[0] = 113 #"q" * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<< @@ -6994,7 +7039,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":823 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":823 * elif t == NPY_LONGLONG: f[0] = 113 #"q" * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<< @@ -7013,7 +7058,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":824 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":824 * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" * elif t == NPY_FLOAT: f[0] = 102 #"f" * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<< @@ -7032,7 +7077,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":825 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":825 * elif t == NPY_FLOAT: f[0] = 102 #"f" * elif t == NPY_DOUBLE: f[0] = 100 #"d" * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<< @@ -7051,7 +7096,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":826 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":826 * elif t == NPY_DOUBLE: f[0] = 100 #"d" * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<< @@ -7072,7 +7117,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":827 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":827 * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<< @@ -7093,7 +7138,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":828 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":828 * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<< @@ -7114,7 +7159,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":829 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":829 * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<< @@ -7134,7 +7179,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":831 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":831 * elif t == NPY_OBJECT: f[0] = 79 #"O" * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< @@ -7157,7 +7202,7 @@ } __pyx_L11:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":832 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":832 * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) * f += 1 # <<<<<<<<<<<<<< @@ -7169,7 +7214,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":836 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":836 * # Cython ignores struct boundary information ("T{...}"), * # so don't output it * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<< @@ -7183,7 +7228,7 @@ } __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":837 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":837 * # so don't output it * f = _util_dtypestring(child, f, end, offset) * return f # <<<<<<<<<<<<<< @@ -7213,7 +7258,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":952 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":952 * * * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< @@ -7228,7 +7273,7 @@ __Pyx_INCREF((PyObject *)__pyx_v_arr); __Pyx_INCREF(__pyx_v_base); - /* 
"/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":954 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":954 * cdef inline void set_array_base(ndarray arr, object base): * cdef PyObject* baseptr * if base is None: # <<<<<<<<<<<<<< @@ -7238,7 +7283,7 @@ __pyx_t_1 = (__pyx_v_base == Py_None); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":955 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":955 * cdef PyObject* baseptr * if base is None: * baseptr = NULL # <<<<<<<<<<<<<< @@ -7250,7 +7295,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":957 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":957 * baseptr = NULL * else: * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<< @@ -7259,7 +7304,7 @@ */ Py_INCREF(__pyx_v_base); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":958 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":958 * else: * Py_INCREF(base) # important to do this before decref below! * baseptr = base # <<<<<<<<<<<<<< @@ -7270,7 +7315,7 @@ } __pyx_L3:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":959 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":959 * Py_INCREF(base) # important to do this before decref below! * baseptr = base * Py_XDECREF(arr.base) # <<<<<<<<<<<<<< @@ -7279,7 +7324,7 @@ */ Py_XDECREF(__pyx_v_arr->base); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":960 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":960 * baseptr = base * Py_XDECREF(arr.base) * arr.base = baseptr # <<<<<<<<<<<<<< @@ -7293,7 +7338,7 @@ __Pyx_RefNannyFinishContext(); } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":962 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":962 * arr.base = baseptr * * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< @@ -7307,7 +7352,7 @@ __Pyx_RefNannySetupContext("get_array_base"); __Pyx_INCREF((PyObject *)__pyx_v_arr); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":963 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":963 * * cdef inline object get_array_base(ndarray arr): * if arr.base is NULL: # <<<<<<<<<<<<<< @@ -7317,7 +7362,7 @@ __pyx_t_1 = (__pyx_v_arr->base == NULL); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":964 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":964 * cdef inline object get_array_base(ndarray arr): * if arr.base is NULL: * return None # <<<<<<<<<<<<<< @@ -7332,7 +7377,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":966 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":966 * return None * else: * return arr.base # <<<<<<<<<<<<<< @@ -7799,7 +7844,7 @@ /*--- Function import code ---*/ /*--- Execution code ---*/ - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":3 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":3 * # Copyright Anne M. 
Archibald 2008 * # Released under the scipy license * import numpy as np # <<<<<<<<<<<<<< @@ -7811,7 +7856,7 @@ if (PyObject_SetAttr(__pyx_m, __pyx_n_s__np, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 3; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":7 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":7 * cimport stdlib * * import kdtree # <<<<<<<<<<<<<< @@ -7823,7 +7868,7 @@ if (PyObject_SetAttr(__pyx_m, __pyx_n_s__kdtree, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 7; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":9 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":9 * import kdtree * * cdef double infinity = np.inf # <<<<<<<<<<<<<< @@ -7839,7 +7884,7 @@ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; __pyx_v_5scipy_7spatial_7ckdtree_infinity = __pyx_t_3; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":517 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":517 * * def query(cKDTree self, object x, int k=1, double eps=0, double p=2, * double distance_upper_bound=infinity): # <<<<<<<<<<<<<< @@ -7848,7 +7893,7 @@ */ __pyx_k_4 = __pyx_v_5scipy_7spatial_7ckdtree_infinity; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/spatial/ckdtree.pyx":1 + /* "/Users/mb312/dev_trees/scipy-work/scipy/spatial/ckdtree.pyx":1 * # Copyright Anne M. Archibald 2008 # <<<<<<<<<<<<<< * # Released under the scipy license * import numpy as np @@ -7878,7 +7923,7 @@ if (PyObject_SetAttr(__pyx_m, __pyx_n_s____test__, ((PyObject *)__pyx_t_2)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/stdlib.pxd":2 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/stdlib.pxd":2 * * cdef extern from "stdlib.h" nogil: # <<<<<<<<<<<<<< * void free(void *ptr) diff -Nru python-scipy-0.7.2+dfsg1/scipy/spatial/ckdtree.pyx python-scipy-0.8.0+dfsg1/scipy/spatial/ckdtree.pyx --- python-scipy-0.7.2+dfsg1/scipy/spatial/ckdtree.pyx 2008-11-10 10:34:25.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/spatial/ckdtree.pyx 2009-02-22 04:05:03.000000000 +0000 @@ -568,6 +568,7 @@ retshape = np.shape(x)[:-1] n = np.prod(retshape) xx = np.reshape(x,(n,self.m)) + xx = np.ascontiguousarray(xx) dd = np.empty((n,k),dtype=np.float) dd.fill(infinity) ii = np.empty((n,k),dtype='i') diff -Nru python-scipy-0.7.2+dfsg1/scipy/spatial/distance.py python-scipy-0.8.0+dfsg1/scipy/spatial/distance.py --- python-scipy-0.7.2+dfsg1/scipy/spatial/distance.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/spatial/distance.py 2010-07-26 15:48:35.000000000 +0100 @@ -1075,8 +1075,7 @@ n = s[1] dm = np.zeros((m * (m - 1) / 2,), dtype=np.double) - mtype = type(metric) - if mtype is types.FunctionType: + if callable(metric): k = 0 if metric == minkowski: for i in xrange(0, m - 1): @@ -1104,7 +1103,7 @@ dm[k] = metric(X[i, :], X[j, :]) k = k + 1 - elif mtype is types.StringType: + elif isinstance(metric,basestring): mstr = metric.lower() #if X.dtype != np.double and \ @@ -1322,8 +1321,16 @@ s = X.shape + if force.lower() == 'tomatrix': + if len(s) != 1: + raise ValueError("Forcing 'tomatrix' but input X is not a distance vector.") + 
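As an illustrative aside (not part of the patch): the pdist() hunk above replaces exact type() tests with callable() and isinstance(..., basestring), and the same change is applied to cdist() further below. A minimal Python 2 sketch of why the new checks are broader; the Metric class here is a made-up example, not from scipy:

    import types

    class Metric(object):                  # any callable object, not just a plain function
        def __call__(self, u, v):
            return abs(u - v).sum()

    m = Metric()
    print type(m) is types.FunctionType    # False: the old check rejects callable objects
    print callable(m)                      # True:  the new check accepts them

    name = u'euclidean'
    print type(name) is types.StringType   # False: the old check rejects unicode metric names
    print isinstance(name, basestring)     # True:  the new check accepts str and unicode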
elif force.lower() == 'tovector': + if len(s) != 2: + raise ValueError("Forcing 'tovector' but input X is not a distance matrix.") + + # X = squareform(v) - if len(s) == 1 and force != 'tomatrix': + if len(s) == 1: if X.shape[0] == 0: return np.zeros((1,1), dtype=np.double) @@ -1349,16 +1356,11 @@ # Return the distance matrix. M = M + M.transpose() return M - elif len(s) != 1 and force.lower() == 'tomatrix': - raise ValueError("Forcing 'tomatrix' but input X is not a distance vector.") - elif len(s) == 2 and force.lower() != 'tovector': + elif len(s) == 2: if s[0] != s[1]: raise ValueError('The matrix argument must be square.') if checks: - if np.sum(np.sum(X == X.transpose())) != np.product(X.shape): - raise ValueError('The distance matrix array must be symmetrical.') - if (X.diagonal() != 0).any(): - raise ValueError('The distance matrix array must have zeros along the diagonal.') + is_valid_dm(X, throw=True, name='X') # One-side of the dimensions is set here. d = s[0] @@ -1376,8 +1378,6 @@ # Convert the vector to squareform. _distance_wrap.to_vector_from_squareform_wrap(X, v) return v - elif len(s) != 2 and force.lower() == 'tomatrix': - raise ValueError("Forcing 'tomatrix' but input X is not a distance vector.") else: raise ValueError('The first argument must be one or two dimensional array. A %d-dimensional array is not permitted' % len(s)) @@ -1820,8 +1820,7 @@ n = s[1] dm = np.zeros((mA, mB), dtype=np.double) - mtype = type(metric) - if mtype is types.FunctionType: + if callable(metric): if metric == minkowski: for i in xrange(0, mA): for j in xrange(0, mB): @@ -1842,7 +1841,7 @@ for i in xrange(0, mA): for j in xrange(0, mB): dm[i, j] = metric(XA[i, :], XB[j, :]) - elif mtype is types.StringType: + elif isinstance(metric,basestring): mstr = metric.lower() #if XA.dtype != np.double and \ diff -Nru python-scipy-0.7.2+dfsg1/scipy/spatial/kdtree.py python-scipy-0.8.0+dfsg1/scipy/spatial/kdtree.py --- python-scipy-0.7.2+dfsg1/scipy/spatial/kdtree.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/spatial/kdtree.py 2010-07-26 15:48:35.000000000 +0100 @@ -79,14 +79,15 @@ class KDTree(object): - """kd-tree for quick nearest-neighbor lookup + """ + kd-tree for quick nearest-neighbor lookup This class provides an index into a set of k-dimensional points which can be used to rapidly look up the nearest neighbors of any point. The algorithm used is described in Maneewongvatana and Mount 1999. - The general idea is that the kd-tree is a binary trie, each of whose + The general idea is that the kd-tree is a binary tree, each of whose nodes represents an axis-aligned hyperrectangle. Each node specifies an axis and splits the set of points based on whether their coordinate along that axis is greater than or less than a particular value. @@ -108,6 +109,7 @@ and with other kd-trees. These do use a reasonably efficient algorithm, but the kd-tree is not necessarily the best data structure for this sort of calculation. + """ def __init__(self, data, leafsize=10): @@ -273,10 +275,11 @@ return sorted([((-d)**(1./p),i) for (d,i) in neighbors]) def query(self, x, k=1, eps=0, p=2, distance_upper_bound=np.inf): - """query the kd-tree for nearest neighbors + """ + query the kd-tree for nearest neighbors - Parameters: - =========== + Parameters + ---------- x : array-like, last dimension self.m An array of points to query. @@ -297,8 +300,8 @@ queries, it may help to supply the distance to the nearest neighbor of the most recent point. 
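A small usage sketch of the reorganized squareform() shown above (illustrative only, assuming scipy 0.8.0 is importable; the sample data is made up). The force argument is now validated up front, and the symmetry and zero-diagonal checks are delegated to is_valid_dm():

    import numpy as np
    from scipy.spatial.distance import squareform, is_valid_dm

    v = np.array([1.0, 2.0, 3.0])          # condensed distances for 3 points
    D = squareform(v)                       # vector -> 3x3 symmetric matrix, zero diagonal
    print is_valid_dm(D)                    # True; squareform(D, checks=True) now relies on this
    print squareform(D)                     # matrix -> vector, back to [ 1.  2.  3.]
    try:
        squareform(D, force='tomatrix')     # D is already 2-D, so this now fails fast
    except ValueError, e:
        print e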
- Returns: - ======== + Returns + ------- d : array of floats The distances to the nearest neighbors. @@ -311,6 +314,48 @@ i : array of integers The locations of the neighbors in self.data. i is the same shape as d. + + Examples + -------- + + >>> from scipy.spatial import KDTree + >>> x, y = np.mgrid[0:5, 2:8] + >>> tree = KDTree(zip(x.ravel(), y.ravel())) + >>> tree.data + array([[0, 2], + [0, 3], + [0, 4], + [0, 5], + [0, 6], + [0, 7], + [1, 2], + [1, 3], + [1, 4], + [1, 5], + [1, 6], + [1, 7], + [2, 2], + [2, 3], + [2, 4], + [2, 5], + [2, 6], + [2, 7], + [3, 2], + [3, 3], + [3, 4], + [3, 5], + [3, 6], + [3, 7], + [4, 2], + [4, 3], + [4, 4], + [4, 5], + [4, 6], + [4, 7]]) + >>> pts = np.array([[0, 0], [2.1, 2.9]]) + >>> tree.query(pts) + (array([ 2. , 0.14142136]), array([ 0, 13])) + """ x = np.asarray(x) if np.shape(x)[-1] != self.m: @@ -501,6 +546,103 @@ other.tree, Rectangle(other.maxes, other.mins)) return results + def query_pairs(self, r, p=2., eps=0): + """Find all pairs of points whose distance is at most r + + Parameters + ========== + + r : positive float + The maximum distance + p : float 1<=p<=infinity + Which Minkowski norm to use + eps : nonnegative float + Approximate search. Branches of the tree are not explored + if their nearest points are further than r/(1+eps), and branches + are added in bulk if their furthest points are nearer than r*(1+eps). + + Returns + ======= + + results : set + set of pairs (i,j), ir/(1.+eps): + return + elif rect1.max_distance_rectangle(rect2, p) +#include #include #include #include "common.h" #include "distance.h" -static inline double euclidean_distance(const double *u, const double *v, int n) { +static NPY_INLINE double euclidean_distance(const double *u, const double *v, int n) { int i = 0; double s = 0.0, d; for (i = 0; i < n; i++) { @@ -49,7 +51,7 @@ return sqrt(s); } -static inline double ess_distance(const double *u, const double *v, int n) { +static NPY_INLINE double ess_distance(const double *u, const double *v, int n) { int i = 0; double s = 0.0, d; for (i = 0; i < n; i++) { @@ -59,7 +61,7 @@ return s; } -static inline double chebyshev_distance(const double *u, const double *v, int n) { +static NPY_INLINE double chebyshev_distance(const double *u, const double *v, int n) { int i = 0; double d, maxv = 0.0; for (i = 0; i < n; i++) { @@ -71,7 +73,7 @@ return maxv; } -static inline double canberra_distance(const double *u, const double *v, int n) { +static NPY_INLINE double canberra_distance(const double *u, const double *v, int n) { int i; double snum = 0.0, sdenom_u = 0.0, sdenom_v = 0.0; for (i = 0; i < n; i++) { @@ -82,7 +84,7 @@ return snum / (sdenom_u + sdenom_v); } -static inline double bray_curtis_distance(const double *u, const double *v, int n) { +static NPY_INLINE double bray_curtis_distance(const double *u, const double *v, int n) { int i; double s1 = 0.0, s2 = 0.0; for (i = 0; i < n; i++) { @@ -92,7 +94,7 @@ return s1 / s2; } -static inline double mahalanobis_distance(const double *u, const double *v, +static NPY_INLINE double mahalanobis_distance(const double *u, const double *v, const double *covinv, double *dimbuf1, double *dimbuf2, int n) { int i, j; @@ -125,7 +127,7 @@ return s / (double)n; } -static inline double hamming_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double hamming_distance_bool(const char *u, const char *v, int n) { int i = 0; double s = 0.0; for (i = 0; i < n; i++) { @@ -134,7 +136,7 @@ return s / (double)n; } -static inline double yule_distance_bool(const char *u, const 
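The new KDTree.query_pairs() method added in the kdtree.py hunk above returns index pairs rather than distances. A brief usage sketch (illustrative only; the sample points are made up):

    import numpy as np
    from scipy.spatial import KDTree

    pts = np.array([[0.0, 0.0],
                    [0.1, 0.0],
                    [5.0, 5.0]])
    tree = KDTree(pts)
    pairs = tree.query_pairs(0.5)           # all (i, j) with i < j and distance <= 0.5
    print pairs                             # set([(0, 1)]): only the first two points are close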
char *v, int n) { +static NPY_INLINE double yule_distance_bool(const char *u, const char *v, int n) { int i = 0; int ntt = 0, nff = 0, nft = 0, ntf = 0; for (i = 0; i < n; i++) { @@ -146,7 +148,7 @@ return (2.0 * ntf * nft) / (double)(ntt * nff + ntf * nft); } -static inline double matching_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double matching_distance_bool(const char *u, const char *v, int n) { int i = 0; int nft = 0, ntf = 0; for (i = 0; i < n; i++) { @@ -156,7 +158,7 @@ return (double)(ntf + nft) / (double)(n); } -static inline double dice_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double dice_distance_bool(const char *u, const char *v, int n) { int i = 0; int ntt = 0, nft = 0, ntf = 0; for (i = 0; i < n; i++) { @@ -168,7 +170,7 @@ } -static inline double rogerstanimoto_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double rogerstanimoto_distance_bool(const char *u, const char *v, int n) { int i = 0; int ntt = 0, nff = 0, nft = 0, ntf = 0; for (i = 0; i < n; i++) { @@ -180,7 +182,7 @@ return (2.0 * (ntf + nft)) / ((double)ntt + nff + (2.0 * (ntf + nft))); } -static inline double russellrao_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double russellrao_distance_bool(const char *u, const char *v, int n) { int i = 0; /** int nff = 0, nft = 0, ntf = 0;**/ int ntt = 0; @@ -194,7 +196,7 @@ return (double) (n - ntt) / (double) n; } -static inline double kulsinski_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double kulsinski_distance_bool(const char *u, const char *v, int n) { int _i = 0; int ntt = 0, nft = 0, ntf = 0, nff = 0; for (_i = 0; _i < n; _i++) { @@ -206,7 +208,7 @@ return ((double)(ntf + nft - ntt + n)) / ((double)(ntf + nft + n)); } -static inline double sokalsneath_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double sokalsneath_distance_bool(const char *u, const char *v, int n) { int _i = 0; int ntt = 0, nft = 0, ntf = 0; for (_i = 0; _i < n; _i++) { @@ -217,7 +219,7 @@ return (2.0 * (ntf + nft))/(2.0 * (ntf + nft) + ntt); } -static inline double sokalmichener_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double sokalmichener_distance_bool(const char *u, const char *v, int n) { int _i = 0; int ntt = 0, nft = 0, ntf = 0, nff = 0; for (_i = 0; _i < n; _i++) { @@ -229,7 +231,7 @@ return (2.0 * (ntf + nft))/(2.0 * (ntf + nft) + ntt + nff); } -static inline double jaccard_distance(const double *u, const double *v, int n) { +static NPY_INLINE double jaccard_distance(const double *u, const double *v, int n) { int i = 0; double denom = 0.0, num = 0.0; for (i = 0; i < n; i++) { @@ -239,7 +241,7 @@ return num / denom; } -static inline double jaccard_distance_bool(const char *u, const char *v, int n) { +static NPY_INLINE double jaccard_distance_bool(const char *u, const char *v, int n) { int i = 0; double num = 0.0, denom = 0.0; for (i = 0; i < n; i++) { @@ -249,7 +251,7 @@ return num / denom; } -static inline double dot_product(const double *u, const double *v, int n) { +static NPY_INLINE double dot_product(const double *u, const double *v, int n) { int i; double s = 0.0; for (i = 0; i < n; i++) { @@ -258,12 +260,12 @@ return s; } -static inline double cosine_distance(const double *u, const double *v, int n, +static NPY_INLINE double cosine_distance(const double *u, const double *v, int n, const double nu, const double nv) { return 1.0 - (dot_product(u, v, n) / (nu * nv)); } -static inline double 
seuclidean_distance(const double *var, +static NPY_INLINE double seuclidean_distance(const double *var, const double *u, const double *v, int n) { int i = 0; double s = 0.0, d; @@ -274,7 +276,7 @@ return sqrt(s); } -static inline double city_block_distance(const double *u, const double *v, int n) { +static NPY_INLINE double city_block_distance(const double *u, const double *v, int n) { int i = 0; double s = 0.0, d; for (i = 0; i < n; i++) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/spatial/src/distance_wrap.c python-scipy-0.8.0+dfsg1/scipy/spatial/src/distance_wrap.c --- python-scipy-0.7.2+dfsg1/scipy/spatial/src/distance_wrap.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/spatial/src/distance_wrap.c 2010-07-26 15:48:35.000000000 +0100 @@ -1132,7 +1132,7 @@ {NULL, NULL} /* Sentinel - marks the end of this structure */ }; -void init_distance_wrap(void) { +PyMODINIT_FUNC init_distance_wrap(void) { (void) Py_InitModule("_distance_wrap", _distanceWrapMethods); import_array(); // Must be present for NumPy. Called first after above line. } diff -Nru python-scipy-0.7.2+dfsg1/scipy/spatial/tests/test_distance.py python-scipy-0.8.0+dfsg1/scipy/spatial/tests/test_distance.py --- python-scipy-0.7.2+dfsg1/scipy/spatial/tests/test_distance.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/spatial/tests/test_distance.py 2010-07-26 15:48:36.000000000 +0100 @@ -101,7 +101,7 @@ class TestCdist(TestCase): """ - Test suite for the pdist function. + Test suite for the cdist function. """ def test_cdist_euclidean_random(self): @@ -115,7 +115,19 @@ if verbose > 2: print (Y1-Y2).max() self.failUnless(within_tol(Y1, Y2, eps)) - + + def test_cdist_euclidean_random_unicode(self): + "Tests cdist(X, u'euclidean') using unicode metric string" + eps = 1e-07 + # Get the data: the input matrix and the right output. + X1 = eo['cdist-X1'] + X2 = eo['cdist-X2'] + Y1 = cdist(X1, X2, u'euclidean') + Y2 = cdist(X1, X2, u'test_euclidean') + if verbose > 2: + print (Y1-Y2).max() + self.failUnless(within_tol(Y1, Y2, eps)) + def test_cdist_sqeuclidean_random(self): "Tests cdist(X, 'sqeuclidean') on random data." eps = 1e-07 @@ -129,7 +141,7 @@ self.failUnless(within_tol(Y1, Y2, eps)) def test_cdist_cityblock_random(self): - "Tests cdist(X, 'sqeuclidean') on random data." + "Tests cdist(X, 'cityblock') on random data." eps = 1e-07 # Get the data: the input matrix and the right output. X1 = eo['cdist-X1'] @@ -473,7 +485,17 @@ Y_test1 = pdist(X, 'euclidean') self.failUnless(within_tol(Y_test1, Y_right, eps)) + + def test_pdist_euclidean_random(self): + "Tests pdist(X, 'euclidean') with unicode metric string" + eps = 1e-07 + # Get the data: the input matrix and the right output. + X = eo['pdist-double-inp'] + Y_right = eo['pdist-euclidean'] + Y_test1 = pdist(X, u'euclidean') + self.failUnless(within_tol(Y_test1, Y_right, eps)) + def test_pdist_euclidean_random_float32(self): "Tests pdist(X, 'euclidean') on random data (float32)." 
eps = 1e-07 @@ -1688,3 +1710,6 @@ def correct_n_by_n(self, n): y = np.random.rand(n*(n-1)/2) return y + +if __name__=="__main__": + run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/spatial/tests/test_kdtree.py python-scipy-0.8.0+dfsg1/scipy/spatial/tests/test_kdtree.py --- python-scipy-0.7.2+dfsg1/scipy/spatial/tests/test_kdtree.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/spatial/tests/test_kdtree.py 2010-07-26 15:48:36.000000000 +0100 @@ -149,9 +149,9 @@ self.kdtree = KDTree(self.data) def test_single_query(self): - d, i = self.kdtree.query([0,0,0]) + d, i = self.kdtree.query(np.array([0,0,0])) assert isinstance(d,float) - assert isinstance(i,int) + assert np.issubdtype(i, int) def test_vectorized_query(self): d, i = self.kdtree.query(np.zeros((2,4,3))) @@ -161,7 +161,7 @@ def test_single_query_multiple_neighbors(self): s = 23 kk = self.kdtree.n+s - d, i = self.kdtree.query([0,0,0],k=kk) + d, i = self.kdtree.query(np.array([0,0,0]),k=kk) assert_equal(np.shape(d),(kk,)) assert_equal(np.shape(i),(kk,)) assert np.all(~np.isfinite(d[-s:])) @@ -196,7 +196,7 @@ [1,0,1], [1,1,0], [1,1,1]]) - self.kdtree = KDTree(self.data) + self.kdtree = cKDTree(self.data) def test_single_query(self): d, i = self.kdtree.query([0,0,0]) @@ -208,6 +208,13 @@ assert_equal(np.shape(d),(2,4)) assert_equal(np.shape(i),(2,4)) + def test_vectorized_query_noncontiguous_values(self): + qs = np.random.randn(3,1000).T + ds, i_s = self.kdtree.query(qs) + for q, d, i in zip(qs,ds,i_s): + assert_equal(self.kdtree.query(q),(d,i)) + + def test_single_query_multiple_neighbors(self): s = 23 kk = self.kdtree.n+s @@ -403,6 +410,9 @@ for ((i,j),d) in M.items(): assert j in r[i] + def test_zero_distance(self): + M = self.T1.sparse_distance_matrix(self.T1, self.r) # raises an exception for bug 870 + def test_distance_matrix(): m = 10 n = 11 @@ -424,5 +434,33 @@ dsl = distance_matrix(xs,ys,threshold=1) assert_equal(ds,dsl) +def check_onetree_query(T,d): + r = T.query_ball_tree(T, d) + s = set() + for i, l in enumerate(r): + for j in l: + if ireal = NAN; - v->imag = NAN; + v->real = NPY_NAN; + v->imag = NPY_NAN; } } @@ -174,7 +174,7 @@ cz.imag = 0; if (z < 0) { - *ai = NAN; + *ai = NPY_NAN; } else { F_FUNC(zairy,ZAIRY)(CADDR(cz), &id, &kode, CADDR(cai), &nz, &ierr); DO_MTHERR("airye:", &cai); @@ -186,7 +186,7 @@ id = 1; if (z < 0) { - *aip = NAN; + *aip = NPY_NAN; } else { F_FUNC(zairy,ZAIRY)(CADDR(cz), &id, &kode, CADDR(caip), &nz, &ierr); DO_MTHERR("airye:", &caip); @@ -215,14 +215,14 @@ /* overflow */ if (z.imag == 0 && (z.real >= 0 || v == floor(v))) { if (z.real < 0 && v/2 != floor(v/2)) - cy.real = -INFINITY; + cy.real = -NPY_INFINITY; else - cy.real = INFINITY; + cy.real = NPY_INFINITY; cy.imag = 0; } else { cy = cbesi_wrap_e(v*sign, z); - cy.real *= INFINITY; - cy.imag *= INFINITY; + cy.real *= NPY_INFINITY; + cy.imag *= NPY_INFINITY; } } @@ -272,7 +272,7 @@ double cbesi_wrap_e_real(double v, double z) { Py_complex cy, w; if (v != floor(v) && z < 0) { - return NAN; + return NPY_NAN; } else { w.real = z; w.imag = 0; @@ -297,8 +297,8 @@ if (ierr == 2) { /* overflow */ cy_j = cbesj_wrap_e(v, z); - cy_j.real *= INFINITY; - cy_j.imag *= INFINITY; + cy_j.real *= NPY_INFINITY; + cy_j.imag *= NPY_INFINITY; } if (sign == -1) { @@ -337,7 +337,7 @@ double cbesj_wrap_e_real(double v, double z) { Py_complex cy, w; if (v != floor(v) && z < 0) { - return NAN; + return NPY_NAN; } else { w.real = z; w.imag = 0; @@ -362,7 +362,7 @@ if (ierr == 2) { if (z.real >= 0 && z.imag == 0) { /* overflow */ - 
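The test_vectorized_query_noncontiguous_values case added to test_kdtree.py above exercises the np.ascontiguousarray() call introduced in the ckdtree.pyx hunk earlier in this diff: a transposed query array is a non-contiguous view. A minimal sketch of that situation (illustrative only; the array sizes are made up):

    import numpy as np
    from scipy.spatial import cKDTree

    tree = cKDTree(np.random.randn(30, 3))
    qs = np.random.randn(3, 1000).T         # transposing leaves a non-C-contiguous view
    print qs.flags['C_CONTIGUOUS']          # False
    d, i = tree.query(qs)                   # safe now that the input is copied contiguously
    print d.shape, i.shape                  # (1000,) (1000,)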
cy_y.real = INFINITY; + cy_y.real = NPY_INFINITY; cy_y.imag = 0; } } @@ -393,7 +393,7 @@ if (ierr == 2) { if (z.real >= 0 && z.imag == 0) { /* overflow */ - cy_y.real = INFINITY; + cy_y.real = NPY_INFINITY; cy_y.imag = 0; } } @@ -411,7 +411,7 @@ double cbesy_wrap_e_real(double v, double z) { Py_complex cy, w; if (z < 0) { - return NAN; + return NPY_NAN; } else { w.real = z; w.imag = 0; @@ -435,7 +435,7 @@ if (ierr == 2) { if (z.real >= 0 && z.imag == 0) { /* overflow */ - cy.real = INFINITY; + cy.real = NPY_INFINITY; cy.imag = 0; } } @@ -458,7 +458,7 @@ if (ierr == 2) { if (z.real >= 0 && z.imag == 0) { /* overflow */ - cy.real = INFINITY; + cy.real = NPY_INFINITY; cy.imag = 0; } } @@ -469,7 +469,7 @@ double cbesk_wrap_real( double v, double z) { Py_complex cy, w; if (z < 0) { - return NAN; + return NPY_NAN; } else { w.real = z; w.imag = 0; @@ -481,7 +481,7 @@ double cbesk_wrap_e_real( double v, double z) { Py_complex cy, w; if (z < 0) { - return NAN; + return NPY_NAN; } else { w.real = z; w.imag = 0; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/amos_wrappers.h python-scipy-0.8.0+dfsg1/scipy/special/amos_wrappers.h --- python-scipy-0.7.2+dfsg1/scipy/special/amos_wrappers.h 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/amos_wrappers.h 2010-07-26 15:48:36.000000000 +0100 @@ -10,13 +10,7 @@ #include "Python.h" #include "cephes/mconf.h" -#ifndef NAN -extern double NAN; -#endif - -#ifndef INFINITY -extern double INFINITY; -#endif +#include #define DO_MTHERR(name, varp) \ do { \ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/basic.py python-scipy-0.8.0+dfsg1/scipy/special/basic.py --- python-scipy-0.7.2+dfsg1/scipy/special/basic.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/basic.py 2010-07-26 15:48:36.000000000 +0100 @@ -1,5 +1,3 @@ -## Automatically adapted for scipy Oct 05, 2005 by convertcode.py - # # Author: Travis Oliphant, 2002 # @@ -8,6 +6,7 @@ from _cephes import * import types import specfun +import orthogonal def sinc(x): """Returns sin(pi*x)/(pi*x) at all points of array x. @@ -331,26 +330,38 @@ def _sph_harmonic(m,n,theta,phi): """Compute spherical harmonics. - This is a ufunc and may take scalar or array arguments like any other ufunc. - The inputs will be broadcasted against each other. + This is a ufunc and may take scalar or array arguments like any + other ufunc. The inputs will be broadcasted against each other. - :Parameters: - - `m` : int |m| <= n - The order of the harmonic. - - `n` : int >= 0 - The degree of the harmonic. - - `theta` : float [0, 2*pi] - The azimuthal (longitudinal) coordinate. - - `phi` : float [0, pi] - The polar (colatitudinal) coordinate. - - :Returns: - - `y_mn` : complex float - The harmonic $Y^m_n$ sampled at `theta` and `phi`. + Parameters + ---------- + m : int + |m| <= n; the order of the harmonic. + n : int + where `n` >= 0; the degree of the harmonic. This is often called + ``l`` (lower case L) in descriptions of spherical harmonics. + theta : float + [0, 2*pi]; the azimuthal (longitudinal) coordinate. + phi : float + [0, pi]; the polar (colatitudinal) coordinate. + + Returns + ------- + y_mn : complex float + The harmonic $Y^m_n$ sampled at `theta` and `phi` + + Notes + ----- + There are different conventions for the meaning of input arguments + `theta` and `phi`. We take `theta` to be the azimuthal angle and + `phi` to be the polar angle. It is common to see the opposite + convention - that is `theta` as the polar angle and `phi` as the + azimuthal angle. 
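With the convention fixed in the Notes just above (theta azimuthal in [0, 2*pi], phi polar in [0, pi]), the normalization assembled in the function body that follows, together with the e^{i m theta} phase applied further down in the routine, amounts to the usual orthonormal spherical harmonic:

    \[
      Y_n^m(\theta,\phi)
        \;=\; \sqrt{\frac{2n+1}{4\pi}\,\frac{(n-m)!}{(n+m)!}}\;
              P_n^m(\cos\phi)\, e^{i m \theta},
    \]

where P_n^m is the associated Legendre function obtained from lpmn and the factorial ratio is evaluated as exp(0.5*(gammaln(n-m+1) - gammaln(n+m+1))) to avoid overflow.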
""" x = cos(phi) m,n = int(m), int(n) - Pmn,Pmnd = lpmn(m,n,x) + Pmn,Pmn_deriv = lpmn(m,n,x) + # Legendre call generates all orders up to m and degrees up to n val = Pmn[-1, -1] val *= sqrt((2*n+1)/4.0/pi) val *= exp(0.5*(gammaln(n-m+1)-gammaln(n+m+1))) @@ -410,9 +421,7 @@ return where(z==0,1.0,num/ asarray(den)) def assoc_laguerre(x,n,k=0.0): - gam = gamma - fac = gam(k+1+n)/gam(k+1)/gam(n+1) - return fac*hyp1f1(-n,k+1,x) + return orthogonal.eval_genlaguerre(n, k, x) digamma = psi @@ -487,6 +496,24 @@ all orders from 0..m and degrees from 0..n. z can be complex. + + Parameters + ---------- + m : int + |m| <= n; the order of the Legendre function + n : int + where `n` >= 0; the degree of the Legendre function. Often + called ``l`` (lower case L) in descriptions of the associated + Legendre function + z : float or complex + input value + + Returns + ------- + Pmn_z : (m+1, n+1) array + Values for all orders 0..m and degrees 0..n + Pmn_d_z : (m+1, n+1) array + Derivatives for all orders 0..m and degrees 0..n """ if not isscalar(m) or (abs(m)>n): raise ValueError, "m must be <= n." diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cdf_wrappers.c python-scipy-0.8.0+dfsg1/scipy/special/cdf_wrappers.c --- python-scipy-0.7.2+dfsg1/scipy/special/cdf_wrappers.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cdf_wrappers.c 2010-07-26 15:48:36.000000000 +0100 @@ -23,9 +23,6 @@ */ extern int scipy_special_print_error_messages; -#ifndef NAN -extern double NAN; -#endif /* Notice q and p are used in reverse from their meanings in distributions.py */ @@ -67,7 +64,7 @@ F_FUNC(cdfbet,CDFBET)(&which, &p, &q, &x, &y, &a, &b, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return a; @@ -81,7 +78,7 @@ F_FUNC(cdfbet,CDFBET)(&which, &p, &q, &x, &y, &a, &b, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return b; @@ -98,7 +95,7 @@ F_FUNC(cdfbin,CDFBIN)(&which, &p, &q, &s, &xn, &pr, &ompr, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return s; @@ -112,7 +109,7 @@ F_FUNC(cdfbin,CDFBIN)(&which, &p, &q, &s, &xn, &pr, &ompr, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return xn; @@ -127,7 +124,7 @@ F_FUNC(cdfchi,CDFCHI)(&which, &p, &q, &x, &df, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return df; @@ -142,7 +139,7 @@ F_FUNC(cdfchn,CDFCHN)(&which, &p, &q, &x, &df, &nc, &status, &bound); if (status) { if 
(scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return p; @@ -156,7 +153,7 @@ F_FUNC(cdfchn,CDFCHN)(&which, &p, &q, &x, &df, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); } return x; } @@ -169,7 +166,7 @@ F_FUNC(cdfchn,CDFCHN)(&which, &p, &q, &x, &df, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return df; @@ -183,7 +180,7 @@ F_FUNC(cdfchn,CDFCHN)(&which, &p, &q, &x, &df, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return nc; @@ -199,7 +196,7 @@ F_FUNC(cdff,CDFF)(&which, &p, &q, &f, &dfn, &dfd, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); } return p; } @@ -212,7 +209,7 @@ F_FUNC(cdff,CDFF)(&which, &p, &q, &f, &dfn, &dfd, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); } return f; } @@ -227,7 +224,7 @@ F_FUNC(cdff,CDFF)(&which, &p, &q, &f, &dfn, &dfd, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return dfn; @@ -241,7 +238,7 @@ F_FUNC(cdff,CDFF)(&which, &p, &q, &f, &dfn, &dfd, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return dfd; @@ -257,7 +254,7 @@ F_FUNC(cdffnc,CDFFNC)(&which, &p, &q, &f, &dfn, &dfd, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); } return p; } @@ -270,7 +267,7 @@ F_FUNC(cdffnc,CDFFNC)(&which, &p, &q, &f, &dfn, &dfd, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return f; @@ -285,7 +282,7 @@ F_FUNC(cdffnc,CDFFNC)(&which, &p, &q, &f, &dfn, &dfd, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status 
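Every cdf*_wrap hunk in this file repeats the same status handling after its cdflib call. A hedged sketch of that mapping pulled out into a stand-alone helper (the name map_cdflib_status is only an illustration, not something the patch introduces); broadly, cdflib reports 0 for success, a negative value for an out-of-range input, 1 or 2 when the answer ran into a search bound, and 3 (and 4, where defined) for inconsistent inputs such as p and q not summing to 1:

    #include <numpy/npy_math.h>

    /* Collapse cdflib's (status, bound) pair into a single return value,
     * following the pattern used by the wrappers above. */
    double map_cdflib_status(int status, double bound, double value)
    {
        if (status == 0)
            return value;                      /* normal completion       */
        if (status < 0 || status == 3 || status == 4)
            return NPY_NAN;                    /* invalid or inconsistent */
        return bound;                          /* status 1 or 2: clamped  */
    }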
< 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return dfn; @@ -298,7 +295,7 @@ F_FUNC(cdffnc,CDFFNC)(&which, &p, &q, &f, &dfn, &dfd, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return dfd; @@ -312,7 +309,7 @@ F_FUNC(cdffnc,CDFFNC)(&which, &p, &q, &f, &dfn, &dfd, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return nc; @@ -330,7 +327,7 @@ F_FUNC(cdfgam,CDFGAM)(&which, &p, &q, &x, &shp, &scl, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); } return p; } @@ -343,7 +340,7 @@ F_FUNC(cdfgam,CDFGAM)(&which, &p, &q, &x, &shp, &scl, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return x; @@ -357,7 +354,7 @@ F_FUNC(cdfgam,CDFGAM)(&which, &p, &q, &x, &shp, &scl, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return shp; @@ -371,7 +368,7 @@ F_FUNC(cdfgam,CDFGAM)(&which, &p, &q, &x, &shp, &scl, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return scl; @@ -386,7 +383,7 @@ F_FUNC(cdfnbn,CDFNBN)(&which, &p, &q, &s, &xn, &pr, &ompr, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return s; @@ -400,7 +397,7 @@ F_FUNC(cdfnbn,CDFNBN)(&which, &p, &q, &s, &xn, &pr, &ompr, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return xn; @@ -415,7 +412,7 @@ F_FUNC(cdfnor,CDFNOR)(&which, &p, &q, &x, &mn, &std, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return mn; @@ -429,7 +426,7 @@ F_FUNC(cdfnor,CDFNOR)(&which, &p, &q, &x, &mn, &std, &status, 
&bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return std; @@ -444,7 +441,7 @@ F_FUNC(cdfpoi,CDFPOI)(&which, &p, &q, &s, &xlam, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return s; @@ -459,7 +456,7 @@ F_FUNC(cdft,CDFT)(&which, &p, &q, &t, &df, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); } return p; } @@ -472,7 +469,7 @@ F_FUNC(cdft,CDFT)(&which, &p, &q, &t, &df, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return t; @@ -486,7 +483,7 @@ F_FUNC(cdft,CDFT)(&which, &p, &q, &t, &df, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return df; @@ -501,7 +498,7 @@ F_FUNC(cdftnc,CDFTNC)(&which, &p, &q, &t, &df, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return p; @@ -515,7 +512,7 @@ F_FUNC(cdftnc,CDFTNC)(&which, &p, &q, &t, &df, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return t; @@ -529,7 +526,7 @@ F_FUNC(cdftnc,CDFTNC)(&which, &p, &q, &t, &df, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } return df; @@ -543,7 +540,7 @@ F_FUNC(cdftnc,CDFTNC)(&which, &p, &q, &t, &df, &nc, &status, &bound); if (status) { if (scipy_special_print_error_messages) show_error(status, bound); - if ((status < 0) || (status==3) || (status==4)) return (NAN); + if ((status < 0) || (status==3) || (status==4)) return (NPY_NAN); if ((status == 1) || (status == 2)) return bound; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cdf_wrappers.h python-scipy-0.8.0+dfsg1/scipy/special/cdf_wrappers.h --- python-scipy-0.7.2+dfsg1/scipy/special/cdf_wrappers.h 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cdf_wrappers.h 2010-07-26 15:48:36.000000000 +0100 @@ -12,6 +12,8 @@ #include "cephes/mconf.h" #endif +#include + extern double cdfbet3_wrap(double p, double x, double b); extern 
double cdfbet4_wrap(double p, double x, double a); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/airy.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/airy.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/airy.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/airy.c 2010-07-26 15:48:36.000000000 +0100 @@ -822,19 +822,6 @@ }; #endif -#ifdef ANSIPROT -extern double fabs ( double ); -extern double exp ( double ); -extern double sqrt ( double ); -extern double polevl ( double, void *, int ); -extern double p1evl ( double, void *, int ); -extern double sin ( double ); -extern double cos ( double ); -#else -double fabs(), exp(), sqrt(); -double polevl(), p1evl(), sin(), cos(); -#endif - int airy( x, ai, aip, bi, bip ) double x, *ai, *aip, *bi, *bip; { diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/bdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/bdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/bdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/bdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -147,11 +147,6 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double incbet(), incbi(), pow(), log1p(), expm1(); -#endif - -extern double NAN; double bdtrc( k, n, p ) int k, n; @@ -168,7 +163,7 @@ { domerr: mtherr( "bdtrc", DOMAIN ); - return( NAN); + return( NPY_NAN); } if( k == n ) @@ -203,7 +198,7 @@ { domerr: mtherr( "bdtr", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } if( k == n ) @@ -235,7 +230,7 @@ { domerr: mtherr( "bdtri", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } dn = n - k; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/beta.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/beta.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/beta.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/beta.c 2010-07-26 15:48:36.000000000 +0100 @@ -64,9 +64,6 @@ #define MAXGAM 171.624376956302725 #endif -#ifndef ANSIPROT -double fabs(), Gamma(), lgam(), exp(), log(), floor(); -#endif extern double MAXLOG, MAXNUM; extern int sgngam; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/btdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/btdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/btdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/btdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -52,14 +52,6 @@ #include "mconf.h" -#define ANSIPROT -#ifndef ANSIPROT -double incbet(); -#else -extern double incbet ( double aa, double bb, double xx ); -double btdtr( double,double,double ); -#endif - double btdtr( a, b, x ) double a, b, x; { diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/cbrt.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/cbrt.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/cbrt.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/cbrt.c 2010-07-26 15:48:36.000000000 +0100 @@ -49,26 +49,13 @@ static double CBRT2I = 0.79370052598409973737585; static double CBRT4I = 0.62996052494743658238361; -#ifndef ANSIPROT -double frexp(), ldexp(); -int isnan(), isfinite(); -#else -extern int isfinite ( double x ); -#endif - double cbrt(double x) { int e, rem, sign; double z; -#ifdef NANS -if( isnan(x) ) +if( !npy_isfinite(x) ) return x; -#endif -#ifdef INFINITIES -if( !isfinite(x) ) - return x; -#endif if( x == 0 ) return( x ); if( x > 0 ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/chbevl.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/chbevl.c --- 
python-scipy-0.7.2+dfsg1/scipy/special/cephes/chbevl.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/chbevl.c 2010-07-26 15:48:36.000000000 +0100 @@ -58,11 +58,7 @@ */ #include - -#define ANSIPROT -#ifdef ANSIPROT -double chbevl( double, double [], int ); -#endif +#include "protos.h" double chbevl( x, array, n ) double x; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/chdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/chdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/chdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/chdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -149,22 +149,12 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double igamc(), igam(), igami(); -#endif - -extern double NAN; double chdtrc(df,x) double df, x; { if (x < 0.0) return 1.0; /* modified by T. Oliphant */ -if (df < 1.0) - { - mtherr( "chdtrc", DOMAIN ); - return(NAN); - } return( igamc( df/2.0, x/2.0 ) ); } @@ -174,10 +164,10 @@ double df, x; { -if( (x < 0.0) || (df < 1.0) ) +if( (x < 0.0)) /* || (df < 1.0) ) */ { mtherr( "chdtr", DOMAIN ); - return(NAN); + return(NPY_NAN); } return( igam( df/2.0, x/2.0 ) ); } @@ -189,10 +179,10 @@ { double x; -if( (y < 0.0) || (y > 1.0) || (df < 1.0) ) +if( (y < 0.0) || (y > 1.0)) /* || (df < 1.0) ) */ { mtherr( "chdtri", DOMAIN ); - return(NAN); + return(NPY_NAN); } x = igami( 0.5 * df, y ); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/const.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/const.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/const.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/const.c 2010-07-26 15:48:36.000000000 +0100 @@ -39,7 +39,6 @@ * The global symbols for mathematical constants are * PI = 3.14159265358979323846 pi * PIO2 = 1.57079632679489661923 pi/2 - * PIO4 = 7.85398163397448309616E-1 pi/4 * SQRT2 = 1.41421356237309504880 sqrt(2) * SQRTH = 7.07106781186547524401E-1 sqrt(2)/2 * LOG2E = 1.4426950408889634073599 1/log(2) @@ -81,7 +80,6 @@ double MAXNUM = 1.79769313486231570815E308; /* 2**1024*(1-MACHEP) */ double PI = 3.14159265358979323846; /* pi */ double PIO2 = 1.57079632679489661923; /* pi/2 */ -double PIO4 = 7.85398163397448309616E-1; /* pi/4 */ double SQRT2 = 1.41421356237309504880; /* sqrt(2) */ double SQRTH = 7.07106781186547524401E-1; /* sqrt(2)/2 */ double LOG2E = 1.4426950408889634073599; /* 1/log(2) */ @@ -90,16 +88,6 @@ double LOGSQ2 = 3.46573590279972654709E-1; /* log(2)/2 */ double THPIO4 = 2.35619449019234492885; /* 3*pi/4 */ double TWOOPI = 6.36619772367581343075535E-1; /* 2/pi */ -#ifdef INFINITIES -double INFINITY = 1.0/0.0; /* 99e999; */ -#else -double INFINITY = 1.79769313486231570815E308; /* 2**1024*(1-MACHEP) */ -#endif -#ifdef NANS -double NAN = 1.0/0.0 - 1.0/0.0; -#else -double NAN = 0.0; -#endif #ifdef MINUSZERO double NEGZERO = -0.0; #else @@ -127,7 +115,6 @@ unsigned short MAXNUM[4] = {0xffff,0xffff,0xffff,0x7fef}; unsigned short PI[4] = {0x2d18,0x5444,0x21fb,0x4009}; unsigned short PIO2[4] = {0x2d18,0x5444,0x21fb,0x3ff9}; -unsigned short PIO4[4] = {0x2d18,0x5444,0x21fb,0x3fe9}; unsigned short SQRT2[4] = {0x3bcd,0x667f,0xa09e,0x3ff6}; unsigned short SQRTH[4] = {0x3bcd,0x667f,0xa09e,0x3fe6}; unsigned short LOG2E[4] = {0x82fe,0x652b,0x1547,0x3ff7}; @@ -136,16 +123,6 @@ unsigned short LOGSQ2[4] = {0x39ef,0xfefa,0x2e42,0x3fd6}; unsigned short THPIO4[4] = {0x21d2,0x7f33,0xd97c,0x4002}; unsigned short TWOOPI[4] = {0xc883,0x6dc9,0x5f30,0x3fe4}; -#ifdef INFINITIES -unsigned short 
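The chdtr.c hunk above only changes how out-of-domain arguments are reported; the quantities computed are still the regularized incomplete gamma functions (P lower, Q upper):

    \[
      \mathrm{chdtr}(\nu, x)  \;=\; P\!\left(\tfrac{\nu}{2}, \tfrac{x}{2}\right)
        \;=\; \frac{\gamma(\nu/2,\, x/2)}{\Gamma(\nu/2)},
      \qquad
      \mathrm{chdtrc}(\nu, x) \;=\; Q\!\left(\tfrac{\nu}{2}, \tfrac{x}{2}\right)
        \;=\; 1 - P\!\left(\tfrac{\nu}{2}, \tfrac{x}{2}\right),
    \]

i.e. the igam()/igamc() calls that remain in the code, with chdtri inverting the complemented form through igami.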
INFINITY[4] = {0x0000,0x0000,0x0000,0x7ff0}; -#else -unsigned short INFINITY[4] = {0xffff,0xffff,0xffff,0x7fef}; -#endif -#ifdef NANS -unsigned short NAN[4] = {0x0000,0x0000,0x0000,0x7ffc}; -#else -unsigned short NAN[4] = {0x0000,0x0000,0x0000,0x0000}; -#endif #ifdef MINUSZERO unsigned short NEGZERO[4] = {0x0000,0x0000,0x0000,0x8000}; #else @@ -173,7 +150,6 @@ unsigned short MAXNUM[4] = {0x7fef,0xffff,0xffff,0xffff}; unsigned short PI[4] = {0x4009,0x21fb,0x5444,0x2d18}; unsigned short PIO2[4] = {0x3ff9,0x21fb,0x5444,0x2d18}; -unsigned short PIO4[4] = {0x3fe9,0x21fb,0x5444,0x2d18}; unsigned short SQRT2[4] = {0x3ff6,0xa09e,0x667f,0x3bcd}; unsigned short SQRTH[4] = {0x3fe6,0xa09e,0x667f,0x3bcd}; unsigned short LOG2E[4] = {0x3ff7,0x1547,0x652b,0x82fe}; @@ -182,16 +158,6 @@ unsigned short LOGSQ2[4] = {0x3fd6,0x2e42,0xfefa,0x39ef}; unsigned short THPIO4[4] = {0x4002,0xd97c,0x7f33,0x21d2}; unsigned short TWOOPI[4] = {0x3fe4,0x5f30,0x6dc9,0xc883}; -#ifdef INFINITIES -unsigned short INFINITY[4] = {0x7ff0,0x0000,0x0000,0x0000}; -#else -unsigned short INFINITY[4] = {0x7fef,0xffff,0xffff,0xffff}; -#endif -#ifdef NANS -unsigned short NAN[4] = {0x7ff8,0x0000,0x0000,0x0000}; -#else -unsigned short NAN[4] = {0x0000,0x0000,0x0000,0x0000}; -#endif #ifdef MINUSZERO unsigned short NEGZERO[4] = {0x8000,0x0000,0x0000,0x0000}; #else @@ -211,7 +177,6 @@ unsigned short MAXNUM[4] = {077777,0177777,0177777,0177777,}; unsigned short PI[4] = {040511,007732,0121041,064302,}; unsigned short PIO2[4] = {040311,007732,0121041,064302,}; -unsigned short PIO4[4] = {040111,007732,0121041,064302,}; unsigned short SQRT2[4] = {040265,002363,031771,0157145,}; unsigned short SQRTH[4] = {040065,002363,031771,0157144,}; unsigned short LOG2E[4] = {040270,0125073,024534,013761,}; @@ -220,9 +185,6 @@ unsigned short LOGSQ2[4] = {037661,071027,0173721,0147572,}; unsigned short THPIO4[4] = {040426,0145743,0174631,007222,}; unsigned short TWOOPI[4] = {040042,0174603,067116,042025,}; -/* Approximate infinity by MAXNUM. 
*/ -unsigned short INFINITY[4] = {077777,0177777,0177777,0177777,}; -unsigned short NAN[4] = {0000000,0000000,0000000,0000000}; #ifdef MINUSZERO unsigned short NEGZERO[4] = {0000000,0000000,0000000,0100000}; #else @@ -239,7 +201,6 @@ extern unsigned short MAXNUM[]; extern unsigned short PI[]; extern unsigned short PIO2[]; -extern unsigned short PIO4[]; extern unsigned short SQRT2[]; extern unsigned short SQRTH[]; extern unsigned short LOG2E[]; @@ -248,7 +209,5 @@ extern unsigned short LOGSQ2[]; extern unsigned short THPIO4[]; extern unsigned short TWOOPI[]; -extern unsigned short INFINITY[]; -extern unsigned short NAN[]; extern unsigned short NEGZERO[]; #endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/dawsn.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/dawsn.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/dawsn.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/dawsn.c 2010-07-26 15:48:36.000000000 +0100 @@ -342,9 +342,6 @@ }; #endif -#ifndef ANSIPROT -double chbevl(), sqrt(), fabs(), polevl(), p1evl(); -#endif extern double PI, MACHEP; double dawsn( xx ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellie.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellie.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellie.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellie.c 2010-07-26 15:48:36.000000000 +0100 @@ -55,24 +55,7 @@ #include "mconf.h" -#define ANSIPROT - extern double PI, PIO2, MACHEP; -#ifndef ANSIPROT -double sqrt(), fabs(), log(), sin(), tan(), atan(), floor(); -double ellpe(), ellpk(); -#else -double ellie(double,double); -extern double sqrt(double); -extern double fabs(double); -extern double log(double); -extern double sin(double); -extern double tan(double); -extern double atan(double); -extern double floor(double); -extern double ellpe(double); -extern double ellpk(double); -#endif double ellie( phi, m ) double phi, m; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellik.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellik.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellik.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellik.c 2010-07-26 15:48:36.000000000 +0100 @@ -55,9 +55,6 @@ /* Incomplete elliptic integral of first kind */ #include "mconf.h" -#ifndef ANSIPROT -double sqrt(), fabs(), log(), tan(), atan(), floor(), ellpk(); -#endif extern double PI, PIO2, MACHEP, MAXNUM; double ellik( phi, m ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellpe.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellpe.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellpe.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellpe.c 2010-07-26 15:48:36.000000000 +0100 @@ -178,12 +178,6 @@ }; #endif -#ifndef ANSIPROT -double polevl(), log(); -#endif - -extern double NAN; - double ellpe(x) double x; { @@ -193,7 +187,7 @@ if( x == 0.0 ) return( 1.0 ); mtherr( "ellpe", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } return( polevl(x,P,10) - log(x) * (x * polevl(x,Q,9)) ); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellpj.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellpj.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellpj.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellpj.c 2010-07-26 15:48:36.000000000 +0100 @@ -63,11 +63,7 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double sqrt(), fabs(), sin(), cos(), asin(), tanh(); 
-double sinh(), cosh(), atan(), exp(); -#endif -extern double PIO2, MACHEP, NAN; +extern double PIO2, MACHEP; int ellpj( u, m, sn, cn, dn, ph ) double u, m; @@ -80,13 +76,13 @@ /* Check for special cases */ -if( m < 0.0 || m > 1.0 || isnan(m)) +if( m < 0.0 || m > 1.0 || npy_isnan(m)) { mtherr( "ellpj", DOMAIN ); - *sn = NAN; - *cn = NAN; - *ph = NAN; - *dn = NAN; + *sn = NPY_NAN; + *cn = NPY_NAN; + *ph = NPY_NAN; + *dn = NPY_NAN; return(-1); } if( m < 1.0e-9 ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellpk.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellpk.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/ellpk.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/ellpk.c 2010-07-26 15:48:36.000000000 +0100 @@ -201,10 +201,7 @@ static double C1 = 1.3862943611198906188E0; /* log(4) */ #endif -#ifndef ANSIPROT -double polevl(), p1evl(), log(); -#endif -extern double MACHEP, MAXNUM, NAN; +extern double MACHEP, MAXNUM; double ellpk(x) /* Changed to use m argument rather than m1 = 1-m */ double x; @@ -214,7 +211,7 @@ if( (x < 0.0) || (x > 1.0) ) { mtherr( "ellpk", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } if( x > MACHEP ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/euclid.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/euclid.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/euclid.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/euclid.c 2010-07-26 15:48:36.000000000 +0100 @@ -27,15 +27,12 @@ #include "mconf.h" -#ifndef ANSIPROT -double fabs(), floor(), euclid(); -#else -double euclid( double *, double * ); -#endif extern double MACHEP; #define BIG (1.0/MACHEP) +double euclid(double* num, double* den ); + typedef struct { double n; /* numerator */ @@ -43,12 +40,10 @@ } fract; /* Add fractions. 
*/ -#ifdef ANSIPROT static void radd(fract*,fract*,fract*); static void rsub(fract*,fract*,fract*); static void rmul(fract*,fract*,fract*); static void rdiv(fract*,fract*,fract*); -#endif void radd( f1, f2, f3 ) fract *f1, *f2, *f3; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/exp10.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/exp10.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/exp10.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/exp10.c 2010-07-26 15:48:36.000000000 +0100 @@ -155,32 +155,18 @@ static double MAXL10 = 308.2547155599167; #endif -#ifndef ANSIPROT -double floor(), ldexp(), polevl(), p1evl(); -int isnan(), isfinite(); -#endif extern double MAXNUM; -#ifdef INFINITIES -extern double INFINITY; -#endif double exp10(double x) { double px, xx; short n; -#ifdef NANS -if( isnan(x) ) +if( npy_isnan(x) ) return(x); -#endif if( x > MAXL10 ) { -#ifdef INFINITIES - return( INFINITY ); -#else - mtherr( "exp10", OVERFLOW ); - return( MAXNUM ); -#endif + return( NPY_INFINITY ); } if( x < -MAXL10 ) /* Would like to use MINLOG but can't */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/exp2.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/exp2.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/exp2.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/exp2.c 2010-07-26 15:48:36.000000000 +0100 @@ -118,13 +118,6 @@ #define MINL2 -1022.0 #endif -#ifndef ANSIPROT -double polevl(), p1evl(), floor(), ldexp(); -int isnan(), isfinite(); -#endif -#ifdef INFINITIES -extern double INFINITY; -#endif extern double MAXNUM; double exp2(double x) @@ -132,25 +125,15 @@ double px, xx; short n; -#ifdef NANS -if( isnan(x) ) +if( npy_isnan(x) ) return(x); -#endif if( x > MAXL2) { -#ifdef INFINITIES - return( INFINITY ); -#else - mtherr( "exp2", OVERFLOW ); - return( MAXNUM ); -#endif + return( NPY_INFINITY ); } if( x < MINL2 ) { -#ifndef INFINITIES - mtherr( "exp2", UNDERFLOW ); -#endif return(0.0); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/expn.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/expn.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/expn.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/expn.c 2010-07-26 15:48:36.000000000 +0100 @@ -50,9 +50,6 @@ * Direct inquiries to 30 Frost Street, Cambridge, MA 02140 */ #include "mconf.h" -#ifndef ANSIPROT -double pow(), Gamma(), log(), exp(), fabs(); -#endif #define EUL 0.57721566490153286060 #define BIG 1.44115188075855872E+17 extern double MAXNUM, MACHEP, MAXLOG; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/fdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/fdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/fdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/fdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -161,11 +161,6 @@ #include "mconf.h" -#ifndef ANSIPROT -double incbet(), incbi(); -#endif - -extern double NAN; double fdtrc( a, b, x ) double a, b; @@ -176,7 +171,7 @@ if( (a < 1.0) || (b < 1.0) || (x < 0.0) ) { mtherr( "fdtrc", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } w = b / (b + a * x); return( incbet( 0.5*b, 0.5*a, w ) ); @@ -191,7 +186,7 @@ if( (a < 1.0) || (b < 1.0) || (x < 0.0) ) { mtherr( "fdtr", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } w = a * x; w = w / (b + w); @@ -208,7 +203,7 @@ if( (a < 1.0) || (b < 1.0) || (y <= 0.0) || (y > 1.0) ) { mtherr( "fdtri", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } y = 1.0-y; a 
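For the fdtr.c hunks above the only change is again NAN to NPY_NAN; the relations between the F distribution and the regularized incomplete beta function I_x(a, b) implemented by the incbet() calls are unchanged:

    \[
      \mathrm{fdtr}(d_1, d_2, x)  \;=\; I_{w}\!\left(\tfrac{d_1}{2}, \tfrac{d_2}{2}\right),
      \quad w = \frac{d_1 x}{d_2 + d_1 x},
      \qquad
      \mathrm{fdtrc}(d_1, d_2, x) \;=\; I_{1-w}\!\left(\tfrac{d_2}{2}, \tfrac{d_1}{2}\right),
    \]

with 1 - w = d_2 / (d_2 + d_1 x), matching the w computed in fdtrc and fdtr respectively.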
= a; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/fresnl.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/fresnl.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/fresnl.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/fresnl.c 2010-07-26 15:48:36.000000000 +0100 @@ -446,9 +446,6 @@ }; #endif -#ifndef ANSIPROT -double fabs(), cos(), sin(), polevl(), p1evl(); -#endif extern double PI, PIO2, MACHEP; int fresnl( xxa, ssa, cca ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/gamma.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/gamma.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/gamma.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/gamma.c 2010-07-26 15:48:36.000000000 +0100 @@ -270,19 +270,7 @@ int sgngam = 0; extern int sgngam; extern double MAXLOG, MAXNUM, PI; -#ifndef ANSIPROT -double pow(), log(), exp(), sin(), polevl(), p1evl(), floor(), fabs(); -int isnan(), isfinite(); -#else -extern int isfinite ( double x ); static double stirf(double); -#endif -#ifdef INFINITIES -extern double INFINITY; -#endif -#ifdef NANS -extern double NAN; -#endif /* Gamma function computed by Stirling's formula. * The polynomial STIR is valid for 33 <= x <= 172. @@ -292,11 +280,7 @@ double y, w, v; if (x >= MAXGAM) { -#ifdef INFINITIES - return (INFINITY); -#else - return (MAXNUM); -#endif + return (NPY_INFINITY); } w = 1.0/x; w = 1.0 + w * polevl( w, STIR, 4 ); @@ -322,21 +306,9 @@ int i; sgngam = 1; -#ifdef NANS -if( isnan(x) ) - return(x); -#endif -#ifdef INFINITIES -#ifdef NANS -if( x == INFINITY ) - return(x); -if( x == -INFINITY ) - return(x); -#else -if( !isfinite(x) ) - return(x); -#endif -#endif +if (!npy_isfinite(x)) { + return x; +} q = fabs(x); if( q > 33.0 ) @@ -346,13 +318,9 @@ p = floor(q); if( p == q ) { -#ifdef NANS gamnan: mtherr( "Gamma", OVERFLOW ); return (MAXNUM); -#else - goto goverf; -#endif } i = p; if( (i & 1) == 0 ) @@ -366,13 +334,7 @@ z = q * sin( PI * z ); if( z == 0.0 ) { -#ifdef INFINITIES - return( sgngam * INFINITY); -#else -goverf: - mtherr( "Gamma", OVERFLOW ); - return( sgngam * MAXNUM); -#endif + return( sgngam * NPY_INFINITY); } z = fabs(z); z = PI/(z * stirf(q) ); @@ -418,16 +380,7 @@ small: if( x == 0.0 ) { -#ifdef INFINITIES -#ifdef NANS goto gamnan; -#else - return( INFINITY ); -#endif -#else - mtherr( "Gamma", SING ); - return( MAXNUM ); -#endif } else return( z/((1.0 + 0.5772156649015329 * x) * x) ); @@ -574,15 +527,9 @@ int i; sgngam = 1; -#ifdef NANS -if( isnan(x) ) - return(x); -#endif -#ifdef INFINITIES -if( !isfinite(x) ) - return(INFINITY); -#endif +if( !npy_isfinite(x) ) + return x; if( x < -34.0 ) { @@ -592,12 +539,8 @@ if( p == q ) { lgsing: -#ifdef INFINITIES mtherr( "lgam", SING ); - return (INFINITY); -#else - goto loverf; -#endif + return (NPY_INFINITY); } i = p; if( (i & 1) == 0 ) @@ -654,13 +597,7 @@ if( x > MAXLGM ) { -#ifdef INFINITIES - return( sgngam * INFINITY ); -#else -loverf: - mtherr( "lgam", OVERFLOW ); - return( sgngam * MAXNUM ); -#endif + return( sgngam * NPY_INFINITY ); } q = ( x - 0.5 ) * log(x) - x + LS2PI; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/gdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/gdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/gdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/gdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -96,13 +96,7 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double igam(), igamc(); -#else double gdtri(double,double,double); 
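The gamma.c hunk above drops the INFINITIES/NANS conditional compilation in favour of npy_isfinite checks and NPY_INFINITY returns; the large negative-argument path that remains (q = |x| > 33, x < 0) still rests on the reflection formula, with stirf supplying Gamma(q) from Stirling's series:

    \[
      \Gamma(x)\,\Gamma(1-x) \;=\; \frac{\pi}{\sin(\pi x)}
      \quad\Longrightarrow\quad
      \Gamma(-q) \;=\; \frac{-\pi}{\,q\,\sin(\pi q)\,\Gamma(q)\,},
      \qquad q > 0,\; q \notin \mathbb{Z},
    \]

where the code works with |sin(pi q)| and carries the sign in sgngam, and the integer poles (sin(pi q) = 0) now come back as sgngam * NPY_INFINITY instead of the old MAXNUM fallback.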
-#endif - -extern double NAN; double gdtr( a, b, x ) double a, b, x; @@ -111,7 +105,7 @@ if( x < 0.0 ) { mtherr( "gdtr", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } return( igam( b, a * x ) ); } @@ -124,7 +118,7 @@ if( x < 0.0 ) { mtherr( "gdtrc", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } return( igamc( b, a * x ) ); } @@ -137,7 +131,7 @@ if ((y < 0.0) || (y > 1.0) || (a <= 0.0) || (b < 0.0)) { mtherr("gdtri", DOMAIN); - return( NAN ); + return( NPY_NAN ); } return ( igami (b, 1.0-y) / a); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/gels.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/gels.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/gels.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/gels.c 2010-07-26 15:48:36.000000000 +0100 @@ -63,13 +63,7 @@ C .................................................................. C */ -#define ANSIPROT -#ifndef ANSIPROT -double fabs(); -#else -extern double fabs(double); -int gels( double [], double [], int, double, double [] ); -#endif +#include "protos.h" int gels( A, R, M, EPS, AUX ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/hyp2f1.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/hyp2f1.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/hyp2f1.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/hyp2f1.c 2010-07-26 15:48:36.000000000 +0100 @@ -1,7 +1,7 @@ -/* hyp2f1.c +/* hyp2f1.c * - * Gauss hypergeometric function F - * 2 1 + * Gauss hypergeometric function F + * 2 1 * * * SYNOPSIS: @@ -24,12 +24,12 @@ * k = 0 * * Cases addressed are - * Tests and escapes for negative integer a, b, or c - * Linear transformation if c - a or c - b negative integer - * Special case c = a or c = b - * Linear transformation for x near +1 - * Transformation for x < -0.5 - * Psi function expansion if x > 0.5 and c - a - b integer + * Tests and escapes for negative integer a, b, or c + * Linear transformation if c - a or c - b negative integer + * Special case c = a or c = b + * Linear transformation for x near +1 + * Transformation for x < -0.5 + * Psi function expansion if x > 0.5 and c - a - b integer * Conditionally, a recurrence on c to make c-a-b > 0 * * x < -1 AMS 15.3.7 transformation applied (Travis Oliphant) @@ -59,15 +59,15 @@ * in cases not addressed (such as x < -1). */ -/* hyp2f1 */ +/* hyp2f1 */ /* -Cephes Math Library Release 2.8: June, 2000 -Copyright 1984, 1987, 1992, 2000 by Stephen L. Moshier -*/ - + * Cephes Math Library Release 2.8: June, 2000 + * Copyright 1984, 1987, 1992, 2000 by Stephen L. 
Moshier + */ +#include #include "mconf.h" #ifdef DEC @@ -92,199 +92,191 @@ #define ETHRESH 1.0e-12 -#ifdef ANSIPROT -extern double fabs ( double ); -extern double pow ( double, double ); -extern double round ( double ); -extern double gamma ( double ); -extern double log ( double ); -extern double exp ( double ); -extern double psi ( double ); -static double hyt2f1(double, double, double, double, double *); -static double hys2f1(double, double, double, double, double *); -double hyp2f1(double, double, double, double); -#else -double fabs(), pow(), round(), gamma(), log(), exp(), psi(); -static double hyt2f1(); -static double hys2f1(); -double hyp2f1(); -#endif -extern double MAXNUM, MACHEP, NAN; +extern double MACHEP; -double hyp2f1( a, b, c, x ) +static double hyt2f1(double a, double b, double c, double x, double *loss); +static double hys2f1(double a, double b, double c, double x, double *loss); +static double hyp2f1ra(double a, double b, double c, double x, double* loss); + +double hyp2f1(a, b, c, x) double a, b, c, x; { -double d, d1, d2, e; -double p, q, r, s, y, ax; -double ia, ib, ic, id, err; -int i, aid; -int neg_int_a = 0, neg_int_b = 0; -int neg_int_ca_or_cb = 0; - -err = 0.0; -ax = fabs(x); -s = 1.0 - x; -ia = round(a); /* nearest integer to a */ -ib = round(b); + double d, d1, d2, e; + double p, q, r, s, y, ax; + double ia, ib, ic, id, err; + double t1; + int i, aid; + int neg_int_a = 0, neg_int_b = 0; + int neg_int_ca_or_cb = 0; + + err = 0.0; + ax = fabs(x); + s = 1.0 - x; + ia = round(a); /* nearest integer to a */ + ib = round(b); -if (x == 0.0) { - return 1.0; -} + if (x == 0.0) { + return 1.0; + } -d = c - a - b; -if (d <= -1) { - return pow(s, d) * hyp2f1(c-a, c-b, c, x); -} -if (d <= 0 && x == 1) - goto hypdiv; + d = c - a - b; + id = round(d); -if (a <= 0 && fabs(a-ia) < EPS ) { /* a is a negative integer */ - neg_int_a = 1; -} + if ((a == 0 || b == 0) && c != 0) { + return 1.0; + } -if (b <= 0 && fabs(b-ib) < EPS ) { /* b is a negative integer */ - neg_int_b = 1; -} + if (a <= 0 && fabs(a - ia) < EPS) { /* a is a negative integer */ + neg_int_a = 1; + } -if (ax < 1.0 || x == -1.0) { - /* 2F1(a,b;b;x) = (1-x)**(-a) */ - if( fabs(b-c) < EPS ) { /* b = c */ - y = pow( s, -a ); /* s to the -a power */ - goto hypdon; - } - if( fabs(a-c) < EPS ) { /* a = c */ - y = pow( s, -b ); /* s to the -b power */ - goto hypdon; - } -} + if (b <= 0 && fabs(b - ib) < EPS) { /* b is a negative integer */ + neg_int_b = 1; + } + if (d <= -1 && !(fabs(d-id) > EPS && s < 0) && !(neg_int_a || neg_int_b)) { + return pow(s, d) * hyp2f1(c - a, c - b, c, x); + } + if (d <= 0 && x == 1) + goto hypdiv; + if (ax < 1.0 || x == -1.0) { + /* 2F1(a,b;b;x) = (1-x)**(-a) */ + if (fabs(b - c) < EPS) { /* b = c */ + y = pow(s, -a); /* s to the -a power */ + goto hypdon; + } + if (fabs(a - c) < EPS) { /* a = c */ + y = pow(s, -b); /* s to the -b power */ + goto hypdon; + } + } -if( c <= 0.0 ) - { - ic = round(c); /* nearest integer to c */ - if( fabs(c-ic) < EPS ) /* c is a negative integer */ - { - /* check if termination before explosion */ - if( neg_int_a && (ia > ic) ) - goto hypok; - if( neg_int_b && (ib > ic) ) - goto hypok; - goto hypdiv; - } - } -if (neg_int_a || neg_int_b) /* function is a polynomial */ - goto hypok; -if (x < -1.0) { - double t1; + if (c <= 0.0) { + ic = round(c); /* nearest integer to c */ + if (fabs(c - ic) < EPS) { /* c is a negative integer */ + /* check if termination before explosion */ + if (neg_int_a && (ia > ic)) + goto hypok; + if (neg_int_b && (ib > ic)) + goto hypok; + 
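The d <= -1 branch kept above, now skipped when a or b is a negative integer or when s = 1 - x < 0 with d non-integral (where (1-x)^d would sit on a branch cut), is Euler's transformation of the Gauss series, AMS55 15.3.3:

    \[
      {}_2F_1(a, b;\, c;\, x) \;=\; (1-x)^{\,c-a-b}\; {}_2F_1(c-a,\; c-b;\; c;\; x),
    \]

which is what pow(s, d) * hyp2f1(c - a, c - b, c, x) evaluates with d = c - a - b and s = 1 - x.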
goto hypdiv; + } + } + + if (neg_int_a || neg_int_b) /* function is a polynomial */ + goto hypok; + t1 = fabs(b - a); - if (fabs(t1 - round(t1)) < EPS) { - /* this transformation has a pole for b-a= +-integer, - so we average around it. + if (x < -2.0 && fabs(t1 - round(t1)) > EPS) { + /* This transform has a pole for b-a integer, and + * may produce large cancellation errors for |1/x| close 1 */ - return 0.5*(hyp2f1(a, b*(1+1e-9), c, x) + hyp2f1(a, b*(1-1e-9), c, x)); + p = hyp2f1(a, 1 - c + a, 1 - b + a, 1.0 / x); + q = hyp2f1(b, 1 - c + b, 1 - a + b, 1.0 / x); + p *= pow(-x, -a); + q *= pow(-x, -b); + t1 = gamma(c); + s = t1 * gamma(b - a) / (gamma(b) * gamma(c - a)); + y = t1 * gamma(a - b) / (gamma(a) * gamma(c - b)); + return s * p + y * q; + } else if (x < -1.0) { + if (fabs(a) < fabs(b)) { + return pow(s, -a) * hyp2f1(a, c-b, c, x/(x-1)); + } else { + return pow(s, -b) * hyp2f1(b, c-a, c, x/(x-1)); + } } - p = hyp2f1(a, 1-c+a, 1-b+a, 1.0/x); - q = hyp2f1(b, 1-c+b, 1-a+b, 1.0/x); - p *= pow(-x, -a); - q *= pow(-x, -b); - t1 = gamma(c); - s = t1*gamma(b-a)/(gamma(b)*gamma(c-a)); - y = t1*gamma(a-b)/(gamma(a)*gamma(c-b)); - return s*p + y*q; -} -if( ax > 1.0 ) /* series diverges */ - goto hypdiv; + if (ax > 1.0) /* series diverges */ + goto hypdiv; -p = c - a; -ia = round(p); /* nearest integer to c-a */ -if( (ia <= 0.0) && (fabs(p-ia) < EPS) ) /* negative int c - a */ + p = c - a; + ia = round(p); /* nearest integer to c-a */ + if ((ia <= 0.0) && (fabs(p - ia) < EPS)) /* negative int c - a */ neg_int_ca_or_cb = 1; -r = c - b; -ib = round(r); /* nearest integer to c-b */ -if( (ib <= 0.0) && (fabs(r-ib) < EPS) ) /* negative int c - b */ + r = c - b; + ib = round(r); /* nearest integer to c-b */ + if ((ib <= 0.0) && (fabs(r - ib) < EPS)) /* negative int c - b */ neg_int_ca_or_cb = 1; -id = round(d); /* nearest integer to d */ -q = fabs(d-id); + id = round(d); /* nearest integer to d */ + q = fabs(d - id); -/* Thanks to Christian Burger - * for reporting a bug here. */ -if( fabs(ax-1.0) < EPS ) { /* |x| == 1.0 */ - if( x > 0.0 ) { - if (neg_int_ca_or_cb) { - if( d >= 0.0 ) - goto hypf; - else - goto hypdiv; - } - if( d <= 0.0 ) - goto hypdiv; - y = gamma(c)*gamma(d)/(gamma(p)*gamma(r)); - goto hypdon; - } - if( d <= -1.0 ) - goto hypdiv; -} + /* Thanks to Christian Burger + * for reporting a bug here. 
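The two x < -1 branches above correspond to standard linear transformations: for x < -2 the 1/x transformation (AMS55 15.3.7, already cited in the file header), and for -2 <= x < -1 the Pfaff transformation, applied on whichever of a, b has the smaller magnitude:

    \[
      {}_2F_1(a,b;c;x) \;=\;
        \frac{\Gamma(c)\,\Gamma(b-a)}{\Gamma(b)\,\Gamma(c-a)}\,(-x)^{-a}\,
          {}_2F_1\!\bigl(a,\; 1-c+a;\; 1-b+a;\; \tfrac{1}{x}\bigr)
      \;+\;
        \frac{\Gamma(c)\,\Gamma(a-b)}{\Gamma(a)\,\Gamma(c-b)}\,(-x)^{-b}\,
          {}_2F_1\!\bigl(b,\; 1-c+b;\; 1-a+b;\; \tfrac{1}{x}\bigr),
    \]
    \[
      {}_2F_1(a,b;c;x) \;=\; (1-x)^{-a}\,
          {}_2F_1\!\bigl(a,\; c-b;\; c;\; \tfrac{x}{x-1}\bigr).
    \]

The first is exactly the p, q combination with the four Gamma prefactors in the x < -2 block; the second is the pow(s, -a) * hyp2f1(a, c-b, c, x/(x-1)) call (or its a/b-swapped twin) in the x < -1 block.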
*/ + if (fabs(ax - 1.0) < EPS) { /* |x| == 1.0 */ + if (x > 0.0) { + if (neg_int_ca_or_cb) { + if (d >= 0.0) + goto hypf; + else + goto hypdiv; + } + if (d <= 0.0) + goto hypdiv; + y = gamma(c) * gamma(d) / (gamma(p) * gamma(r)); + goto hypdon; + } + if (d <= -1.0) + goto hypdiv; + } -/* Conditionally make d > 0 by recurrence on c - * AMS55 #15.2.27 - */ -if( d < 0.0 ) - { -/* Try the power series first */ - y = hyt2f1( a, b, c, x, &err ); - if( err < ETHRESH ) - goto hypdon; -/* Apply the recurrence if power series fails */ - err = 0.0; - aid = 2 - id; - e = c + aid; - d2 = hyp2f1(a,b,e,x); - d1 = hyp2f1(a,b,e+1.0,x); - q = a + b + 1.0; - for( i=0; i ETHRESH ) - { - mtherr( "hyp2f1", PLOSS ); -/* printf( "Estimated err = %.2e\n", err ); */ - } -return(y); + /* Conditionally make d > 0 by recurrence on c + * AMS55 #15.2.27 + */ + if (d < 0.0) { + /* Try the power series first */ + y = hyt2f1(a, b, c, x, &err); + if (err < ETHRESH) + goto hypdon; + /* Apply the recurrence if power series fails */ + err = 0.0; + aid = 2 - id; + e = c + aid; + d2 = hyp2f1(a, b, e, x); + d1 = hyp2f1(a, b, e + 1.0, x); + q = a + b + 1.0; + for (i = 0; i < aid; i++) { + r = e - 1.0; + y = (e * (r - (2.0 * e - q) * x) * d2 + + (e - a) * (e - b) * x * d1) / (e * r * s); + e = r; + d1 = d2; + d2 = y; + } + goto hypdon; + } + + + if (neg_int_ca_or_cb) + goto hypf; /* negative integer c-a or c-b */ + + hypok: + y = hyt2f1(a, b, c, x, &err); + + + hypdon: + if (err > ETHRESH) { + mtherr("hyp2f1", PLOSS); + /* printf( "Estimated err = %.2e\n", err ); */ + } + return (y); /* The transformation for c-a or c-b negative integer * AMS55 #15.3.3 */ -hypf: -y = pow( s, d ) * hys2f1( c-a, c-b, c, x, &err ); -goto hypdon; + hypf: + y = pow(s, d) * hys2f1(c - a, c - b, c, x, &err); + goto hypdon; /* The alarm exit */ -hypdiv: -mtherr( "hyp2f1", OVERFLOW ); -return( MAXNUM ); + hypdiv: + mtherr("hyp2f1", OVERFLOW); + return NPY_INFINITY; } @@ -295,141 +287,150 @@ /* Apply transformations for |x| near 1 * then call the power series */ -static double hyt2f1( a, b, c, x, loss ) +static double hyt2f1(a, b, c, x, loss) double a, b, c, x; double *loss; { -double p, q, r, s, t, y, d, err, err1; -double ax, id, d1, d2, e, y1; -int i, aid; - -err = 0.0; -s = 1.0 - x; -if( x < -0.5 ) - { - if( b > a ) - y = pow( s, -a ) * hys2f1( a, c-b, c, -x/s, &err ); - - else - y = pow( s, -b ) * hys2f1( c-a, b, c, -x/s, &err ); + double p, q, r, s, t, y, d, err, err1; + double ax, id, d1, d2, e, y1; + int i, aid; - goto done; - } + int ia, ib, neg_int_a = 0, neg_int_b = 0; -d = c - a - b; -id = round(d); /* nearest integer to d */ + ia = round(a); + ib = round(b); -if( x > 0.9 ) -{ -if( fabs(d-id) > EPS ) /* test for integer c-a-b */ - { -/* Try the power series first */ - y = hys2f1( a, b, c, x, &err ); - if( err < ETHRESH ) - goto done; -/* If power series fails, then apply AMS55 #15.3.6 */ - q = hys2f1( a, b, 1.0-d, s, &err ); - q *= gamma(d) /(gamma(c-a) * gamma(c-b)); - r = pow(s,d) * hys2f1( c-a, c-b, d+1.0, s, &err1 ); - r *= gamma(-d)/(gamma(a) * gamma(b)); - y = q + r; - - q = fabs(q); /* estimate cancellation error */ - r = fabs(r); - if( q > r ) - r = q; - err += err1 + (MACHEP*r)/y; - - y *= gamma(c); - goto done; - } -else - { -/* Psi function expansion, AMS55 #15.3.10, #15.3.11, #15.3.12 */ - if( id >= 0.0 ) - { - e = d; - d1 = d; - d2 = 0.0; - aid = id; - } - else - { - e = -d; - d1 = 0.0; - d2 = d; - aid = -id; - } - - ax = log(s); - - /* sum for t = 0 */ - y = psi(1.0) + psi(1.0+e) - psi(a+d1) - psi(b+d1) - ax; - y /= gamma(e+1.0); - 
- p = (a+d1) * (b+d1) * s / gamma(e+2.0); /* Poch for t=1 */ - t = 1.0; - do - { - r = psi(1.0+t) + psi(1.0+t+e) - psi(a+t+d1) - - psi(b+t+d1) - ax; - q = p * r; - y += q; - p *= s * (a+t+d1) / (t+1.0); - p *= (b+t+d1) / (t+1.0+e); - t += 1.0; - } - while( fabs(q/y) > EPS ); - - - if( id == 0.0 ) - { - y *= gamma(c)/(gamma(a)*gamma(b)); - goto psidon; - } - - y1 = 1.0; - - if( aid == 1 ) - goto nosum; - - t = 0.0; - p = 1.0; - for( i=1; i 0.0 ) - y *= q; - else - y1 *= q; - - y += y1; -psidon: - goto done; - } + if (a <= 0 && fabs(a - ia) < EPS) { /* a is a negative integer */ + neg_int_a = 1; + } -} + if (b <= 0 && fabs(b - ib) < EPS) { /* b is a negative integer */ + neg_int_b = 1; + } + + err = 0.0; + s = 1.0 - x; + if (x < -0.5 && !(neg_int_a || neg_int_b)) { + if (b > a) + y = pow(s, -a) * hys2f1(a, c - b, c, -x / s, &err); + + else + y = pow(s, -b) * hys2f1(c - a, b, c, -x / s, &err); + + goto done; + } + + d = c - a - b; + id = round(d); /* nearest integer to d */ + + if (x > 0.9 && !(neg_int_a || neg_int_b)) { + if (fabs(d - id) > EPS) { + /* test for integer c-a-b */ + /* Try the power series first */ + y = hys2f1(a, b, c, x, &err); + if (err < ETHRESH) + goto done; + /* If power series fails, then apply AMS55 #15.3.6 */ + q = hys2f1(a, b, 1.0 - d, s, &err); + q *= gamma(d) / (gamma(c - a) * gamma(c - b)); + r = pow(s, d) * hys2f1(c - a, c - b, d + 1.0, s, &err1); + r *= gamma(-d) / (gamma(a) * gamma(b)); + y = q + r; + + q = fabs(q); /* estimate cancellation error */ + r = fabs(r); + if (q > r) + r = q; + err += err1 + (MACHEP * r) / y; + + y *= gamma(c); + goto done; + } else { + /* Psi function expansion, AMS55 #15.3.10, #15.3.11, #15.3.12 + * + * Although AMS55 does not explicitly state it, this expansion fails + * for negative integer a or b, since the psi and Gamma functions + * involved have poles. 
+ */ + + if (id >= 0.0) { + e = d; + d1 = d; + d2 = 0.0; + aid = id; + } else { + e = -d; + d1 = 0.0; + d2 = d; + aid = -id; + } + + ax = log(s); + + /* sum for t = 0 */ + y = psi(1.0) + psi(1.0 + e) - psi(a + d1) - psi(b + d1) - ax; + y /= gamma(e + 1.0); + + p = (a + d1) * (b + d1) * s / gamma(e + 2.0); /* Poch for t=1 */ + t = 1.0; + do { + r = psi(1.0 + t) + psi(1.0 + t + e) - psi(a + t + d1) + - psi(b + t + d1) - ax; + q = p * r; + y += q; + p *= s * (a + t + d1) / (t + 1.0); + p *= (b + t + d1) / (t + 1.0 + e); + t += 1.0; + } + while (fabs(q / y) > EPS); + + + if (id == 0.0) { + y *= gamma(c) / (gamma(a) * gamma(b)); + goto psidon; + } + + y1 = 1.0; + + if (aid == 1) + goto nosum; + + t = 0.0; + p = 1.0; + for (i = 1; i < aid; i++) { + r = 1.0 - e + t; + p *= s * (a + t + d2) * (b + t + d2) / r; + t += 1.0; + p /= t; + y1 += p; + } + nosum: + p = gamma(c); + y1 *= gamma(e) * p / (gamma(a + d1) * gamma(b + d1)); + + y *= p / (gamma(a + d2) * gamma(b + d2)); + if ((aid & 1) != 0) + y = -y; + + q = pow(s, id); /* s to the id power */ + if (id > 0.0) + y *= q; + else + y1 *= q; + + y += y1; + psidon: + goto done; + } + + } /* Use defining power series if no special cases */ -y = hys2f1( a, b, c, x, &err ); + y = hys2f1(a, b, c, x, &err); -done: -*loss = err; -return(y); + done: + *loss = err; + return (y); } @@ -438,45 +439,127 @@ /* Defining power series expansion of Gauss hypergeometric function */ -static double hys2f1( a, b, c, x, loss ) +static double hys2f1(a, b, c, x, loss) double a, b, c, x; -double *loss; /* estimates loss of significance */ +double *loss; /* estimates loss of significance */ +{ + double f, g, h, k, m, s, u, umax, t; + int i; + int ia, ib, intflag = 0; + + if (fabs(b) > fabs(a)) { + /* Ensure that |a| > |b| ... */ + f = b; + b = a; + a = f; + } + + ia = round(a); + ib = round(b); + + if (fabs(b-ib) < EPS && ib <= 0 && fabs(b) < fabs(a)) { + /* .. except when `b` is a smaller negative integer */ + f = b; + b = a; + a = f; + intflag = 1; + } + + if ((fabs(a) > fabs(c) + 1 || intflag) && fabs(c-a) > 2 && fabs(a) > 2) { + /* |a| >> |c| implies that large cancellation error is to be expected. + * + * We try to reduce it with the recurrence relations + */ + return hyp2f1ra(a, b, c, x, loss); + } + + i = 0; + umax = 0.0; + f = a; + g = b; + h = c; + s = 1.0; + u = 1.0; + k = 0.0; + do { + if (fabs(h) < EPS) { + *loss = 1.0; + return NPY_INFINITY; + } + m = k + 1.0; + u = u * ((f + k) * (g + k) * x / ((h + k) * m)); + s += u; + k = fabs(u); /* remember largest term summed */ + if (k > umax) + umax = k; + k = m; + if (++i > 10000) { /* should never happen */ + *loss = 1.0; + return (s); + } + } + while (fabs(u / s) > MACHEP); + + /* return estimated relative error */ + *loss = (MACHEP * umax) / fabs(s) + (MACHEP * i); + + return (s); +} + + +/* + * Evaluate hypergeometric function by two-term recurrence in `a`. + * + * This avoids some of the loss of precision in the strongly alternating + * hypergeometric series, and can be used to reduce the `a` and `b` parameters + * to smaller values. 
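hys2f1 above is the defining series itself, summed term by term while tracking the largest term for the cancellation-error estimate; in Pochhammer notation:

    \[
      {}_2F_1(a,b;c;x) \;=\; \sum_{k=0}^{\infty}
        \frac{(a)_k\,(b)_k}{(c)_k}\,\frac{x^k}{k!},
      \qquad (a)_k = a(a+1)\cdots(a+k-1),
    \]

with the loop variable u carrying the ratio of successive terms, u_{k+1} = u_k (a+k)(b+k) x / ((c+k)(k+1)), and the reported loss built from the largest |u| seen relative to the final sum.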
+ * + * AMS55 #15.2.10 + */ +static double hyp2f1ra(double a, double b, double c, double x, + double* loss) { -double f, g, h, k, m, s, u, umax; -int i; + double f2, f1, f0; + int n, m, da; + double t, err; + + /* Don't cross c or zero */ + if ((c < 0 && a <= c) || (c >= 0 && a >= c)) { + da = round(a - c); + } else { + da = round(a); + } + t = a - da; -i = 0; -umax = 0.0; -f = a; -g = b; -h = c; -s = 1.0; -u = 1.0; -k = 0.0; -do - { - if( fabs(h) < EPS ) - { - *loss = 1.0; - return( MAXNUM ); - } - m = k + 1.0; - u = u * ((f+k) * (g+k) * x / ((h+k) * m)); - s += u; - k = fabs(u); /* remember largest term summed */ - if( k > umax ) - umax = k; - k = m; - if( ++i > 10000 ) /* should never happen */ - { - *loss = 1.0; - return(s); - } - } -while( fabs(u/s) > MACHEP ); + *loss = 0; -/* return estimated relative error */ -*loss = (MACHEP*umax)/fabs(s) + (MACHEP*i); + assert(da != 0); + + if (da < 0) { + /* Recurse down */ + f2 = 0; + f1 = hys2f1(t, b, c, x, &err); *loss += err; + f0 = hys2f1(t-1, b, c, x, &err); *loss += err; + t -= 1; + for (n = 1; n < -da; ++n) { + f2 = f1; + f1 = f0; + f0 = -(2*t-c-t*x+b*x)/(c-t)*f1 - t*(x-1)/(c-t)*f2; + t -= 1; + } + } else { + /* Recurse up */ + f2 = 0; + f1 = hys2f1(t, b, c, x, &err); *loss += err; + f0 = hys2f1(t+1, b, c, x, &err); *loss += err; + t += 1; + for (n = 1; n < da; ++n) { + f2 = f1; + f1 = f0; + f0 = -((2*t-c-t*x+b*x)*f1 + (c-t)*f2)/(t*(x-1)); + t += 1; + } + } -return(s); + return f0; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/hyperg.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/hyperg.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/hyperg.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/hyperg.c 2010-07-26 15:48:36.000000000 +0100 @@ -66,23 +66,10 @@ #include "mconf.h" -#ifdef ANSIPROT -extern double exp ( double ); -extern double log ( double ); -extern double gamma ( double ); -extern double lgam ( double ); -extern double fabs ( double ); -double hyp2f0 ( double, double, double, int, double * ); -static double hy1f1p(double, double, double, double *); -static double hy1f1a(double, double, double, double *); -double hyperg (double, double, double); -#else -double exp(), log(), gamma(), lgam(), fabs(), hyp2f0(); -static double hy1f1p(); -static double hy1f1a(); -double hyperg(); -#endif -extern double MAXNUM, MACHEP, NAN, INFINITY; +extern double MAXNUM, MACHEP; + +static double hy1f1p(double a, double b, double x, double *acanc ); +static double hy1f1a(double a, double b, double x, double *acanc ); double hyperg( a, b, x) double a, b, x; @@ -289,7 +276,7 @@ /* nan */ acanc = 1.0; -if (asum == INFINITY || asum == -INFINITY) +if (asum == NPY_INFINITY || asum == -NPY_INFINITY) /* infinity */ acanc = 0; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/i0.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/i0.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/i0.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/i0.c 2010-07-26 15:48:36.000000000 +0100 @@ -351,14 +351,6 @@ }; #endif -#ifdef ANSIPROT -extern double chbevl ( double, void *, int ); -extern double exp ( double ); -extern double sqrt ( double ); -#else -double chbevl(), exp(), sqrt(); -#endif - double i0(x) double x; { diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/i1.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/i1.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/i1.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/i1.c 
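The recurrence implemented by hyp2f1ra above, in both the downward and the upward loop, is the contiguous relation in a from AMS55 15.2.10, written here with F(a) short for 2F1(a, b; c; x):

    \[
      (c-a)\,F(a-1) \;+\; \bigl(2a - c + (b-a)x\bigr)\,F(a) \;+\; a\,(x-1)\,F(a+1) \;=\; 0,
    \]

solved for F(a-1) when stepping a downward and for F(a+1) when stepping upward, so that the alternating power series is only ever evaluated at the reduced parameter t = a - da, with da chosen so the recursion does not step across c or zero.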
2010-07-26 15:48:36.000000000 +0100 @@ -350,14 +350,6 @@ #endif /* i1.c */ -#ifdef ANSIPROT -extern double chbevl ( double, void *, int ); -extern double exp ( double ); -extern double sqrt ( double ); -extern double fabs ( double ); -#else -double chbevl(), exp(), sqrt(), fabs(); -#endif double i1(x) double x; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/igam.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/igam.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/igam.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/igam.c 2010-07-26 15:48:36.000000000 +0100 @@ -84,9 +84,6 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double lgam(), exp(), log(), fabs(), igam(), igamc(); -#endif extern double MACHEP, MAXLOG; static double big = 4.503599627370496e15; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/igami.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/igami.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/igami.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/igami.c 2010-07-26 15:48:36.000000000 +0100 @@ -51,10 +51,7 @@ #include "mconf.h" #include -extern double MACHEP, MAXNUM, MAXLOG, MINLOG, NAN; -#ifndef ANSIPROT -double igamc(), ndtri(), exp(), fabs(), log(), sqrt(), lgam(); -#endif +extern double MACHEP, MAXNUM, MAXLOG, MINLOG; double igami( a, y0 ) double a, y0; @@ -71,7 +68,7 @@ if ((y0<0.0) || (y0>1.0) || (a<=0)) { mtherr("igami", DOMAIN); - return(NAN); + return(NPY_NAN); } if (y0==0.0) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/incbet.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/incbet.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/incbet.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/incbet.c 2010-07-26 15:48:36.000000000 +0100 @@ -68,19 +68,13 @@ #endif extern double MACHEP, MINLOG, MAXLOG; -#ifdef ANSIPROT -static double incbcf(double, double, double); -static double incbd(double, double, double); -static double pseries(double, double, double); -#else -double Gamma(), lgam(), exp(), log(), pow(), fabs(); -static double incbcf(), incbd(), pseries(); -#endif static double big = 4.503599627370496e15; static double biginv = 2.22044604925031308085e-16; -extern double NAN; +static double incbcf(double a, double b, double x ); +static double incbd(double a, double b, double x ); +static double pseries(double a, double b, double x); double incbet( aa, bb, xx ) double aa, bb, xx; @@ -99,7 +93,7 @@ return( 1.0 ); domerr: mtherr( "incbet", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } flag = 0; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/incbi.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/incbi.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/incbi.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/incbi.c 2010-07-26 15:48:36.000000000 +0100 @@ -47,9 +47,6 @@ #include "mconf.h" extern double MACHEP, MAXNUM, MAXLOG, MINLOG; -#ifndef ANSIPROT -double ndtri(), exp(), fabs(), log(), sqrt(), lgam(), incbet(); -#endif double incbi( aa, bb, yy0 ) double aa, bb, yy0; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/isnan.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/isnan.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/isnan.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/isnan.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,130 +0,0 @@ -/* isnan() - * signbit() - * isfinite() - * - * Floating point numeric utilities - * - * - * - * 
SYNOPSIS: - * - * double ceil(), floor(), frexp(), ldexp(); --- gone - * int signbit(), isnan(), isfinite(); - * double x, y; - * int expnt, n; - * - * y = floor(x); -gone - * y = ceil(x); -gone - * y = frexp( x, &expnt ); -gone - * y = ldexp( x, n ); -gone - * n = signbit(x); - * n = isnan(x); - * n = isfinite(x); - * - * - * - * DESCRIPTION: - * - * All four routines return a double precision floating point - * result. - * - * floor() returns the largest integer less than or equal to x. - * It truncates toward minus infinity. - * - * ceil() returns the smallest integer greater than or equal - * to x. It truncates toward plus infinity. - * - * frexp() extracts the exponent from x. It returns an integer - * power of two to expnt and the significand between 0.5 and 1 - * to y. Thus x = y * 2**expn. - * - * ldexp() multiplies x by 2**n. - * - * signbit(x) returns 1 if the sign bit of x is 1, else 0. - * - * These functions are part of the standard C run time library - * for many but not all C compilers. The ones supplied are - * written in C for either DEC or IEEE arithmetic. They should - * be used only if your compiler library does not already have - * them. - * - * The IEEE versions assume that denormal numbers are implemented - * in the arithmetic. Some modifications will be required if - * the arithmetic has abrupt rather than gradual underflow. - */ - - -/* -Cephes Math Library Release 2.3: March, 1995 -Copyright 1984, 1995 by Stephen L. Moshier -*/ -#include -#include - -#include "mconf.h" - -/* XXX: horrible hacks, but those cephes macros are buggy and just plain ugly anywa. - * We should use npy_* macros instead once npy_math can be used reliably by - * packages outside numpy - */ -#undef isnan -#undef signbit -#undef isfinite - -#define isnan(x) ((x) != (x)) - -int cephes_isnan(double x) -{ - return isnan(x); -} - -int isfinite(double x) -{ - return !isnan((x) + (-x)); -} - -static int isbigendian(void) -{ - const union { - npy_uint32 i; - char c[4]; - } bint = {0x01020304}; - - if (bint.c[0] == 1) { - return 1; - } - return 0; -} - -int signbit(double x) -{ - union - { - double d; - short s[4]; - int i[2]; - } u; - - u.d = x; - - /* - * Tuis is stupid, we test for endianness every time, but that the easiest - * way I can see without using platform checks - for scipy 0.8.0, we should - * use npy_math - */ -#if SIZEOF_INT == 4 - if (isbigendian()) { - return u.i[1] < 0; - } else { - return u.i[0] < 0; - } - -#else /* SIZEOF_INT != 4 */ - - if (isbigendian()) { - return u.s[3] < 0; - } else { - return u.s[0] < 0; - } -#endif /* SIZEOF_INT */ -} diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/j0.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/j0.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/j0.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/j0.c 2010-07-26 15:48:36.000000000 +0100 @@ -458,19 +458,7 @@ }; #endif -#ifdef ANSIPROT -extern double polevl ( double, void *, int ); -extern double p1evl ( double, void *, int ); -extern double log ( double ); -extern double sin ( double ); -extern double cos ( double ); -extern double sqrt ( double ); -double j0 ( double ); -#else -double polevl(), p1evl(), log(), sin(), cos(), sqrt(); -double j0(); -#endif -extern double TWOOPI, SQ2OPI, PIO4; +extern double TWOOPI, SQ2OPI; double j0(x) double x; @@ -495,7 +483,7 @@ q = 25.0/(x*x); p = polevl( q, PP, 6)/polevl( q, PQ, 6 ); q = polevl( q, QP, 7)/p1evl( q, QQ, 7 ); -xn = x - PIO4; +xn = x - NPY_PI_4; p = p * cos(xn) - w * q * sin(xn); return( 
p * SQ2OPI / sqrt(x) ); } @@ -510,10 +498,9 @@ */ /* -#define PIO4 .78539816339744830962 +#define NPY_PI_4 .78539816339744830962 #define SQ2OPI .79788456080286535588 */ -extern double INFINITY, NAN; double y0(x) double x; @@ -524,10 +511,10 @@ { if (x == 0.0) { mtherr("y0", SING); - return -INFINITY; + return -NPY_INFINITY; } else if (x < 0.0) { mtherr("y0", DOMAIN); - return NAN; + return NPY_NAN; } z = x * x; w = polevl( z, YP, 7) / p1evl( z, YQ, 7 ); @@ -539,7 +526,7 @@ z = 25.0 / (x * x); p = polevl( z, PP, 6)/polevl( z, PQ, 6 ); q = polevl( z, QP, 7)/p1evl( z, QQ, 7 ); -xn = x - PIO4; +xn = x - NPY_PI_4; p = p * sin(xn) + w * q * cos(xn); return( p * SQ2OPI / sqrt(x) ); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/j1.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/j1.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/j1.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/j1.c 2010-07-26 15:48:36.000000000 +0100 @@ -444,18 +444,6 @@ #define Z2 (*(double *)DZ2) #endif -#ifdef ANSIPROT -extern double polevl ( double, void *, int ); -extern double p1evl ( double, void *, int ); -extern double log ( double ); -extern double sin ( double ); -extern double cos ( double ); -extern double sqrt ( double ); -double j1 ( double ); -#else -double polevl(), p1evl(), log(), sin(), cos(), sqrt(); -double j1(); -#endif extern double TWOOPI, THPIO4, SQ2OPI; double j1(x) @@ -485,8 +473,6 @@ } -extern double INFINITY, NAN; - double y1(x) double x; { @@ -496,10 +482,10 @@ { if (x == 0.0) { mtherr("y1", SING); - return -INFINITY; + return -NPY_INFINITY; } else if (x <= 0.0) { mtherr("y1", DOMAIN); - return NAN; + return NPY_NAN; } z = x * x; w = x * (polevl( z, YP, 5 ) / p1evl( z, YQ, 8 )); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/jv.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/jv.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/jv.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/jv.c 2010-07-26 15:48:36.000000000 +0100 @@ -60,39 +60,15 @@ #define MAXGAM 171.624376956302725 #endif -#ifdef ANSIPROT -extern int airy(double, double *, double *, double *, double *); -extern double fabs(double); -extern double floor(double); -extern double frexp(double, int *); -extern double polevl(double, void *, int); -extern double j0(double); -extern double j1(double); -extern double sqrt(double); -extern double cbrt(double); -extern double exp(double); -extern double log(double); -extern double sin(double); -extern double cos(double); -extern double acos(double); -extern double pow(double, double); -extern double gamma(double); -extern double lgam(double); -static double recur(double *, double, double *, int); -static double jvs(double, double); -static double hankel(double, double); -static double jnx(double, double); -static double jnt(double, double); -#else -int airy(); -double fabs(), floor(), frexp(), polevl(), j0(), j1(), sqrt(), cbrt(); -double exp(), log(), sin(), cos(), acos(), pow(), gamma(), lgam(); -static double recur(), jvs(), hankel(), jnx(), jnt(); -#endif - -extern double MAXNUM, MACHEP, MINLOG, MAXLOG, INFINITY, NAN; +extern double MAXNUM, MACHEP, MINLOG, MAXLOG; #define BIG 1.44115188075855872E+17 +static double jvs(double n, double x); +static double hankel(double n, double x); +static double recur(double *n, double x, double *newn, int cancel); +static double jnx(double n, double x); +static double jnt(double n, double x); + double jv(double n, double x) { double k, q, t, y, an; @@ -123,7 
+99,7 @@ if ((x < 0.0) && (y != an)) { mtherr("Jv", DOMAIN); - y = NAN; + y = NPY_NAN; goto done; } @@ -231,7 +207,7 @@ */ if (n < 0.0) { mtherr("Jv", TLOSS); - y = NAN; + y = NPY_NAN; goto done; } t = x / n; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/k0.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/k0.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/k0.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/k0.c 2010-07-26 15:48:36.000000000 +0100 @@ -273,17 +273,7 @@ #endif /* k0.c */ -#ifdef ANSIPROT -extern double chbevl ( double, void *, int ); -extern double exp ( double ); -extern double i0 ( double ); -extern double log ( double ); -extern double sqrt ( double ); -#else -double chbevl(), exp(), i0(), log(), sqrt(); -#endif extern double PI; -extern double INFINITY, NAN; double k0(x) double x; @@ -292,10 +282,10 @@ if (x == 0.0) { mtherr("k0", SING); - return INFINITY; + return NPY_INFINITY; } else if (x < 0.0) { mtherr("k0", DOMAIN); - return NAN; + return NPY_NAN; } if( x <= 2.0 ) @@ -319,10 +309,10 @@ if (x == 0.0) { mtherr("k0e", SING); - return INFINITY; + return NPY_INFINITY; } else if (x < 0.0) { mtherr( "k0e", DOMAIN ); - return NAN; + return NPY_NAN; } if( x <= 2.0 ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/k1.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/k1.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/k1.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/k1.c 2010-07-26 15:48:36.000000000 +0100 @@ -276,17 +276,8 @@ }; #endif -#ifdef ANSIPROT -extern double chbevl ( double, void *, int ); -extern double exp ( double ); -extern double i1 ( double ); -extern double log ( double ); -extern double sqrt ( double ); -#else -double chbevl(), exp(), i1(), log(), sqrt(); -#endif extern double PI; -extern double MINLOG, INFINITY, NAN; +extern double MINLOG; double k1(x) double x; @@ -295,10 +286,10 @@ if (x == 0.0) { mtherr("k1", SING); - return INFINITY; + return NPY_INFINITY; } else if (x < 0.0) { mtherr("k1", DOMAIN); - return NAN; + return NPY_NAN; } z = 0.5 * x; @@ -322,10 +313,10 @@ if (x == 0.0) { mtherr("k1e", SING); - return INFINITY; + return NPY_INFINITY; } else if (x < 0.0) { mtherr("k1e", DOMAIN); - return NAN; + return NPY_NAN; } if( x <= 2.0 ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/kn.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/kn.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/kn.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/kn.c 2010-07-26 15:48:36.000000000 +0100 @@ -81,15 +81,7 @@ #define EUL 5.772156649015328606065e-1 #define MAXFAC 31 -#ifdef ANSIPROT -extern double fabs ( double ); -extern double exp ( double ); -extern double log ( double ); -extern double sqrt ( double ); -#else -double fabs(), exp(), log(), sqrt(); -#endif -extern double MACHEP, MAXNUM, MAXLOG, PI, INFINITY, NAN; +extern double MACHEP, MAXNUM, MAXLOG, PI; double kn( nn, x ) int nn; @@ -114,10 +106,10 @@ if(x <= 0.0) { if( x < 0.0 ) { mtherr("kn", DOMAIN); - return NAN; + return NPY_NAN; } else { mtherr("kn", SING); - return INFINITY; + return NPY_INFINITY; } } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/kolmogorov.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/kolmogorov.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/kolmogorov.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/kolmogorov.c 2010-07-26 15:48:36.000000000 +0100 @@ -24,15 +24,7 @@ #include "mconf.h" 
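The hunks above for k0.c, k1.c, kn.c and their neighbours all follow the same pattern: the old cephes NAN / INFINITY globals and the ANSIPROT prototype blocks are dropped, and domain or singularity errors now return the NPY_NAN / NPY_INFINITY constants from numpy's npy_math.h. A minimal standalone sketch of that idiom (a hypothetical function, not part of the patch; it only assumes numpy's headers are on the include path):

    #include <math.h>
    #include <numpy/npy_math.h>   /* NPY_NAN, NPY_INFINITY, npy_isnan() */

    /* Illustrative only: domain/singularity handling in the style the
     * patched cephes routines now use (the real routines also call
     * mtherr(name, DOMAIN) or mtherr(name, SING) before returning). */
    double example_log_like(double x)
    {
        if (npy_isnan(x) || x < 0.0)
            return NPY_NAN;          /* domain error */
        if (x == 0.0)
            return -NPY_INFINITY;    /* singularity */
        return log(x);
    }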
-#ifndef ANSIPROT -double pow (), floor (), lgam (), exp (), sqrt (), log (), fabs (); -#else -double smirnov(int,double); -double smirnovi(int,double); -double kolmogorov (double); -double kolmogi (double); -#endif -extern double MAXLOG, NAN; +extern double MAXLOG; /* Exact Smirnov statistic, for one-sided test. */ double @@ -44,7 +36,7 @@ double evn, omevn, p, t, c, lgamnp1; if (n <= 0 || e < 0.0 || e > 1.0) - return (NAN); + return (NPY_NAN); if (e == 0.0) return 1.0; nn = (int) (floor ((double) n * (1.0 - e))); p = 0.0; @@ -126,7 +118,7 @@ if (p <= 0.0 || p > 1.0) { mtherr ("smirnovi", DOMAIN); - return (NAN); + return (NPY_NAN); } /* Start with approximation p = exp(-2 n e^2). */ e = sqrt (-log (p) / (2.0 * n)); @@ -174,7 +166,7 @@ if (p <= 0.0 || p > 1.0) { mtherr ("kolmogi", DOMAIN); - return (NAN); + return (NPY_NAN); } if ( (1.0 - p ) < 1e-16) return 0.0; /* Start with approximation p = 2 exp(-2 y^2). */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/mconf.h python-scipy-0.8.0+dfsg1/scipy/special/cephes/mconf.h --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/mconf.h 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/mconf.h 2010-07-26 15:48:36.000000000 +0100 @@ -65,7 +65,11 @@ #ifndef CEPHES_MCONF_H #define CEPHES_MCONF_H +#include +#include + #include "cephes_names.h" +#include "protos.h" /* Constant definitions for math error conditions */ @@ -82,13 +86,6 @@ #define EDOM 33 #define ERANGE 34 -/* Complex numeral. */ -typedef struct - { - double r; - double i; - } cmplx; - /* Long double complex numeral. */ /* typedef struct @@ -100,24 +97,6 @@ /* Type of computer arithmetic */ -/* This is kind of improper, as the byte-order of floats may not - * be the same as the byte-order of ints. However, it works. - * - * SciPy note: we bypass this detection and set UNK to 1 to prevent Endianess - * issues. - */ - -/* -#include -#ifdef WORDS_BIGENDIAN -# define MIEEE 1 -# define BIGENDIAN 1 -#else -# define IBMPC 1 -# define BIGENDIAN 0 -#endif -*/ - /* UNKnown arithmetic, invokes coefficients given in * normal decimal format. Beware of range boundary * problems (MACHEP, MAXLOG, etc. in const.c) and @@ -155,19 +134,6 @@ /* Define to support tiny denormal numbers, else undefine. */ #define DENORMAL 1 -/* Define to ask for infinity support, else undefine. */ -#define INFINITIES 1 -#ifdef NOINFINITIES -#undef INFINITIES -#endif - -/* Define to ask for support of numbers that are Not-a-Number, - else undefine. This may automatically define INFINITIES in some files. */ -#define NANS 1 -#ifdef NONANS -#undef NANS -#endif - /* Define to distinguish between -0.0 and +0.0. */ #define MINUSZERO 1 @@ -175,14 +141,6 @@ See atan.c and clog.c. */ #define ANSIC 1 -/* Get ANSI function prototypes, if you want them. */ -#if defined(__STDC__) || defined(_MSC_EXTENSIONS) -#define ANSIPROT -#include "protos.h" -#else -int mtherr(); -#endif - /* Variable for error reporting. See mtherr.c. 
*/ extern int merror; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/mmmpy.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/mmmpy.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/mmmpy.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/mmmpy.c 2010-07-26 15:48:36.000000000 +0100 @@ -29,11 +29,7 @@ * */ - -#define ANSIPROT -#ifdef ANSIPROT -void mmmpy( int,int,double*,double*,double* ); -#endif +#include "protos.h" void mmmpy( r, c, A, B, Y ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/mtransp.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/mtransp.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/mtransp.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/mtransp.c 2010-07-26 15:48:36.000000000 +0100 @@ -27,10 +27,7 @@ */ -#define ANSIPROT -#ifdef ANSIPROT void mtransp( int,double*,double* ); -#endif void mtransp( n, A, T ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/mvmpy.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/mvmpy.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/mvmpy.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/mvmpy.c 2010-07-26 15:48:36.000000000 +0100 @@ -31,11 +31,6 @@ */ -#define ANSIPROT -#ifdef ANSIPROT -void mvmpy( int, int, double*, double*, double* ); -#endif - void mvmpy( r, c, A, V, Y ) int r, c; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/nbdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/nbdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/nbdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/nbdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -151,11 +151,6 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double incbet(), incbi(); -#endif - -extern double NAN; double nbdtrc( k, n, p ) int k, n; @@ -169,7 +164,7 @@ { domerr: mtherr( "nbdtr", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } dk = k+1; @@ -191,7 +186,7 @@ { domerr: mtherr( "nbdtr", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } dk = k+1; dn = n; @@ -212,7 +207,7 @@ { domerr: mtherr( "nbdtri", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } dk = k+1; dn = n; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/ndtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/ndtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/ndtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/ndtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -147,7 +147,7 @@ #include "mconf.h" -extern double SQRTH, NAN; +extern double SQRTH; extern double MAXLOG; #ifdef UNK @@ -380,18 +380,13 @@ #define UTHRESH 37.519379347 #endif -#ifndef ANSIPROT -double polevl(), p1evl(), exp(), log(), fabs(); -double erf(), erfc(); -#endif - double ndtr(double a) { double x, y, z; -if (isnan(a)) { +if (npy_isnan(a)) { mtherr("ndtr", DOMAIN); - return (NAN); + return (NPY_NAN); } x = a * SQRTH; @@ -416,9 +411,9 @@ { double p,q,x,y,z; -if (isnan(a)) { +if (npy_isnan(a)) { mtherr("erfc", DOMAIN); - return (NAN); + return (NPY_NAN); } if( a < 0.0 ) @@ -470,9 +465,9 @@ { double y, z; -if (isnan(x)) { +if (npy_isnan(x)) { mtherr("erf", DOMAIN); - return (NAN); + return (NPY_NAN); } if( fabs(x) > 1.0 ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/ndtri.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/ndtri.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/ndtri.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/ndtri.c 2010-07-26 15:48:36.000000000 +0100 @@ -361,10 
+361,6 @@ }; #endif -#ifndef ANSIPROT -double polevl(), p1evl(), log(), sqrt(); -#endif - double ndtri(y0) double y0; { diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/pdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/pdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/pdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/pdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -126,11 +126,6 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double igam(), igamc(), igami(); -#endif - -extern double NAN; double pdtrc( k, m ) int k; @@ -141,7 +136,7 @@ if( (k < 0) || (m <= 0.0) ) { mtherr( "pdtrc", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } v = k+1; return( igam( v, m ) ); @@ -158,7 +153,7 @@ if( (k < 0) || (m <= 0.0) ) { mtherr( "pdtr", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } v = k+1; return( igamc( v, m ) ); @@ -174,7 +169,7 @@ if( (k < 0) || (y < 0.0) || (y >= 1.0) ) { mtherr( "pdtri", DOMAIN ); - return( NAN ); + return( NPY_NAN ); } v = k+1; v = igami( v, y ); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/polevl.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/polevl.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/polevl.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/polevl.c 2010-07-26 15:48:36.000000000 +0100 @@ -48,12 +48,7 @@ Copyright 1984, 1987, 1988 by Stephen L. Moshier Direct inquiries to 30 Frost Street, Cambridge, MA 02140 */ - -#define ANSIPROT -#ifdef ANSIPROT -double polevl( double, double [], int); -double p1evl( double, double [], int); -#endif +#include "protos.h" double polevl( x, coef, N ) double x; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/polmisc.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/polmisc.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/polmisc.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/polmisc.c 2010-07-26 15:48:36.000000000 +0100 @@ -6,11 +6,6 @@ #include #include #include "mconf.h" -#ifndef ANSIPROT -double atan2(), sqrt(), fabs(), sin(), cos(); -void polclr(), polmov(), polsbt(), poladd(), polsub(), polmul(); -int poldiv(); -#endif /* Highest degree of polynomial to be handled by the polyn.c subroutine package. 
*/ @@ -261,9 +256,6 @@ double a, sc; double *w, *c; int i; -#ifndef ANSIPROT - double sin(), cos(); -#endif if (nn > N) { diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/polrt.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/polrt.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/polrt.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/polrt.c 2010-07-26 15:48:36.000000000 +0100 @@ -53,11 +53,6 @@ double i; }cmplx; */ -#ifndef ANSIPROT -double fabs(); -#else -int polrt( double [], double [], int, cmplx []); -#endif int polrt( xcof, cof, m, root ) double xcof[], cof[]; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/powi.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/powi.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/powi.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/powi.c 2010-07-26 15:48:36.000000000 +0100 @@ -44,11 +44,7 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double log(), frexp(); -int signbit(); -#endif -extern double NEGZERO, INFINITY, MAXNUM, MAXLOG, MINLOG, LOGE2; +extern double NEGZERO, MAXNUM, MAXLOG, MINLOG, LOGE2; double powi( x, nn ) double x; @@ -63,7 +59,7 @@ if( nn == 0 ) return( 1.0 ); else if( nn < 0 ) - return( INFINITY ); + return( NPY_INFINITY ); else { if( nn & 1 ) @@ -121,7 +117,7 @@ if( s > MAXLOG ) { mtherr( "powi", OVERFLOW ); - y = INFINITY; + y = NPY_INFINITY; goto done; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/protos.h python-scipy-0.8.0+dfsg1/scipy/special/cephes/protos.h --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/protos.h 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/protos.h 2010-07-26 15:48:36.000000000 +0100 @@ -1,9 +1,13 @@ -/* - * This file was automatically generated by version 1.7 of cextract. - * Manual editing not recommended. - * - * Created: Fri Mar 31 19:17:33 1995 - */ +#ifndef __SCIPY_SPECIAL_CEPHES +#define __SCIPY_SPECIAL_CEPHES + +/* Complex numeral. 
*/ +typedef struct + { + double r; + double i; + } cmplx; + extern double acosh ( double x ); extern int airy ( double x, double *ai, double *aip, double *bi, double *bip ); extern double asin ( double x ); @@ -19,7 +23,7 @@ extern double lbeta ( double a, double b ); extern double btdtr ( double a, double b, double x ); extern double cbrt ( double x ); -extern double chbevl ( double x, void *P, int n ); +extern double chbevl ( double x, double P[], int n ); extern double chdtrc ( double df, double x ); extern double chdtr ( double df, double x ); extern double chdtri ( double df, double y ); @@ -42,7 +46,6 @@ */ /*extern double cabs ( cmplx *z );*/ /* extern void csqrt ( cmplx *z, cmplx *w );*/ -extern double hypot ( double x, double y ); extern double cosh ( double x ); extern double dawsn ( double xx ); extern void eigens ( double A[], double RR[], double E[], int N ); @@ -64,18 +67,6 @@ /* extern int fftr ( double x[], int m0, double sine[] ); */ -extern double ceil ( double x ); -extern double floor ( double x ); -extern double frexp ( double x, int *pw2 ); -extern double ldexp ( double x, int pw2 ); -/* extern int signbit ( double x ); -extern int isnan ( double x ); -extern int isfinite ( double x ); -*/ -#ifndef isnan -extern int cephes_isnan ( double x ); -#define isnan cephes_isnan -#endif extern int fresnl ( double xxa, double *ssa, double *cca ); extern double Gamma ( double x ); extern double lgam ( double x ); @@ -117,10 +108,10 @@ extern long lrand ( void ); extern long lsqrt ( long x ); extern int minv ( double A[], double X[], int n, double B[], int IPS[] ); -extern int mmmpy ( int r, int c, double *A, double *B, double *Y ); +extern void mmmpy ( int r, int c, double *A, double *B, double *Y ); extern int mtherr ( char *name, int code ); -extern double polevl ( double x, void *P, int N ); -extern double p1evl ( double x, void *P, int N ); +extern double polevl ( double x, double *P, int N ); +extern double p1evl ( double x, double *P, int N ); extern void mtransp ( int n, double *A, double *T ); extern void mvmpy ( int r, int c, double *A, double *V, double *Y ); extern double nbdtrc ( int k, int n, double p ); @@ -191,3 +182,9 @@ extern void polsqt ( double pol[], double ans[], int nn ); extern void polsin ( double x[], double y[], int nn ); extern void polcos ( double x[], double y[], int nn ); + +/* polrt.c */ +int polrt( double [], double [], int, cmplx []); + +double yv(double v, double x ); +#endif diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/psi.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/psi.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/psi.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/psi.c 2010-07-26 15:48:36.000000000 +0100 @@ -108,14 +108,6 @@ #define EUL 0.57721566490153286061 -#ifdef ANSIPROT -extern double floor ( double ); -extern double log ( double ); -extern double tan ( double ); -extern double polevl ( double, void *, int ); -#else -double floor(), log(), tan(), polevl(); -#endif extern double PI, MAXNUM; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/rgamma.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/rgamma.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/rgamma.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/rgamma.c 2010-07-26 15:48:36.000000000 +0100 @@ -137,9 +137,6 @@ static char name[] = "rgamma"; -#ifndef ANSIPROT -double chbevl(), exp(), log(), sin(), lgam(); -#endif extern double PI, MAXLOG, MAXNUM; diff -Nru 
python-scipy-0.7.2+dfsg1/scipy/special/cephes/round.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/round.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/round.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/round.c 2010-07-26 15:48:36.000000000 +0100 @@ -35,8 +35,6 @@ #include "mconf.h" -extern double floor(double); - double round(double x) { double y, r; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/scipy_iv.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/scipy_iv.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/scipy_iv.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/scipy_iv.c 2010-07-26 15:48:36.000000000 +0100 @@ -69,20 +69,11 @@ #include #include "mconf.h" -#ifdef ANSIPROT -extern double exp(double); -extern double gamma(double); -extern double log(double); -extern double fabs(double); -extern double floor(double); -#else -double exp(), gamma(), log(), fabs(), floor(); -#endif -extern double MACHEP, MAXNUM, NAN, PI, INFINITY, EULER; +extern double MACHEP, MAXNUM, PI, EULER; static double iv_asymptotic(double v, double x); -void ikv_asymptotic_uniform(double v, double x, double *i, double *k); -void ikv_temme(double v, double x, double *I, double *K); +void ikv_asymptotic_uniform(double v, double x, double *Iv, double *Kv); +void ikv_temme(double v, double x, double *Iv, double *Kv); double iv(double v, double x) { @@ -102,7 +93,7 @@ if (x < 0.0) { if (t != v) { mtherr("iv", DOMAIN); - return (NAN); + return (NPY_NAN); } if (v != 2.0 * floor(v / 2.0)) { sign = -1; @@ -152,7 +143,7 @@ prefactor = exp(x) / sqrt(2 * PI * x); - if (prefactor == INFINITY) { + if (prefactor == NPY_INFINITY) { return prefactor; } @@ -356,6 +347,9 @@ double f, h, p, q, coef, sum, sum1, tolerance; double a, b, c, d, sigma, gamma1, gamma2; unsigned long k; + double gp; + double gm; + /* * |x| <= 2, Temme series converge rapidly @@ -364,8 +358,8 @@ BOOST_ASSERT(fabs(x) <= 2); BOOST_ASSERT(fabs(v) <= 0.5f); - double gp = gamma(v + 1) - 1; - double gm = gamma(-v + 1) - 1; + gp = gamma(v + 1) - 1; + gm = gamma(-v + 1) - 1; a = log(x / 2); b = exp(v * a); @@ -529,7 +523,7 @@ * Compute I(v, x) and K(v, x) simultaneously by Temme's method, see * Temme, Journal of Computational Physics, vol 19, 324 (1975) */ -void ikv_temme(double v, double x, double *I, double *K) +void ikv_temme(double v, double x, double *Iv_p, double *Kv_p) { /* Kv1 = K_(v+1), fv = I_(v+1) / I_v */ /* Ku1 = K_(u+1), fu = I_(u+1) / I_u */ @@ -540,10 +534,10 @@ int kind; kind = 0; - if (I != NULL) { + if (Iv_p != NULL) { kind |= need_i; } - if (K != NULL) { + if (Kv_p != NULL) { kind |= need_k; } @@ -556,8 +550,8 @@ u = v - n; /* -1/2 <= u < 1/2 */ if (x < 0) { - if (I != NULL) *I = NAN; - if (K != NULL) *K = NAN; + if (Iv_p != NULL) *Iv_p = NPY_NAN; + if (Kv_p != NULL) *Kv_p = NPY_NAN; mtherr("ikv_temme", DOMAIN); return; } @@ -565,25 +559,25 @@ Iv = (v == 0) ? 1 : 0; if (kind & need_k) { mtherr("ikv_temme", OVERFLOW); - Kv = INFINITY; + Kv = NPY_INFINITY; } else { - Kv = NAN; /* any value will do */ + Kv = NPY_NAN; /* any value will do */ } if (reflect && (kind & need_i)) { double z = (u + n % 2); - Iv = sin(PI * z) == 0 ? Iv : INFINITY; - if (Iv == INFINITY || Iv == -INFINITY) { + Iv = sin(PI * z) == 0 ? 
Iv : NPY_INFINITY; + if (Iv == NPY_INFINITY || Iv == -NPY_INFINITY) { mtherr("ikv_temme", OVERFLOW); } } - if (I != NULL) { - *I = Iv; + if (Iv_p != NULL) { + *Iv_p = Iv; } - if (K != NULL) { - *K = Kv; + if (Kv_p != NULL) { + *Kv_p = Kv; } return; } @@ -625,23 +619,23 @@ } } else { - Iv = NAN; /* any value will do */ + Iv = NPY_NAN; /* any value will do */ } if (reflect) { double z = (u + n % 2); - if (I != NULL) { - *I = Iv + (2 / PI) * sin(PI * z) * Kv; /* reflection formula */ + if (Iv_p != NULL) { + *Iv_p = Iv + (2 / PI) * sin(PI * z) * Kv; /* reflection formula */ } - if (K != NULL) { - *K = Kv; + if (Kv_p != NULL) { + *Kv_p = Kv; } } else { - if (I != NULL) { - *I = Iv; + if (Iv_p != NULL) { + *Iv_p = Iv; } - if (K != NULL) { - *K = Kv; + if (Kv_p != NULL) { + *Kv_p = Kv; } } return; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/shichi.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/shichi.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/shichi.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/shichi.c 2010-07-26 15:48:36.000000000 +0100 @@ -500,9 +500,6 @@ /* Sine and cosine integrals */ -#ifndef ANSIPROT -double log(), exp(), fabs(), chbevl(); -#endif #define EUL 0.57721566490153286061 extern double MACHEP, MAXNUM, PIO2; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/sici.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/sici.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/sici.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/sici.c 2010-07-26 15:48:36.000000000 +0100 @@ -54,6 +54,8 @@ Copyright 1984, 1987, 1989 by Stephen L. Moshier Direct inquiries to 30 Frost Street, Cambridge, MA 02140 */ +#include +#include #include "mconf.h" @@ -575,9 +577,6 @@ }; #endif -#ifndef ANSIPROT -double log(), sin(), cos(), polevl(), p1evl(); -#endif #define EUL 0.57721566490153286061 extern double MAXNUM, PIO2, MACHEP; @@ -606,11 +605,20 @@ } -if( x > 1.0e9 ) - { +if( x > 1.0e9 ) { + if (npy_isinf(x)) { + if (sign == -1) { + *si = -PIO2; + *ci = NPY_NAN; + } else { + *si = PIO2; + *ci = 0; + } + return 0; + } *si = PIO2 - cos(x)/x; *ci = sin(x)/x; - } +} diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/simq.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/simq.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/simq.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/simq.c 2010-07-26 15:48:36.000000000 +0100 @@ -48,10 +48,7 @@ /* simq 2 */ #include -#define ANSIPROT -#ifdef ANSIPROT int simq(double [], double [], double [], int, int, int [] ); -#endif #define fabs(x) ((x) < 0 ? 
-(x) : (x)) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/sincos.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/sincos.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/sincos.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/sincos.c 2010-07-26 15:48:36.000000000 +0100 @@ -226,12 +226,6 @@ 9.99847695156391239157E-1, }; -#ifndef ANSIPROT -double floor(); -#else -extern void sincos ( double x, double *s, double *c, int flg ); -#endif - void sincos(x, s, c, flg) double x; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/sindg.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/sindg.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/sindg.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/sindg.c 2010-07-26 15:48:36.000000000 +0100 @@ -174,15 +174,6 @@ static double lossth = 1.0e14; #endif -#ifndef ANSIPROT -double polevl(), floor(), ldexp(); -#else -extern double polevl (double, void *, int); -extern double floor(double); -extern double ldexp(double,int); -#endif -extern double PIO4; - double sindg(x) double x; { @@ -203,7 +194,7 @@ return(0.0); } -y = floor( x/45.0 ); /* integer part of x/PIO4 */ +y = floor( x/45.0 ); /* integer part of x/NPY_PI_4 */ /* strip high bits of integer part to prevent integer overflow */ z = ldexp( y, -4 ); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/spence.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/spence.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/spence.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/spence.c 2010-07-26 15:48:36.000000000 +0100 @@ -140,10 +140,7 @@ }; #endif -#ifndef ANSIPROT -double fabs(), log(), polevl(); -#endif -extern double PI, MACHEP, NAN; +extern double PI, MACHEP; double spence(x) double x; @@ -154,7 +151,7 @@ if( x < 0.0 ) { mtherr( "spence", DOMAIN ); - return(NAN); + return(NPY_NAN); } if( x == 1.0 ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/stdtr.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/stdtr.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/stdtr.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/stdtr.c 2010-07-26 15:48:36.000000000 +0100 @@ -86,10 +86,7 @@ #include "mconf.h" -extern double PI, MACHEP, MAXNUM, NAN; -#ifndef ANSIPROT -double sqrt(), atan(), incbet(), incbi(), fabs(); -#endif +extern double PI, MACHEP, MAXNUM; double stdtr( k, t ) int k; @@ -101,7 +98,7 @@ if( k <= 0 ) { mtherr( "stdtr", DOMAIN ); - return(NAN); + return(NPY_NAN); } if( t == 0 ) @@ -188,7 +185,7 @@ if( k <= 0 || p <= 0.0 || p >= 1.0 ) { mtherr( "stdtri", DOMAIN ); - return(NAN); + return(NPY_NAN); } rk = k; diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/struve.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/struve.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/struve.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/struve.c 2010-07-26 15:48:36.000000000 +0100 @@ -36,26 +36,8 @@ */ #include "mconf.h" #define DEBUG 0 -#ifdef ANSIPROT -extern double gamma ( double ); -extern double pow ( double, double ); -extern double sqrt ( double ); -extern double yn ( int, double ); -extern double jv ( double, double ); -extern double fabs ( double ); -extern double floor ( double ); -extern double sin ( double ); -extern double cos ( double ); -double yv ( double, double ); -double onef2 (double, double, double, double, double * ); -double threef0 (double, double, double, double, double * ); 
-#else -double gamma(), pow(), sqrt(), yn(), yv(), jv(), fabs(), floor(); -double sin(), cos(); -double onef2(), threef0(); -#endif static double stop = 1.37e-17; -extern double MACHEP, INFINITY; +extern double MACHEP; double onef2( a, b, c, x, err ) double a, b, c, x; @@ -224,8 +206,8 @@ if (v > -1) { return 0.0; } else if (v < -1) { - if ((int)(floor(0.5-v)-1) % 2) return -INFINITY; - else return INFINITY; + if ((int)(floor(0.5-v)-1) % 2) return -NPY_INFINITY; + else return NPY_INFINITY; } else { return 2.0/PI; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/unity.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/unity.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/unity.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/unity.c 2010-07-26 15:48:36.000000000 +0100 @@ -10,12 +10,6 @@ */ #include "mconf.h" -#ifdef INFINITIES -extern double INFINITY; -#endif -#ifndef ANSIPROT -int isnan(), isfinite(); -#endif /* log1p(x) = log(1 + x) */ /* Coefficients for log(1+x) = x - x**2/2 + x**3 P(x)/Q(x) @@ -43,9 +37,6 @@ #define SQRTH 0.70710678118654752440 #define SQRT2 1.41421356237309504880 -#ifndef ANSIPROT -double log(), polevl(), p1evl(), exp(), cos(); -#endif double log1p(double x) { @@ -83,16 +74,16 @@ { double r, xx; -#ifdef NANS -if( isnan(x) ) - return(x); -#endif -#ifdef INFINITIES -if( x == INFINITY ) - return(INFINITY); -if( x == -INFINITY ) - return(-1.0); -#endif +if (!npy_isfinite(x)) { + if (npy_isnan(x)) { + return x; + } else if (x > 0) { + return x; + } else { + return -1.0; + } + +} if( (x < -0.5) || (x > 0.5) ) return( exp(x) - 1.0 ); xx = x * x; @@ -115,13 +106,11 @@ 4.1666666666666666609054E-2, }; -extern double PIO4; - double cosm1(double x) { double xx; -if( (x < -PIO4) || (x > PIO4) ) +if( (x < -NPY_PI_4) || (x > NPY_PI_4) ) return( cos(x) - 1.0 ); xx = x * x; xx = -0.5*xx + xx * xx * polevl( xx, coscof, 6 ); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/yn.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/yn.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/yn.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/yn.c 2010-07-26 15:48:36.000000000 +0100 @@ -53,17 +53,7 @@ */ #include "mconf.h" -#ifdef ANSIPROT -extern double y0 ( double ); -extern double y1 ( double ); -extern double log ( double ); -#else -double y0(), y1(), log(); -#endif -extern double MAXNUM, MAXLOG, INFINITY; -#ifdef NANS -extern double NAN; -#endif +extern double MAXNUM, MAXLOG; double yn( n, x ) int n; @@ -92,10 +82,10 @@ /* test for overflow */ if (x == 0.0) { mtherr("yn", SING); - return -INFINITY; + return -NPY_INFINITY; } else if (x < 0.0) { mtherr("yn", DOMAIN); - return NAN; + return NPY_NAN; } /* forward recurrence on n */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/zeta.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/zeta.c --- python-scipy-0.7.2+dfsg1/scipy/special/cephes/zeta.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/zeta.c 2010-07-26 15:48:36.000000000 +0100 @@ -61,10 +61,7 @@ */ #include "mconf.h" -#ifndef ANSIPROT -double fabs(), pow(), floor(); -#endif -extern double MAXNUM, MACHEP, NAN; +extern double MAXNUM, MACHEP; /* Expansion coefficients * for Euler-Maclaurin summation formula @@ -101,7 +98,7 @@ { domerr: mtherr( "zeta", DOMAIN ); - return(NAN); + return(NPY_NAN); } if( q <= 0.0 ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/cephes/zetac.c python-scipy-0.8.0+dfsg1/scipy/special/cephes/zetac.c --- 
python-scipy-0.7.2+dfsg1/scipy/special/cephes/zetac.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/cephes/zetac.c 2010-07-26 15:48:36.000000000 +0100 @@ -494,10 +494,6 @@ /* * Riemann zeta function, minus one */ -#ifndef ANSIPROT -double sin(), floor(), Gamma(), pow(), exp(); -double polevl(), p1evl(); -#endif extern double MACHEP; double zetac(x) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/_cephesmodule.c python-scipy-0.8.0+dfsg1/scipy/special/_cephesmodule.c --- python-scipy-0.7.2+dfsg1/scipy/special/_cephesmodule.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/_cephesmodule.c 2010-07-26 15:48:36.000000000 +0100 @@ -64,7 +64,7 @@ static void * ellpj_data[] = { (void *)ellpj, (void *)ellpj,}; static void * exp1_data[] = { (void *)exp1_wrap, (void *)exp1_wrap, (void *)cexp1_wrap, (void *)cexp1_wrap,}; -static void * expi_data[] = { (void *)expi_wrap, (void *)expi_wrap,}; +static void * expi_data[] = { (void *)expi_wrap, (void *)expi_wrap, (void *)cexpi_wrap, (void *)cexpi_wrap,}; static void * expn_data[] = { (void *)expn, (void *)expn, }; static void * kn_data[] = { (void *)kn, (void *)kn, }; @@ -559,7 +559,7 @@ f = PyUFunc_FromFuncAndData(cephes1rc_functions, exp1_data, cephes_1rc_types, 4, 1, 1, PyUFunc_None, "exp1", exp1_doc, 0); PyDict_SetItemString(dictionary, "exp1", f); Py_DECREF(f); - f = PyUFunc_FromFuncAndData(cephes1_functions, expi_data, cephes_2_types, 2, 1, 1, PyUFunc_None, "expi", expi_doc, 0); + f = PyUFunc_FromFuncAndData(cephes1rc_functions, expi_data, cephes_1rc_types, 4, 1, 1, PyUFunc_None, "expi", expi_doc, 0); PyDict_SetItemString(dictionary, "expi", f); Py_DECREF(f); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/c_misc/fsolve.c python-scipy-0.8.0+dfsg1/scipy/special/c_misc/fsolve.c --- python-scipy-0.7.2+dfsg1/scipy/special/c_misc/fsolve.c 2010-04-05 08:55:24.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/c_misc/fsolve.c 2010-07-26 15:48:36.000000000 +0100 @@ -48,7 +48,7 @@ false_position(double *a, double *fa, double *b, double *fb, objective_function f, void *f_extra, double abserr, double relerr, double bisect_til, - double *best_x, double *best_f) + double *best_x, double *best_f, double *errest) { double x1=*a, f1=*fa, x2=*b, f2=*fb; fsolve_result_t r = FSOLVE_CONVERGED; @@ -163,5 +163,6 @@ r = FSOLVE_EXACT; finish: *a = x1; *fa = f1; *b = x2; *fb = f2; + *errest = w; return r; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/c_misc/gammaincinv.c python-scipy-0.8.0+dfsg1/scipy/special/c_misc/gammaincinv.c --- python-scipy-0.7.2+dfsg1/scipy/special/c_misc/gammaincinv.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/c_misc/gammaincinv.c 2010-07-26 15:48:36.000000000 +0100 @@ -1,9 +1,19 @@ +#include +#include + #include #include + #include "../cephes.h" #undef fabs #include "misc.h" +/* Limits after which to issue warnings about non-convergence */ +#define ALLOWED_ATOL (1e-306) +#define ALLOWED_RTOL (1e-9) + +void scipy_special_raise_warning(char *fmt, ...); + /* Inverse of the (regularised) incomplete Gamma integral. 
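The c_misc/fsolve.c hunk above extends false_position() to also report an error estimate (errest); in the gammaincinv() hunks that follow, the root found this way is only accepted when the solver reports convergence or when that estimate stays within ALLOWED_ATOL + ALLOWED_RTOL*fabs(best_x), otherwise a warning is raised and NPY_NAN is returned. For orientation, a rough standalone sketch of a false-position (regula falsi) solver with such an estimate; this is illustrative only, not the scipy c_misc implementation, and the bracket-width error estimate and helper names are assumptions:

    #include <math.h>

    typedef double (*objective_fn)(double x, void *extra);

    /* Hypothetical sketch: refine a bracketing interval [a, b] with
     * f(a) and f(b) of opposite sign by the false-position rule, and
     * report the final bracket width as a crude error estimate. */
    double falsi_sketch(double a, double fa, double b, double fb,
                        objective_fn f, void *extra,
                        double abserr, double relerr,
                        int maxiter, double *errest)
    {
        double x = a, fx;
        int i;

        for (i = 0; i < maxiter; i++) {
            /* secant through (a, fa) and (b, fb); stays inside the bracket */
            x = b - fb * (b - a) / (fb - fa);
            fx = f(x, extra);
            if (fx == 0.0)
                break;
            if ((fx < 0.0) == (fa < 0.0)) {
                a = x; fa = fx;      /* root now lies in (x, b) */
            } else {
                b = x; fb = fx;      /* root now lies in (a, x) */
            }
            if (fabs(b - a) < abserr + relerr * fabs(x))
                break;
        }
        *errest = fabs(b - a);
        return x;
    }

As the note added in gammaincinv() below points out, the two starting residuals (flo, fhi) must have different, nonzero signs for any solver of this family to be applicable.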
@@ -14,7 +24,7 @@ */ -extern double MACHEP; +extern double MACHEP, MAXNUM; static double gammainc(double x, double params[2]) @@ -28,13 +38,17 @@ double lo = 0.0, hi; double flo = -y, fhi = 0.25 - y; double params[2]; - double best_x, best_f; + double best_x, best_f, errest; fsolve_result_t r; - if (a <= 0.0 || y <= 0.0 || y > 0.25) { + if (a <= 0.0 || y <= 0.0 || y >= 0.25) { return cephes_igami(a, 1-y); } + /* Note: flo and fhi must have different signs (and be != 0), + * otherwise fsolve terminates with an error. + */ + params[0] = a; params[1] = y; hi = cephes_igami(a, 0.75); @@ -46,10 +60,14 @@ r = false_position(&lo, &flo, &hi, &fhi, (objective_function)gammainc, params, - MACHEP, MACHEP, 1e-2*a, - &best_x, &best_f); - if (r == FSOLVE_NOT_BRACKET) { - best_x = 0.0; + 2*MACHEP, 2*MACHEP, 1e-2*a, + &best_x, &best_f, &errest); + if (!(r == FSOLVE_CONVERGED || r == FSOLVE_EXACT) && + errest > ALLOWED_ATOL + ALLOWED_RTOL*fabs(best_x)) { + scipy_special_raise_warning( + "gammaincinv: failed to converge at (a, y) = (%.20g, %.20g): got %g +- %g, code %d\n", + a, y, best_x, errest, r); + best_x = NPY_NAN; } return best_x; } diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/c_misc/misc.h python-scipy-0.8.0+dfsg1/scipy/special/c_misc/misc.h --- python-scipy-0.7.2+dfsg1/scipy/special/c_misc/misc.h 2010-04-05 08:55:24.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/c_misc/misc.h 2010-07-26 15:48:36.000000000 +0100 @@ -18,7 +18,7 @@ fsolve_result_t false_position(double *a, double *fa, double *b, double *fb, objective_function f, void *f_extra, double abserr, double relerr, double bisect_til, - double *best_x, double *best_f); + double *best_x, double *best_f, double *errest); double besselpoly(double a, double lambda, double nu); double gammaincinv(double a, double x); diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/__init__.py python-scipy-0.8.0+dfsg1/scipy/special/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/special/__init__.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/__init__.py 2010-07-26 15:48:36.000000000 +0100 @@ -8,10 +8,9 @@ from basic import * import specfun import orthogonal -from orthogonal import legendre, chebyt, chebyu, chebyc, chebys, \ - jacobi, laguerre, genlaguerre, hermite, hermitenorm, gegenbauer, \ - sh_legendre, sh_chebyt, sh_chebyu, sh_jacobi, poch +from orthogonal import * from spfun_stats import multigammaln +from lambertw import lambertw __all__ = filter(lambda s:not s.startswith('_'),dir()) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/lambertw.c python-scipy-0.8.0+dfsg1/scipy/special/lambertw.c --- python-scipy-0.7.2+dfsg1/scipy/special/lambertw.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/lambertw.c 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,2584 @@ +/* Generated by Cython 0.12.1 on Mon May 31 10:16:35 2010 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#include "structmember.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. 
+#else + +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#if PY_VERSION_HEX < 0x02040000 + #define METH_COEXIST 0 + #define PyDict_CheckExact(op) (Py_TYPE(op) == &PyDict_Type) + #define PyDict_Contains(d,o) PySequence_Contains(d,o) +#endif + +#if PY_VERSION_HEX < 0x02050000 + typedef int Py_ssize_t; + #define PY_SSIZE_T_MAX INT_MAX + #define PY_SSIZE_T_MIN INT_MIN + #define PY_FORMAT_SIZE_T "" + #define PyInt_FromSsize_t(z) PyInt_FromLong(z) + #define PyInt_AsSsize_t(o) PyInt_AsLong(o) + #define PyNumber_Index(o) PyNumber_Int(o) + #define PyIndex_Check(o) PyNumber_Check(o) + #define PyErr_WarnEx(category, message, stacklevel) PyErr_Warn(category, message) +#endif + +#if PY_VERSION_HEX < 0x02060000 + #define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt) + #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) + #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) + #define PyVarObject_HEAD_INIT(type, size) \ + PyObject_HEAD_INIT(type) size, + #define PyType_Modified(t) + + typedef struct { + void *buf; + PyObject *obj; + Py_ssize_t len; + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; + void *internal; + } Py_buffer; + + #define PyBUF_SIMPLE 0 + #define PyBUF_WRITABLE 0x0001 + #define PyBUF_FORMAT 0x0004 + #define PyBUF_ND 0x0008 + #define PyBUF_STRIDES (0x0010 | PyBUF_ND) + #define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) + #define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) + #define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) + #define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#endif + +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" +#endif + +#if PY_MAJOR_VERSION >= 3 + #define Py_TPFLAGS_CHECKTYPES 0 + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif + +#if (PY_VERSION_HEX < 0x02060000) || (PY_MAJOR_VERSION >= 3) + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyString_Type PyUnicode_Type + #define PyString_CheckExact PyUnicode_CheckExact +#else + #define PyBytes_Type PyString_Type + #define PyBytes_CheckExact PyString_CheckExact +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) + +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyMethod_New(func, self, klass) PyInstanceMethod_New(func) +#endif + +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#else + #define _USE_MATH_DEFINES +#endif + +#if 
PY_VERSION_HEX < 0x02050000 + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),((char *)(n))) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),((char *)(n)),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),((char *)(n))) +#else + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),(n)) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),(n),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),(n)) +#endif + +#if PY_VERSION_HEX < 0x02050000 + #define __Pyx_NAMESTR(n) ((char *)(n)) + #define __Pyx_DOCSTR(n) ((char *)(n)) +#else + #define __Pyx_NAMESTR(n) (n) + #define __Pyx_DOCSTR(n) (n) +#endif +#ifdef __cplusplus +#define __PYX_EXTERN_C extern "C" +#else +#define __PYX_EXTERN_C extern +#endif +#include +#define __PYX_HAVE_API__scipy__special__lambertw +#include "math.h" +#include "numpy/npy_math.h" +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" + +#ifndef CYTHON_INLINE + #if defined(__GNUC__) + #define CYTHON_INLINE __inline__ + #elif defined(_MSC_VER) + #define CYTHON_INLINE __inline + #else + #define CYTHON_INLINE + #endif +#endif + +typedef struct {PyObject **p; char *s; const long n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/ + + +/* Type Conversion Predeclarations */ + +#if PY_MAJOR_VERSION < 3 +#define __Pyx_PyBytes_FromString PyString_FromString +#define __Pyx_PyBytes_FromStringAndSize PyString_FromStringAndSize +#define __Pyx_PyBytes_AsString PyString_AsString +#else +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +#define __Pyx_PyBytes_AsString PyBytes_AsString +#endif + +#define __Pyx_PyBytes_FromUString(s) __Pyx_PyBytes_FromString((char*)s) +#define __Pyx_PyBytes_AsUString(s) ((unsigned char*) __Pyx_PyBytes_AsString(s)) + +#define __Pyx_PyBool_FromLong(b) ((b) ? (Py_INCREF(Py_True), Py_True) : (Py_INCREF(Py_False), Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x); + +#if !defined(T_PYSSIZET) +#if PY_VERSION_HEX < 0x02050000 +#define T_PYSSIZET T_INT +#elif !defined(T_LONGLONG) +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : -1)) +#else +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : \ + ((sizeof(Py_ssize_t) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))) +#endif +#endif + + +#if !defined(T_ULONGLONG) +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : -1)))) +#else +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(x) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))))) +#endif +#if !defined(T_LONGLONG) +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : -1)))) +#else +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? 
T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : \ + ((sizeof(x) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))))) +#endif + +#define __Pyx_T_FLOATING(x) \ + ((sizeof(x) == sizeof(float)) ? T_FLOAT : \ + ((sizeof(x) == sizeof(double)) ? T_DOUBLE : -1)) + +#if !defined(T_SIZET) +#if !defined(T_ULONGLONG) +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : -1)) +#else +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(size_t) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))) +#endif +#endif + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject*); + +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) + + +#ifdef __GNUC__ +/* Test for GCC > 2.95 */ +#if __GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)) +#define likely(x) __builtin_expect(!!(x), 1) +#define unlikely(x) __builtin_expect(!!(x), 0) +#else /* __GNUC__ > 2 ... */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ > 2 ... */ +#else /* __GNUC__ */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; +static const char **__pyx_f; + + +#if !defined(CYTHON_CCOMPLEX) + #if defined(__cplusplus) + #define CYTHON_CCOMPLEX 1 + #elif defined(_Complex_I) + #define CYTHON_CCOMPLEX 1 + #else + #define CYTHON_CCOMPLEX 0 + #endif +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #include + #else + #include + #endif +#endif + +#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) + #undef _Complex_I + #define _Complex_I 1.0fj +#endif + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + typedef ::std::complex< double > __pyx_t_double_complex; + #else + typedef double _Complex __pyx_t_double_complex; + #endif +#else + typedef struct { double real, imag; } __pyx_t_double_complex; +#endif + +/* Type declarations */ + +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif + +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct * __Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); + end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; + } + #define __Pyx_RefNannySetupContext(name) void *__pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) + #define __Pyx_RefNannyFinishContext() __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) 
__Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r);} } while(0) +#else + #define __Pyx_RefNannySetupContext(name) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) +#endif /* CYTHON_REFNANNY */ +#define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);} } while(0) +#define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r);} } while(0) + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, PyObject* kw_name); /*proto*/ + +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); /*proto*/ + +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[], PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, const char* function_name); /*proto*/ + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + #define __Pyx_CREAL(z) ((z).real()) + #define __Pyx_CIMAG(z) ((z).imag()) + #else + #define __Pyx_CREAL(z) (__real__(z)) + #define __Pyx_CIMAG(z) (__imag__(z)) + #endif +#else + #define __Pyx_CREAL(z) ((z).real) + #define __Pyx_CIMAG(z) ((z).imag) +#endif + +#if defined(_WIN32) && defined(__cplusplus) && CYTHON_CCOMPLEX + #define __Pyx_SET_CREAL(z,x) ((z).real(x)) + #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) +#else + #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) + #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) +#endif + +static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); + +#if CYTHON_CCOMPLEX + #define __Pyx_c_eq(a, b) ((a)==(b)) + #define __Pyx_c_sum(a, b) ((a)+(b)) + #define __Pyx_c_diff(a, b) ((a)-(b)) + #define __Pyx_c_prod(a, b) ((a)*(b)) + #define __Pyx_c_quot(a, b) ((a)/(b)) + #define __Pyx_c_neg(a) (-(a)) + #ifdef __cplusplus + #define __Pyx_c_is_zero(z) ((z)==(double)0) + #define __Pyx_c_conj(z) (::std::conj(z)) + /*#define __Pyx_c_abs(z) (::std::abs(z))*/ + #else + #define __Pyx_c_is_zero(z) ((z)==0) + #define __Pyx_c_conj(z) (conj(z)) + /*#define __Pyx_c_abs(z) (cabs(z))*/ + #endif +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex, __pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex); + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex); + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex); + /*static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex);*/ +#endif + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list); /*proto*/ + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name); /*proto*/ + +#define __pyx_PyComplex_FromComplex(z) \ + 
PyComplex_FromDoubles((double)__Pyx_CREAL(z), \ + (double)__Pyx_CIMAG(z)) + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_intp(npy_intp); + +static CYTHON_INLINE npy_intp __Pyx_PyInt_from_py_npy_intp(PyObject *); + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject *); + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject *); + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject *); + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject *); + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject *); + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject *); + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject *); + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject *); + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject *); + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject *); + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject *); + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject *); + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject *); + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject *); + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject *); + +#ifndef __PYX_FORCE_INIT_THREADS + #if PY_VERSION_HEX < 0x02040200 + #define __PYX_FORCE_INIT_THREADS 1 + #else + #define __PYX_FORCE_INIT_THREADS 0 + #endif +#endif + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ + +static void __Pyx_WriteUnraisable(const char *name); /*proto*/ + +static void __Pyx_AddTraceback(const char *funcname); /*proto*/ + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/ +/* Module declarations from cython */ + +/* Module declarations from scipy.special.lambertw */ + +static PyUFuncGenericFunction __pyx_v_5scipy_7special_8lambertw__loop_funcs[1]; +static char __pyx_v_5scipy_7special_8lambertw__inp_outp_types[4]; +static void *__pyx_v_5scipy_7special_8lambertw_the_func_to_apply[1]; +static CYTHON_INLINE int __pyx_f_5scipy_7special_8lambertw_zisnan(__pyx_t_double_complex); /*proto*/ +static CYTHON_INLINE double __pyx_f_5scipy_7special_8lambertw_zabs(__pyx_t_double_complex); /*proto*/ +static CYTHON_INLINE __pyx_t_double_complex __pyx_f_5scipy_7special_8lambertw_zlog(__pyx_t_double_complex); /*proto*/ +static CYTHON_INLINE __pyx_t_double_complex __pyx_f_5scipy_7special_8lambertw_zexp(__pyx_t_double_complex); /*proto*/ +static void __pyx_f_5scipy_7special_8lambertw_lambertw_raise_warning(__pyx_t_double_complex); /*proto*/ +static __pyx_t_double_complex __pyx_f_5scipy_7special_8lambertw_lambertw_scalar(__pyx_t_double_complex, long, double); /*proto*/ +static void __pyx_f_5scipy_7special_8lambertw__apply_func_to_1d_vec(char **, npy_intp *, npy_intp *, void *); /*proto*/ +#define __Pyx_MODULE_NAME "scipy.special.lambertw" +int __pyx_module_is_main_scipy__special__lambertw = 0; + +/* Implementation of scipy.special.lambertw */ +static PyObject *__pyx_builtin_range; +static char __pyx_k_1[] = "Lambert W iteration failed to converge: %r"; +static char __pyx_k_3[] = ""; +static char __pyx_k_4[] = "lambertw (line 193)"; +static char __pyx_k__k[] = "k"; +static char __pyx_k__z[] = "z"; +static char __pyx_k__tol[] = "tol"; +static char __pyx_k__imag[] = "imag"; +static char __pyx_k__real[] = "real"; +static 
char __pyx_k__warn[] = "warn"; +static char __pyx_k__range[] = "range"; +static char __pyx_k____main__[] = "__main__"; +static char __pyx_k____test__[] = "__test__"; +static char __pyx_k__lambertw[] = "lambertw"; +static char __pyx_k__warnings[] = "warnings"; +static char __pyx_k___lambertw[] = "_lambertw"; +static PyObject *__pyx_kp_s_1; +static PyObject *__pyx_kp_u_4; +static PyObject *__pyx_n_s____main__; +static PyObject *__pyx_n_s____test__; +static PyObject *__pyx_n_s___lambertw; +static PyObject *__pyx_n_s__imag; +static PyObject *__pyx_n_s__k; +static PyObject *__pyx_n_s__lambertw; +static PyObject *__pyx_n_s__range; +static PyObject *__pyx_n_s__real; +static PyObject *__pyx_n_s__tol; +static PyObject *__pyx_n_s__warn; +static PyObject *__pyx_n_s__warnings; +static PyObject *__pyx_n_s__z; +static PyObject *__pyx_int_0; +static PyObject *__pyx_k_2; + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":44 + * double NPY_PI + * + * cdef inline bint zisnan(double complex x) nogil: # <<<<<<<<<<<<<< + * return npy_isnan(x.real) or npy_isnan(x.imag) + * + */ + +static CYTHON_INLINE int __pyx_f_5scipy_7special_8lambertw_zisnan(__pyx_t_double_complex __pyx_v_x) { + int __pyx_r; + int __pyx_t_1; + int __pyx_t_2; + int __pyx_t_3; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":45 + * + * cdef inline bint zisnan(double complex x) nogil: + * return npy_isnan(x.real) or npy_isnan(x.imag) # <<<<<<<<<<<<<< + * + * cdef inline double zabs(double complex x) nogil: + */ + __pyx_t_1 = npy_isnan(__Pyx_CREAL(__pyx_v_x)); + if (!__pyx_t_1) { + __pyx_t_2 = npy_isnan(__Pyx_CIMAG(__pyx_v_x)); + __pyx_t_3 = __pyx_t_2; + } else { + __pyx_t_3 = __pyx_t_1; + } + __pyx_r = __pyx_t_3; + goto __pyx_L0; + + __pyx_r = 0; + __pyx_L0:; + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":47 + * return npy_isnan(x.real) or npy_isnan(x.imag) + * + * cdef inline double zabs(double complex x) nogil: # <<<<<<<<<<<<<< + * cdef double r + * r = npy_cabs((&x)[0]) + */ + +static CYTHON_INLINE double __pyx_f_5scipy_7special_8lambertw_zabs(__pyx_t_double_complex __pyx_v_x) { + double __pyx_v_r; + double __pyx_r; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":49 + * cdef inline double zabs(double complex x) nogil: + * cdef double r + * r = npy_cabs((&x)[0]) # <<<<<<<<<<<<<< + * return r + * + */ + __pyx_v_r = npy_cabs((((npy_cdouble *)(&__pyx_v_x))[0])); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":50 + * cdef double r + * r = npy_cabs((&x)[0]) + * return r # <<<<<<<<<<<<<< + * + * cdef inline double complex zlog(double complex x) nogil: + */ + __pyx_r = __pyx_v_r; + goto __pyx_L0; + + __pyx_r = 0; + __pyx_L0:; + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":52 + * return r + * + * cdef inline double complex zlog(double complex x) nogil: # <<<<<<<<<<<<<< + * cdef npy_cdouble r + * r = npy_clog((&x)[0]) + */ + +static CYTHON_INLINE __pyx_t_double_complex __pyx_f_5scipy_7special_8lambertw_zlog(__pyx_t_double_complex __pyx_v_x) { + npy_cdouble __pyx_v_r; + __pyx_t_double_complex __pyx_r; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":54 + * cdef inline double complex zlog(double complex x) nogil: + * cdef npy_cdouble r + * r = npy_clog((&x)[0]) # <<<<<<<<<<<<<< + * return (&r)[0] + * + */ + __pyx_v_r = npy_clog((((npy_cdouble *)(&__pyx_v_x))[0])); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":55 + * cdef npy_cdouble r + * r = npy_clog((&x)[0]) + * return (&r)[0] # 
<<<<<<<<<<<<<< + * + * cdef inline double complex zexp(double complex x) nogil: + */ + __pyx_r = (((__pyx_t_double_complex *)(&__pyx_v_r))[0]); + goto __pyx_L0; + + __pyx_r = __pyx_t_double_complex_from_parts(0, 0); + __pyx_L0:; + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":57 + * return (&r)[0] + * + * cdef inline double complex zexp(double complex x) nogil: # <<<<<<<<<<<<<< + * cdef npy_cdouble r + * r = npy_cexp((&x)[0]) + */ + +static CYTHON_INLINE __pyx_t_double_complex __pyx_f_5scipy_7special_8lambertw_zexp(__pyx_t_double_complex __pyx_v_x) { + npy_cdouble __pyx_v_r; + __pyx_t_double_complex __pyx_r; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":59 + * cdef inline double complex zexp(double complex x) nogil: + * cdef npy_cdouble r + * r = npy_cexp((&x)[0]) # <<<<<<<<<<<<<< + * return (&r)[0] + * + */ + __pyx_v_r = npy_cexp((((npy_cdouble *)(&__pyx_v_x))[0])); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":60 + * cdef npy_cdouble r + * r = npy_cexp((&x)[0]) + * return (&r)[0] # <<<<<<<<<<<<<< + * + * cdef void lambertw_raise_warning(double complex z) with gil: + */ + __pyx_r = (((__pyx_t_double_complex *)(&__pyx_v_r))[0]); + goto __pyx_L0; + + __pyx_r = __pyx_t_double_complex_from_parts(0, 0); + __pyx_L0:; + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":62 + * return (&r)[0] + * + * cdef void lambertw_raise_warning(double complex z) with gil: # <<<<<<<<<<<<<< + * warnings.warn("Lambert W iteration failed to converge: %r" % z) + * + */ + +static void __pyx_f_5scipy_7special_8lambertw_lambertw_raise_warning(__pyx_t_double_complex __pyx_v_z) { + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyGILState_STATE _save = PyGILState_Ensure(); + __Pyx_RefNannySetupContext("lambertw_raise_warning"); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":63 + * + * cdef void lambertw_raise_warning(double complex z) with gil: + * warnings.warn("Lambert W iteration failed to converge: %r" % z) # <<<<<<<<<<<<<< + * + * # Heavy lifting is here: + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__warnings); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 63; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__warn); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 63; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_z); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 63; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Remainder(((PyObject *)__pyx_kp_s_1), __pyx_t_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 63; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 63; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 63; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + 
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_WriteUnraisable("scipy.special.lambertw.lambertw_raise_warning"); + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + PyGILState_Release(_save); +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":68 + * + * @cython.cdivision(True) + * cdef double complex lambertw_scalar(double complex z, long k, double tol) nogil: # <<<<<<<<<<<<<< + * """ + * This is just the implementation of W for a single input z. + */ + +static __pyx_t_double_complex __pyx_f_5scipy_7special_8lambertw_lambertw_scalar(__pyx_t_double_complex __pyx_v_z, long __pyx_v_k, double __pyx_v_tol) { + __pyx_t_double_complex __pyx_v_w; + double __pyx_v_u; + double __pyx_v_absz; + __pyx_t_double_complex __pyx_v_ew; + __pyx_t_double_complex __pyx_v_wew; + __pyx_t_double_complex __pyx_v_wewz; + __pyx_t_double_complex __pyx_v_wn; + int __pyx_v_i; + __pyx_t_double_complex __pyx_r; + int __pyx_t_1; + int __pyx_t_2; + int __pyx_t_3; + int __pyx_t_4; + long __pyx_t_5; + int __pyx_t_6; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":74 + * """ + * # Comments copied verbatim from [2] are marked with '>' + * if zisnan(z): # <<<<<<<<<<<<<< + * return z + * + */ + __pyx_t_1 = __pyx_f_5scipy_7special_8lambertw_zisnan(__pyx_v_z); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":75 + * # Comments copied verbatim from [2] are marked with '>' + * if zisnan(z): + * return z # <<<<<<<<<<<<<< + * + * # Return value: + */ + __pyx_r = __pyx_v_z; + goto __pyx_L0; + goto __pyx_L3; + } + __pyx_L3:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":82 + * #> We must be extremely careful near the singularities at -1/e and 0 + * cdef double u + * u = exp(-1) # <<<<<<<<<<<<<< + * + * cdef double absz + */ + __pyx_v_u = exp(-1); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":85 + * + * cdef double absz + * absz = zabs(z) # <<<<<<<<<<<<<< + * if absz <= u: + * if z == 0: + */ + __pyx_v_absz = __pyx_f_5scipy_7special_8lambertw_zabs(__pyx_v_z); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":86 + * cdef double absz + * absz = zabs(z) + * if absz <= u: # <<<<<<<<<<<<<< + * if z == 0: + * #> w(0,0) = 0; for all other branches we hit the pole + */ + __pyx_t_1 = (__pyx_v_absz <= __pyx_v_u); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":87 + * absz = zabs(z) + * if absz <= u: + * if z == 0: # <<<<<<<<<<<<<< + * #> w(0,0) = 0; for all other branches we hit the pole + * if k == 0: + */ + __pyx_t_1 = (__Pyx_c_eq(__pyx_v_z, __pyx_t_double_complex_from_parts(0, 0))); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":89 + * if z == 0: + * #> w(0,0) = 0; for all other branches we hit the pole + * if k == 0: # <<<<<<<<<<<<<< + * return z + * return -NPY_INFINITY + */ + __pyx_t_1 = (__pyx_v_k == 0); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":90 + * #> w(0,0) = 0; for all other branches we hit the pole + * if k == 0: + * return z # <<<<<<<<<<<<<< + * return -NPY_INFINITY + * + */ + __pyx_r = __pyx_v_z; + goto __pyx_L0; + goto __pyx_L6; + } + __pyx_L6:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":91 + * if k == 0: + * return z + * return -NPY_INFINITY # <<<<<<<<<<<<<< + * + * if k == 0: + */ + 
__pyx_r = __pyx_t_double_complex_from_parts((-NPY_INFINITY), 0); + goto __pyx_L0; + goto __pyx_L5; + } + __pyx_L5:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":93 + * return -NPY_INFINITY + * + * if k == 0: # <<<<<<<<<<<<<< + * w = z # Initial guess for iteration + * #> For small real z < 0, the -1 branch beaves roughly like log(-z) + */ + __pyx_t_1 = (__pyx_v_k == 0); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":94 + * + * if k == 0: + * w = z # Initial guess for iteration # <<<<<<<<<<<<<< + * #> For small real z < 0, the -1 branch beaves roughly like log(-z) + * elif k == -1 and z.imag ==0 and z.real < 0: + */ + __pyx_v_w = __pyx_v_z; + goto __pyx_L7; + } + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":96 + * w = z # Initial guess for iteration + * #> For small real z < 0, the -1 branch beaves roughly like log(-z) + * elif k == -1 and z.imag ==0 and z.real < 0: # <<<<<<<<<<<<<< + * w = log(-z.real) + * #> Use a simple asymptotic approximation. + */ + __pyx_t_1 = (__pyx_v_k == -1); + if (__pyx_t_1) { + __pyx_t_2 = (__Pyx_CIMAG(__pyx_v_z) == 0); + if (__pyx_t_2) { + __pyx_t_3 = (__Pyx_CREAL(__pyx_v_z) < 0); + __pyx_t_4 = __pyx_t_3; + } else { + __pyx_t_4 = __pyx_t_2; + } + __pyx_t_2 = __pyx_t_4; + } else { + __pyx_t_2 = __pyx_t_1; + } + if (__pyx_t_2) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":97 + * #> For small real z < 0, the -1 branch beaves roughly like log(-z) + * elif k == -1 and z.imag ==0 and z.real < 0: + * w = log(-z.real) # <<<<<<<<<<<<<< + * #> Use a simple asymptotic approximation. + * else: + */ + __pyx_v_w = __pyx_t_double_complex_from_parts(log((-__Pyx_CREAL(__pyx_v_z))), 0); + goto __pyx_L7; + } + /*else*/ { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":100 + * #> Use a simple asymptotic approximation. + * else: + * w = zlog(z) # <<<<<<<<<<<<<< + * #> The branches are roughly logarithmic. This approximation + * #> gets better for large |k|; need to check that this always + */ + __pyx_v_w = __pyx_f_5scipy_7special_8lambertw_zlog(__pyx_v_z); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":104 + * #> gets better for large |k|; need to check that this always + * #> works for k ~= -1, 0, 1. 
+ * if k: w = w + k*2*NPY_PI*1j # <<<<<<<<<<<<<< + * + * elif k == 0 and z.imag and zabs(z) <= 0.7: + */ + __pyx_t_5 = __pyx_v_k; + if (__pyx_t_5) { + __pyx_v_w = __Pyx_c_sum(__pyx_v_w, __Pyx_c_prod(__pyx_t_double_complex_from_parts(((__pyx_v_k * 2) * NPY_PI), 0), __pyx_t_double_complex_from_parts(0, 1.0))); + goto __pyx_L8; + } + __pyx_L8:; + } + __pyx_L7:; + goto __pyx_L4; + } + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":106 + * if k: w = w + k*2*NPY_PI*1j + * + * elif k == 0 and z.imag and zabs(z) <= 0.7: # <<<<<<<<<<<<<< + * #> Both the W(z) ~= z and W(z) ~= ln(z) approximations break + * #> down around z ~= -0.5 (converging to the wrong branch), so patch + */ + __pyx_t_2 = (__pyx_v_k == 0); + if (__pyx_t_2) { + if ((__Pyx_CIMAG(__pyx_v_z) != 0)) { + __pyx_t_1 = (__pyx_f_5scipy_7special_8lambertw_zabs(__pyx_v_z) <= 0.69999999999999996); + __pyx_t_4 = __pyx_t_1; + } else { + __pyx_t_4 = (__Pyx_CIMAG(__pyx_v_z) != 0); + } + __pyx_t_1 = __pyx_t_4; + } else { + __pyx_t_1 = __pyx_t_2; + } + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":110 + * #> down around z ~= -0.5 (converging to the wrong branch), so patch + * #> with a constant approximation (adjusted for sign) + * if zabs(z+0.5) < 0.1: # <<<<<<<<<<<<<< + * if z.imag > 0: + * w = 0.7 + 0.7j + */ + __pyx_t_1 = (__pyx_f_5scipy_7special_8lambertw_zabs(__Pyx_c_sum(__pyx_v_z, __pyx_t_double_complex_from_parts(0.5, 0))) < 0.10000000000000001); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":111 + * #> with a constant approximation (adjusted for sign) + * if zabs(z+0.5) < 0.1: + * if z.imag > 0: # <<<<<<<<<<<<<< + * w = 0.7 + 0.7j + * else: + */ + __pyx_t_1 = (__Pyx_CIMAG(__pyx_v_z) > 0); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":112 + * if zabs(z+0.5) < 0.1: + * if z.imag > 0: + * w = 0.7 + 0.7j # <<<<<<<<<<<<<< + * else: + * w = 0.7 - 0.7j + */ + __pyx_v_w = __Pyx_c_sum(__pyx_t_double_complex_from_parts(0.69999999999999996, 0), __pyx_t_double_complex_from_parts(0, 0.69999999999999996)); + goto __pyx_L10; + } + /*else*/ { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":114 + * w = 0.7 + 0.7j + * else: + * w = 0.7 - 0.7j # <<<<<<<<<<<<<< + * else: + * w = z + */ + __pyx_v_w = __Pyx_c_diff(__pyx_t_double_complex_from_parts(0.69999999999999996, 0), __pyx_t_double_complex_from_parts(0, 0.69999999999999996)); + } + __pyx_L10:; + goto __pyx_L9; + } + /*else*/ { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":116 + * w = 0.7 - 0.7j + * else: + * w = z # <<<<<<<<<<<<<< + * + * else: + */ + __pyx_v_w = __pyx_v_z; + } + __pyx_L9:; + goto __pyx_L4; + } + /*else*/ { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":119 + * + * else: + * if z.real == NPY_INFINITY: # <<<<<<<<<<<<<< + * if k == 0: + * return z + */ + __pyx_t_1 = (__Pyx_CREAL(__pyx_v_z) == NPY_INFINITY); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":120 + * else: + * if z.real == NPY_INFINITY: + * if k == 0: # <<<<<<<<<<<<<< + * return z + * else: + */ + __pyx_t_1 = (__pyx_v_k == 0); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":121 + * if z.real == NPY_INFINITY: + * if k == 0: + * return z # <<<<<<<<<<<<<< + * else: + * return z + 2*k*NPY_PI*1j + */ + __pyx_r = __pyx_v_z; + goto __pyx_L0; + goto __pyx_L12; + } + /*else*/ { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":123 + * return z + * else: + * return z + 2*k*NPY_PI*1j # 
<<<<<<<<<<<<<< + * + * if z.real == -NPY_INFINITY: + */ + __pyx_r = __Pyx_c_sum(__pyx_v_z, __Pyx_c_prod(__pyx_t_double_complex_from_parts(((2 * __pyx_v_k) * NPY_PI), 0), __pyx_t_double_complex_from_parts(0, 1.0))); + goto __pyx_L0; + } + __pyx_L12:; + goto __pyx_L11; + } + __pyx_L11:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":125 + * return z + 2*k*NPY_PI*1j + * + * if z.real == -NPY_INFINITY: # <<<<<<<<<<<<<< + * return (-z) + (2*k+1)*NPY_PI*1j + * + */ + __pyx_t_1 = (__Pyx_CREAL(__pyx_v_z) == (-NPY_INFINITY)); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":126 + * + * if z.real == -NPY_INFINITY: + * return (-z) + (2*k+1)*NPY_PI*1j # <<<<<<<<<<<<<< + * + * #> Simple asymptotic approximation as above + */ + __pyx_r = __Pyx_c_sum(__Pyx_c_neg(__pyx_v_z), __Pyx_c_prod(__pyx_t_double_complex_from_parts((((2 * __pyx_v_k) + 1) * NPY_PI), 0), __pyx_t_double_complex_from_parts(0, 1.0))); + goto __pyx_L0; + goto __pyx_L13; + } + __pyx_L13:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":129 + * + * #> Simple asymptotic approximation as above + * w = zlog(z) # <<<<<<<<<<<<<< + * if k: w = w + k*2*NPY_PI*1j + * + */ + __pyx_v_w = __pyx_f_5scipy_7special_8lambertw_zlog(__pyx_v_z); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":130 + * #> Simple asymptotic approximation as above + * w = zlog(z) + * if k: w = w + k*2*NPY_PI*1j # <<<<<<<<<<<<<< + * + * #> Use Halley iteration to solve w*exp(w) = z + */ + __pyx_t_5 = __pyx_v_k; + if (__pyx_t_5) { + __pyx_v_w = __Pyx_c_sum(__pyx_v_w, __Pyx_c_prod(__pyx_t_double_complex_from_parts(((__pyx_v_k * 2) * NPY_PI), 0), __pyx_t_double_complex_from_parts(0, 1.0))); + goto __pyx_L14; + } + __pyx_L14:; + } + __pyx_L4:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":135 + * cdef double complex ew, wew, wewz, wn + * cdef int i + * for i in range(100): # <<<<<<<<<<<<<< + * ew = zexp(w) + * wew = w*ew + */ + for (__pyx_t_6 = 0; __pyx_t_6 < 100; __pyx_t_6+=1) { + __pyx_v_i = __pyx_t_6; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":136 + * cdef int i + * for i in range(100): + * ew = zexp(w) # <<<<<<<<<<<<<< + * wew = w*ew + * wewz = wew-z + */ + __pyx_v_ew = __pyx_f_5scipy_7special_8lambertw_zexp(__pyx_v_w); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":137 + * for i in range(100): + * ew = zexp(w) + * wew = w*ew # <<<<<<<<<<<<<< + * wewz = wew-z + * wn = w - wewz / (wew + ew - (w + 2)*wewz/(2*w + 2)) + */ + __pyx_v_wew = __Pyx_c_prod(__pyx_v_w, __pyx_v_ew); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":138 + * ew = zexp(w) + * wew = w*ew + * wewz = wew-z # <<<<<<<<<<<<<< + * wn = w - wewz / (wew + ew - (w + 2)*wewz/(2*w + 2)) + * if zabs(wn-w) < tol*zabs(wn): + */ + __pyx_v_wewz = __Pyx_c_diff(__pyx_v_wew, __pyx_v_z); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":139 + * wew = w*ew + * wewz = wew-z + * wn = w - wewz / (wew + ew - (w + 2)*wewz/(2*w + 2)) # <<<<<<<<<<<<<< + * if zabs(wn-w) < tol*zabs(wn): + * return wn + */ + __pyx_v_wn = __Pyx_c_diff(__pyx_v_w, __Pyx_c_quot(__pyx_v_wewz, __Pyx_c_diff(__Pyx_c_sum(__pyx_v_wew, __pyx_v_ew), __Pyx_c_quot(__Pyx_c_prod(__Pyx_c_sum(__pyx_v_w, __pyx_t_double_complex_from_parts(2, 0)), __pyx_v_wewz), __Pyx_c_sum(__Pyx_c_prod(__pyx_t_double_complex_from_parts(2, 0), __pyx_v_w), __pyx_t_double_complex_from_parts(2, 0)))))); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":140 + * wewz = wew-z + * wn = w - wewz / (wew + ew - (w + 2)*wewz/(2*w 
+ 2)) + * if zabs(wn-w) < tol*zabs(wn): # <<<<<<<<<<<<<< + * return wn + * else: + */ + __pyx_t_1 = (__pyx_f_5scipy_7special_8lambertw_zabs(__Pyx_c_diff(__pyx_v_wn, __pyx_v_w)) < (__pyx_v_tol * __pyx_f_5scipy_7special_8lambertw_zabs(__pyx_v_wn))); + if (__pyx_t_1) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":141 + * wn = w - wewz / (wew + ew - (w + 2)*wewz/(2*w + 2)) + * if zabs(wn-w) < tol*zabs(wn): + * return wn # <<<<<<<<<<<<<< + * else: + * w = wn + */ + __pyx_r = __pyx_v_wn; + goto __pyx_L0; + goto __pyx_L17; + } + /*else*/ { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":143 + * return wn + * else: + * w = wn # <<<<<<<<<<<<<< + * + * lambertw_raise_warning(z) + */ + __pyx_v_w = __pyx_v_wn; + } + __pyx_L17:; + } + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":145 + * w = wn + * + * lambertw_raise_warning(z) # <<<<<<<<<<<<<< + * return wn + * + */ + __pyx_f_5scipy_7special_8lambertw_lambertw_raise_warning(__pyx_v_z); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":146 + * + * lambertw_raise_warning(z) + * return wn # <<<<<<<<<<<<<< + * + * + */ + __pyx_r = __pyx_v_wn; + goto __pyx_L0; + + __pyx_r = __pyx_t_double_complex_from_parts(0, 0); + __pyx_L0:; + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":166 + * int identity, char* name, char* doc, int c) + * + * cdef void _apply_func_to_1d_vec(char **args, npy_intp *dimensions, npy_intp *steps, # <<<<<<<<<<<<<< + * void *func) nogil: + * cdef npy_intp i + */ + +static void __pyx_f_5scipy_7special_8lambertw__apply_func_to_1d_vec(char **__pyx_v_args, npy_intp *__pyx_v_dimensions, npy_intp *__pyx_v_steps, void *__pyx_v_func) { + npy_intp __pyx_v_i; + char *__pyx_v_ip1; + char *__pyx_v_ip2; + char *__pyx_v_ip3; + char *__pyx_v_op; + npy_intp __pyx_t_1; + npy_intp __pyx_t_2; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":169 + * void *func) nogil: + * cdef npy_intp i + * cdef char *ip1=args[0], *ip2=args[1], *ip3=args[2], *op=args[3] # <<<<<<<<<<<<<< + * for i in range(0, dimensions[0]): + * (op)[0] = (func)( + */ + __pyx_v_ip1 = (__pyx_v_args[0]); + __pyx_v_ip2 = (__pyx_v_args[1]); + __pyx_v_ip3 = (__pyx_v_args[2]); + __pyx_v_op = (__pyx_v_args[3]); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":170 + * cdef npy_intp i + * cdef char *ip1=args[0], *ip2=args[1], *ip3=args[2], *op=args[3] + * for i in range(0, dimensions[0]): # <<<<<<<<<<<<<< + * (op)[0] = (func)( + * (ip1)[0], (ip2)[0], (ip3)[0]) + */ + __pyx_t_1 = (__pyx_v_dimensions[0]); + for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_1; __pyx_t_2+=1) { + __pyx_v_i = __pyx_t_2; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":171 + * cdef char *ip1=args[0], *ip2=args[1], *ip3=args[2], *op=args[3] + * for i in range(0, dimensions[0]): + * (op)[0] = (func)( # <<<<<<<<<<<<<< + * (ip1)[0], (ip2)[0], (ip3)[0]) + * ip1 += steps[0]; ip2 += steps[1]; ip3 += steps[2]; op += steps[3] + */ + (((__pyx_t_double_complex *)__pyx_v_op)[0]) = ((__pyx_t_double_complex (*)(__pyx_t_double_complex, long, double))__pyx_v_func)((((__pyx_t_double_complex *)__pyx_v_ip1)[0]), (((long *)__pyx_v_ip2)[0]), (((double *)__pyx_v_ip3)[0])); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":173 + * (op)[0] = (func)( + * (ip1)[0], (ip2)[0], (ip3)[0]) + * ip1 += steps[0]; ip2 += steps[1]; ip3 += steps[2]; op += steps[3] # <<<<<<<<<<<<<< + * + * cdef PyUFuncGenericFunction _loop_funcs[1] + */ + __pyx_v_ip1 += (__pyx_v_steps[0]); + __pyx_v_ip2 += (__pyx_v_steps[1]); + 
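For reference, the special cases short-circuited in lambertw_scalar above, and the update its Halley loop iterates, are the standard ones from Corless et al. (1996), which the docstring below cites:

    W(0, k=0) = 0,        W(0, k) = -\infty \ (k \ne 0),
    W(+\infty, k) = +\infty + 2\pi i k,        W(-\infty, k) = +\infty + (2k+1)\pi i,

and, writing f(w) = w e^w - z, so that f'(w) = e^w (w+1) and f''(w) = e^w (w+2), Halley's step

    w_{n+1} = w_n - \frac{f(w_n)}{f'(w_n) - \frac{f(w_n)\, f''(w_n)}{2 f'(w_n)}}
            = w_n - \frac{w_n e^{w_n} - z}{e^{w_n}(w_n + 1) - \frac{(w_n + 2)(w_n e^{w_n} - z)}{2 w_n + 2}},

which is the expression computed above as wn = w - wewz / (wew + ew - (w + 2)*wewz/(2*w + 2)), with ew = e^w, wew = w e^w and wewz = w e^w - z.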
__pyx_v_ip3 += (__pyx_v_steps[2]); + __pyx_v_op += (__pyx_v_steps[3]); + } + +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":193 + * _inp_outp_types, 1, 3, 1, 0, "", "", 0) + * + * def lambertw(z, k=0, tol=1e-8): # <<<<<<<<<<<<<< + * r""" + * lambertw(z, k=0, tol=1e-8) + */ + +static PyObject *__pyx_pf_5scipy_7special_8lambertw_lambertw(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_8lambertw_lambertw[] = "\n"" lambertw(z, k=0, tol=1e-8)\n""\n"" Lambert W function.\n""\n"" The Lambert W function `W(z)` is defined as the inverse function\n"" of :math:`w \\exp(w)`. In other words, the value of :math:`W(z)` is\n"" such that :math:`z = W(z) \\exp(W(z))` for any complex number\n"" :math:`z`.\n""\n"" The Lambert W function is a multivalued function with infinitely\n"" many branches. Each branch gives a separate solution of the\n"" equation :math:`w \\exp(w)`. Here, the branches are indexed by the\n"" integer `k`.\n"" \n"" Parameters\n"" ----------\n"" z : array_like\n"" Input argument\n"" k : integer, optional\n"" Branch index\n"" tol : float\n"" Evaluation tolerance\n""\n"" Notes\n"" -----\n"" All branches are supported by `lambertw`:\n""\n"" * ``lambertw(z)`` gives the principal solution (branch 0)\n"" * ``lambertw(z, k)`` gives the solution on branch `k`\n""\n"" The Lambert W function has two partially real branches: the\n"" principal branch (`k = 0`) is real for real `z > -1/e`, and the\n"" `k = -1` branch is real for `-1/e < z < 0`. All branches except\n"" `k = 0` have a logarithmic singularity at `z = 0`.\n""\n"" .. rubric:: Possible issues\n"" \n"" The evaluation can become inaccurate very close to the branch point\n"" at `-1/e`. In some corner cases, :func:`lambertw` might currently\n"" fail to converge, or can end up on the wrong branch.\n""\n"" .. rubric:: Algorithm\n""\n"" Halley's iteration is used to invert `w \\exp(w)`, using a first-order\n"" asymptotic approximation (`O(\\log(w))` or `O(w)`) as the initial\n"" estimate.\n""\n"" The definition, implementation and choice of branches is based\n"" on Corless et al, \"On the Lambert W function\", Adv. Comp. Math. 5\n"" (1996) 329-359, available online here:\n"" http://www.apmaths.uwo.ca/~djeffrey/Offprints/W-adv-cm.pdf\n"" \n"" TODO: use a series expansion when extremely close to the branch point\n"" at `-1/e` and make sure that the proper branch is chosen there\n""\n"" Examples\n"" --------\n"" The Lambert W function is the inverse of `w \\exp(w)`::\n""\n"" >>> from scipy.special import lambertw\n"" >>> w = lambertw(1)\n"" >>> w\n"" 0.56714329040978387299996866221035555\n"" >>> w*exp(w)\n"" 1.0\n""\n"" Any branch gives a valid inverse::\n""\n"" >>> w = lambertw(1, k=3)\n"" >>> w\n"" (-2.8535817554090378072068187234910812 +\n"" 17.113535539412145912607826671159289j)\n"" >>> w*exp(w)\n"" (1.0 + 3.5075477124212226194278700785075126e-36j)\n""\n"" .. rubric:: Applications to equation-solving\n""\n"" The Lambert W function may be used to solve various kinds of\n"" equations, such as finding the value of the infinite power\n"" tower `z^{z^{z^{\\ldots}}}`::\n""\n"" >>> def tower(z, n):\n"" ... if n == 0:\n"" ... return z\n"" ... return z ** tower(z, n-1)\n"" ...\n"" >>> tower(0.5, 100)\n"" 0.641185744504986\n"" >>> -lambertw(-log(0.5))/log(0.5)\n"" 0.6411857445049859844862004821148236665628209571911\n""\n"" .. 
rubric:: Properties\n""\n"" The Lambert W function grows roughly like the natural logarithm\n"" for large arguments::\n""\n"" >>> lambertw(1000)\n"" 5.2496028524016\n"" >>> log(1000)\n"" 6.90775527898214\n"" >>> lambertw(10**100)\n"" 224.843106445119\n"" >>> log(10**100)\n"" 230.258509299405\n"" \n"" The principal branch of the Lambert W function has a rational\n"" Taylor series expansion around `z = 0`::\n"" \n"" >>> nprint(taylor(lambertw, 0, 6), 10)\n"" [0.0, 1.0, -1.0, 1.5, -2.666666667, 5.208333333, -10.8]\n"" \n"" Some special values and limits are::\n"" \n"" >>> lambertw(0)\n"" 0.0\n"" >>> lambertw(1)\n"" 0.567143290409784\n"" >>> lambertw(e)\n"" 1.0\n"" >>> lambertw(inf)\n"" +inf\n"" >>> lambertw(0, k=-1)\n"" -inf\n"" >>> lambertw(0, k=3)\n"" -inf\n"" >>> lambertw(inf, k=3)\n"" (+inf + 18.8495559215388j)\n""\n"" The `k = 0` and `k = -1` branches join at `z = -1/e` where\n"" `W(z) = -1` for both branches. Since `-1/e` can only be represented\n"" approximately with mpmath numbers, evaluating the Lambert W function\n"" at this point only gives `-1` approximately::\n""\n"" >>> lambertw(-1/e, 0)\n"" -0.999999999999837133022867\n"" >>> lambertw(-1/e, -1)\n"" -1.00000000000016286697718\n"" \n"" If `-1/e` happens to round in the negative direction, there might be\n"" a small imaginary part::\n"" \n"" >>> lambertw(-1/e)\n"" (-1.0 + 8.22007971511612e-9j)\n""\n"" "; +static PyObject *__pyx_pf_5scipy_7special_8lambertw_lambertw(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_z = 0; + PyObject *__pyx_v_k = 0; + PyObject *__pyx_v_tol = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__z,&__pyx_n_s__k,&__pyx_n_s__tol,0}; + __Pyx_RefNannySetupContext("lambertw"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[1] = ((PyObject *)__pyx_int_0); + values[2] = __pyx_k_2; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__z); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__k); + if (unlikely(value)) { values[1] = value; kw_args--; } + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__tol); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "lambertw") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 193; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_z = values[0]; + __pyx_v_k = values[1]; + __pyx_v_tol = values[2]; + } else { + __pyx_v_k = ((PyObject *)__pyx_int_0); + __pyx_v_tol = __pyx_k_2; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: __pyx_v_tol = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: __pyx_v_k = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: __pyx_v_z = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + 
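The "first-order asymptotic approximation" mentioned under the Algorithm rubric of the docstring above is the initial estimate w_0 that lambertw_scalar picks before the Halley refinement; summarised here for reference:

    w_0 = \begin{cases}
      z, & k = 0,\ |z| \le 1/e \quad (\text{since } W(z) \approx z \text{ near } 0), \\
      \log(-x), & k = -1,\ z = x \in [-1/e, 0), \\
      z \ \text{(patched to } 0.7 \pm 0.7i \text{ near } z \approx -0.5\text{)}, & k = 0,\ \mathrm{Im}\, z \ne 0,\ 1/e < |z| \le 0.7, \\
      \log z + 2\pi i k, & \text{otherwise (the branches are roughly logarithmic).}
    \end{cases}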
__pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("lambertw", 0, 1, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 193; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.lambertw.lambertw"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":340 + * + * """ + * return _lambertw(z, k, tol) # <<<<<<<<<<<<<< + * + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___lambertw); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 340; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 340; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_z); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_z); + __Pyx_GIVEREF(__pyx_v_z); + __Pyx_INCREF(__pyx_v_k); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_k); + __Pyx_GIVEREF(__pyx_v_k); + __Pyx_INCREF(__pyx_v_tol); + PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_tol); + __Pyx_GIVEREF(__pyx_v_tol); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 340; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.special.lambertw.lambertw"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static struct PyMethodDef __pyx_methods[] = { + {__Pyx_NAMESTR("lambertw"), (PyCFunction)__pyx_pf_5scipy_7special_8lambertw_lambertw, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_8lambertw_lambertw)}, + {0, 0, 0, 0} +}; + +static void __pyx_init_filenames(void); /*proto*/ + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + PyModuleDef_HEAD_INIT, + __Pyx_NAMESTR("lambertw"), + 0, /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_s_1, __pyx_k_1, sizeof(__pyx_k_1), 0, 0, 1, 0}, + {&__pyx_kp_u_4, __pyx_k_4, sizeof(__pyx_k_4), 0, 1, 0, 0}, + {&__pyx_n_s____main__, __pyx_k____main__, sizeof(__pyx_k____main__), 0, 0, 1, 1}, + {&__pyx_n_s____test__, __pyx_k____test__, sizeof(__pyx_k____test__), 0, 0, 1, 1}, + {&__pyx_n_s___lambertw, __pyx_k___lambertw, sizeof(__pyx_k___lambertw), 0, 0, 1, 1}, + {&__pyx_n_s__imag, __pyx_k__imag, sizeof(__pyx_k__imag), 0, 0, 1, 1}, + {&__pyx_n_s__k, __pyx_k__k, sizeof(__pyx_k__k), 0, 0, 1, 1}, + {&__pyx_n_s__lambertw, __pyx_k__lambertw, sizeof(__pyx_k__lambertw), 0, 0, 1, 1}, + {&__pyx_n_s__range, __pyx_k__range, sizeof(__pyx_k__range), 0, 0, 1, 1}, + {&__pyx_n_s__real, __pyx_k__real, sizeof(__pyx_k__real), 0, 0, 1, 1}, + {&__pyx_n_s__tol, __pyx_k__tol, sizeof(__pyx_k__tol), 0, 0, 1, 1}, + {&__pyx_n_s__warn, __pyx_k__warn, sizeof(__pyx_k__warn), 0, 0, 1, 1}, + {&__pyx_n_s__warnings, __pyx_k__warnings, sizeof(__pyx_k__warnings), 0, 0, 1, 1}, + {&__pyx_n_s__z, __pyx_k__z, sizeof(__pyx_k__z), 0, 
0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_range = __Pyx_GetName(__pyx_b, __pyx_n_s__range); if (!__pyx_builtin_range) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 135; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initlambertw(void); /*proto*/ +PyMODINIT_FUNC initlambertw(void) +#else +PyMODINIT_FUNC PyInit_lambertw(void); /*proto*/ +PyMODINIT_FUNC PyInit_lambertw(void) +#endif +{ + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + #if CYTHON_REFNANNY + void* __pyx_refnanny = NULL; + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + __pyx_refnanny = __Pyx_RefNanny->SetupContext("PyMODINIT_FUNC PyInit_lambertw(void)", __LINE__, __FILE__); + #endif + __pyx_init_filenames(); + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #if PY_MAJOR_VERSION < 3 + __pyx_empty_bytes = PyString_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #else + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? */ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4(__Pyx_NAMESTR("lambertw"), __pyx_methods, 0, 0, PYTHON_API_VERSION); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (!__pyx_m) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + #if PY_MAJOR_VERSION < 3 + Py_INCREF(__pyx_m); + #endif + __pyx_b = PyImport_AddModule(__Pyx_NAMESTR(__Pyx_BUILTIN_MODULE_NAME)); + if (!__pyx_b) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + if (__Pyx_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + /*--- Initialize various global constants etc. 
---*/ + if (unlikely(__Pyx_InitGlobals() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_module_is_main_scipy__special__lambertw) { + if (__Pyx_SetAttrString(__pyx_m, "__name__", __pyx_n_s____main__) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + } + /*--- Builtin init code ---*/ + if (unlikely(__Pyx_InitCachedBuiltins() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Global init code ---*/ + /*--- Function export code ---*/ + /*--- Type init code ---*/ + /*--- Type import code ---*/ + /*--- Function import code ---*/ + /*--- Execution code ---*/ + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":24 + * + * import cython + * import warnings # <<<<<<<<<<<<<< + * + * cdef extern from "math.h": + */ + __pyx_t_1 = __Pyx_Import(((PyObject *)__pyx_n_s__warnings), 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__warnings, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 24; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":176 + * + * cdef PyUFuncGenericFunction _loop_funcs[1] + * _loop_funcs[0] = _apply_func_to_1d_vec # <<<<<<<<<<<<<< + * + * cdef char _inp_outp_types[4] + */ + (__pyx_v_5scipy_7special_8lambertw__loop_funcs[0]) = __pyx_f_5scipy_7special_8lambertw__apply_func_to_1d_vec; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":179 + * + * cdef char _inp_outp_types[4] + * _inp_outp_types[0] = NPY_CDOUBLE # <<<<<<<<<<<<<< + * _inp_outp_types[1] = NPY_LONG + * _inp_outp_types[2] = NPY_DOUBLE + */ + (__pyx_v_5scipy_7special_8lambertw__inp_outp_types[0]) = NPY_CDOUBLE; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":180 + * cdef char _inp_outp_types[4] + * _inp_outp_types[0] = NPY_CDOUBLE + * _inp_outp_types[1] = NPY_LONG # <<<<<<<<<<<<<< + * _inp_outp_types[2] = NPY_DOUBLE + * _inp_outp_types[3] = NPY_CDOUBLE + */ + (__pyx_v_5scipy_7special_8lambertw__inp_outp_types[1]) = NPY_LONG; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":181 + * _inp_outp_types[0] = NPY_CDOUBLE + * _inp_outp_types[1] = NPY_LONG + * _inp_outp_types[2] = NPY_DOUBLE # <<<<<<<<<<<<<< + * _inp_outp_types[3] = NPY_CDOUBLE + * + */ + (__pyx_v_5scipy_7special_8lambertw__inp_outp_types[2]) = NPY_DOUBLE; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":182 + * _inp_outp_types[1] = NPY_LONG + * _inp_outp_types[2] = NPY_DOUBLE + * _inp_outp_types[3] = NPY_CDOUBLE # <<<<<<<<<<<<<< + * + * import_array() + */ + (__pyx_v_5scipy_7special_8lambertw__inp_outp_types[3]) = NPY_CDOUBLE; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":184 + * _inp_outp_types[3] = NPY_CDOUBLE + * + * import_array() # <<<<<<<<<<<<<< + * import_ufunc() + * + */ + import_array(); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":185 + * + * import_array() + * import_ufunc() # <<<<<<<<<<<<<< + * + * # The actual ufunc declaration: + */ + import_ufunc(); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":189 + * # The actual ufunc declaration: + * cdef void *the_func_to_apply[1] + * the_func_to_apply[0] = lambertw_scalar # <<<<<<<<<<<<<< + * _lambertw = PyUFunc_FromFuncAndData(_loop_funcs, 
the_func_to_apply, + * _inp_outp_types, 1, 3, 1, 0, "", "", 0) + */ + (__pyx_v_5scipy_7special_8lambertw_the_func_to_apply[0]) = ((void *)__pyx_f_5scipy_7special_8lambertw_lambertw_scalar); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":191 + * the_func_to_apply[0] = lambertw_scalar + * _lambertw = PyUFunc_FromFuncAndData(_loop_funcs, the_func_to_apply, + * _inp_outp_types, 1, 3, 1, 0, "", "", 0) # <<<<<<<<<<<<<< + * + * def lambertw(z, k=0, tol=1e-8): + */ + __pyx_t_1 = PyUFunc_FromFuncAndData(__pyx_v_5scipy_7special_8lambertw__loop_funcs, __pyx_v_5scipy_7special_8lambertw_the_func_to_apply, __pyx_v_5scipy_7special_8lambertw__inp_outp_types, 1, 3, 1, 0, __pyx_k_3, __pyx_k_3, 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s___lambertw, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":193 + * _inp_outp_types, 1, 3, 1, 0, "", "", 0) + * + * def lambertw(z, k=0, tol=1e-8): # <<<<<<<<<<<<<< + * r""" + * lambertw(z, k=0, tol=1e-8) + */ + __pyx_t_1 = PyFloat_FromDouble(1e-08); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 193; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_k_2 = __pyx_t_1; + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/lambertw.pyx":1 + * # Implementation of the Lambert W function [1]. Based on the MPMath # <<<<<<<<<<<<<< + * # implementation [2], and documentaion [3]. + * # + */ + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__lambertw); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_4), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyObject_SetAttr(__pyx_m, __pyx_n_s____test__, ((PyObject *)__pyx_t_1)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + if (__pyx_m) { + __Pyx_AddTraceback("init scipy.special.lambertw"); + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init scipy.special.lambertw"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +static const char *__pyx_filenames[] = { + "lambertw.pyx", +}; + +/* Runtime support code */ + +static void __pyx_init_filenames(void) { + __pyx_f = __pyx_filenames; +} + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword 
argument '%U'", func_name, kw_name); + #else + "%s() got multiple values for keyword argument '%s'", func_name, + PyString_AS_STRING(kw_name)); + #endif +} + +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *number, *more_or_less; + + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + number = (num_expected == 1) ? "" : "s"; + PyErr_Format(PyExc_TypeError, + #if PY_VERSION_HEX < 0x02050000 + "%s() takes %s %d positional argument%s (%d given)", + #else + "%s() takes %s %zd positional argument%s (%zd given)", + #endif + func_name, more_or_less, num_expected, number, num_found); +} + +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + } else { + #if PY_MAJOR_VERSION < 3 + if (unlikely(!PyString_CheckExact(key)) && unlikely(!PyString_Check(key))) { + #else + if (unlikely(!PyUnicode_CheckExact(key)) && unlikely(!PyUnicode_Check(key))) { + #endif + goto invalid_keyword_type; + } else { + for (name = first_kw_arg; *name; name++) { + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) break; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) break; + #endif + } + if (*name) { + values[name-argnames] = value; + } else { + /* unexpected keyword found */ + for (name=argnames; name != first_kw_arg; name++) { + if (**name == key) goto arg_passed_twice; + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) goto arg_passed_twice; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) goto arg_passed_twice; + #endif + } + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + } + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, **name); + goto bad; +invalid_keyword_type: + PyErr_Format(PyExc_TypeError, + "%s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%s() got an unexpected keyword argument '%s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +#if CYTHON_CCOMPLEX + #ifdef __cplusplus + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return ::std::complex< double >(x, y); + } + #else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + return x + y*(__pyx_t_double_complex)_Complex_I; + } + #endif +#else + static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { + __pyx_t_double_complex z; + z.real = x; + z.imag = y; + 
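A minimal sketch of the NumPy C API pattern used above, where _apply_func_to_1d_vec walks the strided buffers and PyUFunc_FromFuncAndData registers it together with the scalar kernel. The names here (xexp_scalar, xexp_loop, _xexpmodule) are hypothetical, the kernel is just a stand-in for lambertw_scalar, and only the Python 2 init path (the PY_MAJOR_VERSION < 3 branch of the generated module) is shown:

/* Hypothetical one-input/one-output ufunc built the same way as _lambertw above. */
#include <Python.h>
#include <math.h>
#include "numpy/arrayobject.h"
#include "numpy/ufuncobject.h"

/* scalar kernel, playing the role of lambertw_scalar */
static double xexp_scalar(double x) { return x * exp(x); }

/* 1-d inner loop, playing the role of _apply_func_to_1d_vec:
   walk the strided input/output buffers and apply the kernel element-wise */
static void xexp_loop(char **args, npy_intp *dimensions, npy_intp *steps, void *func)
{
    npy_intp i, n = dimensions[0];
    char *in = args[0], *out = args[1];
    for (i = 0; i < n; i++) {
        *(double *)out = ((double (*)(double))func)(*(double *)in);
        in += steps[0];
        out += steps[1];
    }
}

static PyUFuncGenericFunction loop_funcs[1] = { xexp_loop };
static void *kernel_data[1] = { (void *)xexp_scalar };
static char inp_outp_types[2] = { NPY_DOUBLE, NPY_DOUBLE };  /* one input, one output */
static PyMethodDef module_methods[] = { {NULL, NULL, 0, NULL} };

PyMODINIT_FUNC init_xexpmodule(void)
{
    PyObject *m = Py_InitModule("_xexpmodule", module_methods);
    PyObject *uf;
    if (m == NULL) return;
    import_array();                      /* required before any array/ufunc C API use */
    import_ufunc();
    uf = PyUFunc_FromFuncAndData(loop_funcs, kernel_data, inp_outp_types,
                                 1 /* ntypes */, 1 /* nin */, 1 /* nout */,
                                 PyUFunc_None, "xexp", "x*exp(x)", 0);
    PyModule_AddObject(m, "xexp", uf);
}

Once compiled and imported, _xexpmodule.xexp broadcasts over array inputs in the same way the _lambertw ufunc created above does.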
return z; + } +#endif + +#if CYTHON_CCOMPLEX +#else + static CYTHON_INLINE int __Pyx_c_eq(__pyx_t_double_complex a, __pyx_t_double_complex b) { + return (a.real == b.real) && (a.imag == b.imag); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real + b.real; + z.imag = a.imag + b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real - b.real; + z.imag = a.imag - b.imag; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + z.real = a.real * b.real - a.imag * b.imag; + z.imag = a.real * b.imag + a.imag * b.real; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot(__pyx_t_double_complex a, __pyx_t_double_complex b) { + __pyx_t_double_complex z; + double denom = b.real * b.real + b.imag * b.imag; + z.real = (a.real * b.real + a.imag * b.imag) / denom; + z.imag = (a.imag * b.real - a.real * b.imag) / denom; + return z; + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = -a.real; + z.imag = -a.imag; + return z; + } + static CYTHON_INLINE int __Pyx_c_is_zero(__pyx_t_double_complex a) { + return (a.real == 0) && (a.imag == 0); + } + static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj(__pyx_t_double_complex a) { + __pyx_t_double_complex z; + z.real = a.real; + z.imag = -a.imag; + return z; + } +/* + static CYTHON_INLINE double __Pyx_c_abs(__pyx_t_double_complex z) { +#if HAVE_HYPOT + return hypot(z.real, z.imag); +#else + return sqrt(z.real*z.real + z.imag*z.imag); +#endif + } +*/ +#endif + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list) { + PyObject *__import__ = 0; + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + __import__ = __Pyx_GetAttrString(__pyx_b, "__import__"); + if (!__import__) + goto bad; + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if (!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + module = PyObject_CallFunctionObjArgs(__import__, + name, global_dict, empty_dict, list, NULL); +bad: + Py_XDECREF(empty_list); + Py_XDECREF(__import__); + Py_XDECREF(empty_dict); + return module; +} + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name) { + PyObject *result; + result = PyObject_GetAttr(dict, name); + if (!result) + PyErr_SetObject(PyExc_NameError, name); + return result; +} + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_intp(npy_intp val) { + const npy_intp neg_one = (npy_intp)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(npy_intp) < sizeof(long)) { + return PyInt_FromLong((long)val); + } else if (sizeof(npy_intp) == sizeof(long)) { + if (is_unsigned) + return PyLong_FromUnsignedLong((unsigned long)val); + else + return PyInt_FromLong((long)val); + } else { /* (sizeof(npy_intp) > sizeof(long)) */ + if (is_unsigned) + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG)val); + else + return PyLong_FromLongLong((PY_LONG_LONG)val); + } +} + +static CYTHON_INLINE npy_intp __Pyx_PyInt_from_py_npy_intp(PyObject* x) { + const 
npy_intp neg_one = (npy_intp)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(npy_intp) == sizeof(char)) { + if (is_unsigned) + return (npy_intp)__Pyx_PyInt_AsUnsignedChar(x); + else + return (npy_intp)__Pyx_PyInt_AsSignedChar(x); + } else if (sizeof(npy_intp) == sizeof(short)) { + if (is_unsigned) + return (npy_intp)__Pyx_PyInt_AsUnsignedShort(x); + else + return (npy_intp)__Pyx_PyInt_AsSignedShort(x); + } else if (sizeof(npy_intp) == sizeof(int)) { + if (is_unsigned) + return (npy_intp)__Pyx_PyInt_AsUnsignedInt(x); + else + return (npy_intp)__Pyx_PyInt_AsSignedInt(x); + } else if (sizeof(npy_intp) == sizeof(long)) { + if (is_unsigned) + return (npy_intp)__Pyx_PyInt_AsUnsignedLong(x); + else + return (npy_intp)__Pyx_PyInt_AsSignedLong(x); + } else if (sizeof(npy_intp) == sizeof(PY_LONG_LONG)) { + if (is_unsigned) + return (npy_intp)__Pyx_PyInt_AsUnsignedLongLong(x); + else + return (npy_intp)__Pyx_PyInt_AsSignedLongLong(x); +#if 0 + } else if (sizeof(npy_intp) > sizeof(short) && + sizeof(npy_intp) < sizeof(int)) { /* __int32 ILP64 ? */ + if (is_unsigned) + return (npy_intp)__Pyx_PyInt_AsUnsignedInt(x); + else + return (npy_intp)__Pyx_PyInt_AsSignedInt(x); +#endif + } + PyErr_SetString(PyExc_TypeError, "npy_intp"); + return (npy_intp)-1; +} + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject* x) { + const unsigned char neg_one = (unsigned char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned char" : + "value too large to convert to unsigned char"); + } + return (unsigned char)-1; + } + return (unsigned char)val; + } + return (unsigned char)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject* x) { + const unsigned short neg_one = (unsigned short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned short" : + "value too large to convert to unsigned short"); + } + return (unsigned short)-1; + } + return (unsigned short)val; + } + return (unsigned short)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject* x) { + const unsigned int neg_one = (unsigned int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to unsigned int" : + "value too large to convert to unsigned int"); + } + return (unsigned int)-1; + } + return (unsigned int)val; + } + return (unsigned int)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject* x) { + const char neg_one = (char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to char" : + "value too large to convert to char"); + } + return (char)-1; + } + return (char)val; + } + return (char)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject* x) { + const short neg_one = (short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to short" : + "value too large to convert to short"); + } + return (short)-1; + } + return (short)val; + } + return (short)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject* x) { + const int neg_one = (int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to int" : + "value too large to convert to int"); + } + return (int)-1; + } + return (int)val; + } + return (int)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject* x) { + const signed char neg_one = (signed char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed char" : + "value too large to convert to signed char"); + } + return (signed char)-1; + } + return (signed char)val; + } + return (signed char)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject* x) { + const signed short neg_one = (signed short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to signed short" : + "value too large to convert to signed short"); + } + return (signed short)-1; + } + return (signed short)val; + } + return (signed short)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject* x) { + const signed int neg_one = (signed int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed int" : + "value too large to convert to signed int"); + } + return (signed int)-1; + } + return (signed int)val; + } + return (signed int)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject* x) { + const unsigned long neg_one = (unsigned long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return (unsigned long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + unsigned long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned long)-1; + val = __Pyx_PyInt_AsUnsignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject* x) { + const unsigned PY_LONG_LONG neg_one = (unsigned PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return (unsigned PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + unsigned PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsUnsignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject* x) { + const long neg_one = (long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return (long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return 
(long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (long)-1; + val = __Pyx_PyInt_AsLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject* x) { + const PY_LONG_LONG neg_one = (PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return (PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject* x) { + const signed long neg_one = (signed long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return (signed long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + signed long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed long)-1; + val = __Pyx_PyInt_AsSignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject* x) { + const signed PY_LONG_LONG neg_one = (signed PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return (signed PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + signed PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsSignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyThreadState *tstate = PyThreadState_GET(); + + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = 
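/* __Pyx_ErrRestore and __Pyx_ErrFetch here are inlined equivalents of the
   public PyErr_Restore / PyErr_Fetch: they swap the current exception
   directly in the thread state's curexc_type / curexc_value /
   curexc_traceback slots and only release the previous values once the new
   ones are installed. A hypothetical caller would pair them as
       PyObject *t, *v, *tb;
       __Pyx_ErrFetch(&t, &v, &tb);
       ... work that may clobber the exception ...
       __Pyx_ErrRestore(t, v, tb);
   */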
type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} + +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb) { + PyThreadState *tstate = PyThreadState_GET(); + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} + + +static void __Pyx_WriteUnraisable(const char *name) { + PyObject *old_exc, *old_val, *old_tb; + PyObject *ctx; + __Pyx_ErrFetch(&old_exc, &old_val, &old_tb); + #if PY_MAJOR_VERSION < 3 + ctx = PyString_FromString(name); + #else + ctx = PyUnicode_FromString(name); + #endif + __Pyx_ErrRestore(old_exc, old_val, old_tb); + if (!ctx) { + PyErr_WriteUnraisable(Py_None); + } else { + PyErr_WriteUnraisable(ctx); + Py_DECREF(ctx); + } +} + +#include "compile.h" +#include "frameobject.h" +#include "traceback.h" + +static void __Pyx_AddTraceback(const char *funcname) { + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + PyObject *py_globals = 0; + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(__pyx_filename); + #else + py_srcfile = PyUnicode_FromString(__pyx_filename); + #endif + if (!py_srcfile) goto bad; + if (__pyx_clineno) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_globals = PyModule_GetDict(__pyx_m); + if (!py_globals) goto bad; + py_code = PyCode_New( + 0, /*int argcount,*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*int kwonlyargcount,*/ + #endif + 0, /*int nlocals,*/ + 0, /*int stacksize,*/ + 0, /*int flags,*/ + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + __pyx_lineno, /*int firstlineno,*/ + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + if (!py_code) goto bad; + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + py_globals, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + py_frame->f_lineno = __pyx_lineno; + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else /* Python 3+ has unicode identifiers */ + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = 
PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +/* Type Conversion Functions */ + +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + if (x == Py_True) return 1; + else if ((x == Py_False) | (x == Py_None)) return 0; + else return PyObject_IsTrue(x); +} + +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x) { + PyNumberMethods *m; + const char *name = NULL; + PyObject *res = NULL; +#if PY_VERSION_HEX < 0x03000000 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if (PyLong_Check(x)) +#endif + return Py_INCREF(x), x; + m = Py_TYPE(x)->tp_as_number; +#if PY_VERSION_HEX < 0x03000000 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } +#else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } +#endif + if (res) { +#if PY_VERSION_HEX < 0x03000000 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%s__ returned non-%s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject* x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} + +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { +#if PY_VERSION_HEX < 0x02050000 + if (ival <= LONG_MAX) + return PyInt_FromLong((long)ival); + else { + unsigned char *bytes = (unsigned char *) &ival; + int one = 1; int little = (int)*(unsigned char*)&one; + return _PyLong_FromByteArray(bytes, sizeof(size_t), little, 0); + } +#else + return PyInt_FromSize_t(ival); +#endif +} + +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject* x) { + unsigned PY_LONG_LONG val = __Pyx_PyInt_AsUnsignedLongLong(x); + if (unlikely(val == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred())) { + return (size_t)-1; + } else if (unlikely(val != (unsigned PY_LONG_LONG)(size_t)val)) { + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to size_t"); + return (size_t)-1; + } + return (size_t)val; +} + + +#endif /* Py_PYTHON_H */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/orthogonal_eval.c python-scipy-0.8.0+dfsg1/scipy/special/orthogonal_eval.c --- python-scipy-0.7.2+dfsg1/scipy/special/orthogonal_eval.c 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/orthogonal_eval.c 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,5197 @@ +/* Generated by Cython 0.12.1 on Mon May 31 10:17:30 2010 */ + +#define PY_SSIZE_T_CLEAN +#include "Python.h" +#include "structmember.h" +#ifndef Py_PYTHON_H + #error Python headers needed to compile C extensions, please install development version of Python. 
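/* The preamble that follows is Cython's portability shim: on CPython older
   than 2.5/2.6 it backfills Py_ssize_t, Py_REFCNT/Py_TYPE/Py_SIZE and a
   local Py_buffer definition, and on Python 3 it maps the Python 2 int API
   onto longs, for example
       #define PyInt_FromLong PyLong_FromLong
       #define PyInt_AsLong   PyLong_AsLong
   so the same generated C compiles against either major version. */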
+#else + +#ifndef PY_LONG_LONG + #define PY_LONG_LONG LONG_LONG +#endif +#ifndef DL_EXPORT + #define DL_EXPORT(t) t +#endif +#if PY_VERSION_HEX < 0x02040000 + #define METH_COEXIST 0 + #define PyDict_CheckExact(op) (Py_TYPE(op) == &PyDict_Type) + #define PyDict_Contains(d,o) PySequence_Contains(d,o) +#endif + +#if PY_VERSION_HEX < 0x02050000 + typedef int Py_ssize_t; + #define PY_SSIZE_T_MAX INT_MAX + #define PY_SSIZE_T_MIN INT_MIN + #define PY_FORMAT_SIZE_T "" + #define PyInt_FromSsize_t(z) PyInt_FromLong(z) + #define PyInt_AsSsize_t(o) PyInt_AsLong(o) + #define PyNumber_Index(o) PyNumber_Int(o) + #define PyIndex_Check(o) PyNumber_Check(o) + #define PyErr_WarnEx(category, message, stacklevel) PyErr_Warn(category, message) +#endif + +#if PY_VERSION_HEX < 0x02060000 + #define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt) + #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) + #define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) + #define PyVarObject_HEAD_INIT(type, size) \ + PyObject_HEAD_INIT(type) size, + #define PyType_Modified(t) + + typedef struct { + void *buf; + PyObject *obj; + Py_ssize_t len; + Py_ssize_t itemsize; + int readonly; + int ndim; + char *format; + Py_ssize_t *shape; + Py_ssize_t *strides; + Py_ssize_t *suboffsets; + void *internal; + } Py_buffer; + + #define PyBUF_SIMPLE 0 + #define PyBUF_WRITABLE 0x0001 + #define PyBUF_FORMAT 0x0004 + #define PyBUF_ND 0x0008 + #define PyBUF_STRIDES (0x0010 | PyBUF_ND) + #define PyBUF_C_CONTIGUOUS (0x0020 | PyBUF_STRIDES) + #define PyBUF_F_CONTIGUOUS (0x0040 | PyBUF_STRIDES) + #define PyBUF_ANY_CONTIGUOUS (0x0080 | PyBUF_STRIDES) + #define PyBUF_INDIRECT (0x0100 | PyBUF_STRIDES) + +#endif + +#if PY_MAJOR_VERSION < 3 + #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" +#else + #define __Pyx_BUILTIN_MODULE_NAME "builtins" +#endif + +#if PY_MAJOR_VERSION >= 3 + #define Py_TPFLAGS_CHECKTYPES 0 + #define Py_TPFLAGS_HAVE_INDEX 0 +#endif + +#if (PY_VERSION_HEX < 0x02060000) || (PY_MAJOR_VERSION >= 3) + #define Py_TPFLAGS_HAVE_NEWBUFFER 0 +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyBaseString_Type PyUnicode_Type + #define PyString_Type PyUnicode_Type + #define PyString_CheckExact PyUnicode_CheckExact +#else + #define PyBytes_Type PyString_Type + #define PyBytes_CheckExact PyString_CheckExact +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyInt_Type PyLong_Type + #define PyInt_Check(op) PyLong_Check(op) + #define PyInt_CheckExact(op) PyLong_CheckExact(op) + #define PyInt_FromString PyLong_FromString + #define PyInt_FromUnicode PyLong_FromUnicode + #define PyInt_FromLong PyLong_FromLong + #define PyInt_FromSize_t PyLong_FromSize_t + #define PyInt_FromSsize_t PyLong_FromSsize_t + #define PyInt_AsLong PyLong_AsLong + #define PyInt_AS_LONG PyLong_AS_LONG + #define PyInt_AsSsize_t PyLong_AsSsize_t + #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask + #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask + #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) +#else + #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) + #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) + +#endif + +#if PY_MAJOR_VERSION >= 3 + #define PyMethod_New(func, self, klass) PyInstanceMethod_New(func) +#endif + +#if !defined(WIN32) && !defined(MS_WINDOWS) + #ifndef __stdcall + #define __stdcall + #endif + #ifndef __cdecl + #define __cdecl + #endif + #ifndef __fastcall + #define __fastcall + #endif +#else + #define _USE_MATH_DEFINES +#endif + +#if 
PY_VERSION_HEX < 0x02050000 + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),((char *)(n))) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),((char *)(n)),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),((char *)(n))) +#else + #define __Pyx_GetAttrString(o,n) PyObject_GetAttrString((o),(n)) + #define __Pyx_SetAttrString(o,n,a) PyObject_SetAttrString((o),(n),(a)) + #define __Pyx_DelAttrString(o,n) PyObject_DelAttrString((o),(n)) +#endif + +#if PY_VERSION_HEX < 0x02050000 + #define __Pyx_NAMESTR(n) ((char *)(n)) + #define __Pyx_DOCSTR(n) ((char *)(n)) +#else + #define __Pyx_NAMESTR(n) (n) + #define __Pyx_DOCSTR(n) (n) +#endif +#ifdef __cplusplus +#define __PYX_EXTERN_C extern "C" +#else +#define __PYX_EXTERN_C extern +#endif +#include +#define __PYX_HAVE_API__scipy__special__orthogonal_eval +#include "math.h" +#include "numpy/arrayobject.h" +#include "numpy/ufuncobject.h" + +#ifndef CYTHON_INLINE + #if defined(__GNUC__) + #define CYTHON_INLINE __inline__ + #elif defined(_MSC_VER) + #define CYTHON_INLINE __inline + #else + #define CYTHON_INLINE + #endif +#endif + +typedef struct {PyObject **p; char *s; const long n; const char* encoding; const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; /*proto*/ + + +/* Type Conversion Predeclarations */ + +#if PY_MAJOR_VERSION < 3 +#define __Pyx_PyBytes_FromString PyString_FromString +#define __Pyx_PyBytes_FromStringAndSize PyString_FromStringAndSize +#define __Pyx_PyBytes_AsString PyString_AsString +#else +#define __Pyx_PyBytes_FromString PyBytes_FromString +#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize +#define __Pyx_PyBytes_AsString PyBytes_AsString +#endif + +#define __Pyx_PyBytes_FromUString(s) __Pyx_PyBytes_FromString((char*)s) +#define __Pyx_PyBytes_AsUString(s) ((unsigned char*) __Pyx_PyBytes_AsString(s)) + +#define __Pyx_PyBool_FromLong(b) ((b) ? (Py_INCREF(Py_True), Py_True) : (Py_INCREF(Py_False), Py_False)) +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x); + +#if !defined(T_PYSSIZET) +#if PY_VERSION_HEX < 0x02050000 +#define T_PYSSIZET T_INT +#elif !defined(T_LONGLONG) +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : -1)) +#else +#define T_PYSSIZET \ + ((sizeof(Py_ssize_t) == sizeof(int)) ? T_INT : \ + ((sizeof(Py_ssize_t) == sizeof(long)) ? T_LONG : \ + ((sizeof(Py_ssize_t) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))) +#endif +#endif + + +#if !defined(T_ULONGLONG) +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : -1)))) +#else +#define __Pyx_T_UNSIGNED_INT(x) \ + ((sizeof(x) == sizeof(unsigned char)) ? T_UBYTE : \ + ((sizeof(x) == sizeof(unsigned short)) ? T_USHORT : \ + ((sizeof(x) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(x) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(x) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))))) +#endif +#if !defined(T_LONGLONG) +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : -1)))) +#else +#define __Pyx_T_SIGNED_INT(x) \ + ((sizeof(x) == sizeof(char)) ? 
T_BYTE : \ + ((sizeof(x) == sizeof(short)) ? T_SHORT : \ + ((sizeof(x) == sizeof(int)) ? T_INT : \ + ((sizeof(x) == sizeof(long)) ? T_LONG : \ + ((sizeof(x) == sizeof(PY_LONG_LONG)) ? T_LONGLONG : -1))))) +#endif + +#define __Pyx_T_FLOATING(x) \ + ((sizeof(x) == sizeof(float)) ? T_FLOAT : \ + ((sizeof(x) == sizeof(double)) ? T_DOUBLE : -1)) + +#if !defined(T_SIZET) +#if !defined(T_ULONGLONG) +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : -1)) +#else +#define T_SIZET \ + ((sizeof(size_t) == sizeof(unsigned int)) ? T_UINT : \ + ((sizeof(size_t) == sizeof(unsigned long)) ? T_ULONG : \ + ((sizeof(size_t) == sizeof(unsigned PY_LONG_LONG)) ? T_ULONGLONG : -1))) +#endif +#endif + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject*); + +#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) + + +#ifdef __GNUC__ +/* Test for GCC > 2.95 */ +#if __GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)) +#define likely(x) __builtin_expect(!!(x), 1) +#define unlikely(x) __builtin_expect(!!(x), 0) +#else /* __GNUC__ > 2 ... */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ > 2 ... */ +#else /* __GNUC__ */ +#define likely(x) (x) +#define unlikely(x) (x) +#endif /* __GNUC__ */ + +static PyObject *__pyx_m; +static PyObject *__pyx_b; +static PyObject *__pyx_empty_tuple; +static PyObject *__pyx_empty_bytes; +static int __pyx_lineno; +static int __pyx_clineno = 0; +static const char * __pyx_cfilenm= __FILE__; +static const char *__pyx_filename; +static const char **__pyx_f; + + +/* Type declarations */ + +#ifndef CYTHON_REFNANNY + #define CYTHON_REFNANNY 0 +#endif + +#if CYTHON_REFNANNY + typedef struct { + void (*INCREF)(void*, PyObject*, int); + void (*DECREF)(void*, PyObject*, int); + void (*GOTREF)(void*, PyObject*, int); + void (*GIVEREF)(void*, PyObject*, int); + void* (*SetupContext)(const char*, int, const char*); + void (*FinishContext)(void**); + } __Pyx_RefNannyAPIStruct; + static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; + static __Pyx_RefNannyAPIStruct * __Pyx_RefNannyImportAPI(const char *modname) { + PyObject *m = NULL, *p = NULL; + void *r = NULL; + m = PyImport_ImportModule((char *)modname); + if (!m) goto end; + p = PyObject_GetAttrString(m, (char *)"RefNannyAPI"); + if (!p) goto end; + r = PyLong_AsVoidPtr(p); + end: + Py_XDECREF(p); + Py_XDECREF(m); + return (__Pyx_RefNannyAPIStruct *)r; + } + #define __Pyx_RefNannySetupContext(name) void *__pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) + #define __Pyx_RefNannyFinishContext() __Pyx_RefNanny->FinishContext(&__pyx_refnanny) + #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) + #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r);} } while(0) +#else + #define __Pyx_RefNannySetupContext(name) + #define __Pyx_RefNannyFinishContext() + #define __Pyx_INCREF(r) Py_INCREF(r) + #define __Pyx_DECREF(r) Py_DECREF(r) + #define __Pyx_GOTREF(r) + #define __Pyx_GIVEREF(r) + #define __Pyx_XDECREF(r) Py_XDECREF(r) 
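/* In a normal build CYTHON_REFNANNY is 0, so the __Pyx_INCREF/__Pyx_DECREF
   macros above reduce to plain Py_INCREF/Py_DECREF and the GOTREF/GIVEREF
   bookkeeping compiles away entirely; with refnanny enabled they instead
   call into a debug helper module (looked up via PyLong_AsVoidPtr above)
   that audits reference counts per function, bracketed by
       __Pyx_RefNannySetupContext("name");
       ...
       __Pyx_RefNannyFinishContext();
   */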
+#endif /* CYTHON_REFNANNY */ +#define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);} } while(0) +#define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r);} } while(0) + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, PyObject* kw_name); /*proto*/ + +static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, + Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); /*proto*/ + +static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[], PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args, const char* function_name); /*proto*/ + +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); + +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void); + +static PyObject *__Pyx_UnpackItem(PyObject *, Py_ssize_t index); /*proto*/ +static int __Pyx_EndUnpack(PyObject *); /*proto*/ + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list); /*proto*/ + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name); /*proto*/ + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_intp(npy_intp); + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb); /*proto*/ + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb); /*proto*/ + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject *); + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject *); + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject *); + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject *); + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject *); + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject *); + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject *); + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject *); + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject *); + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject *); + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject *); + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject *); + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject *); + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject *); + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject *); + +static void __Pyx_AddTraceback(const char *funcname); /*proto*/ + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); /*proto*/ +/* Module declarations from scipy.special.orthogonal_eval */ + +static char __pyx_v_5scipy_7special_15orthogonal_eval__id_d_types[3]; +static PyUFuncGenericFunction __pyx_v_5scipy_7special_15orthogonal_eval__id_d_funcs[1]; +static void *__pyx_v_5scipy_7special_15orthogonal_eval_chebyt_data[1]; +static double __pyx_f_5scipy_7special_15orthogonal_eval_eval_poly_chebyt(long, double); /*proto*/ +static void __pyx_f_5scipy_7special_15orthogonal_eval__loop_id_d(char **, npy_intp *, npy_intp *, void *); /*proto*/ +#define __Pyx_MODULE_NAME "scipy.special.orthogonal_eval" +int __pyx_module_is_main_scipy__special__orthogonal_eval = 0; + +/* Implementation of scipy.special.orthogonal_eval */ +static PyObject *__pyx_builtin_range; +static PyObject *__pyx_builtin_ValueError; +static char __pyx_k_1[] = "Order must be integer"; +static char __pyx_k_2[] = "\nEvaluate orthogonal polynomial values using 
recurrence relations.\n\nReferences\n----------\n\n.. [AMS55] Abramowitz & Stegun, Section 22.5.\n\n.. [MH] Mason & Handscombe, Chebyshev Polynomials, CRC Press (2003).\n\n"; +static char __pyx_k_3[] = ""; +static char __pyx_k_4[] = "scipy.special._cephes"; +static char __pyx_k_5[] = "binom (line 96)"; +static char __pyx_k_6[] = "eval_jacobi (line 100)"; +static char __pyx_k_7[] = "eval_sh_jacobi (line 109)"; +static char __pyx_k_8[] = "eval_gegenbauer (line 114)"; +static char __pyx_k_9[] = "eval_chebyt (line 123)"; +static char __pyx_k_10[] = "eval_chebyu (line 132)"; +static char __pyx_k_11[] = "eval_chebys (line 141)"; +static char __pyx_k_12[] = "eval_chebyc (line 145)"; +static char __pyx_k_13[] = "eval_sh_chebyt (line 149)"; +static char __pyx_k_14[] = "eval_sh_chebyu (line 153)"; +static char __pyx_k_15[] = "eval_legendre (line 157)"; +static char __pyx_k_16[] = "eval_sh_legendre (line 166)"; +static char __pyx_k_17[] = "eval_genlaguerre (line 170)"; +static char __pyx_k_18[] = "eval_laguerre (line 178)"; +static char __pyx_k_19[] = "eval_hermite (line 182)"; +static char __pyx_k_20[] = "eval_hermitenorm (line 204)"; +static char __pyx_k__k[] = "k"; +static char __pyx_k__n[] = "n"; +static char __pyx_k__p[] = "p"; +static char __pyx_k__q[] = "q"; +static char __pyx_k__x[] = "x"; +static char __pyx_k__np[] = "np"; +static char __pyx_k__any[] = "any"; +static char __pyx_k__exp[] = "exp"; +static char __pyx_k__out[] = "out"; +static char __pyx_k__beta[] = "beta"; +static char __pyx_k__alpha[] = "alpha"; +static char __pyx_k__binom[] = "binom"; +static char __pyx_k__gamma[] = "gamma"; +static char __pyx_k__numpy[] = "numpy"; +static char __pyx_k__range[] = "range"; +static char __pyx_k__hyp1f1[] = "hyp1f1"; +static char __pyx_k__hyp2f1[] = "hyp2f1"; +static char __pyx_k__gammaln[] = "gammaln"; +static char __pyx_k____main__[] = "__main__"; +static char __pyx_k____test__[] = "__test__"; +static char __pyx_k__ValueError[] = "ValueError"; +static char __pyx_k__atleast_1d[] = "atleast_1d"; +static char __pyx_k__zeros_like[] = "zeros_like"; +static char __pyx_k__eval_chebyc[] = "eval_chebyc"; +static char __pyx_k__eval_chebys[] = "eval_chebys"; +static char __pyx_k__eval_chebyt[] = "eval_chebyt"; +static char __pyx_k__eval_chebyu[] = "eval_chebyu"; +static char __pyx_k__eval_jacobi[] = "eval_jacobi"; +static char __pyx_k___eval_chebyt[] = "_eval_chebyt"; +static char __pyx_k__eval_hermite[] = "eval_hermite"; +static char __pyx_k__eval_laguerre[] = "eval_laguerre"; +static char __pyx_k__eval_legendre[] = "eval_legendre"; +static char __pyx_k__eval_sh_chebyt[] = "eval_sh_chebyt"; +static char __pyx_k__eval_sh_chebyu[] = "eval_sh_chebyu"; +static char __pyx_k__eval_sh_jacobi[] = "eval_sh_jacobi"; +static char __pyx_k__eval_gegenbauer[] = "eval_gegenbauer"; +static char __pyx_k__broadcast_arrays[] = "broadcast_arrays"; +static char __pyx_k__eval_genlaguerre[] = "eval_genlaguerre"; +static char __pyx_k__eval_hermitenorm[] = "eval_hermitenorm"; +static char __pyx_k__eval_sh_legendre[] = "eval_sh_legendre"; +static PyObject *__pyx_kp_s_1; +static PyObject *__pyx_kp_u_10; +static PyObject *__pyx_kp_u_11; +static PyObject *__pyx_kp_u_12; +static PyObject *__pyx_kp_u_13; +static PyObject *__pyx_kp_u_14; +static PyObject *__pyx_kp_u_15; +static PyObject *__pyx_kp_u_16; +static PyObject *__pyx_kp_u_17; +static PyObject *__pyx_kp_u_18; +static PyObject *__pyx_kp_u_19; +static PyObject *__pyx_kp_u_20; +static PyObject *__pyx_n_s_4; +static PyObject *__pyx_kp_u_5; +static PyObject *__pyx_kp_u_6; +static 
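/* The eval_poly_chebyt routine further down evaluates the Chebyshev
   polynomial T_k(x) with the three-term recurrence
   T_{m+1} = 2 x T_m - T_{m-1} (see the [MH] reference in the module
   docstring), kept in three rolling variables. A standalone sketch of the
   same loop, for illustration only:
       static double chebyt(long k, double x) {
           double b2 = 0.0, b1 = -1.0, b0 = 0.0, w = 2.0 * x;
           long m;
           for (m = 0; m < k + 1; m++) {
               b2 = b1; b1 = b0; b0 = w * b1 - b2;
           }
           return (b0 - b2) / 2.0;
       }
   which reproduces T_0(x) = 1, T_1(x) = x, T_2(x) = 2 x x - 1, and so on. */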
PyObject *__pyx_kp_u_7; +static PyObject *__pyx_kp_u_8; +static PyObject *__pyx_kp_u_9; +static PyObject *__pyx_n_s__ValueError; +static PyObject *__pyx_n_s____main__; +static PyObject *__pyx_n_s____test__; +static PyObject *__pyx_n_s___eval_chebyt; +static PyObject *__pyx_n_s__alpha; +static PyObject *__pyx_n_s__any; +static PyObject *__pyx_n_s__atleast_1d; +static PyObject *__pyx_n_s__beta; +static PyObject *__pyx_n_s__binom; +static PyObject *__pyx_n_s__broadcast_arrays; +static PyObject *__pyx_n_s__eval_chebyc; +static PyObject *__pyx_n_s__eval_chebys; +static PyObject *__pyx_n_s__eval_chebyt; +static PyObject *__pyx_n_s__eval_chebyu; +static PyObject *__pyx_n_s__eval_gegenbauer; +static PyObject *__pyx_n_s__eval_genlaguerre; +static PyObject *__pyx_n_s__eval_hermite; +static PyObject *__pyx_n_s__eval_hermitenorm; +static PyObject *__pyx_n_s__eval_jacobi; +static PyObject *__pyx_n_s__eval_laguerre; +static PyObject *__pyx_n_s__eval_legendre; +static PyObject *__pyx_n_s__eval_sh_chebyt; +static PyObject *__pyx_n_s__eval_sh_chebyu; +static PyObject *__pyx_n_s__eval_sh_jacobi; +static PyObject *__pyx_n_s__eval_sh_legendre; +static PyObject *__pyx_n_s__exp; +static PyObject *__pyx_n_s__gamma; +static PyObject *__pyx_n_s__gammaln; +static PyObject *__pyx_n_s__hyp1f1; +static PyObject *__pyx_n_s__hyp2f1; +static PyObject *__pyx_n_s__k; +static PyObject *__pyx_n_s__n; +static PyObject *__pyx_n_s__np; +static PyObject *__pyx_n_s__numpy; +static PyObject *__pyx_n_s__out; +static PyObject *__pyx_n_s__p; +static PyObject *__pyx_n_s__q; +static PyObject *__pyx_n_s__range; +static PyObject *__pyx_n_s__x; +static PyObject *__pyx_n_s__zeros_like; +static PyObject *__pyx_int_0; +static PyObject *__pyx_int_1; +static PyObject *__pyx_int_2; +static PyObject *__pyx_int_neg_1; + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":24 + * double sqrt(double x) nogil + * + * cdef double eval_poly_chebyt(long k, double x) nogil: # <<<<<<<<<<<<<< + * # Use Chebyshev T recurrence directly, see [MH] + * cdef long m + */ + +static double __pyx_f_5scipy_7special_15orthogonal_eval_eval_poly_chebyt(long __pyx_v_k, double __pyx_v_x) { + long __pyx_v_m; + double __pyx_v_b2; + double __pyx_v_b1; + double __pyx_v_b0; + double __pyx_r; + long __pyx_t_1; + long __pyx_t_2; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":29 + * cdef double b2, b1, b0 + * + * b2 = 0 # <<<<<<<<<<<<<< + * b1 = -1 + * b0 = 0 + */ + __pyx_v_b2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":30 + * + * b2 = 0 + * b1 = -1 # <<<<<<<<<<<<<< + * b0 = 0 + * x = 2*x + */ + __pyx_v_b1 = -1; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":31 + * b2 = 0 + * b1 = -1 + * b0 = 0 # <<<<<<<<<<<<<< + * x = 2*x + * for m in range(k+1): + */ + __pyx_v_b0 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":32 + * b1 = -1 + * b0 = 0 + * x = 2*x # <<<<<<<<<<<<<< + * for m in range(k+1): + * b2 = b1 + */ + __pyx_v_x = (2 * __pyx_v_x); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":33 + * b0 = 0 + * x = 2*x + * for m in range(k+1): # <<<<<<<<<<<<<< + * b2 = b1 + * b1 = b0 + */ + __pyx_t_1 = (__pyx_v_k + 1); + for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_1; __pyx_t_2+=1) { + __pyx_v_m = __pyx_t_2; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":34 + * x = 2*x + * for m in range(k+1): + * b2 = b1 # <<<<<<<<<<<<<< + * b1 = b0 + * b0 = x*b1 - b2 + */ + __pyx_v_b2 = __pyx_v_b1; + + /* 
"/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":35 + * for m in range(k+1): + * b2 = b1 + * b1 = b0 # <<<<<<<<<<<<<< + * b0 = x*b1 - b2 + * return (b0 - b2)/2.0 + */ + __pyx_v_b1 = __pyx_v_b0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":36 + * b2 = b1 + * b1 = b0 + * b0 = x*b1 - b2 # <<<<<<<<<<<<<< + * return (b0 - b2)/2.0 + * + */ + __pyx_v_b0 = ((__pyx_v_x * __pyx_v_b1) - __pyx_v_b2); + } + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":37 + * b1 = b0 + * b0 = x*b1 - b2 + * return (b0 - b2)/2.0 # <<<<<<<<<<<<<< + * + * #------------------------------------------------------------------------------ + */ + __pyx_r = ((__pyx_v_b0 - __pyx_v_b2) / 2.0); + goto __pyx_L0; + + __pyx_r = 0; + __pyx_L0:; + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":57 + * int identity, char* name, char* doc, int c) + * + * cdef void _loop_id_d(char **args, npy_intp *dimensions, npy_intp *steps, # <<<<<<<<<<<<<< + * void *func) nogil: + * cdef int i + */ + +static void __pyx_f_5scipy_7special_15orthogonal_eval__loop_id_d(char **__pyx_v_args, npy_intp *__pyx_v_dimensions, npy_intp *__pyx_v_steps, void *__pyx_v_func) { + int __pyx_v_i; + char *__pyx_v_ip1; + char *__pyx_v_ip2; + char *__pyx_v_op; + npy_intp __pyx_t_1; + int __pyx_t_2; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":61 + * cdef int i + * cdef double x + * cdef char *ip1=args[0], *ip2=args[1], *op=args[2] # <<<<<<<<<<<<<< + * for i in range(0, dimensions[0]): + * (op)[0] = (func)( + */ + __pyx_v_ip1 = (__pyx_v_args[0]); + __pyx_v_ip2 = (__pyx_v_args[1]); + __pyx_v_op = (__pyx_v_args[2]); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":62 + * cdef double x + * cdef char *ip1=args[0], *ip2=args[1], *op=args[2] + * for i in range(0, dimensions[0]): # <<<<<<<<<<<<<< + * (op)[0] = (func)( + * (ip1)[0], (ip2)[0]) + */ + __pyx_t_1 = (__pyx_v_dimensions[0]); + for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_1; __pyx_t_2+=1) { + __pyx_v_i = __pyx_t_2; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":63 + * cdef char *ip1=args[0], *ip2=args[1], *op=args[2] + * for i in range(0, dimensions[0]): + * (op)[0] = (func)( # <<<<<<<<<<<<<< + * (ip1)[0], (ip2)[0]) + * ip1 += steps[0]; ip2 += steps[1]; op += steps[2] + */ + (((double *)__pyx_v_op)[0]) = ((double (*)(long, double))__pyx_v_func)((((long *)__pyx_v_ip1)[0]), (((double *)__pyx_v_ip2)[0])); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":65 + * (op)[0] = (func)( + * (ip1)[0], (ip2)[0]) + * ip1 += steps[0]; ip2 += steps[1]; op += steps[2] # <<<<<<<<<<<<<< + * + * cdef char _id_d_types[3] + */ + __pyx_v_ip1 += (__pyx_v_steps[0]); + __pyx_v_ip2 += (__pyx_v_steps[1]); + __pyx_v_op += (__pyx_v_steps[2]); + } + +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":96 + * from numpy import exp + * + * def binom(n, k): # <<<<<<<<<<<<<< + * """Binomial coefficient""" + * return np.exp(gammaln(1+n) - gammaln(1+k) - gammaln(1+n-k)) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_binom(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_binom[] = "Binomial coefficient"; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_binom(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_k = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + 
PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__k,0}; + __Pyx_RefNannySetupContext("binom"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[2] = {0,0}; + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__k); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("binom", 1, 2, 2, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 96; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "binom") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 96; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_k = values[1]; + } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { + goto __pyx_L5_argtuple_error; + } else { + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + __pyx_v_k = PyTuple_GET_ITEM(__pyx_args, 1); + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("binom", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 96; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.binom"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":98 + * def binom(n, k): + * """Binomial coefficient""" + * return np.exp(gammaln(1+n) - gammaln(1+k) - gammaln(1+n-k)) # <<<<<<<<<<<<<< + * + * def eval_jacobi(n, alpha, beta, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__exp); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__gammaln); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Add(__pyx_int_1, __pyx_v_n); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + 
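/* binom(n, k) is computed through the log-gamma identity
       C(n, k) = exp(lgamma(1+n) - lgamma(1+k) - lgamma(1+n-k)),
   which stays finite for large arguments where factorials would overflow.
   A minimal C sketch of the same idea (plain libm, not part of this module):
       #include <math.h>
       static double binom_sketch(double n, double k) {
           return exp(lgamma(1.0 + n) - lgamma(1.0 + k) - lgamma(1.0 + n - k));
       }
   The surrounding generated code builds the identical expression with
   numpy's exp and scipy's gammaln ufunc; the numbered __pyx_t_N temporaries
   with their GOTREF/DECREF calls and the __pyx_L1_error cleanup label are
   the stock pattern Cython emits around every such expression. */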
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__gammaln); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = PyNumber_Add(__pyx_int_1, __pyx_v_k); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyNumber_Subtract(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__gammaln); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Add(__pyx_int_1, __pyx_v_n); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyNumber_Subtract(__pyx_t_3, __pyx_v_k); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __pyx_t_4 = 0; + __pyx_t_4 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Subtract(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 98; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + 
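/* The eval_jacobi wrapper defined next follows the classical hypergeometric
   representation (cf. Abramowitz & Stegun 22.5):
       P_n^(alpha,beta)(x) = C(n+alpha, n)
                             * 2F1(-n, n+alpha+beta+1; alpha+1; (1-x)/2),
   which is exactly what the quoted Cython source computes via binom() and
   hyp2f1(). As a quick check, n = 1 with alpha = beta = 0 gives
   1 * (1 - 2(1-x)/2) = x, the Legendre polynomial P_1(x). */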
__Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.binom"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":100 + * return np.exp(gammaln(1+n) - gammaln(1+k) - gammaln(1+n-k)) + * + * def eval_jacobi(n, alpha, beta, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Jacobi polynomial at a point.""" + * d = binom(n+alpha, n) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_jacobi(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_jacobi[] = "Evaluate Jacobi polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_jacobi(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_alpha = 0; + PyObject *__pyx_v_beta = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_v_d; + PyObject *__pyx_v_a; + PyObject *__pyx_v_b; + PyObject *__pyx_v_c; + PyObject *__pyx_v_g; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__alpha,&__pyx_n_s__beta,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_jacobi"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[5] = {0,0,0,0,0}; + values[4] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__alpha); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_jacobi", 0, 4, 5, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 100; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__beta); + if (likely(values[2])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_jacobi", 0, 4, 5, 2); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 100; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 3: + values[3] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[3])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_jacobi", 0, 4, 5, 3); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 100; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 4: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[4] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_jacobi") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 100; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = 
values[0]; + __pyx_v_alpha = values[1]; + __pyx_v_beta = values[2]; + __pyx_v_x = values[3]; + __pyx_v_out = values[4]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 5: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 4); + case 4: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 3); + __pyx_v_beta = PyTuple_GET_ITEM(__pyx_args, 2); + __pyx_v_alpha = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_jacobi", 0, 4, 5, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 100; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_jacobi"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_d = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_a = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_b = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_c = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_g = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":102 + * def eval_jacobi(n, alpha, beta, x, out=None): + * """Evaluate Jacobi polynomial at a point.""" + * d = binom(n+alpha, n) # <<<<<<<<<<<<<< + * a = -n + * b = n + alpha + beta + 1 + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__binom); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Add(__pyx_v_n, __pyx_v_alpha); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 102; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_d); + __pyx_v_d = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":103 + * """Evaluate Jacobi polynomial at a point.""" + * d = binom(n+alpha, n) + * a = -n # <<<<<<<<<<<<<< + * b = n + alpha + beta + 1 + * c = alpha + 1 + */ + __pyx_t_2 = PyNumber_Negative(__pyx_v_n); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 103; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_v_a); + __pyx_v_a = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":104 + * d = binom(n+alpha, n) + * a = -n + * b = n + alpha + beta + 1 # <<<<<<<<<<<<<< + * c = alpha + 1 + * g = (1-x)/2.0 + */ + __pyx_t_2 = PyNumber_Add(__pyx_v_n, __pyx_v_alpha); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 104; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyNumber_Add(__pyx_t_2, __pyx_v_beta); if (unlikely(!__pyx_t_3)) {__pyx_filename = 
__pyx_f[0]; __pyx_lineno = 104; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Add(__pyx_t_3, __pyx_int_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 104; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_b); + __pyx_v_b = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":105 + * a = -n + * b = n + alpha + beta + 1 + * c = alpha + 1 # <<<<<<<<<<<<<< + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d + */ + __pyx_t_2 = PyNumber_Add(__pyx_v_alpha, __pyx_int_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 105; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_v_c); + __pyx_v_c = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":106 + * b = n + alpha + beta + 1 + * c = alpha + 1 + * g = (1-x)/2.0 # <<<<<<<<<<<<<< + * return hyp2f1(a, b, c, g) * d + * + */ + __pyx_t_2 = PyNumber_Subtract(__pyx_int_1, __pyx_v_x); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 106; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyFloat_FromDouble(2.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 106; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_1 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 106; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_g); + __pyx_v_g = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":107 + * c = alpha + 1 + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d # <<<<<<<<<<<<<< + * + * def eval_sh_jacobi(n, p, q, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__hyp2f1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 107; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyTuple_New(4); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 107; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_a); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_a); + __Pyx_GIVEREF(__pyx_v_a); + __Pyx_INCREF(__pyx_v_b); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_b); + __Pyx_GIVEREF(__pyx_v_b); + __Pyx_INCREF(__pyx_v_c); + PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_c); + __Pyx_GIVEREF(__pyx_v_c); + __Pyx_INCREF(__pyx_v_g); + PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_v_g); + __Pyx_GIVEREF(__pyx_v_g); + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 107; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_v_d); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 107; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = 
Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_jacobi"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_d); + __Pyx_DECREF(__pyx_v_a); + __Pyx_DECREF(__pyx_v_b); + __Pyx_DECREF(__pyx_v_c); + __Pyx_DECREF(__pyx_v_g); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":109 + * return hyp2f1(a, b, c, g) * d + * + * def eval_sh_jacobi(n, p, q, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate shifted Jacobi polynomial at a point.""" + * factor = np.exp(gammaln(1+n) + gammaln(n+p) - gammaln(2*n+p)) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_jacobi(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_jacobi[] = "Evaluate shifted Jacobi polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_jacobi(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_p = 0; + PyObject *__pyx_v_q = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_v_factor; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + PyObject *__pyx_t_5 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__p,&__pyx_n_s__q,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_sh_jacobi"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[5] = {0,0,0,0,0}; + values[4] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__p); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_sh_jacobi", 0, 4, 5, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 109; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__q); + if (likely(values[2])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_sh_jacobi", 0, 4, 5, 2); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 109; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 3: + values[3] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[3])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_sh_jacobi", 0, 4, 5, 3); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 109; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 4: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[4] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, 
PyTuple_GET_SIZE(__pyx_args), "eval_sh_jacobi") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 109; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_p = values[1]; + __pyx_v_q = values[2]; + __pyx_v_x = values[3]; + __pyx_v_out = values[4]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 5: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 4); + case 4: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 3); + __pyx_v_q = PyTuple_GET_ITEM(__pyx_args, 2); + __pyx_v_p = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_sh_jacobi", 0, 4, 5, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 109; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_jacobi"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_factor = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":111 + * def eval_sh_jacobi(n, p, q, x, out=None): + * """Evaluate shifted Jacobi polynomial at a point.""" + * factor = np.exp(gammaln(1+n) + gammaln(n+p) - gammaln(2*n+p)) # <<<<<<<<<<<<<< + * return factor * eval_jacobi(n, p-q, q-1, 2*x-1) + * + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__exp); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__gammaln); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Add(__pyx_int_1, __pyx_v_n); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__gammaln); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = PyNumber_Add(__pyx_v_n, __pyx_v_p); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + 
__pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_4, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyNumber_Add(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__gammaln); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Multiply(__pyx_int_2, __pyx_v_n); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyNumber_Add(__pyx_t_3, __pyx_v_p); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __pyx_t_4 = 0; + __pyx_t_4 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Subtract(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 111; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_v_factor); + __pyx_v_factor = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":112 + * """Evaluate shifted Jacobi polynomial at a point.""" + * factor = np.exp(gammaln(1+n) + gammaln(n+p) - gammaln(2*n+p)) + * return factor * eval_jacobi(n, p-q, q-1, 2*x-1) # <<<<<<<<<<<<<< + * + * def eval_gegenbauer(n, alpha, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_jacobi); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyNumber_Subtract(__pyx_v_p, __pyx_v_q); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_2 = 
PyNumber_Subtract(__pyx_v_q, __pyx_int_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_5 = PyNumber_Multiply(__pyx_int_2, __pyx_v_x); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __pyx_t_1 = PyNumber_Subtract(__pyx_t_5, __pyx_int_1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_4 = 0; + __pyx_t_2 = 0; + __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; + __pyx_t_5 = PyNumber_Multiply(__pyx_v_factor, __pyx_t_1); if (unlikely(!__pyx_t_5)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 112; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_5); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_r = __pyx_t_5; + __pyx_t_5 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_5); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_jacobi"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_factor); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":114 + * return factor * eval_jacobi(n, p-q, q-1, 2*x-1) + * + * def eval_gegenbauer(n, alpha, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Gegenbauer polynomial at a point.""" + * d = gamma(n+2*alpha)/gamma(1+n)/gamma(2*alpha) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_gegenbauer(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_gegenbauer[] = "Evaluate Gegenbauer polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_gegenbauer(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_alpha = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_v_d; + PyObject *__pyx_v_a; + PyObject *__pyx_v_b; + PyObject *__pyx_v_c; + PyObject *__pyx_v_g; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__alpha,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_gegenbauer"); + __pyx_self = __pyx_self; + if 
(unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[4] = {0,0,0,0}; + values[3] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__alpha); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_gegenbauer", 0, 3, 4, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 114; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[2])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_gegenbauer", 0, 3, 4, 2); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 114; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 3: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[3] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_gegenbauer") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 114; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_alpha = values[1]; + __pyx_v_x = values[2]; + __pyx_v_out = values[3]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 4: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 3); + case 3: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 2); + __pyx_v_alpha = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_gegenbauer", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 114; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_gegenbauer"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_d = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_a = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_b = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_c = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_g = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":116 + * def eval_gegenbauer(n, alpha, x, out=None): + * """Evaluate Gegenbauer polynomial at a point.""" + * d = gamma(n+2*alpha)/gamma(1+n)/gamma(2*alpha) # <<<<<<<<<<<<<< + * a = -n + * b = n + 2*alpha + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__gamma); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_v_alpha); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyNumber_Add(__pyx_v_n, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 
116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__gamma); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = PyNumber_Add(__pyx_int_1, __pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); + __pyx_t_1 = 0; + __pyx_t_1 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = __Pyx_PyNumber_Divide(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__gamma); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Multiply(__pyx_int_2, __pyx_v_alpha); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_t_4, __pyx_t_3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 116; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_d); + __pyx_v_d = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":117 + * """Evaluate Gegenbauer polynomial at a point.""" + * d = gamma(n+2*alpha)/gamma(1+n)/gamma(2*alpha) + * a = -n # <<<<<<<<<<<<<< + * b = n + 2*alpha + * c = alpha + 0.5 + */ + 
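
The generated C above is Cython's expansion of two short functions from orthogonal_eval.pyx; the original source lines are quoted verbatim in the /* ...orthogonal_eval.pyx:NNN */ comments. As a reading aid, here is a minimal Python sketch of those quoted lines. The scipy.special import is an assumption about where binom, gammaln and hyp2f1 are bound; the generated module simply looks these names up at module scope.

import numpy as np
from scipy.special import binom, gammaln, hyp2f1  # assumed origin of these names

def eval_jacobi(n, alpha, beta, x, out=None):
    """Evaluate Jacobi polynomial at a point (via its 2F1 representation)."""
    d = binom(n + alpha, n)
    a = -n
    b = n + alpha + beta + 1
    c = alpha + 1
    g = (1 - x) / 2.0
    return hyp2f1(a, b, c, g) * d

def eval_sh_jacobi(n, p, q, x, out=None):
    """Evaluate shifted Jacobi polynomial at a point."""
    factor = np.exp(gammaln(1 + n) + gammaln(n + p) - gammaln(2*n + p))
    return factor * eval_jacobi(n, p - q, q - 1, 2*x - 1)

Everything else in the surrounding hunk (the values[...] switches, __Pyx_RaiseArgtupleInvalid, the INCREF/DECREF pairs) is only Cython's argument unpacking, error handling and reference counting for these two defs.
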
__pyx_t_2 = PyNumber_Negative(__pyx_v_n); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 117; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_v_a); + __pyx_v_a = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":118 + * d = gamma(n+2*alpha)/gamma(1+n)/gamma(2*alpha) + * a = -n + * b = n + 2*alpha # <<<<<<<<<<<<<< + * c = alpha + 0.5 + * g = (1-x)/2.0 + */ + __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_v_alpha); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 118; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyNumber_Add(__pyx_v_n, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 118; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_b); + __pyx_v_b = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":119 + * a = -n + * b = n + 2*alpha + * c = alpha + 0.5 # <<<<<<<<<<<<<< + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d + */ + __pyx_t_3 = PyFloat_FromDouble(0.5); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 119; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyNumber_Add(__pyx_v_alpha, __pyx_t_3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 119; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_c); + __pyx_v_c = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":120 + * b = n + 2*alpha + * c = alpha + 0.5 + * g = (1-x)/2.0 # <<<<<<<<<<<<<< + * return hyp2f1(a, b, c, g) * d + * + */ + __pyx_t_2 = PyNumber_Subtract(__pyx_int_1, __pyx_v_x); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 120; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyFloat_FromDouble(2.0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 120; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 120; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_g); + __pyx_v_g = __pyx_t_4; + __pyx_t_4 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":121 + * c = alpha + 0.5 + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d # <<<<<<<<<<<<<< + * + * def eval_chebyt(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__hyp2f1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 121; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyTuple_New(4); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 121; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_a); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_a); + __Pyx_GIVEREF(__pyx_v_a); + __Pyx_INCREF(__pyx_v_b); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_b); + __Pyx_GIVEREF(__pyx_v_b); + __Pyx_INCREF(__pyx_v_c); + PyTuple_SET_ITEM(__pyx_t_3, 2, 
__pyx_v_c); + __Pyx_GIVEREF(__pyx_v_c); + __Pyx_INCREF(__pyx_v_g); + PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_v_g); + __Pyx_GIVEREF(__pyx_v_g); + __pyx_t_2 = PyObject_Call(__pyx_t_4, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 121; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Multiply(__pyx_t_2, __pyx_v_d); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 121; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_gegenbauer"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_d); + __Pyx_DECREF(__pyx_v_a); + __Pyx_DECREF(__pyx_v_b); + __Pyx_DECREF(__pyx_v_c); + __Pyx_DECREF(__pyx_v_g); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":123 + * return hyp2f1(a, b, c, g) * d + * + * def eval_chebyt(n, x, out=None): # <<<<<<<<<<<<<< + * """ + * Evaluate Chebyshev T polynomial at a point. + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyt(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebyt[] = "\n Evaluate Chebyshev T polynomial at a point.\n\n This routine is numerically stable for `x` in ``[-1, 1]`` at least\n up to order ``10000``.\n "; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyt(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_chebyt"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_chebyt", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 123; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_chebyt") < 0)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 123; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_chebyt", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 123; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebyt"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":130 + * up to order ``10000``. + * """ + * return _eval_chebyt(n, x, out) # <<<<<<<<<<<<<< + * + * def eval_chebyu(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s___eval_chebyt); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 130; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 130; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + __Pyx_INCREF(__pyx_v_x); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_x); + __Pyx_GIVEREF(__pyx_v_x); + __Pyx_INCREF(__pyx_v_out); + PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_out); + __Pyx_GIVEREF(__pyx_v_out); + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 130; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebyt"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":132 + * return _eval_chebyt(n, x, out) + * + * def eval_chebyu(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Chebyshev U polynomial at a point.""" + * d = n+1 + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyu(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebyu[] = "Evaluate Chebyshev U polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyu(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_v_d; + PyObject *__pyx_v_a; + PyObject *__pyx_v_b; + PyObject *__pyx_v_c; + PyObject *__pyx_v_g; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = 
{&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_chebyu"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_chebyu", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_chebyu") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_chebyu", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 132; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebyu"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_d = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_a = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_b = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_c = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_g = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":134 + * def eval_chebyu(n, x, out=None): + * """Evaluate Chebyshev U polynomial at a point.""" + * d = n+1 # <<<<<<<<<<<<<< + * a = -n + * b = n+2 + */ + __pyx_t_1 = PyNumber_Add(__pyx_v_n, __pyx_int_1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 134; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_d); + __pyx_v_d = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":135 + * """Evaluate Chebyshev U polynomial at a point.""" + * d = n+1 + * a = -n # <<<<<<<<<<<<<< + * b = n+2 + * c = 1.5 + */ + __pyx_t_1 = PyNumber_Negative(__pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 135; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_a); + __pyx_v_a = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":136 + * d = n+1 + * a = -n + * b = n+2 # <<<<<<<<<<<<<< + * c = 1.5 + * g = (1-x)/2.0 + */ + 
__pyx_t_1 = PyNumber_Add(__pyx_v_n, __pyx_int_2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 136; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_b); + __pyx_v_b = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":137 + * a = -n + * b = n+2 + * c = 1.5 # <<<<<<<<<<<<<< + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d + */ + __pyx_t_1 = PyFloat_FromDouble(1.5); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 137; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_c); + __pyx_v_c = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":138 + * b = n+2 + * c = 1.5 + * g = (1-x)/2.0 # <<<<<<<<<<<<<< + * return hyp2f1(a, b, c, g) * d + * + */ + __pyx_t_1 = PyNumber_Subtract(__pyx_int_1, __pyx_v_x); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 138; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyFloat_FromDouble(2.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 138; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 138; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_g); + __pyx_v_g = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":139 + * c = 1.5 + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d # <<<<<<<<<<<<<< + * + * def eval_chebys(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__hyp2f1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 139; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 139; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_a); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_a); + __Pyx_GIVEREF(__pyx_v_a); + __Pyx_INCREF(__pyx_v_b); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_b); + __Pyx_GIVEREF(__pyx_v_b); + __Pyx_INCREF(__pyx_v_c); + PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_c); + __Pyx_GIVEREF(__pyx_v_c); + __Pyx_INCREF(__pyx_v_g); + PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_v_g); + __Pyx_GIVEREF(__pyx_v_g); + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 139; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_v_d); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 139; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebyu"); + __pyx_r = NULL; + __pyx_L0:; + 
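
Likewise, the eval_gegenbauer, eval_chebyt and eval_chebyu bodies generated above reduce to the quoted .pyx lines: eval_chebyt simply forwards to _eval_chebyt, a C-level ufunc defined earlier in the module and not reproduced here, while the other two are direct 2F1 evaluations. A sketch of the quoted source, with the scipy.special import again an assumption about where gamma and hyp2f1 come from:

from scipy.special import gamma, hyp2f1  # assumed origin of these names

def eval_gegenbauer(n, alpha, x, out=None):
    """Evaluate Gegenbauer polynomial at a point."""
    d = gamma(n + 2*alpha) / gamma(1 + n) / gamma(2*alpha)
    a = -n
    b = n + 2*alpha
    c = alpha + 0.5
    g = (1 - x) / 2.0
    return hyp2f1(a, b, c, g) * d

def eval_chebyu(n, x, out=None):
    """Evaluate Chebyshev U polynomial at a point."""
    d = n + 1
    a = -n
    b = n + 2
    c = 1.5
    g = (1 - x) / 2.0
    return hyp2f1(a, b, c, g) * d
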
__Pyx_DECREF(__pyx_v_d); + __Pyx_DECREF(__pyx_v_a); + __Pyx_DECREF(__pyx_v_b); + __Pyx_DECREF(__pyx_v_c); + __Pyx_DECREF(__pyx_v_g); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":141 + * return hyp2f1(a, b, c, g) * d + * + * def eval_chebys(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Chebyshev S polynomial at a point.""" + * return eval_chebyu(n, x/2, out=out) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebys(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebys[] = "Evaluate Chebyshev S polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebys(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_chebys"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_chebys", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 141; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_chebys") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 141; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_chebys", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 141; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebys"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":143 + * def eval_chebys(n, x, out=None): + * """Evaluate Chebyshev S polynomial at a point.""" + * return eval_chebyu(n, 
x/2, out=out) # <<<<<<<<<<<<<< + * + * def eval_chebyc(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_chebyu); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_PyNumber_Divide(__pyx_v_x, __pyx_int_2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_2 = 0; + __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + if (PyDict_SetItem(__pyx_t_2, ((PyObject *)__pyx_n_s__out), __pyx_v_out) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_3, ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 143; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + __pyx_r = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebys"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":145 + * return eval_chebyu(n, x/2, out=out) + * + * def eval_chebyc(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Chebyshev C polynomial at a point.""" + * return 2*eval_chebyt(n, x/2.0, out) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyc(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebyc[] = "Evaluate Chebyshev C polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyc(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_chebyc"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + 
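
eval_chebys, whose call sequence was generated just above, is only a change of variable on eval_chebyu; sketched in Python, using the eval_chebyu from the previous sketch:

def eval_chebys(n, x, out=None):
    """Evaluate Chebyshev S polynomial at a point: S_n(x) = U_n(x/2)."""
    return eval_chebyu(n, x / 2, out=out)
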
switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_chebyc", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 145; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_chebyc") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 145; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_chebyc", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 145; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebyc"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":147 + * def eval_chebyc(n, x, out=None): + * """Evaluate Chebyshev C polynomial at a point.""" + * return 2*eval_chebyt(n, x/2.0, out) # <<<<<<<<<<<<<< + * + * def eval_sh_chebyt(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_chebyt); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyFloat_FromDouble(2.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_v_x, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_out); + PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_out); + __Pyx_GIVEREF(__pyx_v_out); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 147; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_t_3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 147; __pyx_clineno = __LINE__; goto 
__pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_chebyc"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":149 + * return 2*eval_chebyt(n, x/2.0, out) + * + * def eval_sh_chebyt(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate shifted Chebyshev T polynomial at a point.""" + * return eval_chebyt(n, 2*x-1, out=out) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_chebyt(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_chebyt[] = "Evaluate shifted Chebyshev T polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_chebyt(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_sh_chebyt"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_sh_chebyt", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 149; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_sh_chebyt") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 149; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_sh_chebyt", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 149; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + 
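
The remaining wrappers generated in this hunk (eval_chebyc, eval_sh_chebyt, eval_sh_chebyu) are equally thin; their quoted .pyx bodies amount to the following, where eval_chebyt and eval_chebyu refer to the functions defined earlier in the same module (eval_chebyt ultimately calling the C ufunc _eval_chebyt):

def eval_chebyc(n, x, out=None):
    """Evaluate Chebyshev C polynomial at a point: C_n(x) = 2*T_n(x/2)."""
    return 2 * eval_chebyt(n, x / 2.0, out)

def eval_sh_chebyt(n, x, out=None):
    """Evaluate shifted Chebyshev T polynomial at a point."""
    return eval_chebyt(n, 2*x - 1, out=out)

def eval_sh_chebyu(n, x, out=None):
    """Evaluate shifted Chebyshev U polynomial at a point."""
    return eval_chebyu(n, 2*x - 1, out=out)
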
__pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_chebyt"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":151 + * def eval_sh_chebyt(n, x, out=None): + * """Evaluate shifted Chebyshev T polynomial at a point.""" + * return eval_chebyt(n, 2*x-1, out=out) # <<<<<<<<<<<<<< + * + * def eval_sh_chebyu(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_chebyt); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_v_x); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyNumber_Subtract(__pyx_t_2, __pyx_int_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__out), __pyx_v_out) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_2, ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 151; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + __pyx_r = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_chebyt"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":153 + * return eval_chebyt(n, 2*x-1, out=out) + * + * def eval_sh_chebyu(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate shifted Chebyshev U polynomial at a point.""" + * return eval_chebyu(n, 2*x-1, out=out) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_chebyu(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_chebyu[] = "Evaluate shifted Chebyshev U polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_chebyu(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject 
*__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_sh_chebyu"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_sh_chebyu", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 153; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_sh_chebyu") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 153; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_sh_chebyu", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 153; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_chebyu"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":155 + * def eval_sh_chebyu(n, x, out=None): + * """Evaluate shifted Chebyshev U polynomial at a point.""" + * return eval_chebyu(n, 2*x-1, out=out) # <<<<<<<<<<<<<< + * + * def eval_legendre(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_chebyu); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_v_x); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyNumber_Subtract(__pyx_t_2, __pyx_int_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 0, 
__pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__out), __pyx_v_out) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_2, ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 155; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + __pyx_r = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_chebyu"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":157 + * return eval_chebyu(n, 2*x-1, out=out) + * + * def eval_legendre(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Legendre polynomial at a point.""" + * d = 1 + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_legendre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_legendre[] = "Evaluate Legendre polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_legendre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_v_d; + PyObject *__pyx_v_a; + PyObject *__pyx_v_b; + PyObject *__pyx_v_c; + PyObject *__pyx_v_g; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_legendre"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_legendre", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 157; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if 
(unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_legendre") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 157; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_legendre", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 157; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_legendre"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_d = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_a = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_b = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_c = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_g = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":159 + * def eval_legendre(n, x, out=None): + * """Evaluate Legendre polynomial at a point.""" + * d = 1 # <<<<<<<<<<<<<< + * a = -n + * b = n+1 + */ + __Pyx_INCREF(__pyx_int_1); + __Pyx_DECREF(__pyx_v_d); + __pyx_v_d = __pyx_int_1; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":160 + * """Evaluate Legendre polynomial at a point.""" + * d = 1 + * a = -n # <<<<<<<<<<<<<< + * b = n+1 + * c = 1 + */ + __pyx_t_1 = PyNumber_Negative(__pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 160; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_a); + __pyx_v_a = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":161 + * d = 1 + * a = -n + * b = n+1 # <<<<<<<<<<<<<< + * c = 1 + * g = (1-x)/2.0 + */ + __pyx_t_1 = PyNumber_Add(__pyx_v_n, __pyx_int_1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 161; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_v_b); + __pyx_v_b = __pyx_t_1; + __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":162 + * a = -n + * b = n+1 + * c = 1 # <<<<<<<<<<<<<< + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d + */ + __Pyx_INCREF(__pyx_int_1); + __Pyx_DECREF(__pyx_v_c); + __pyx_v_c = __pyx_int_1; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":163 + * b = n+1 + * c = 1 + * g = (1-x)/2.0 # <<<<<<<<<<<<<< + * return hyp2f1(a, b, c, g) * d + * + */ + __pyx_t_1 = PyNumber_Subtract(__pyx_int_1, __pyx_v_x); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 163; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyFloat_FromDouble(2.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 163; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 163; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); 
__pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_g); + __pyx_v_g = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":164 + * c = 1 + * g = (1-x)/2.0 + * return hyp2f1(a, b, c, g) * d # <<<<<<<<<<<<<< + * + * def eval_sh_legendre(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__hyp2f1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 164; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 164; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_a); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_a); + __Pyx_GIVEREF(__pyx_v_a); + __Pyx_INCREF(__pyx_v_b); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_v_b); + __Pyx_GIVEREF(__pyx_v_b); + __Pyx_INCREF(__pyx_v_c); + PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_c); + __Pyx_GIVEREF(__pyx_v_c); + __Pyx_INCREF(__pyx_v_g); + PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_v_g); + __Pyx_GIVEREF(__pyx_v_g); + __pyx_t_1 = PyObject_Call(__pyx_t_3, __pyx_t_2, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 164; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_v_d); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 164; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_r = __pyx_t_2; + __pyx_t_2 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_legendre"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_d); + __Pyx_DECREF(__pyx_v_a); + __Pyx_DECREF(__pyx_v_b); + __Pyx_DECREF(__pyx_v_c); + __Pyx_DECREF(__pyx_v_g); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":166 + * return hyp2f1(a, b, c, g) * d + * + * def eval_sh_legendre(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate shifted Legendre polynomial at a point.""" + * return eval_legendre(n, 2*x-1, out=out) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_legendre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_legendre[] = "Evaluate shifted Legendre polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_legendre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_sh_legendre"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: 
values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_sh_legendre", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 166; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_sh_legendre") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 166; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_sh_legendre", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 166; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_legendre"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":168 + * def eval_sh_legendre(n, x, out=None): + * """Evaluate shifted Legendre polynomial at a point.""" + * return eval_legendre(n, 2*x-1, out=out) # <<<<<<<<<<<<<< + * + * def eval_genlaguerre(n, alpha, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_legendre); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 168; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_v_x); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 168; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyNumber_Subtract(__pyx_t_2, __pyx_int_1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 168; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 168; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyDict_New(); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 168; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_3)); + if (PyDict_SetItem(__pyx_t_3, ((PyObject *)__pyx_n_s__out), __pyx_v_out) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 168; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} + __pyx_t_4 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_2, ((PyObject *)__pyx_t_3)); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 168; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_3)); __pyx_t_3 = 0; + __pyx_r = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_sh_legendre"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":170 + * return eval_legendre(n, 2*x-1, out=out) + * + * def eval_genlaguerre(n, alpha, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate generalized Laguerre polynomial at a point.""" + * d = binom(n+alpha, n) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_genlaguerre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_genlaguerre[] = "Evaluate generalized Laguerre polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_genlaguerre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_alpha = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_v_d; + PyObject *__pyx_v_a; + PyObject *__pyx_v_b; + PyObject *__pyx_v_g; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__alpha,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_genlaguerre"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[4] = {0,0,0,0}; + values[3] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__alpha); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_genlaguerre", 0, 3, 4, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 170; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + values[2] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[2])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_genlaguerre", 0, 3, 4, 2); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 170; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 3: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[3] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, 
PyTuple_GET_SIZE(__pyx_args), "eval_genlaguerre") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 170; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_alpha = values[1]; + __pyx_v_x = values[2]; + __pyx_v_out = values[3]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 4: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 3); + case 3: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 2); + __pyx_v_alpha = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_genlaguerre", 0, 3, 4, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 170; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_genlaguerre"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __pyx_v_d = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_a = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_b = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_g = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":172 + * def eval_genlaguerre(n, alpha, x, out=None): + * """Evaluate generalized Laguerre polynomial at a point.""" + * d = binom(n+alpha, n) # <<<<<<<<<<<<<< + * a = -n + * b = alpha + 1 + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__binom); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 172; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyNumber_Add(__pyx_v_n, __pyx_v_alpha); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 172; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 172; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 172; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_d); + __pyx_v_d = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":173 + * """Evaluate generalized Laguerre polynomial at a point.""" + * d = binom(n+alpha, n) + * a = -n # <<<<<<<<<<<<<< + * b = alpha + 1 + * g = x + */ + __pyx_t_2 = PyNumber_Negative(__pyx_v_n); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 173; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_v_a); + __pyx_v_a = __pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":174 + * d = binom(n+alpha, n) + * a = -n + * b = alpha + 1 # <<<<<<<<<<<<<< + * g = x + * return hyp1f1(a, b, g) * d + */ + __pyx_t_2 = PyNumber_Add(__pyx_v_alpha, __pyx_int_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 174; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_v_b); + __pyx_v_b = 
__pyx_t_2; + __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":175 + * a = -n + * b = alpha + 1 + * g = x # <<<<<<<<<<<<<< + * return hyp1f1(a, b, g) * d + * + */ + __Pyx_INCREF(__pyx_v_x); + __Pyx_DECREF(__pyx_v_g); + __pyx_v_g = __pyx_v_x; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":176 + * b = alpha + 1 + * g = x + * return hyp1f1(a, b, g) * d # <<<<<<<<<<<<<< + * + * def eval_laguerre(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__hyp1f1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 176; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 176; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_a); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_a); + __Pyx_GIVEREF(__pyx_v_a); + __Pyx_INCREF(__pyx_v_b); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_b); + __Pyx_GIVEREF(__pyx_v_b); + __Pyx_INCREF(__pyx_v_g); + PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_g); + __Pyx_GIVEREF(__pyx_v_g); + __pyx_t_1 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 176; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_v_d); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 176; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_r = __pyx_t_3; + __pyx_t_3 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_genlaguerre"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_d); + __Pyx_DECREF(__pyx_v_a); + __Pyx_DECREF(__pyx_v_b); + __Pyx_DECREF(__pyx_v_g); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":178 + * return hyp1f1(a, b, g) * d + * + * def eval_laguerre(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Laguerre polynomial at a point.""" + * return eval_genlaguerre(n, 0., x, out=out) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_laguerre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_laguerre[] = "Evaluate Laguerre polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_laguerre(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_laguerre"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = 
PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_laguerre", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 178; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_laguerre") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 178; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_laguerre", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 178; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_laguerre"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":180 + * def eval_laguerre(n, x, out=None): + * """Evaluate Laguerre polynomial at a point.""" + * return eval_genlaguerre(n, 0., x, out=out) # <<<<<<<<<<<<<< + * + * def eval_hermite(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_genlaguerre); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyFloat_FromDouble(0.0); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_x); + PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_x); + __Pyx_GIVEREF(__pyx_v_x); + __pyx_t_2 = 0; + __pyx_t_2 = PyDict_New(); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + if (PyDict_SetItem(__pyx_t_2, ((PyObject *)__pyx_n_s__out), __pyx_v_out) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_4 = PyEval_CallObjectWithKeywords(__pyx_t_1, __pyx_t_3, ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_4)) 
{__pyx_filename = __pyx_f[0]; __pyx_lineno = 180; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + __pyx_r = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_laguerre"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":182 + * return eval_genlaguerre(n, 0., x, out=out) + * + * def eval_hermite(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate Hermite polynomial at a point.""" + * n, x = np.broadcast_arrays(n, x) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_hermite(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_hermite[] = "Evaluate Hermite polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_hermite(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_v_even; + PyObject *__pyx_v_m; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + int __pyx_t_5; + PyObject *__pyx_t_6 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_hermite"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_hermite", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_hermite") < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } 
+ } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_hermite", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 182; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_hermite"); + return NULL; + __pyx_L4_argument_unpacking_done:; + __Pyx_INCREF(__pyx_v_n); + __Pyx_INCREF(__pyx_v_x); + __Pyx_INCREF(__pyx_v_out); + __pyx_v_even = Py_None; __Pyx_INCREF(Py_None); + __pyx_v_m = Py_None; __Pyx_INCREF(Py_None); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":184 + * def eval_hermite(n, x, out=None): + * """Evaluate Hermite polynomial at a point.""" + * n, x = np.broadcast_arrays(n, x) # <<<<<<<<<<<<<< + * n, x = np.atleast_1d(n, x) + * + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__broadcast_arrays); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + __Pyx_INCREF(__pyx_v_x); + PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_v_x); + __Pyx_GIVEREF(__pyx_v_x); + __pyx_t_3 = PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + if (PyTuple_CheckExact(__pyx_t_3) && likely(PyTuple_GET_SIZE(__pyx_t_3) == 2)) { + PyObject* tuple = __pyx_t_3; + __pyx_t_1 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_1); + __pyx_t_2 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_n); + __pyx_v_n = __pyx_t_1; + __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_x); + __pyx_v_x = __pyx_t_2; + __pyx_t_2 = 0; + } else { + __pyx_t_4 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_1 = __Pyx_UnpackItem(__pyx_t_4, 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = __Pyx_UnpackItem(__pyx_t_4, 1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (__Pyx_EndUnpack(__pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 184; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_v_n); + __pyx_v_n = __pyx_t_1; + __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_x); + __pyx_v_x = __pyx_t_2; + __pyx_t_2 = 0; + } + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":185 + * """Evaluate Hermite polynomial at a point.""" + * n, x = np.broadcast_arrays(n, x) + * n, x = np.atleast_1d(n, x) 
# <<<<<<<<<<<<<< + * + * if out is None: + */ + __pyx_t_3 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__atleast_1d); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + __Pyx_INCREF(__pyx_v_x); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_v_x); + __Pyx_GIVEREF(__pyx_v_x); + __pyx_t_1 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyTuple_CheckExact(__pyx_t_1) && likely(PyTuple_GET_SIZE(__pyx_t_1) == 2)) { + PyObject* tuple = __pyx_t_1; + __pyx_t_3 = PyTuple_GET_ITEM(tuple, 0); __Pyx_INCREF(__pyx_t_3); + __pyx_t_2 = PyTuple_GET_ITEM(tuple, 1); __Pyx_INCREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_v_n); + __pyx_v_n = __pyx_t_3; + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_x); + __pyx_v_x = __pyx_t_2; + __pyx_t_2 = 0; + } else { + __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_3 = __Pyx_UnpackItem(__pyx_t_4, 0); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_UnpackItem(__pyx_t_4, 1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (__Pyx_EndUnpack(__pyx_t_4) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 185; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_v_n); + __pyx_v_n = __pyx_t_3; + __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_x); + __pyx_v_x = __pyx_t_2; + __pyx_t_2 = 0; + } + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":187 + * n, x = np.atleast_1d(n, x) + * + * if out is None: # <<<<<<<<<<<<<< + * out = np.zeros_like(0*n + 0*x) + * if (n % 1 != 0).any(): + */ + __pyx_t_5 = (__pyx_v_out == Py_None); + if (__pyx_t_5) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":188 + * + * if out is None: + * out = np.zeros_like(0*n + 0*x) # <<<<<<<<<<<<<< + * if (n % 1 != 0).any(): + * raise ValueError("Order must be integer") + */ + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__zeros_like); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = 
PyNumber_Multiply(__pyx_int_0, __pyx_v_n); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyNumber_Multiply(__pyx_int_0, __pyx_v_x); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = PyNumber_Add(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __pyx_t_4 = 0; + __pyx_t_4 = PyObject_Call(__pyx_t_2, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 188; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_out); + __pyx_v_out = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L6; + } + __pyx_L6:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":189 + * if out is None: + * out = np.zeros_like(0*n + 0*x) + * if (n % 1 != 0).any(): # <<<<<<<<<<<<<< + * raise ValueError("Order must be integer") + * + */ + __pyx_t_4 = PyNumber_Remainder(__pyx_v_n, __pyx_int_1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_t_4, __pyx_int_0, Py_NE); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyObject_GetAttr(__pyx_t_3, __pyx_n_s__any); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_4, ((PyObject *)__pyx_empty_tuple), NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_5 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_5 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 189; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (__pyx_t_5) { + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":190 + * out = np.zeros_like(0*n + 0*x) + * if (n % 1 != 0).any(): + * raise ValueError("Order must be integer") # <<<<<<<<<<<<<< + * + * even = (n % 2 == 0) + */ + __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(((PyObject *)__pyx_kp_s_1)); + PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_kp_s_1)); + __Pyx_GIVEREF(((PyObject *)__pyx_kp_s_1)); + __pyx_t_4 = PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 190; __pyx_clineno = __LINE__; 
goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_Raise(__pyx_t_4, 0, 0); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + {__pyx_filename = __pyx_f[0]; __pyx_lineno = 190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + goto __pyx_L7; + } + __pyx_L7:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":192 + * raise ValueError("Order must be integer") + * + * even = (n % 2 == 0) # <<<<<<<<<<<<<< + * + * m = n[even]/2 + */ + __pyx_t_4 = PyNumber_Remainder(__pyx_v_n, __pyx_int_2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 192; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyObject_RichCompare(__pyx_t_4, __pyx_int_0, Py_EQ); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 192; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_v_even); + __pyx_v_even = __pyx_t_3; + __pyx_t_3 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":194 + * even = (n % 2 == 0) + * + * m = n[even]/2 # <<<<<<<<<<<<<< + * out[even] = ((-1)**m * 2**(2*m) * gamma(1+m) + * * eval_genlaguerre(m, -0.5, x[even]**2)) + */ + __pyx_t_3 = PyObject_GetItem(__pyx_v_n, __pyx_v_even); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_4 = __Pyx_PyNumber_Divide(__pyx_t_3, __pyx_int_2); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 194; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_v_m); + __pyx_v_m = __pyx_t_4; + __pyx_t_4 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":195 + * + * m = n[even]/2 + * out[even] = ((-1)**m * 2**(2*m) * gamma(1+m) # <<<<<<<<<<<<<< + * * eval_genlaguerre(m, -0.5, x[even]**2)) + * + */ + __pyx_t_4 = PyNumber_Power(__pyx_int_neg_1, __pyx_v_m, Py_None); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyNumber_Multiply(__pyx_int_2, __pyx_v_m); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyNumber_Power(__pyx_int_2, __pyx_t_3, Py_None); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Multiply(__pyx_t_4, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__gamma); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_4 = PyNumber_Add(__pyx_int_1, __pyx_v_m); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + 
__Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + __pyx_t_4 = 0; + __pyx_t_4 = PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyNumber_Multiply(__pyx_t_3, __pyx_t_4); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":196 + * m = n[even]/2 + * out[even] = ((-1)**m * 2**(2*m) * gamma(1+m) + * * eval_genlaguerre(m, -0.5, x[even]**2)) # <<<<<<<<<<<<<< + * + * m = (n[~even]-1)/2 + */ + __pyx_t_4 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_genlaguerre); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_3 = PyFloat_FromDouble((-0.5)); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = PyObject_GetItem(__pyx_v_x, __pyx_v_even); if (!__pyx_t_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_6 = PyNumber_Power(__pyx_t_2, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_m); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_m); + __Pyx_GIVEREF(__pyx_v_m); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_6); + __Pyx_GIVEREF(__pyx_t_6); + __pyx_t_3 = 0; + __pyx_t_6 = 0; + __pyx_t_6 = PyObject_Call(__pyx_t_4, __pyx_t_2, NULL); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Multiply(__pyx_t_1, __pyx_t_6); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 196; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":195 + * + * m = n[even]/2 + * out[even] = ((-1)**m * 2**(2*m) * gamma(1+m) # <<<<<<<<<<<<<< + * * eval_genlaguerre(m, -0.5, x[even]**2)) + * + */ + if (PyObject_SetItem(__pyx_v_out, __pyx_v_even, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 195; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":198 + * * eval_genlaguerre(m, -0.5, x[even]**2)) + * + * m = (n[~even]-1)/2 # <<<<<<<<<<<<<< + * out[~even] = ((-1)**m * 2**(2*m+1) * gamma(1+m) + * * x[~even] * eval_genlaguerre(m, 0.5, x[~even]**2)) + */ + __pyx_t_2 = 
PyNumber_Invert(__pyx_v_even); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_6 = PyObject_GetItem(__pyx_v_n, __pyx_t_2); if (!__pyx_t_6) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Subtract(__pyx_t_6, __pyx_int_1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_int_2); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 198; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_v_m); + __pyx_v_m = __pyx_t_6; + __pyx_t_6 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":199 + * + * m = (n[~even]-1)/2 + * out[~even] = ((-1)**m * 2**(2*m+1) * gamma(1+m) # <<<<<<<<<<<<<< + * * x[~even] * eval_genlaguerre(m, 0.5, x[~even]**2)) + * + */ + __pyx_t_6 = PyNumber_Power(__pyx_int_neg_1, __pyx_v_m, Py_None); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_v_m); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = PyNumber_Add(__pyx_t_2, __pyx_int_1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Power(__pyx_int_2, __pyx_t_1, Py_None); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyNumber_Multiply(__pyx_t_6, __pyx_t_2); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__gamma); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_6 = PyNumber_Add(__pyx_int_1, __pyx_v_m); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_6); + __Pyx_GIVEREF(__pyx_t_6); + __pyx_t_6 = 0; + __pyx_t_6 = PyObject_Call(__pyx_t_2, __pyx_t_4, NULL); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyNumber_Multiply(__pyx_t_1, __pyx_t_6); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":200 + * m = (n[~even]-1)/2 + * out[~even] = ((-1)**m * 2**(2*m+1) * gamma(1+m) + * * x[~even] * eval_genlaguerre(m, 0.5, x[~even]**2)) # <<<<<<<<<<<<<< + * + * return out + */ + __pyx_t_6 = PyNumber_Invert(__pyx_v_even); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __pyx_t_1 = PyObject_GetItem(__pyx_v_x, __pyx_t_6); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __pyx_t_6 = PyNumber_Multiply(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_6)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_6); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_genlaguerre); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = PyFloat_FromDouble(0.5); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __pyx_t_2 = PyNumber_Invert(__pyx_v_even); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = PyObject_GetItem(__pyx_v_x, __pyx_t_2); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Power(__pyx_t_3, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_INCREF(__pyx_v_m); + PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_m); + __Pyx_GIVEREF(__pyx_v_m); + PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_4); + __Pyx_GIVEREF(__pyx_t_4); + PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); + __pyx_t_4 = 0; + __pyx_t_2 = 0; + __pyx_t_2 = PyObject_Call(__pyx_t_1, __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyNumber_Multiply(__pyx_t_6, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 200; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":199 + * + * m = (n[~even]-1)/2 + * out[~even] = ((-1)**m * 2**(2*m+1) * gamma(1+m) # <<<<<<<<<<<<<< + * * x[~even] * eval_genlaguerre(m, 0.5, x[~even]**2)) + * + */ + __pyx_t_2 = PyNumber_Invert(__pyx_v_even); if 
(unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (PyObject_SetItem(__pyx_v_out, __pyx_t_2, __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 199; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":202 + * * x[~even] * eval_genlaguerre(m, 0.5, x[~even]**2)) + * + * return out # <<<<<<<<<<<<<< + * + * def eval_hermitenorm(n, x, out=None): + */ + __Pyx_XDECREF(__pyx_r); + __Pyx_INCREF(__pyx_v_out); + __pyx_r = __pyx_v_out; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_XDECREF(__pyx_t_6); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_hermite"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_DECREF(__pyx_v_even); + __Pyx_DECREF(__pyx_v_m); + __Pyx_DECREF(__pyx_v_n); + __Pyx_DECREF(__pyx_v_x); + __Pyx_DECREF(__pyx_v_out); + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +/* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":204 + * return out + * + * def eval_hermitenorm(n, x, out=None): # <<<<<<<<<<<<<< + * """Evaluate normalized Hermite polynomial at a point.""" + * return eval_hermite(n, x/sqrt(2)) * 2**(-n/2.0) + */ + +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_hermitenorm(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ +static char __pyx_doc_5scipy_7special_15orthogonal_eval_eval_hermitenorm[] = "Evaluate normalized Hermite polynomial at a point."; +static PyObject *__pyx_pf_5scipy_7special_15orthogonal_eval_eval_hermitenorm(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { + PyObject *__pyx_v_n = 0; + PyObject *__pyx_v_x = 0; + PyObject *__pyx_v_out = 0; + PyObject *__pyx_r = NULL; + PyObject *__pyx_t_1 = NULL; + PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + PyObject *__pyx_t_4 = NULL; + static PyObject **__pyx_pyargnames[] = {&__pyx_n_s__n,&__pyx_n_s__x,&__pyx_n_s__out,0}; + __Pyx_RefNannySetupContext("eval_hermitenorm"); + __pyx_self = __pyx_self; + if (unlikely(__pyx_kwds)) { + Py_ssize_t kw_args = PyDict_Size(__pyx_kwds); + PyObject* values[3] = {0,0,0}; + values[2] = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); + case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); + case 0: break; + default: goto __pyx_L5_argtuple_error; + } + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 0: + values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__n); + if (likely(values[0])) kw_args--; + else goto __pyx_L5_argtuple_error; + case 1: + values[1] = PyDict_GetItem(__pyx_kwds, __pyx_n_s__x); + if (likely(values[1])) kw_args--; + else { + __Pyx_RaiseArgtupleInvalid("eval_hermitenorm", 0, 2, 3, 1); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 204; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + case 2: + if (kw_args > 0) { + PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s__out); + if (unlikely(value)) { values[2] = value; kw_args--; } + } + } + if (unlikely(kw_args > 0)) { + if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, PyTuple_GET_SIZE(__pyx_args), "eval_hermitenorm") < 
0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 204; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + } + __pyx_v_n = values[0]; + __pyx_v_x = values[1]; + __pyx_v_out = values[2]; + } else { + __pyx_v_out = ((PyObject *)Py_None); + switch (PyTuple_GET_SIZE(__pyx_args)) { + case 3: + __pyx_v_out = PyTuple_GET_ITEM(__pyx_args, 2); + case 2: + __pyx_v_x = PyTuple_GET_ITEM(__pyx_args, 1); + __pyx_v_n = PyTuple_GET_ITEM(__pyx_args, 0); + break; + default: goto __pyx_L5_argtuple_error; + } + } + goto __pyx_L4_argument_unpacking_done; + __pyx_L5_argtuple_error:; + __Pyx_RaiseArgtupleInvalid("eval_hermitenorm", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); {__pyx_filename = __pyx_f[0]; __pyx_lineno = 204; __pyx_clineno = __LINE__; goto __pyx_L3_error;} + __pyx_L3_error:; + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_hermitenorm"); + return NULL; + __pyx_L4_argument_unpacking_done:; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":206 + * def eval_hermitenorm(n, x, out=None): + * """Evaluate normalized Hermite polynomial at a point.""" + * return eval_hermite(n, x/sqrt(2)) * 2**(-n/2.0) # <<<<<<<<<<<<<< + */ + __Pyx_XDECREF(__pyx_r); + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__eval_hermite); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_2 = PyFloat_FromDouble(sqrt(2)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_PyNumber_Divide(__pyx_v_x, __pyx_t_2); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_INCREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_n); + __Pyx_GIVEREF(__pyx_v_n); + PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3); + __Pyx_GIVEREF(__pyx_t_3); + __pyx_t_3 = 0; + __pyx_t_3 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyNumber_Negative(__pyx_v_n); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_1 = PyFloat_FromDouble(2.0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_4 = __Pyx_PyNumber_Divide(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyNumber_Power(__pyx_int_2, __pyx_t_4, Py_None); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; + __pyx_t_4 = PyNumber_Multiply(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_4)) {__pyx_filename = __pyx_f[0]; __pyx_lineno 
= 206; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_4); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_r = __pyx_t_4; + __pyx_t_4 = 0; + goto __pyx_L0; + + __pyx_r = Py_None; __Pyx_INCREF(Py_None); + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + __Pyx_XDECREF(__pyx_t_4); + __Pyx_AddTraceback("scipy.special.orthogonal_eval.eval_hermitenorm"); + __pyx_r = NULL; + __pyx_L0:; + __Pyx_XGIVEREF(__pyx_r); + __Pyx_RefNannyFinishContext(); + return __pyx_r; +} + +static struct PyMethodDef __pyx_methods[] = { + {__Pyx_NAMESTR("binom"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_binom, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_binom)}, + {__Pyx_NAMESTR("eval_jacobi"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_jacobi, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_jacobi)}, + {__Pyx_NAMESTR("eval_sh_jacobi"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_jacobi, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_jacobi)}, + {__Pyx_NAMESTR("eval_gegenbauer"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_gegenbauer, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_gegenbauer)}, + {__Pyx_NAMESTR("eval_chebyt"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyt, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebyt)}, + {__Pyx_NAMESTR("eval_chebyu"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyu, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebyu)}, + {__Pyx_NAMESTR("eval_chebys"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebys, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebys)}, + {__Pyx_NAMESTR("eval_chebyc"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_chebyc, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_chebyc)}, + {__Pyx_NAMESTR("eval_sh_chebyt"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_chebyt, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_chebyt)}, + {__Pyx_NAMESTR("eval_sh_chebyu"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_chebyu, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_chebyu)}, + {__Pyx_NAMESTR("eval_legendre"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_legendre, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_legendre)}, + {__Pyx_NAMESTR("eval_sh_legendre"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_sh_legendre, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_sh_legendre)}, + {__Pyx_NAMESTR("eval_genlaguerre"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_genlaguerre, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_genlaguerre)}, + {__Pyx_NAMESTR("eval_laguerre"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_laguerre, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_laguerre)}, + 
{__Pyx_NAMESTR("eval_hermite"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_hermite, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_hermite)}, + {__Pyx_NAMESTR("eval_hermitenorm"), (PyCFunction)__pyx_pf_5scipy_7special_15orthogonal_eval_eval_hermitenorm, METH_VARARGS|METH_KEYWORDS, __Pyx_DOCSTR(__pyx_doc_5scipy_7special_15orthogonal_eval_eval_hermitenorm)}, + {0, 0, 0, 0} +}; + +static void __pyx_init_filenames(void); /*proto*/ + +#if PY_MAJOR_VERSION >= 3 +static struct PyModuleDef __pyx_moduledef = { + PyModuleDef_HEAD_INIT, + __Pyx_NAMESTR("orthogonal_eval"), + __Pyx_DOCSTR(__pyx_k_2), /* m_doc */ + -1, /* m_size */ + __pyx_methods /* m_methods */, + NULL, /* m_reload */ + NULL, /* m_traverse */ + NULL, /* m_clear */ + NULL /* m_free */ +}; +#endif + +static __Pyx_StringTabEntry __pyx_string_tab[] = { + {&__pyx_kp_s_1, __pyx_k_1, sizeof(__pyx_k_1), 0, 0, 1, 0}, + {&__pyx_kp_u_10, __pyx_k_10, sizeof(__pyx_k_10), 0, 1, 0, 0}, + {&__pyx_kp_u_11, __pyx_k_11, sizeof(__pyx_k_11), 0, 1, 0, 0}, + {&__pyx_kp_u_12, __pyx_k_12, sizeof(__pyx_k_12), 0, 1, 0, 0}, + {&__pyx_kp_u_13, __pyx_k_13, sizeof(__pyx_k_13), 0, 1, 0, 0}, + {&__pyx_kp_u_14, __pyx_k_14, sizeof(__pyx_k_14), 0, 1, 0, 0}, + {&__pyx_kp_u_15, __pyx_k_15, sizeof(__pyx_k_15), 0, 1, 0, 0}, + {&__pyx_kp_u_16, __pyx_k_16, sizeof(__pyx_k_16), 0, 1, 0, 0}, + {&__pyx_kp_u_17, __pyx_k_17, sizeof(__pyx_k_17), 0, 1, 0, 0}, + {&__pyx_kp_u_18, __pyx_k_18, sizeof(__pyx_k_18), 0, 1, 0, 0}, + {&__pyx_kp_u_19, __pyx_k_19, sizeof(__pyx_k_19), 0, 1, 0, 0}, + {&__pyx_kp_u_20, __pyx_k_20, sizeof(__pyx_k_20), 0, 1, 0, 0}, + {&__pyx_n_s_4, __pyx_k_4, sizeof(__pyx_k_4), 0, 0, 1, 1}, + {&__pyx_kp_u_5, __pyx_k_5, sizeof(__pyx_k_5), 0, 1, 0, 0}, + {&__pyx_kp_u_6, __pyx_k_6, sizeof(__pyx_k_6), 0, 1, 0, 0}, + {&__pyx_kp_u_7, __pyx_k_7, sizeof(__pyx_k_7), 0, 1, 0, 0}, + {&__pyx_kp_u_8, __pyx_k_8, sizeof(__pyx_k_8), 0, 1, 0, 0}, + {&__pyx_kp_u_9, __pyx_k_9, sizeof(__pyx_k_9), 0, 1, 0, 0}, + {&__pyx_n_s__ValueError, __pyx_k__ValueError, sizeof(__pyx_k__ValueError), 0, 0, 1, 1}, + {&__pyx_n_s____main__, __pyx_k____main__, sizeof(__pyx_k____main__), 0, 0, 1, 1}, + {&__pyx_n_s____test__, __pyx_k____test__, sizeof(__pyx_k____test__), 0, 0, 1, 1}, + {&__pyx_n_s___eval_chebyt, __pyx_k___eval_chebyt, sizeof(__pyx_k___eval_chebyt), 0, 0, 1, 1}, + {&__pyx_n_s__alpha, __pyx_k__alpha, sizeof(__pyx_k__alpha), 0, 0, 1, 1}, + {&__pyx_n_s__any, __pyx_k__any, sizeof(__pyx_k__any), 0, 0, 1, 1}, + {&__pyx_n_s__atleast_1d, __pyx_k__atleast_1d, sizeof(__pyx_k__atleast_1d), 0, 0, 1, 1}, + {&__pyx_n_s__beta, __pyx_k__beta, sizeof(__pyx_k__beta), 0, 0, 1, 1}, + {&__pyx_n_s__binom, __pyx_k__binom, sizeof(__pyx_k__binom), 0, 0, 1, 1}, + {&__pyx_n_s__broadcast_arrays, __pyx_k__broadcast_arrays, sizeof(__pyx_k__broadcast_arrays), 0, 0, 1, 1}, + {&__pyx_n_s__eval_chebyc, __pyx_k__eval_chebyc, sizeof(__pyx_k__eval_chebyc), 0, 0, 1, 1}, + {&__pyx_n_s__eval_chebys, __pyx_k__eval_chebys, sizeof(__pyx_k__eval_chebys), 0, 0, 1, 1}, + {&__pyx_n_s__eval_chebyt, __pyx_k__eval_chebyt, sizeof(__pyx_k__eval_chebyt), 0, 0, 1, 1}, + {&__pyx_n_s__eval_chebyu, __pyx_k__eval_chebyu, sizeof(__pyx_k__eval_chebyu), 0, 0, 1, 1}, + {&__pyx_n_s__eval_gegenbauer, __pyx_k__eval_gegenbauer, sizeof(__pyx_k__eval_gegenbauer), 0, 0, 1, 1}, + {&__pyx_n_s__eval_genlaguerre, __pyx_k__eval_genlaguerre, sizeof(__pyx_k__eval_genlaguerre), 0, 0, 1, 1}, + {&__pyx_n_s__eval_hermite, __pyx_k__eval_hermite, sizeof(__pyx_k__eval_hermite), 0, 0, 1, 1}, + 
{&__pyx_n_s__eval_hermitenorm, __pyx_k__eval_hermitenorm, sizeof(__pyx_k__eval_hermitenorm), 0, 0, 1, 1}, + {&__pyx_n_s__eval_jacobi, __pyx_k__eval_jacobi, sizeof(__pyx_k__eval_jacobi), 0, 0, 1, 1}, + {&__pyx_n_s__eval_laguerre, __pyx_k__eval_laguerre, sizeof(__pyx_k__eval_laguerre), 0, 0, 1, 1}, + {&__pyx_n_s__eval_legendre, __pyx_k__eval_legendre, sizeof(__pyx_k__eval_legendre), 0, 0, 1, 1}, + {&__pyx_n_s__eval_sh_chebyt, __pyx_k__eval_sh_chebyt, sizeof(__pyx_k__eval_sh_chebyt), 0, 0, 1, 1}, + {&__pyx_n_s__eval_sh_chebyu, __pyx_k__eval_sh_chebyu, sizeof(__pyx_k__eval_sh_chebyu), 0, 0, 1, 1}, + {&__pyx_n_s__eval_sh_jacobi, __pyx_k__eval_sh_jacobi, sizeof(__pyx_k__eval_sh_jacobi), 0, 0, 1, 1}, + {&__pyx_n_s__eval_sh_legendre, __pyx_k__eval_sh_legendre, sizeof(__pyx_k__eval_sh_legendre), 0, 0, 1, 1}, + {&__pyx_n_s__exp, __pyx_k__exp, sizeof(__pyx_k__exp), 0, 0, 1, 1}, + {&__pyx_n_s__gamma, __pyx_k__gamma, sizeof(__pyx_k__gamma), 0, 0, 1, 1}, + {&__pyx_n_s__gammaln, __pyx_k__gammaln, sizeof(__pyx_k__gammaln), 0, 0, 1, 1}, + {&__pyx_n_s__hyp1f1, __pyx_k__hyp1f1, sizeof(__pyx_k__hyp1f1), 0, 0, 1, 1}, + {&__pyx_n_s__hyp2f1, __pyx_k__hyp2f1, sizeof(__pyx_k__hyp2f1), 0, 0, 1, 1}, + {&__pyx_n_s__k, __pyx_k__k, sizeof(__pyx_k__k), 0, 0, 1, 1}, + {&__pyx_n_s__n, __pyx_k__n, sizeof(__pyx_k__n), 0, 0, 1, 1}, + {&__pyx_n_s__np, __pyx_k__np, sizeof(__pyx_k__np), 0, 0, 1, 1}, + {&__pyx_n_s__numpy, __pyx_k__numpy, sizeof(__pyx_k__numpy), 0, 0, 1, 1}, + {&__pyx_n_s__out, __pyx_k__out, sizeof(__pyx_k__out), 0, 0, 1, 1}, + {&__pyx_n_s__p, __pyx_k__p, sizeof(__pyx_k__p), 0, 0, 1, 1}, + {&__pyx_n_s__q, __pyx_k__q, sizeof(__pyx_k__q), 0, 0, 1, 1}, + {&__pyx_n_s__range, __pyx_k__range, sizeof(__pyx_k__range), 0, 0, 1, 1}, + {&__pyx_n_s__x, __pyx_k__x, sizeof(__pyx_k__x), 0, 0, 1, 1}, + {&__pyx_n_s__zeros_like, __pyx_k__zeros_like, sizeof(__pyx_k__zeros_like), 0, 0, 1, 1}, + {0, 0, 0, 0, 0, 0, 0} +}; +static int __Pyx_InitCachedBuiltins(void) { + __pyx_builtin_range = __Pyx_GetName(__pyx_b, __pyx_n_s__range); if (!__pyx_builtin_range) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 33; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_builtin_ValueError = __Pyx_GetName(__pyx_b, __pyx_n_s__ValueError); if (!__pyx_builtin_ValueError) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 190; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + return 0; + __pyx_L1_error:; + return -1; +} + +static int __Pyx_InitGlobals(void) { + if (__Pyx_InitStrings(__pyx_string_tab) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + return 0; + __pyx_L1_error:; + return -1; +} + +#if PY_MAJOR_VERSION < 3 +PyMODINIT_FUNC initorthogonal_eval(void); /*proto*/ +PyMODINIT_FUNC initorthogonal_eval(void) +#else +PyMODINIT_FUNC PyInit_orthogonal_eval(void); /*proto*/ +PyMODINIT_FUNC PyInit_orthogonal_eval(void) +#endif +{ + PyObject *__pyx_t_1 = NULL; + 
PyObject *__pyx_t_2 = NULL; + PyObject *__pyx_t_3 = NULL; + #if CYTHON_REFNANNY + void* __pyx_refnanny = NULL; + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); + if (!__Pyx_RefNanny) { + PyErr_Clear(); + __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); + if (!__Pyx_RefNanny) + Py_FatalError("failed to import 'refnanny' module"); + } + __pyx_refnanny = __Pyx_RefNanny->SetupContext("PyMODINIT_FUNC PyInit_orthogonal_eval(void)", __LINE__, __FILE__); + #endif + __pyx_init_filenames(); + __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #if PY_MAJOR_VERSION < 3 + __pyx_empty_bytes = PyString_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #else + __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + #endif + /*--- Library function declarations ---*/ + /*--- Threads initialization code ---*/ + #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS + #ifdef WITH_THREAD /* Python build with threading support? */ + PyEval_InitThreads(); + #endif + #endif + /*--- Module creation code ---*/ + #if PY_MAJOR_VERSION < 3 + __pyx_m = Py_InitModule4(__Pyx_NAMESTR("orthogonal_eval"), __pyx_methods, __Pyx_DOCSTR(__pyx_k_2), 0, PYTHON_API_VERSION); + #else + __pyx_m = PyModule_Create(&__pyx_moduledef); + #endif + if (!__pyx_m) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + #if PY_MAJOR_VERSION < 3 + Py_INCREF(__pyx_m); + #endif + __pyx_b = PyImport_AddModule(__Pyx_NAMESTR(__Pyx_BUILTIN_MODULE_NAME)); + if (!__pyx_b) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + if (__Pyx_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + /*--- Initialize various global constants etc. 
---*/ + if (unlikely(__Pyx_InitGlobals() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (__pyx_module_is_main_scipy__special__orthogonal_eval) { + if (__Pyx_SetAttrString(__pyx_m, "__name__", __pyx_n_s____main__) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;}; + } + /*--- Builtin init code ---*/ + if (unlikely(__Pyx_InitCachedBuiltins() < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + /*--- Global init code ---*/ + /*--- Function export code ---*/ + /*--- Type init code ---*/ + /*--- Type import code ---*/ + /*--- Function import code ---*/ + /*--- Execution code ---*/ + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":71 + * cdef PyUFuncGenericFunction _id_d_funcs[1] + * + * _id_d_types[0] = NPY_LONG # <<<<<<<<<<<<<< + * _id_d_types[1] = NPY_DOUBLE + * _id_d_types[2] = NPY_DOUBLE + */ + (__pyx_v_5scipy_7special_15orthogonal_eval__id_d_types[0]) = NPY_LONG; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":72 + * + * _id_d_types[0] = NPY_LONG + * _id_d_types[1] = NPY_DOUBLE # <<<<<<<<<<<<<< + * _id_d_types[2] = NPY_DOUBLE + * + */ + (__pyx_v_5scipy_7special_15orthogonal_eval__id_d_types[1]) = NPY_DOUBLE; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":73 + * _id_d_types[0] = NPY_LONG + * _id_d_types[1] = NPY_DOUBLE + * _id_d_types[2] = NPY_DOUBLE # <<<<<<<<<<<<<< + * + * _id_d_funcs[0] = _loop_id_d + */ + (__pyx_v_5scipy_7special_15orthogonal_eval__id_d_types[2]) = NPY_DOUBLE; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":75 + * _id_d_types[2] = NPY_DOUBLE + * + * _id_d_funcs[0] = _loop_id_d # <<<<<<<<<<<<<< + * + * import_array() + */ + (__pyx_v_5scipy_7special_15orthogonal_eval__id_d_funcs[0]) = __pyx_f_5scipy_7special_15orthogonal_eval__loop_id_d; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":77 + * _id_d_funcs[0] = _loop_id_d + * + * import_array() # <<<<<<<<<<<<<< + * import_ufunc() + * + */ + import_array(); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":78 + * + * import_array() + * import_ufunc() # <<<<<<<<<<<<<< + * + * #-- + */ + import_ufunc(); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":83 + * + * cdef void *chebyt_data[1] + * chebyt_data[0] = eval_poly_chebyt # <<<<<<<<<<<<<< + * _eval_chebyt = PyUFunc_FromFuncAndData(_id_d_funcs, chebyt_data, + * _id_d_types, 1, 2, 1, 0, "", "", 0) + */ + (__pyx_v_5scipy_7special_15orthogonal_eval_chebyt_data[0]) = ((void *)__pyx_f_5scipy_7special_15orthogonal_eval_eval_poly_chebyt); + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":85 + * chebyt_data[0] = eval_poly_chebyt + * _eval_chebyt = PyUFunc_FromFuncAndData(_id_d_funcs, chebyt_data, + * _id_d_types, 1, 2, 1, 0, "", "", 0) # <<<<<<<<<<<<<< + * + * + */ + __pyx_t_1 = PyUFunc_FromFuncAndData(__pyx_v_5scipy_7special_15orthogonal_eval__id_d_funcs, __pyx_v_5scipy_7special_15orthogonal_eval_chebyt_data, __pyx_v_5scipy_7special_15orthogonal_eval__id_d_types, 1, 2, 1, 0, __pyx_k_3, __pyx_k_3, 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s___eval_chebyt, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 84; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); 
__pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":92 + * #------------------------------------------------------------------------------ + * + * import numpy as np # <<<<<<<<<<<<<< + * from scipy.special._cephes import gamma, hyp2f1, hyp1f1, gammaln + * from numpy import exp + */ + __pyx_t_1 = __Pyx_Import(((PyObject *)__pyx_n_s__numpy), 0); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 92; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__np, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 92; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":93 + * + * import numpy as np + * from scipy.special._cephes import gamma, hyp2f1, hyp1f1, gammaln # <<<<<<<<<<<<<< + * from numpy import exp + * + */ + __pyx_t_1 = PyList_New(4); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__gamma)); + PyList_SET_ITEM(__pyx_t_1, 0, ((PyObject *)__pyx_n_s__gamma)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__gamma)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__hyp2f1)); + PyList_SET_ITEM(__pyx_t_1, 1, ((PyObject *)__pyx_n_s__hyp2f1)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__hyp2f1)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__hyp1f1)); + PyList_SET_ITEM(__pyx_t_1, 2, ((PyObject *)__pyx_n_s__hyp1f1)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__hyp1f1)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__gammaln)); + PyList_SET_ITEM(__pyx_t_1, 3, ((PyObject *)__pyx_n_s__gammaln)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__gammaln)); + __pyx_t_2 = __Pyx_Import(((PyObject *)__pyx_n_s_4), ((PyObject *)__pyx_t_1)); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__gamma); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__gamma, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__hyp2f1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__hyp2f1, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__hyp1f1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__hyp1f1, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__gammaln); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + 
__Pyx_GOTREF(__pyx_t_1); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__gammaln, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 93; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":94 + * import numpy as np + * from scipy.special._cephes import gamma, hyp2f1, hyp1f1, gammaln + * from numpy import exp # <<<<<<<<<<<<<< + * + * def binom(n, k): + */ + __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 94; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_2)); + __Pyx_INCREF(((PyObject *)__pyx_n_s__exp)); + PyList_SET_ITEM(__pyx_t_2, 0, ((PyObject *)__pyx_n_s__exp)); + __Pyx_GIVEREF(((PyObject *)__pyx_n_s__exp)); + __pyx_t_1 = __Pyx_Import(((PyObject *)__pyx_n_s__numpy), ((PyObject *)__pyx_t_2)); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 94; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__exp); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 94; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + if (PyObject_SetAttr(__pyx_m, __pyx_n_s__exp, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 94; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + + /* "/home/pauli/wrk/scipy/scipy/scipy/special/orthogonal_eval.pyx":1 + * """ # <<<<<<<<<<<<<< + * Evaluate orthogonal polynomial values using recurrence relations. + * + */ + __pyx_t_1 = PyDict_New(); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(((PyObject *)__pyx_t_1)); + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__binom); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_5), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_jacobi); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_6), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_sh_jacobi); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_7), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; 
__pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_gegenbauer); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_8), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_chebyt); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_9), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_chebyu); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_10), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_chebys); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_11), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_chebyc); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_12), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_sh_chebyt); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_13), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_sh_chebyu); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; 
__pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_14), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_legendre); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_15), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_sh_legendre); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_16), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_genlaguerre); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_17), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_laguerre); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_18), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __pyx_t_2 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_hermite); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = __Pyx_GetAttrString(__pyx_t_2, "__doc__"); + __Pyx_GOTREF(__pyx_t_3); + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_19), __pyx_t_3) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; + __pyx_t_3 = PyObject_GetAttr(__pyx_m, __pyx_n_s__eval_hermitenorm); if (unlikely(!__pyx_t_3)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_3); + __pyx_t_2 = __Pyx_GetAttrString(__pyx_t_3, "__doc__"); + __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_3); 
__pyx_t_3 = 0; + if (PyDict_SetItem(__pyx_t_1, ((PyObject *)__pyx_kp_u_20), __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + if (PyObject_SetAttr(__pyx_m, __pyx_n_s____test__, ((PyObject *)__pyx_t_1)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0; + goto __pyx_L0; + __pyx_L1_error:; + __Pyx_XDECREF(__pyx_t_1); + __Pyx_XDECREF(__pyx_t_2); + __Pyx_XDECREF(__pyx_t_3); + if (__pyx_m) { + __Pyx_AddTraceback("init scipy.special.orthogonal_eval"); + Py_DECREF(__pyx_m); __pyx_m = 0; + } else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_ImportError, "init scipy.special.orthogonal_eval"); + } + __pyx_L0:; + __Pyx_RefNannyFinishContext(); + #if PY_MAJOR_VERSION < 3 + return; + #else + return __pyx_m; + #endif +} + +static const char *__pyx_filenames[] = { + "orthogonal_eval.pyx", +}; + +/* Runtime support code */ + +static void __pyx_init_filenames(void) { + __pyx_f = __pyx_filenames; +} + +static void __Pyx_RaiseDoubleKeywordsError( + const char* func_name, + PyObject* kw_name) +{ + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION >= 3 + "%s() got multiple values for keyword argument '%U'", func_name, kw_name); + #else + "%s() got multiple values for keyword argument '%s'", func_name, + PyString_AS_STRING(kw_name)); + #endif +} + +static void __Pyx_RaiseArgtupleInvalid( + const char* func_name, + int exact, + Py_ssize_t num_min, + Py_ssize_t num_max, + Py_ssize_t num_found) +{ + Py_ssize_t num_expected; + const char *number, *more_or_less; + + if (num_found < num_min) { + num_expected = num_min; + more_or_less = "at least"; + } else { + num_expected = num_max; + more_or_less = "at most"; + } + if (exact) { + more_or_less = "exactly"; + } + number = (num_expected == 1) ? 
"" : "s"; + PyErr_Format(PyExc_TypeError, + #if PY_VERSION_HEX < 0x02050000 + "%s() takes %s %d positional argument%s (%d given)", + #else + "%s() takes %s %zd positional argument%s (%zd given)", + #endif + func_name, more_or_less, num_expected, number, num_found); +} + +static int __Pyx_ParseOptionalKeywords( + PyObject *kwds, + PyObject **argnames[], + PyObject *kwds2, + PyObject *values[], + Py_ssize_t num_pos_args, + const char* function_name) +{ + PyObject *key = 0, *value = 0; + Py_ssize_t pos = 0; + PyObject*** name; + PyObject*** first_kw_arg = argnames + num_pos_args; + + while (PyDict_Next(kwds, &pos, &key, &value)) { + name = first_kw_arg; + while (*name && (**name != key)) name++; + if (*name) { + values[name-argnames] = value; + } else { + #if PY_MAJOR_VERSION < 3 + if (unlikely(!PyString_CheckExact(key)) && unlikely(!PyString_Check(key))) { + #else + if (unlikely(!PyUnicode_CheckExact(key)) && unlikely(!PyUnicode_Check(key))) { + #endif + goto invalid_keyword_type; + } else { + for (name = first_kw_arg; *name; name++) { + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) break; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) break; + #endif + } + if (*name) { + values[name-argnames] = value; + } else { + /* unexpected keyword found */ + for (name=argnames; name != first_kw_arg; name++) { + if (**name == key) goto arg_passed_twice; + #if PY_MAJOR_VERSION >= 3 + if (PyUnicode_GET_SIZE(**name) == PyUnicode_GET_SIZE(key) && + PyUnicode_Compare(**name, key) == 0) goto arg_passed_twice; + #else + if (PyString_GET_SIZE(**name) == PyString_GET_SIZE(key) && + _PyString_Eq(**name, key)) goto arg_passed_twice; + #endif + } + if (kwds2) { + if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; + } else { + goto invalid_keyword; + } + } + } + } + } + return 0; +arg_passed_twice: + __Pyx_RaiseDoubleKeywordsError(function_name, **name); + goto bad; +invalid_keyword_type: + PyErr_Format(PyExc_TypeError, + "%s() keywords must be strings", function_name); + goto bad; +invalid_keyword: + PyErr_Format(PyExc_TypeError, + #if PY_MAJOR_VERSION < 3 + "%s() got an unexpected keyword argument '%s'", + function_name, PyString_AsString(key)); + #else + "%s() got an unexpected keyword argument '%U'", + function_name, key); + #endif +bad: + return -1; +} + +static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { + PyErr_Format(PyExc_ValueError, + #if PY_VERSION_HEX < 0x02050000 + "need more than %d value%s to unpack", (int)index, + #else + "need more than %zd value%s to unpack", index, + #endif + (index == 1) ? 
"" : "s"); +} + +static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(void) { + PyErr_SetString(PyExc_ValueError, "too many values to unpack"); +} + +static PyObject *__Pyx_UnpackItem(PyObject *iter, Py_ssize_t index) { + PyObject *item; + if (!(item = PyIter_Next(iter))) { + if (!PyErr_Occurred()) { + __Pyx_RaiseNeedMoreValuesError(index); + } + } + return item; +} + +static int __Pyx_EndUnpack(PyObject *iter) { + PyObject *item; + if ((item = PyIter_Next(iter))) { + Py_DECREF(item); + __Pyx_RaiseTooManyValuesError(); + return -1; + } + else if (!PyErr_Occurred()) + return 0; + else + return -1; +} + +static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list) { + PyObject *__import__ = 0; + PyObject *empty_list = 0; + PyObject *module = 0; + PyObject *global_dict = 0; + PyObject *empty_dict = 0; + PyObject *list; + __import__ = __Pyx_GetAttrString(__pyx_b, "__import__"); + if (!__import__) + goto bad; + if (from_list) + list = from_list; + else { + empty_list = PyList_New(0); + if (!empty_list) + goto bad; + list = empty_list; + } + global_dict = PyModule_GetDict(__pyx_m); + if (!global_dict) + goto bad; + empty_dict = PyDict_New(); + if (!empty_dict) + goto bad; + module = PyObject_CallFunctionObjArgs(__import__, + name, global_dict, empty_dict, list, NULL); +bad: + Py_XDECREF(empty_list); + Py_XDECREF(__import__); + Py_XDECREF(empty_dict); + return module; +} + +static PyObject *__Pyx_GetName(PyObject *dict, PyObject *name) { + PyObject *result; + result = PyObject_GetAttr(dict, name); + if (!result) + PyErr_SetObject(PyExc_NameError, name); + return result; +} + +static CYTHON_INLINE PyObject *__Pyx_PyInt_to_py_npy_intp(npy_intp val) { + const npy_intp neg_one = (npy_intp)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(npy_intp) < sizeof(long)) { + return PyInt_FromLong((long)val); + } else if (sizeof(npy_intp) == sizeof(long)) { + if (is_unsigned) + return PyLong_FromUnsignedLong((unsigned long)val); + else + return PyInt_FromLong((long)val); + } else { /* (sizeof(npy_intp) > sizeof(long)) */ + if (is_unsigned) + return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG)val); + else + return PyLong_FromLongLong((PY_LONG_LONG)val); + } +} + +static CYTHON_INLINE void __Pyx_ErrRestore(PyObject *type, PyObject *value, PyObject *tb) { + PyObject *tmp_type, *tmp_value, *tmp_tb; + PyThreadState *tstate = PyThreadState_GET(); + + tmp_type = tstate->curexc_type; + tmp_value = tstate->curexc_value; + tmp_tb = tstate->curexc_traceback; + tstate->curexc_type = type; + tstate->curexc_value = value; + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_type); + Py_XDECREF(tmp_value); + Py_XDECREF(tmp_tb); +} + +static CYTHON_INLINE void __Pyx_ErrFetch(PyObject **type, PyObject **value, PyObject **tb) { + PyThreadState *tstate = PyThreadState_GET(); + *type = tstate->curexc_type; + *value = tstate->curexc_value; + *tb = tstate->curexc_traceback; + + tstate->curexc_type = 0; + tstate->curexc_value = 0; + tstate->curexc_traceback = 0; +} + + +#if PY_MAJOR_VERSION < 3 +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + Py_XINCREF(type); + Py_XINCREF(value); + Py_XINCREF(tb); + /* First, check the traceback argument, replacing None with NULL. 
*/ + if (tb == Py_None) { + Py_DECREF(tb); + tb = 0; + } + else if (tb != NULL && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto raise_error; + } + /* Next, replace a missing value with None */ + if (value == NULL) { + value = Py_None; + Py_INCREF(value); + } + #if PY_VERSION_HEX < 0x02050000 + if (!PyClass_Check(type)) + #else + if (!PyType_Check(type)) + #endif + { + /* Raising an instance. The value should be a dummy. */ + if (value != Py_None) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto raise_error; + } + /* Normalize to raise <class>, <instance> */ + Py_DECREF(value); + value = type; + #if PY_VERSION_HEX < 0x02050000 + if (PyInstance_Check(type)) { + type = (PyObject*) ((PyInstanceObject*)type)->in_class; + Py_INCREF(type); + } + else { + type = 0; + PyErr_SetString(PyExc_TypeError, + "raise: exception must be an old-style class or instance"); + goto raise_error; + } + #else + type = (PyObject*) Py_TYPE(type); + Py_INCREF(type); + if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto raise_error; + } + #endif + } + + __Pyx_ErrRestore(type, value, tb); + return; +raise_error: + Py_XDECREF(value); + Py_XDECREF(type); + Py_XDECREF(tb); + return; +} + +#else /* Python 3+ */ + +static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb) { + if (tb == Py_None) { + tb = 0; + } else if (tb && !PyTraceBack_Check(tb)) { + PyErr_SetString(PyExc_TypeError, + "raise: arg 3 must be a traceback or None"); + goto bad; + } + if (value == Py_None) + value = 0; + + if (PyExceptionInstance_Check(type)) { + if (value) { + PyErr_SetString(PyExc_TypeError, + "instance exception may not have a separate value"); + goto bad; + } + value = type; + type = (PyObject*) Py_TYPE(value); + } else if (!PyExceptionClass_Check(type)) { + PyErr_SetString(PyExc_TypeError, + "raise: exception class must be a subclass of BaseException"); + goto bad; + } + + PyErr_SetObject(type, value); + + if (tb) { + PyThreadState *tstate = PyThreadState_GET(); + PyObject* tmp_tb = tstate->curexc_traceback; + if (tb != tmp_tb) { + Py_INCREF(tb); + tstate->curexc_traceback = tb; + Py_XDECREF(tmp_tb); + } + } + +bad: + return; +} +#endif + +static CYTHON_INLINE unsigned char __Pyx_PyInt_AsUnsignedChar(PyObject* x) { + const unsigned char neg_one = (unsigned char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned char" : + "value too large to convert to unsigned char"); + } + return (unsigned char)-1; + } + return (unsigned char)val; + } + return (unsigned char)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned short __Pyx_PyInt_AsUnsignedShort(PyObject* x) { + const unsigned short neg_one = (unsigned short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ?
+ "can't convert negative value to unsigned short" : + "value too large to convert to unsigned short"); + } + return (unsigned short)-1; + } + return (unsigned short)val; + } + return (unsigned short)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE unsigned int __Pyx_PyInt_AsUnsignedInt(PyObject* x) { + const unsigned int neg_one = (unsigned int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(unsigned int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(unsigned int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to unsigned int" : + "value too large to convert to unsigned int"); + } + return (unsigned int)-1; + } + return (unsigned int)val; + } + return (unsigned int)__Pyx_PyInt_AsUnsignedLong(x); +} + +static CYTHON_INLINE char __Pyx_PyInt_AsChar(PyObject* x) { + const char neg_one = (char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to char" : + "value too large to convert to char"); + } + return (char)-1; + } + return (char)val; + } + return (char)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE short __Pyx_PyInt_AsShort(PyObject* x) { + const short neg_one = (short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to short" : + "value too large to convert to short"); + } + return (short)-1; + } + return (short)val; + } + return (short)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE int __Pyx_PyInt_AsInt(PyObject* x) { + const int neg_one = (int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to int" : + "value too large to convert to int"); + } + return (int)-1; + } + return (int)val; + } + return (int)__Pyx_PyInt_AsLong(x); +} + +static CYTHON_INLINE signed char __Pyx_PyInt_AsSignedChar(PyObject* x) { + const signed char neg_one = (signed char)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed char) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed char)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? 
+ "can't convert negative value to signed char" : + "value too large to convert to signed char"); + } + return (signed char)-1; + } + return (signed char)val; + } + return (signed char)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed short __Pyx_PyInt_AsSignedShort(PyObject* x) { + const signed short neg_one = (signed short)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed short) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed short)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed short" : + "value too large to convert to signed short"); + } + return (signed short)-1; + } + return (signed short)val; + } + return (signed short)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE signed int __Pyx_PyInt_AsSignedInt(PyObject* x) { + const signed int neg_one = (signed int)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; + if (sizeof(signed int) < sizeof(long)) { + long val = __Pyx_PyInt_AsLong(x); + if (unlikely(val != (long)(signed int)val)) { + if (!unlikely(val == -1 && PyErr_Occurred())) { + PyErr_SetString(PyExc_OverflowError, + (is_unsigned && unlikely(val < 0)) ? + "can't convert negative value to signed int" : + "value too large to convert to signed int"); + } + return (signed int)-1; + } + return (signed int)val; + } + return (signed int)__Pyx_PyInt_AsSignedLong(x); +} + +static CYTHON_INLINE unsigned long __Pyx_PyInt_AsUnsignedLong(PyObject* x) { + const unsigned long neg_one = (unsigned long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return (unsigned long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned long"); + return (unsigned long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + unsigned long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned long)-1; + val = __Pyx_PyInt_AsUnsignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE unsigned PY_LONG_LONG __Pyx_PyInt_AsUnsignedLongLong(PyObject* x) { + const unsigned PY_LONG_LONG neg_one = (unsigned PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return (unsigned PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to unsigned PY_LONG_LONG"); + return (unsigned PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + unsigned PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (unsigned PY_LONG_LONG)-1; + val = 
__Pyx_PyInt_AsUnsignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE long __Pyx_PyInt_AsLong(PyObject* x) { + const long neg_one = (long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return (long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to long"); + return (long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (long)-1; + val = __Pyx_PyInt_AsLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE PY_LONG_LONG __Pyx_PyInt_AsLongLong(PyObject* x) { + const PY_LONG_LONG neg_one = (PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return (PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to PY_LONG_LONG"); + return (PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed long __Pyx_PyInt_AsSignedLong(PyObject* x) { + const signed long neg_one = (signed long)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return (signed long)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed long"); + return (signed long)-1; + } + return PyLong_AsUnsignedLong(x); + } else { + return PyLong_AsLong(x); + } + } else { + signed long val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed long)-1; + val = __Pyx_PyInt_AsSignedLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +static CYTHON_INLINE signed PY_LONG_LONG __Pyx_PyInt_AsSignedLongLong(PyObject* x) { + const signed PY_LONG_LONG neg_one = (signed PY_LONG_LONG)-1, const_zero = 0; + const int is_unsigned = neg_one > const_zero; +#if PY_VERSION_HEX < 0x03000000 + if (likely(PyInt_Check(x))) { + long val = PyInt_AS_LONG(x); + if (is_unsigned && unlikely(val < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return (signed PY_LONG_LONG)val; + } else +#endif + if (likely(PyLong_Check(x))) { + if (is_unsigned) { + if (unlikely(Py_SIZE(x) < 0)) { + PyErr_SetString(PyExc_OverflowError, + "can't 
convert negative value to signed PY_LONG_LONG"); + return (signed PY_LONG_LONG)-1; + } + return PyLong_AsUnsignedLongLong(x); + } else { + return PyLong_AsLongLong(x); + } + } else { + signed PY_LONG_LONG val; + PyObject *tmp = __Pyx_PyNumber_Int(x); + if (!tmp) return (signed PY_LONG_LONG)-1; + val = __Pyx_PyInt_AsSignedLongLong(tmp); + Py_DECREF(tmp); + return val; + } +} + +#include "compile.h" +#include "frameobject.h" +#include "traceback.h" + +static void __Pyx_AddTraceback(const char *funcname) { + PyObject *py_srcfile = 0; + PyObject *py_funcname = 0; + PyObject *py_globals = 0; + PyCodeObject *py_code = 0; + PyFrameObject *py_frame = 0; + + #if PY_MAJOR_VERSION < 3 + py_srcfile = PyString_FromString(__pyx_filename); + #else + py_srcfile = PyUnicode_FromString(__pyx_filename); + #endif + if (!py_srcfile) goto bad; + if (__pyx_clineno) { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #else + py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, __pyx_clineno); + #endif + } + else { + #if PY_MAJOR_VERSION < 3 + py_funcname = PyString_FromString(funcname); + #else + py_funcname = PyUnicode_FromString(funcname); + #endif + } + if (!py_funcname) goto bad; + py_globals = PyModule_GetDict(__pyx_m); + if (!py_globals) goto bad; + py_code = PyCode_New( + 0, /*int argcount,*/ + #if PY_MAJOR_VERSION >= 3 + 0, /*int kwonlyargcount,*/ + #endif + 0, /*int nlocals,*/ + 0, /*int stacksize,*/ + 0, /*int flags,*/ + __pyx_empty_bytes, /*PyObject *code,*/ + __pyx_empty_tuple, /*PyObject *consts,*/ + __pyx_empty_tuple, /*PyObject *names,*/ + __pyx_empty_tuple, /*PyObject *varnames,*/ + __pyx_empty_tuple, /*PyObject *freevars,*/ + __pyx_empty_tuple, /*PyObject *cellvars,*/ + py_srcfile, /*PyObject *filename,*/ + py_funcname, /*PyObject *name,*/ + __pyx_lineno, /*int firstlineno,*/ + __pyx_empty_bytes /*PyObject *lnotab*/ + ); + if (!py_code) goto bad; + py_frame = PyFrame_New( + PyThreadState_GET(), /*PyThreadState *tstate,*/ + py_code, /*PyCodeObject *code,*/ + py_globals, /*PyObject *globals,*/ + 0 /*PyObject *locals*/ + ); + if (!py_frame) goto bad; + py_frame->f_lineno = __pyx_lineno; + PyTraceBack_Here(py_frame); +bad: + Py_XDECREF(py_srcfile); + Py_XDECREF(py_funcname); + Py_XDECREF(py_code); + Py_XDECREF(py_frame); +} + +static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { + while (t->p) { + #if PY_MAJOR_VERSION < 3 + if (t->is_unicode) { + *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); + } else if (t->intern) { + *t->p = PyString_InternFromString(t->s); + } else { + *t->p = PyString_FromStringAndSize(t->s, t->n - 1); + } + #else /* Python 3+ has unicode identifiers */ + if (t->is_unicode | t->is_str) { + if (t->intern) { + *t->p = PyUnicode_InternFromString(t->s); + } else if (t->encoding) { + *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); + } else { + *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); + } + } else { + *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); + } + #endif + if (!*t->p) + return -1; + ++t; + } + return 0; +} + +/* Type Conversion Functions */ + +static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { + if (x == Py_True) return 1; + else if ((x == Py_False) | (x == Py_None)) return 0; + else return PyObject_IsTrue(x); +} + +static CYTHON_INLINE PyObject* __Pyx_PyNumber_Int(PyObject* x) { + PyNumberMethods *m; + const char *name = NULL; + PyObject *res = NULL; +#if PY_VERSION_HEX < 0x03000000 + if (PyInt_Check(x) || PyLong_Check(x)) +#else + if 
(PyLong_Check(x)) +#endif + return Py_INCREF(x), x; + m = Py_TYPE(x)->tp_as_number; +#if PY_VERSION_HEX < 0x03000000 + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Int(x); + } + else if (m && m->nb_long) { + name = "long"; + res = PyNumber_Long(x); + } +#else + if (m && m->nb_int) { + name = "int"; + res = PyNumber_Long(x); + } +#endif + if (res) { +#if PY_VERSION_HEX < 0x03000000 + if (!PyInt_Check(res) && !PyLong_Check(res)) { +#else + if (!PyLong_Check(res)) { +#endif + PyErr_Format(PyExc_TypeError, + "__%s__ returned non-%s (type %.200s)", + name, name, Py_TYPE(res)->tp_name); + Py_DECREF(res); + return NULL; + } + } + else if (!PyErr_Occurred()) { + PyErr_SetString(PyExc_TypeError, + "an integer is required"); + } + return res; +} + +static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { + Py_ssize_t ival; + PyObject* x = PyNumber_Index(b); + if (!x) return -1; + ival = PyInt_AsSsize_t(x); + Py_DECREF(x); + return ival; +} + +static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { +#if PY_VERSION_HEX < 0x02050000 + if (ival <= LONG_MAX) + return PyInt_FromLong((long)ival); + else { + unsigned char *bytes = (unsigned char *) &ival; + int one = 1; int little = (int)*(unsigned char*)&one; + return _PyLong_FromByteArray(bytes, sizeof(size_t), little, 0); + } +#else + return PyInt_FromSize_t(ival); +#endif +} + +static CYTHON_INLINE size_t __Pyx_PyInt_AsSize_t(PyObject* x) { + unsigned PY_LONG_LONG val = __Pyx_PyInt_AsUnsignedLongLong(x); + if (unlikely(val == (unsigned PY_LONG_LONG)-1 && PyErr_Occurred())) { + return (size_t)-1; + } else if (unlikely(val != (unsigned PY_LONG_LONG)(size_t)val)) { + PyErr_SetString(PyExc_OverflowError, + "value too large to convert to size_t"); + return (size_t)-1; + } + return (size_t)val; +} + + +#endif /* Py_PYTHON_H */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/orthogonal.py python-scipy-0.8.0+dfsg1/scipy/special/orthogonal.py --- python-scipy-0.7.2+dfsg1/scipy/special/orthogonal.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/orthogonal.py 2010-07-26 15:48:36.000000000 +0100 @@ -90,12 +90,24 @@ import _cephes as cephes _gam = cephes.gamma +__all__ = ['legendre', 'chebyt', 'chebyu', 'chebyc', 'chebys', + 'jacobi', 'laguerre', 'genlaguerre', 'hermite', 'hermitenorm', + 'gegenbauer', 'sh_legendre', 'sh_chebyt', 'sh_chebyu', 'sh_jacobi', + 'p_roots', 'ps_roots', 'j_roots', 'js_roots', 'l_roots', 'la_roots', + 'he_roots', 'ts_roots', 'us_roots', 's_roots', 't_roots', 'u_roots', + 'c_roots', 'cg_roots', 'h_roots', + 'eval_legendre', 'eval_chebyt', 'eval_chebyu', 'eval_chebyc', + 'eval_chebys', 'eval_jacobi', 'eval_laguerre', 'eval_genlaguerre', + 'eval_hermite', 'eval_hermitenorm', 'eval_gegenbauer', + 'eval_sh_legendre', 'eval_sh_chebyt', 'eval_sh_chebyu', + 'eval_sh_jacobi', 'poch', 'binom'] + def poch(z,m): """Pochhammer symbol (z)_m = (z)(z+1)....(z+m-1) = gamma(z+m)/gamma(z)""" return _gam(z+m) / _gam(z) class orthopoly1d(np.poly1d): - def __init__(self, roots, weights=None, hn=1.0, kn=1.0, wfunc=None, limits=None, monic=0): + def __init__(self, roots, weights=None, hn=1.0, kn=1.0, wfunc=None, limits=None, monic=0,eval_func=None): np.poly1d.__init__(self, roots, r=1) equiv_weights = [weights[k] / wfunc(roots[k]) for k in range(len(roots))] self.__dict__['weights'] = np.array(zip(roots,weights,equiv_weights)) @@ -103,11 +115,31 @@ self.__dict__['limits'] = limits mu = sqrt(hn) if monic: + evf = eval_func + if evf: + eval_func = lambda x: evf(x)/kn mu = mu / abs(kn) kn = 1.0 
self.__dict__['normcoef'] = mu self.__dict__['coeffs'] *= kn + # Note: eval_func will be discarded on arithmetic + self.__dict__['_eval_func'] = eval_func + + def __call__(self, v): + if self._eval_func and (isinstance(v, np.ndarray) or np.isscalar(v)): + return self._eval_func(v) + else: + return np.poly1d.__call__(self, v) + + def _scale(self, p): + if p == 1.0: + return + self.__dict__['coeffs'] *= p + evf = self.__dict__['_eval_func'] + if evf: + self.__dict__['_eval_func'] = lambda x: evf(x) * p + self.__dict__['normcoef'] *= p def gen_roots_and_weights(n,an_func,sqrt_bn_func,mu): """[x,w] = gen_roots_and_weights(n,an_func,sqrt_bn_func,mu) @@ -172,14 +204,16 @@ assert(n>=0), "n must be nonnegative" wfunc = lambda x: (1-x)**alpha * (1+x)**beta if n==0: - return orthopoly1d([],[],1.0,1.0,wfunc,(-1,1),monic) + return orthopoly1d([],[],1.0,1.0,wfunc,(-1,1),monic, + eval_func=np.ones_like) x,w,mu = j_roots(n,alpha,beta,mu=1) ab1 = alpha+beta+1.0 hn = 2**ab1/(2*n+ab1)*_gam(n+alpha+1) hn *= _gam(n+beta+1.0) / _gam(n+1) / _gam(n+ab1) kn = _gam(2*n+ab1)/2.0**n / _gam(n+1) / _gam(n+ab1) # here kn = coefficient on x^n term - p = orthopoly1d(x,w,hn,kn,wfunc,(-1,1),monic) + p = orthopoly1d(x,w,hn,kn,wfunc,(-1,1),monic, + lambda x: eval_jacobi(n,alpha,beta,x)) return p # Jacobi Polynomials shifted G_n(p,q,x) @@ -231,15 +265,17 @@ raise ValueError("n must be nonnegative") wfunc = lambda x: (1.0-x)**(p-q) * (x)**(q-1.) if n==0: - return orthopoly1d([],[],1.0,1.0,wfunc,(-1,1),monic) + return orthopoly1d([],[],1.0,1.0,wfunc,(-1,1),monic, + eval_func=np.ones_like) n1 = n x,w,mu0 = js_roots(n1,p,q,mu=1) hn = _gam(n+1)*_gam(n+q)*_gam(n+p)*_gam(n+p-q+1) hn /= (2*n+p)*(_gam(2*n+p)**2) # kn = 1.0 in standard form so monic is redundant. Kept for compatibility. kn = 1.0 - p = orthopoly1d(x,w,hn,kn,wfunc=wfunc,limits=(0,1),monic=monic) - return p + pp = orthopoly1d(x,w,hn,kn,wfunc=wfunc,limits=(0,1),monic=monic, + eval_func=lambda x: eval_sh_jacobi(n, p, q, x)) + return pp # Generalized Laguerre L^(alpha)_n(x) def la_roots(n,alpha,mu=0): @@ -277,7 +313,8 @@ if n==0: x,w = [],[] hn = _gam(n+alpha+1)/_gam(n+1) kn = (-1)**n / _gam(n+1) - p = orthopoly1d(x,w,hn,kn,wfunc,(0,inf),monic) + p = orthopoly1d(x,w,hn,kn,wfunc,(0,inf),monic, + lambda x: eval_genlaguerre(n,alpha,x)) return p # Laguerre L_n(x) @@ -301,7 +338,8 @@ if n==0: x,w = [],[] hn = 1.0 kn = (-1)**n / _gam(n+1) - p = orthopoly1d(x,w,hn,kn,lambda x: exp(-x),(0,inf),monic) + p = orthopoly1d(x,w,hn,kn,lambda x: exp(-x),(0,inf),monic, + lambda x: eval_laguerre(n,x)) return p @@ -335,7 +373,8 @@ if n==0: x,w = [],[] hn = 2**n * _gam(n+1)*sqrt(pi) kn = 2**n - p = orthopoly1d(x,w,hn,kn,wfunc,(-inf,inf),monic) + p = orthopoly1d(x,w,hn,kn,wfunc,(-inf,inf),monic, + lambda x: eval_hermite(n,x)) return p # Hermite 2 He_n(x) @@ -368,7 +407,8 @@ if n==0: x,w = [],[] hn = sqrt(2*pi)*_gam(n+1) kn = 1.0 - p = orthopoly1d(x,w,hn,kn,wfunc=wfunc,limits=(-inf,inf),monic=monic) + p = orthopoly1d(x,w,hn,kn,wfunc=wfunc,limits=(-inf,inf),monic=monic, + eval_func=lambda x: eval_hermitenorm(n,x)) return p ## The remainder of the polynomials can be derived from the ones above. @@ -393,7 +433,8 @@ return base # Abrahmowitz and Stegan 22.5.20 factor = _gam(2*alpha+n)*_gam(alpha+0.5) / _gam(2*alpha) / _gam(alpha+0.5+n) - return base * factor + base._scale(factor) + return base # Chebyshev of the first kind: T_n(x) = n! sqrt(pi) / _gam(n+1./2)* P^(-1/2,-1/2)_n(x) # Computed anew. 
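
The orthogonal.py hunks above attach an optional eval_func to orthopoly1d and replace the old `p = p * factor` pattern with an in-place `_scale`, so the stored closed-form evaluator is rescaled together with the coefficients instead of being silently dropped. A minimal standalone sketch of that pattern, with illustrative names (not the scipy implementation, which subclasses np.poly1d):

    import numpy as np

    class EvalPoly:
        """Toy polynomial carrying both coefficients and an optional closed-form evaluator."""
        def __init__(self, coeffs, eval_func=None):
            self.coeffs = np.asarray(coeffs, dtype=float)   # highest power first
            self._eval_func = eval_func

        def __call__(self, x):
            if self._eval_func is not None:
                return self._eval_func(x)          # stable closed form (recurrence, trig identity, ...)
            return np.polyval(self.coeffs, x)      # fall back to the raw coefficients

        def _scale(self, p):
            # rescale both representations together, as the patch does for orthopoly1d
            if p == 1.0:
                return
            self.coeffs = self.coeffs * p
            evf = self._eval_func
            if evf is not None:
                self._eval_func = lambda x: evf(x) * p

    # monic Chebyshev T_2 is x**2 - 1/2; its closed form is cos(2*arccos(x)) / 2
    t2 = EvalPoly([1.0, 0.0, -0.5], eval_func=lambda x: np.cos(2 * np.arccos(x)) / 2.0)
    t2._scale(2.0)                                 # -> T_2 = 2*x**2 - 1
    assert np.isclose(t2(0.3), np.polyval(t2.coeffs, 0.3))
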
@@ -423,16 +464,16 @@ assert(n>=0), "n must be nonnegative" wfunc = lambda x: 1.0/sqrt(1-x*x) if n==0: - return orthopoly1d([],[],pi,1.0,wfunc,(-1,1),monic) + return orthopoly1d([],[],pi,1.0,wfunc,(-1,1),monic, + lambda x: eval_chebyt(n,x)) n1 = n x,w,mu = t_roots(n1,mu=1) hn = pi/2 kn = 2**(n-1) - p = orthopoly1d(x,w,hn,kn,wfunc,(-1,1),monic) + p = orthopoly1d(x,w,hn,kn,wfunc,(-1,1),monic, + lambda x: eval_chebyt(n,x)) return p - return jacobi(n,-0.5,-0.5,monic=monic) - # Chebyshev of the second kind # U_n(x) = (n+1)! sqrt(pi) / (2*_gam(n+3./2)) * P^(1/2,1/2)_n(x) def u_roots(n,mu=0): @@ -452,7 +493,8 @@ if monic: return base factor = sqrt(pi)/2.0*_gam(n+2) / _gam(n+1.5) - return base * factor + base._scale(factor) + return base # Chebyshev of the first kind C_n(x) def c_roots(n,mu=0): @@ -482,7 +524,8 @@ kn = 1.0 p = orthopoly1d(x,w,hn,kn,wfunc=lambda x: 1.0/sqrt(1-x*x/4.0),limits=(-2,2),monic=monic) if not monic: - p = p * 2.0/p(2) + p._scale(2.0/p(2)) + p.__dict__['_eval_func'] = lambda x: eval_chebyc(n,x) return p # Chebyshev of the second kind S_n(x) @@ -513,7 +556,9 @@ kn = 1.0 p = orthopoly1d(x,w,hn,kn,wfunc=lambda x: sqrt(1-x*x/4.0),limits=(-2,2),monic=monic) if not monic: - p = p * (n+1.0)/p(2) + factor = (n+1.0)/p(2) + p._scale(factor) + p.__dict__['_eval_func'] = lambda x: eval_chebys(n,x) return p # Shifted Chebyshev of the first kind T^*_n(x) @@ -531,12 +576,14 @@ Orthogonal over [0,1] with weight function (x-x**2)**(-1/2). """ base = sh_jacobi(n,0.0,0.5,monic=monic) - if monic: return base + if monic: + return base if n > 0: factor = 4**n / 2.0 else: factor = 1.0 - return base * factor + base._scale(factor) + return base # Shifted Chebyshev of the second kind U^*_n(x) @@ -556,7 +603,8 @@ base = sh_jacobi(n,2.0,1.5,monic=monic) if monic: return base factor = 4**n - return base * factor + base._scale(factor) + return base # Legendre def p_roots(n,mu=0): @@ -579,7 +627,8 @@ if n==0: x,w = [],[] hn = 2.0/(2*n+1) kn = _gam(2*n+1)/_gam(n+1)**2 / 2.0**n - p = orthopoly1d(x,w,hn,kn,wfunc=lambda x: 1.0,limits=(-1,1),monic=monic) + p = orthopoly1d(x,w,hn,kn,wfunc=lambda x: 1.0,limits=(-1,1),monic=monic, + eval_func=lambda x: eval_legendre(n,x)) return p # Shifted Legendre P^*_n(x) @@ -598,9 +647,20 @@ """ assert(n>=0), "n must be nonnegative" wfunc = lambda x: 0.0*x + 1.0 - if n==0: return orthopoly1d([],[],1.0,1.0,wfunc,(0,1),monic) + if n==0: return orthopoly1d([],[],1.0,1.0,wfunc,(0,1),monic, + lambda x: eval_sh_legendre(n,x)) x,w,mu0 = ps_roots(n,mu=1) hn = 1.0/(2*n+1.0) kn = _gam(2*n+1)/_gam(n+1)**2 - p = orthopoly1d(x,w,hn,kn,wfunc,limits=(0,1),monic=monic) + p = orthopoly1d(x,w,hn,kn,wfunc,limits=(0,1),monic=monic, + eval_func=lambda x: eval_sh_legendre(n,x)) return p + +#------------------------------------------------------------------------------ +# Vectorized functions for evaluation +#------------------------------------------------------------------------------ +from orthogonal_eval import \ + binom, eval_jacobi, eval_sh_jacobi, eval_gegenbauer, eval_chebyt, \ + eval_chebyu, eval_chebys, eval_chebyc, eval_sh_chebyt, eval_sh_chebyu, \ + eval_legendre, eval_sh_legendre, eval_genlaguerre, eval_laguerre, \ + eval_hermite, eval_hermitenorm diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/SConscript python-scipy-0.8.0+dfsg1/scipy/special/SConscript --- python-scipy-0.7.2+dfsg1/scipy/special/SConscript 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/SConscript 2010-07-26 15:48:36.000000000 +0100 @@ -2,7 +2,7 @@ # vim:syntax=python from os.path 
import join as pjoin, basename as pbasename import sys -from numpy.distutils.misc_util import get_numpy_include_dirs +from numpy.distutils.misc_util import get_numpy_include_dirs, get_pkg_info from numscons import GetNumpyEnvironment from numscons import CheckF77Clib @@ -14,15 +14,15 @@ if sys.platform=='win32': # define_macros.append(('NOINFINITIES',None)) # define_macros.append(('NONANS',None)) - env.AppendUnique(CPPDEFINES = '_USE_MATH_DEFINES') + env.PrependUnique(CPPDEFINES = '_USE_MATH_DEFINES') config = env.NumpyConfigure(custom_tests = {'CheckF77Clib' : CheckF77Clib}) if not config.CheckF77Clib(): raise RuntimeError("Could not get C/F77 runtime information") +config.CheckF77Mangling() config.Finish() -env.AppendUnique(CPPPATH = env["PYEXTCPPPATH"]) -env.AppendUnique(CPPPATH = get_numpy_include_dirs()) +env.PrependUnique(CPPPATH=[get_numpy_include_dirs(), env["PYEXTCPPPATH"]]) def build_lib(name, ext, libname = None): """ext should be .f or .c""" if not libname: @@ -43,7 +43,16 @@ build_lib('cdflib', '.f', 'sc_cdf') build_lib('specfun', '.f', 'sc_specfunlib') -env.AppendUnique(LIBPATH = ['.']) +math_info = get_pkg_info("npymath") +env.MergeFlags(math_info.cflags()) +env.MergeFlags(math_info.libs()) +env.PrependUnique(LIBPATH = ['.']) + +# orthogonal_eval extension +env.NumpyPythonExtension('orthogonal_eval', source = 'orthogonal_eval.c') + +# lambertw extension +env.NumpyPythonExtension('lambertw', source = 'lambertw.c') # Cephes extension src = ['_cephesmodule.c', 'amos_wrappers.c', 'specfun_wrappers.c', \ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/setup.py python-scipy-0.8.0+dfsg1/scipy/special/setup.py --- python-scipy-0.7.2+dfsg1/scipy/special/setup.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/setup.py 2010-07-26 15:48:36.000000000 +0100 @@ -4,8 +4,15 @@ import sys from os.path import join from distutils.sysconfig import get_python_inc +import numpy from numpy.distutils.misc_util import get_numpy_include_dirs +try: + from numpy.distutils.misc_util import get_info +except ImportError: + raise ValueError("numpy >= 1.4 is required (detected %s from %s)" % \ + (numpy.__version__, numpy.__file__)) + def configuration(parent_package='',top_path=None): from numpy.distutils.misc_util import Configuration config = Configuration('special', parent_package, top_path) @@ -17,7 +24,9 @@ define_macros.append(('_USE_MATH_DEFINES',None)) # C libraries - config.add_library('sc_c_misc',sources=[join('c_misc','*.c')]) + config.add_library('sc_c_misc',sources=[join('c_misc','*.c')], + include_dirs=[get_python_inc(), get_numpy_include_dirs()], + macros=define_macros) config.add_library('sc_cephes',sources=[join('cephes','*.c')], include_dirs=[get_python_inc(), get_numpy_include_dirs()], macros=define_macros) @@ -41,7 +50,8 @@ "cdf_wrappers.h", "specfun_wrappers.h", "c_misc/misc.h", "cephes_doc.h", "cephes/mconf.h", "cephes/cephes_names.h"], - define_macros = define_macros + define_macros = define_macros, + extra_info=get_info("npymath") ) # Extension specfun @@ -51,7 +61,21 @@ define_macros=[], libraries=['sc_specfun']) - config.add_data_dir('tests') + # Extension orthogonal_eval + config.add_extension('orthogonal_eval', + sources=['orthogonal_eval.c'], + define_macros=[], + extra_info=get_info("npymath")) + + # Extension lambertw + config.add_extension('lambertw', + sources=['lambertw.c'], + define_macros=[], + extra_info=get_info("npymath")) + + config.add_data_files('tests/*.py') + config.add_data_files('tests/data/README') + 
config.add_data_files('tests/data/*.npz') return config diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/specfun/specfun.f python-scipy-0.8.0+dfsg1/scipy/special/specfun/specfun.f --- python-scipy-0.7.2+dfsg1/scipy/special/specfun/specfun.f 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/specfun/specfun.f 2010-07-26 15:48:36.000000000 +0100 @@ -2453,7 +2453,18 @@ IF (DBLE(Z).LT.0.0) THEN Z1=-Z ENDIF - IF (A0.LE.5.8D0) THEN +C +C Cutoff radius R = 4.36; determined by balancing rounding error +C and asymptotic expansion error, see below. +C +C The resulting maximum global accuracy expected is around 1e-8 +C + IF (A0.LE.4.36D0) THEN +C +C Rounding error in the Taylor expansion is roughly +C +C ~ R*R * EPSILON * R**(2 R**2) / (2 R**2 Gamma(R**2 + 1/2)) +C CS=Z1 CR=Z1 DO 10 K=1,120 @@ -2465,7 +2476,15 @@ ELSE CL=1.0D0/Z1 CR=CL - DO 20 K=1,13 +C +C Asymptotic series; maximum K must be at most ~ R^2. +C +C The maximum accuracy obtainable from this expansion is roughly +C +C ~ Gamma(2R**2 + 2) / ( +C (2 R**2)**(R**2 + 1/2) Gamma(R**2 + 3/2) 2**(R**2 + 1/2)) +C + DO 20 K=1,20 CR=-CR*(K-0.5D0)/(Z1*Z1) CL=CL+CR IF (CDABS(CR/CL).LT.1.0D-15) GO TO 25 @@ -5507,13 +5526,17 @@ C ============================================ C Purpose: Compute exponential integral Ei(x) C Input : x --- Argument of Ei(x) -C Output: EI --- Ei(x) ( x > 0 ) +C Output: EI --- Ei(x) C ============================================ C IMPLICIT DOUBLE PRECISION (A-H,O-Z) IF (X.EQ.0.0) THEN EI=-1.0D+300 - ELSE IF (X.LE.40.0) THEN + ELSE IF (X .LT. 0) THEN + CALL E1XB(-X, EI) + EI = -EI + ELSE IF (DABS(X).LE.40.0) THEN +C Power series around x=0 EI=1.0D0 R=1.0D0 DO 15 K=1,100 @@ -5524,6 +5547,7 @@ 20 GA=0.5772156649015328D0 EI=GA+DLOG(X)+X*EI ELSE +C Asymptotic expansion (the series is not convergent) EI=1.0D0 R=1.0D0 DO 25 K=1,20 @@ -5536,6 +5560,23 @@ C ********************************** + SUBROUTINE EIXZ(Z,CEI) +C +C ============================================ +C Purpose: Compute exponential integral Ei(x) +C Input : x --- Complex argument of Ei(x) +C Output: EI --- Ei(x) +C ============================================ +C + IMPLICIT NONE + DOUBLE COMPLEX Z, CEI + CALL E1Z(-Z, CEI) + CEI = -CEI + (CDLOG(Z) - CDLOG(1D0/Z))/2D0 - CDLOG(-Z) + RETURN + END + +C ********************************** + SUBROUTINE E1XB(X,E1) C C ============================================ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/specfun_wrappers.c python-scipy-0.8.0+dfsg1/scipy/special/specfun_wrappers.c --- python-scipy-0.7.2+dfsg1/scipy/special/specfun_wrappers.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/specfun_wrappers.c 2010-07-26 15:48:36.000000000 +0100 @@ -119,7 +119,7 @@ l0 = ((c == floor(c)) && (c < 0)); l1 = ((fabs(1-REAL(z)) < 1e-15) && (IMAG(z) == 0) && (c-a-b <= 0)); if (l0 || l1) { - REAL(outz) = INFINITY; + REAL(outz) = NPY_INFINITY; IMAG(outz) = 0.0; return outz; } @@ -132,7 +132,7 @@ F_FUNC(cchg,CCHG)(&a, &b, &z, &outz); if (REAL(outz) == 1e300) { - REAL(outz) = INFINITY; + REAL(outz) = NPY_INFINITY; } return outz; } @@ -143,7 +143,7 @@ int md; /* method code --- not returned */ F_FUNC(chgu,CHGU)(&a, &b, &x, &out, &md); - if (out == 1e300) out = INFINITY; + if (out == 1e300) out = NPY_INFINITY; return out; } @@ -153,7 +153,7 @@ F_FUNC(chgm,CHGM)(&a, &b, &x, &outy); if (outy == 1e300) { - outy = INFINITY; + outy = NPY_INFINITY; } return outy; } @@ -203,6 +203,14 @@ return out; } +Py_complex cexpi_wrap(Py_complex z) { + Py_complex outz; + + F_FUNC(eixz,EIXZ)(&z, 
&outz); + ZCONVINF(outz); + return outz; +} + Py_complex cerf_wrap(Py_complex z) { Py_complex outz; @@ -225,7 +233,7 @@ flag = 0; } else { /* non-integer v and x < 0 => complex-valued */ - return NAN; + return NPY_NAN; } } @@ -252,17 +260,17 @@ double out; int flag=0; - if ((x < 0) & (floor(v)!=v)) return NAN; + if ((x < 0) & (floor(v)!=v)) return NPY_NAN; if (v==0.0) { if (x < 0) {x = -x; flag=1;} - F_FUNC(stvl0,STVl0)(&x,&out); + F_FUNC(stvl0,STVL0)(&x,&out); CONVINF(out); if (flag) out = -out; return out; } if (v==1.0) { if (x < 0) x=-x; - F_FUNC(stvl1,STVl1)(&x,&out); + F_FUNC(stvl1,STVL1)(&x,&out); CONVINF(out); return out; } @@ -270,7 +278,7 @@ x = -x; flag = 1; } - F_FUNC(stvlv,STVlV)(&v,&x,&out); + F_FUNC(stvlv,STVLV)(&v,&x,&out); CONVINF(out); if (flag && (!((int)floor(v) % 2))) out = -out; return out; @@ -332,7 +340,7 @@ { Py_complex Be, Ke, Bep, Kep; - if (x<0) return NAN; + if (x<0) return NPY_NAN; F_FUNC(klvna,KLVNA)(&x, CADDR(Be), CADDR(Ke), CADDR(Bep), CADDR(Kep)); ZCONVINF(Ke); return REAL(Ke); @@ -342,7 +350,7 @@ { Py_complex Be, Ke, Bep, Kep; - if (x<0) return NAN; + if (x<0) return NPY_NAN; F_FUNC(klvna,KLVNA)(&x, CADDR(Be), CADDR(Ke), CADDR(Bep), CADDR(Kep)); ZCONVINF(Ke); return IMAG(Ke); @@ -376,7 +384,7 @@ { Py_complex Be, Ke, Bep, Kep; - if (x<0) return NAN; + if (x<0) return NPY_NAN; F_FUNC(klvna,KLVNA)(&x, CADDR(Be), CADDR(Ke), CADDR(Bep), CADDR(Kep)); ZCONVINF(Kep); return REAL(Kep); @@ -386,7 +394,7 @@ { Py_complex Be, Ke, Bep, Kep; - if (x<0) return NAN; + if (x<0) return NPY_NAN; F_FUNC(klvna,KLVNA)(&x, CADDR(Be), CADDR(Ke), CADDR(Bep), CADDR(Kep)); ZCONVINF(Kep); return IMAG(Kep); @@ -405,10 +413,10 @@ if (flag) { REAL(*Bep) = -REAL(*Bep); IMAG(*Bep) = -IMAG(*Bep); - REAL(*Ke) = NAN; - IMAG(*Ke) = NAN; - REAL(*Kep) = NAN; - IMAG(*Kep) = NAN; + REAL(*Ke) = NPY_NAN; + IMAG(*Ke) = NPY_NAN; + REAL(*Kep) = NPY_NAN; + IMAG(*Kep) = NPY_NAN; } return 0; } @@ -426,7 +434,7 @@ F_FUNC(itjya, ITJYA)(&x, j0int, y0int); if (flag) { *j0int = -(*j0int); - *y0int = NAN; /* domain error */ + *y0int = NPY_NAN; /* domain error */ } return 0; } @@ -441,7 +449,7 @@ if (x < 0) {x=-x; flag=1;} F_FUNC(ittjya, ITTJYA)(&x, j0int, y0int); if (flag) { - *y0int = NAN; /* domain error */ + *y0int = NPY_NAN; /* domain error */ } return 0; } @@ -456,7 +464,7 @@ F_FUNC(itika, ITIKA)(&x, i0int, k0int); if (flag) { *i0int = -(*i0int); - *k0int = NAN; /* domain error */ + *k0int = NPY_NAN; /* domain error */ } return 0; } @@ -468,7 +476,7 @@ if (x < 0) {x=-x; flag=1;} F_FUNC(ittika, ITTIKA)(&x, i0int, k0int); if (flag) { - *k0int = NAN; /* domain error */ + *k0int = NPY_NAN; /* domain error */ } return 0; } @@ -491,7 +499,7 @@ double out; if ((m < 0) || (m != floor(m))) - return NAN; + return NPY_NAN; int_m = (int )m; if (int_m % 2) kd=2; F_FUNC(cva2,CVA2)(&kd, &int_m, &q, &out); @@ -503,7 +511,7 @@ double out; if ((m < 1) || (m != floor(m))) - return NAN; + return NPY_NAN; int_m = (int )m; if (int_m % 2) kd=3; F_FUNC(cva2,CVA2)(&kd, &int_m, &q, &out); @@ -515,8 +523,8 @@ { int int_m, kf=1; if ((m < 1) || (m != floor(m)) || (q<0)) { - *csf = NAN; - *csd = NAN; + *csf = NPY_NAN; + *csd = NPY_NAN; } int_m = (int )m; F_FUNC(mtu0,MTU0)(&kf,&int_m, &q, &x, csf, csd); @@ -527,8 +535,8 @@ { int int_m, kf=2; if ((m < 1) || (m != floor(m)) || (q<0)) { - *csf = NAN; - *csd = NAN; + *csf = NPY_NAN; + *csd = NPY_NAN; } int_m = (int )m; F_FUNC(mtu0,MTU0)(&kf,&int_m, &q, &x, csf, csd); @@ -542,8 +550,8 @@ double f2r, d2r; if ((m < 1) || (m != floor(m)) || (q<0)) { - *f1r = NAN; - *d1r = NAN; + *f1r = NPY_NAN; 
+ *d1r = NPY_NAN; } int_m = (int )m; F_FUNC(mtu12,MTU12)(&kf,&kc,&int_m, &q, &x, f1r, d1r, &f2r, &d2r); @@ -556,8 +564,8 @@ double f2r, d2r; if ((m < 1) || (m != floor(m)) || (q<0)) { - *f1r = NAN; - *d1r = NAN; + *f1r = NPY_NAN; + *d1r = NPY_NAN; } int_m = (int )m; F_FUNC(mtu12,MTU12)(&kf,&kc,&int_m, &q, &x, f1r, d1r, &f2r, &d2r); @@ -570,8 +578,8 @@ double f1r, d1r; if ((m < 1) || (m != floor(m)) || (q<0)) { - *f2r = NAN; - *d2r = NAN; + *f2r = NPY_NAN; + *d2r = NPY_NAN; } int_m = (int )m; F_FUNC(mtu12,MTU12)(&kf,&kc,&int_m, &q, &x, &f1r, &d1r, f2r, d2r); @@ -584,8 +592,8 @@ double f1r, d1r; if ((m < 1) || (m != floor(m)) || (q<0)) { - *f2r = NAN; - *d2r = NAN; + *f2r = NPY_NAN; + *d2r = NPY_NAN; } int_m = (int )m; F_FUNC(mtu12,MTU12)(&kf,&kc,&int_m, &q, &x, &f1r, &d1r, f2r, d2r); @@ -597,7 +605,7 @@ int int_m; double out; - if (m != floor(m)) return NAN; + if (m != floor(m)) return NPY_NAN; int_m = (int ) m; F_FUNC(lpmv,LPMV)(&v, &int_m, &x, &out); return out; @@ -635,8 +643,8 @@ dv = (double *)PyMem_Malloc(sizeof(double)*2*num); if (dv==NULL) { printf("Warning: Memory allocation error.\n"); - *pdf = NAN; - *pdd = NAN; + *pdf = NPY_NAN; + *pdd = NPY_NAN; return -1; } dp = dv + num; @@ -655,8 +663,8 @@ vv = (double *)PyMem_Malloc(sizeof(double)*2*num); if (vv==NULL) { printf("Warning: Memory allocation error.\n"); - *pvf = NAN; - *pvd = NAN; + *pvf = NPY_NAN; + *pvd = NPY_NAN; return -1; } vp = vv + num; @@ -672,14 +680,14 @@ double cv, *eg; if ((m<0) || (n198)) { - return NAN; + return NPY_NAN; } int_m = (int) m; int_n = (int) n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - return NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); PyMem_Free(eg); @@ -693,14 +701,14 @@ double cv, *eg; if ((m<0) || (n198)) { - return NAN; + return NPY_NAN; } int_m = (int) m; int_n = (int) n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - return NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); PyMem_Free(eg); @@ -716,16 +724,16 @@ if ((x >=1) || (x <=-1) || (m<0) || (n198)) { - *s1d = NAN; - return NAN; + *s1d = NPY_NAN; + return NPY_NAN; } int_m = (int )m; int_n = (int )n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - *s1d = NAN; - return NAN; + *s1d = NPY_NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); F_FUNC(aswfa,ASWFA)(&int_m,&int_n,&c,&x,&kd,&cv,&s1f,s1d); @@ -742,16 +750,16 @@ if ((x >=1) || (x <=-1) || (m<0) || (n198)) { - *s1d = NAN; - return NAN; + *s1d = NPY_NAN; + return NPY_NAN; } int_m = (int )m; int_n = (int )n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - *s1d = NAN; - return NAN; + *s1d = NPY_NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); F_FUNC(aswfa,ASWFA)(&int_m,&int_n,&c,&x,&kd,&cv,&s1f,s1d); @@ -767,8 +775,8 @@ if ((x >=1) || (x <=-1) || (m<0) || (n=1) || (x <=-1) || (m<0) || (n198)) { - *r1d = NAN; - return NAN; + *r1d = NPY_NAN; + return NPY_NAN; } int_m = (int )m; int_n = (int )n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - *r1d = NAN; - return NAN; + *r1d = NPY_NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); F_FUNC(rswfp,RSWFP)(&int_m,&int_n,&c,&x,&cv,&kf,&r1f,r1d,&r2f,&r2d); @@ -829,16 +837,16 @@ if ((x 
<=1.0) || (m<0) || (n198)) { - *r2d = NAN; - return NAN; + *r2d = NPY_NAN; + return NPY_NAN; } int_m = (int )m; int_n = (int )n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - *r2d = NAN; - return NAN; + *r2d = NPY_NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); F_FUNC(rswfp,RSWFP)(&int_m,&int_n,&c,&x,&cv,&kf,&r1f,&r1d,&r2f,r2d); @@ -854,8 +862,8 @@ if ((x <= 1.0) || (m<0) || (n198)) { - *r1d = NAN; - return NAN; + *r1d = NPY_NAN; + return NPY_NAN; } int_m = (int )m; int_n = (int )n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - *r1d = NAN; - return NAN; + *r1d = NPY_NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); F_FUNC(rswfo,RSWFO)(&int_m,&int_n,&c,&x,&cv,&kf,&r1f,r1d,&r2f,&r2d); @@ -915,16 +923,16 @@ if ((x < 0.0) || (m<0) || (n198)) { - *r2d = NAN; - return NAN; + *r2d = NPY_NAN; + return NPY_NAN; } int_m = (int )m; int_n = (int )n; eg = (double *)PyMem_Malloc(sizeof(double)*(n-m+2)); if (eg==NULL) { printf("Warning: Memory allocation error.\n"); - *r2d = NAN; - return NAN; + *r2d = NPY_NAN; + return NPY_NAN; } F_FUNC(segv,SEGV)(&int_m,&int_n,&c,&kd,&cv,eg); F_FUNC(rswfo,RSWFO)(&int_m,&int_n,&c,&x,&cv,&kf,&r1f,&r1d,&r2f,r2d); @@ -940,8 +948,8 @@ if ((x <0.0) || (m<0) || (n -#undef NAN -#undef INFINITY - -extern double NAN; -extern double INFINITY; extern double PI; #define REAL(z) (z).real #define IMAG(z) (z).imag #define ABSQ(z) (z).real*(z).real + (z).imag*(z).imag; -#define ZCONVINF(z) if (REAL((z))==1.0e300) REAL((z))=INFINITY; if (REAL((z))==-1.0e300) REAL((z))=-INFINITY -#define CONVINF(x) if ((x)==1.0e300) (x)=INFINITY; if ((x)==-1.0e300) (x)=-INFINITY +#define ZCONVINF(z) if (REAL((z))==1.0e300) REAL((z))=NPY_INFINITY; if (REAL((z))==-1.0e300) REAL((z))=-NPY_INFINITY +#define CONVINF(x) if ((x)==1.0e300) (x)=NPY_INFINITY; if ((x)==-1.0e300) (x)=-NPY_INFINITY #define ABS(x) ((x)<0 ? -(x) : (x)) Py_complex cgamma_wrap( Py_complex z); @@ -35,8 +31,9 @@ double hypU_wrap(double a, double b, double x); double exp1_wrap(double x); double expi_wrap(double x); -Py_complex cexp1_wrap( Py_complex z); -Py_complex cerf_wrap( Py_complex z); +Py_complex cexp1_wrap(Py_complex z); +Py_complex cexpi_wrap(Py_complex z); +Py_complex cerf_wrap(Py_complex z); int itairy_wrap(double x, double *apt, double *bpt, double *ant, double *bnt); double struve_wrap(double v, double x); Binary files /tmp/tTuQn84fcN/python-scipy-0.7.2+dfsg1/scipy/special/tests/data/boost.npz and /tmp/9QioNV5RG6/python-scipy-0.8.0+dfsg1/scipy/special/tests/data/boost.npz differ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/data/README python-scipy-0.8.0+dfsg1/scipy/special/tests/data/README --- python-scipy-0.7.2+dfsg1/scipy/special/tests/data/README 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/data/README 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,6 @@ +This directory contains numerical data for testing special functions. +The data is in version control as text files, but it is distributed as +compressed NPZ files which are also checked in. + +To rebuild the npz files, use ../../utils/makenpz.py on the directories. 
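
The README added above documents the data layout: reference tables are kept as text files in version control and shipped as compressed .npz archives rebuilt with makenpz.py. A small illustration of that round-trip in plain numpy; the file name and the two-row table are made up, only the dataset name matches one actually used by test_data.py:

    import numpy as np

    # In the real tree the rows come from the text files under tests/data/;
    # this made-up two-row table stands in for one of them.
    acosh_table = np.array([[1.0, 0.0],
                            [2.0, 1.3169578969248166]])   # columns: x, reference acosh(x)

    # makenpz.py effectively runs a compressed savez over each data directory
    np.savez_compressed("boost_demo.npz", **{"acosh_data_ipp-acosh_data": acosh_table})

    # the tests then pull individual datasets out of the archive by name
    datasets = np.load("boost_demo.npz")
    table = datasets["acosh_data_ipp-acosh_data"]
    assert np.allclose(np.arccosh(table[:, 0]), table[:, 1])
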
+ diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/fncs_cep.dat python-scipy-0.8.0+dfsg1/scipy/special/tests/fncs_cep.dat --- python-scipy-0.7.2+dfsg1/scipy/special/tests/fncs_cep.dat 2010-03-03 14:34:13.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/fncs_cep.dat 1970-01-01 01:00:00.000000000 +0100 @@ -1,18 +0,0 @@ -# here are the tests for functions defined in cephes -# library -# -# each test should be written as CONSECUTIVE lines. -# If you split a test on more than one line, -# remember putting commas at the end of each line, excluding the -# last one. -# An empty line, or a comment only one, marks the end -# of each test's definition. -# -cephes.airy,'airy',in_vars={'z':(0.+0.j,10.+10j)}, # You can put comments -out_vars={'Ai': 0, 'Aip': 0, 'Bi':0, 'Bip':0} # on test lines too! - -cephes.airye,'airye',in_vars={'z':(0.+0j,10.+10j)}, -out_vars={'Ai': 0, 'Aip': 0, 'Bi':0, 'Bip':0} - -cephes.ellpj,'ellpj_sn',in_vars={'m':(0.,1.),'u':(-100,100)}, -out_vars={'sn':0,'cn':0,'dn':0,'ph':0} diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/fncs_wof.dat python-scipy-0.8.0+dfsg1/scipy/special/tests/fncs_wof.dat --- python-scipy-0.7.2+dfsg1/scipy/special/tests/fncs_wof.dat 2010-03-03 14:34:13.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/fncs_wof.dat 1970-01-01 01:00:00.000000000 +0100 @@ -1,8 +0,0 @@ -# Here are defined tests for functions coming from ToMS -# -cephes.wofz2,'wofz2',in_vars={'x':(-100.,100.),'y':(0.,1.)}, -out_vars={'u': 0,'v':0} - -cephes.wofz,'wofz',in_vars={'zeta':(-100.+0.j,100.+5j)}, -out_vars={'w': 0} - diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/test_basic.py python-scipy-0.8.0+dfsg1/scipy/special/tests/test_basic.py --- python-scipy-0.7.2+dfsg1/scipy/special/tests/test_basic.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/test_basic.py 2010-07-26 15:48:36.000000000 +0100 @@ -28,14 +28,7 @@ import scipy.special._cephes as cephes import numpy as np -def assert_tol_equal(a, b, rtol=1e-7, atol=0, err_msg='', verbose=True): - """Assert that `a` and `b` are equal to tolerance ``atol + rtol*abs(b)``""" - def compare(x, y): - return allclose(x, y, rtol=rtol, atol=atol) - a, b = asanyarray(a), asanyarray(b) - header = 'Not equal to tolerance rtol=%g, atol=%g' % (rtol, atol) - np.testing.utils.assert_array_compare(compare, a, b, err_msg=str(err_msg), - verbose=verbose, header=header) +from testutils import * class TestCephes(TestCase): def test_airy(self): @@ -340,14 +333,16 @@ cephes.nctdtrit(.1,0.2,.5) def test_ndtr(self): - assert_equal(cephes.ndtr(0),0.5) + assert_equal(cephes.ndtr(0), 0.5) + assert_almost_equal(cephes.ndtr(1), 0.84134474606) + def test_ndtri(self): assert_equal(cephes.ndtri(0.5),0.0) def test_nrdtrimn(self): assert_approx_equal(cephes.nrdtrimn(0.5,1,1),1.0) def test_nrdtrisd(self): - # abs() because nrdtrisd(0.5,0.5,0.5) returns -0.0, should be +0.0 - assert_equal(np.abs(cephes.nrdtrisd(0.5,0.5,0.5)), 0.0) + assert_tol_equal(cephes.nrdtrisd(0.5,0.5,0.5), 0.0, + atol=0, rtol=0) def test_obl_ang1(self): cephes.obl_ang1(1,1,1,0) @@ -417,6 +412,15 @@ cephes.shichi(1) def test_sici(self): cephes.sici(1) + + s, c = cephes.sici(np.inf) + assert_almost_equal(s, np.pi * 0.5) + assert_almost_equal(c, 0) + + s, c = cephes.sici(-np.inf) + assert_almost_equal(s, -np.pi * 0.5) + assert_(np.isnan(c), "cosine integral(-inf) is not nan") + def test_sindg(self): assert_equal(cephes.sindg(90),1.0) def test_smirnov(self): @@ -709,64 +713,6 @@ comp = betainc(2,4,y) 
assert_almost_equal(comp,.5,5) -class TestCheby(TestCase): - def test_chebyc(self): - C0 = chebyc(0) - C1 = chebyc(1) - C2 = chebyc(2) - C3 = chebyc(3) - C4 = chebyc(4) - C5 = chebyc(5) - - assert_array_almost_equal(C0.c,[2],13) - assert_array_almost_equal(C1.c,[1,0],13) - assert_array_almost_equal(C2.c,[1,0,-2],13) - assert_array_almost_equal(C3.c,[1,0,-3,0],13) - assert_array_almost_equal(C4.c,[1,0,-4,0,2],13) - assert_array_almost_equal(C5.c,[1,0,-5,0,5,0],13) - - def test_chebys(self): - S0 = chebys(0) - S1 = chebys(1) - S2 = chebys(2) - S3 = chebys(3) - S4 = chebys(4) - S5 = chebys(5) - assert_array_almost_equal(S0.c,[1],13) - assert_array_almost_equal(S1.c,[1,0],13) - assert_array_almost_equal(S2.c,[1,0,-1],13) - assert_array_almost_equal(S3.c,[1,0,-2,0],13) - assert_array_almost_equal(S4.c,[1,0,-3,0,1],13) - assert_array_almost_equal(S5.c,[1,0,-4,0,3,0],13) - - def test_chebyt(self): - T0 = chebyt(0) - T1 = chebyt(1) - T2 = chebyt(2) - T3 = chebyt(3) - T4 = chebyt(4) - T5 = chebyt(5) - assert_array_almost_equal(T0.c,[1],13) - assert_array_almost_equal(T1.c,[1,0],13) - assert_array_almost_equal(T2.c,[2,0,-1],13) - assert_array_almost_equal(T3.c,[4,0,-3,0],13) - assert_array_almost_equal(T4.c,[8,0,-8,0,1],13) - assert_array_almost_equal(T5.c,[16,0,-20,0,5,0],13) - - def test_chebyu(self): - U0 = chebyu(0) - U1 = chebyu(1) - U2 = chebyu(2) - U3 = chebyu(3) - U4 = chebyu(4) - U5 = chebyu(5) - assert_array_almost_equal(U0.c,[1],13) - assert_array_almost_equal(U1.c,[2,0],13) - assert_array_almost_equal(U2.c,[4,0,-1],13) - assert_array_almost_equal(U3.c,[8,0,-4,0],13) - assert_array_almost_equal(U4.c,[16,0,-12,0,1],13) - assert_array_almost_equal(U5.c,[32,0,-32,0,6,0],13) - class TestTrigonometric(TestCase): def test_cbrt(self): cb = cbrt(27) @@ -870,7 +816,7 @@ class TestEllip(TestCase): def test_ellipj_nan(self): - """Regression test for #946.""" + """Regression test for #912.""" ellipj(0.5, np.nan) def test_ellipj(self): @@ -1054,6 +1000,7 @@ gcinv = gammaincinv(.5,.5) assert_almost_equal(gccinv,gcinv,8) + @with_special_errors def test_gammaincinv(self): y = gammaincinv(.4,.4) x = gammainc(.4,y) @@ -1065,6 +1012,19 @@ x = gammaincinv(50, 8.20754777388471303050299243573393e-18) assert_almost_equal(11.0, x, decimal=10) + @with_special_errors + def test_975(self): + # Regression test for ticket #975 -- switch point in algorithm + # check that things work OK at the point, immediately next floats + # around it, and a bit further away + pts = [0.25, + np.nextafter(0.25, 0), 0.25 - 1e-12, + np.nextafter(0.25, 1), 0.25 + 1e-12] + for xp in pts: + y = gammaincinv(.4, xp) + x = gammainc(0.4, y) + assert_tol_equal(x, xp, rtol=1e-12) + def test_rgamma(self): rgam = rgamma(8) rlgam = 1/gamma(8) @@ -1103,69 +1063,6 @@ hankrl2e = hankel2e(1,.1) assert_almost_equal(hank2e,hankrl2e,8) -class TestHermite(TestCase): - def test_hermite(self): - H0 = hermite(0) - H1 = hermite(1) - H2 = hermite(2) - H3 = hermite(3) - H4 = hermite(4) - H5 = hermite(5) - assert_array_almost_equal(H0.c,[1],13) - assert_array_almost_equal(H1.c,[2,0],13) - assert_array_almost_equal(H2.c,[4,0,-2],13) - assert_array_almost_equal(H3.c,[8,0,-12,0],13) - assert_array_almost_equal(H4.c,[16,0,-48,0,12],12) - assert_array_almost_equal(H5.c,[32,0,-160,0,120,0],12) - - def test_hermitenorm(self): - # He_n(x) = 2**(-n/2) H_n(x/sqrt(2)) - psub = poly1d([1.0/sqrt(2),0]) - H0 = hermitenorm(0) - H1 = hermitenorm(1) - H2 = hermitenorm(2) - H3 = hermitenorm(3) - H4 = hermitenorm(4) - H5 = hermitenorm(5) - he0 = hermite(0)(psub) - he1 = 
hermite(1)(psub) / sqrt(2) - he2 = hermite(2)(psub) / 2.0 - he3 = hermite(3)(psub) / (2*sqrt(2)) - he4 = hermite(4)(psub) / 4.0 - he5 = hermite(5)(psub) / (4.0*sqrt(2)) - - assert_array_almost_equal(H0.c,he0.c,13) - assert_array_almost_equal(H1.c,he1.c,13) - assert_array_almost_equal(H2.c,he2.c,13) - assert_array_almost_equal(H3.c,he3.c,13) - assert_array_almost_equal(H4.c,he4.c,13) - assert_array_almost_equal(H5.c,he5.c,13) - -_gam = cephes.gamma - -class TestGegenbauer(TestCase): - - def test_gegenbauer(self): - a = 5*rand()-0.5 - if any(a==0): a = -0.2 - Ca0 = gegenbauer(0,a) - Ca1 = gegenbauer(1,a) - Ca2 = gegenbauer(2,a) - Ca3 = gegenbauer(3,a) - Ca4 = gegenbauer(4,a) - Ca5 = gegenbauer(5,a) - - assert_array_almost_equal(Ca0.c,array([1]),13) - assert_array_almost_equal(Ca1.c,array([2*a,0]),13) - assert_array_almost_equal(Ca2.c,array([2*a*(a+1),0,-a]),13) - assert_array_almost_equal(Ca3.c,array([4*poch(a,3),0,-6*a*(a+1), - 0])/3.0,11) - assert_array_almost_equal(Ca4.c,array([4*poch(a,4),0,-12*poch(a,3), - 0,3*a*(a+1)])/6.0,11) - assert_array_almost_equal(Ca5.c,array([4*poch(a,5),0,-20*poch(a,4), - 0,15*poch(a,3),0])/15.0,11) - - class TestHyper(TestCase): def test_h1vp(self): h1 = h1vp(1,.1) @@ -1318,6 +1215,14 @@ # and some others # ticket #424 [1.5, -0.5, 1.0, -10.0, 4.1300097765277476484], + # negative integer a or b, with c-a-b integer and x > 0.9 + [-2,3,1,0.95,0.715], + [2,-3,1,0.95,-0.007], + [-6,3,1,0.95,0.0000810625], + [2,-5,1,0.95,-0.000029375], + # huge negative integers + (10, -900, 10.5, 0.99, 1.91853705796607664803709475658e-24), + (10, -900, -10.5, 0.99, 3.54279200040355710199058559155e-18), ] for i, (a, b, c, x, v) in enumerate(values): cv = hyp2f1(a, b, c, x) @@ -1410,7 +1315,7 @@ 123.70194191713507279, 129.02417238949092824, 134.00114761868422559]), rtol=1e-13) - + jn301 = jn_zeros(301,5) assert_tol_equal(jn301, array([313.59097866698830153, 323.21549776096288280, @@ -1423,7 +1328,7 @@ assert_tol_equal(jn0[260-1], 816.02884495068867280, rtol=1e-13) assert_tol_equal(jn0[280-1], 878.86068707124422606, rtol=1e-13) assert_tol_equal(jn0[300-1], 941.69253065317954064, rtol=1e-13) - + jn10 = jn_zeros(10, 300) assert_tol_equal(jn10[260-1], 831.67668514305631151, rtol=1e-13) assert_tol_equal(jn10[280-1], 894.51275095371316931, rtol=1e-13) @@ -1598,7 +1503,7 @@ an = yn_zeros(4,2) assert_array_almost_equal(an,array([ 5.64515, 9.36162]),5) an = yn_zeros(443,5) - assert_tol_equal(an, [450.13573091578090314, 463.05692376675001542, + assert_tol_equal(an, [450.13573091578090314, 463.05692376675001542, 472.80651546418663566, 481.27353184725625838, 488.98055964441374646], rtol=1e-15) @@ -1652,7 +1557,7 @@ for z in [-1300, -11, -10, -1, 1., 10., 200.5, 401., 600.5, 700.6, 1300, 10003]: yield v, z - + # check half-integers; these are problematic points at least # for cephes/iv for v in 0.5 + arange(-60, 60): @@ -1687,7 +1592,7 @@ self.check_cephes_vs_amos(yv, yn, rtol=1e-11, atol=1e-305, skip=skipper) def test_iv_cephes_vs_amos(self): - self.check_cephes_vs_amos(iv, iv, rtol=1e-12, atol=1e-305) + self.check_cephes_vs_amos(iv, iv, rtol=5e-9, atol=1e-305) @dec.slow def test_iv_cephes_vs_amos_mass_test(self): @@ -1702,6 +1607,10 @@ c1 = iv(v, x) c2 = iv(v, x+0j) + # deal with differences in the inf cutoffs + c1[abs(c1) > 1e300] = np.inf + c2[abs(c2) > 1e300] = np.inf + dc = abs(c1/c2 - 1) dc[np.isnan(dc)] = 0 @@ -1709,7 +1618,7 @@ # Most error apparently comes from AMOS and not our implementation; # there are some problems near integer orders there - assert dc[k] < 1e-9, (iv(v[k], x[k]), 
iv(v[k], x[k]+0j)) + assert dc[k] < 1e-9, (v[k], x[k], iv(v[k], x[k]), iv(v[k], x[k]+0j)) def test_kv_cephes_vs_amos(self): #self.check_cephes_vs_amos(kv, kn, rtol=1e-9, atol=1e-305) @@ -1745,12 +1654,12 @@ assert_tol_equal(iv(-2, 1+0j), 0.1357476697670383) assert_tol_equal(kv(-1, 1+0j), 0.6019072301972347) assert_tol_equal(kv(-2, 1+0j), 1.624838898635178) - + assert_tol_equal(jv(-0.5, 1+0j), 0.43109886801837607952) assert_tol_equal(jv(-0.5, 1+1j), 0.2628946385649065-0.827050182040562j) assert_tol_equal(yv(-0.5, 1+0j), 0.6713967071418031) assert_tol_equal(yv(-0.5, 1+1j), 0.967901282890131+0.0602046062142816j) - + assert_tol_equal(iv(-0.5, 1+0j), 1.231200214592967) assert_tol_equal(iv(-0.5, 1+1j), 0.77070737376928+0.39891821043561j) assert_tol_equal(kv(-0.5, 1+0j), 0.4610685044478945) @@ -1877,7 +1786,7 @@ y=(iv(0,2)+iv(2,2))/2 x = ivp(1,2) assert_almost_equal(x,y,10) - + class TestLaguerre(TestCase): def test_laguerre(self): @@ -2049,7 +1958,7 @@ eps = 1e-7 + 1e-7*abs(x) dp = (pbdv(eta, x + eps)[0] - pbdv(eta, x - eps)[0]) / eps / 2. assert_tol_equal(p[1], dp, rtol=1e-6, atol=1e-6) - + def test_pbvv_gradient(self): x = np.linspace(-4, 4, 8)[:,None] eta = np.linspace(-10, 10, 5)[None,:] @@ -2058,7 +1967,7 @@ eps = 1e-7 + 1e-7*abs(x) dp = (pbvv(eta, x + eps)[0] - pbvv(eta, x - eps)[0]) / eps / 2. assert_tol_equal(p[1], dp, rtol=1e-6, atol=1e-6) - + class TestPolygamma(TestCase): # from Table 6.2 (pg. 271) of A&S @@ -2112,113 +2021,42 @@ # correctly written. rndrl = (10,10,10,11) assert_array_equal(rnd,rndrl) + -class _test_sh_legendre(TestCase): +def test_sph_harm(): + # Tests derived from tables in + # http://en.wikipedia.org/wiki/Table_of_spherical_harmonics + sh = sph_harm + pi = np.pi + exp = np.exp + sqrt = np.sqrt + sin = np.sin + cos = np.cos + yield (assert_array_almost_equal, sh(0,0,0,0), + 0.5/sqrt(pi)) + yield (assert_array_almost_equal, sh(-2,2,0.,pi/4), + 0.25*sqrt(15./(2.*pi))* + (sin(pi/4))**2.) + yield (assert_array_almost_equal, sh(-2,2,0.,pi/2), + 0.25*sqrt(15./(2.*pi))) + yield (assert_array_almost_equal, sh(2,2,pi,pi/2), + 0.25*sqrt(15/(2.*pi))* + exp(0+2.*pi*1j)*sin(pi/2.)**2.) + yield (assert_array_almost_equal, sh(2,4,pi/4.,pi/3.), + (3./8.)*sqrt(5./(2.*pi))* + exp(0+2.*pi/4.*1j)* + sin(pi/3.)**2.* + (7.*cos(pi/3.)**2.-1)) + yield (assert_array_almost_equal, sh(4,4,pi/8.,pi/6.), + (3./16.)*sqrt(35./(2.*pi))* + exp(0+4.*pi/8.*1j)*sin(pi/6.)**4.) 
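
The new test_sph_harm generator above checks scipy.special.sph_harm against closed-form entries from the Wikipedia table of spherical harmonics; the argument order is (order m, degree n, azimuthal theta, polar phi). A hedged sanity check along the same lines, reusing two of the tabulated values and adding a crude quadrature check of the normalisation (grid size chosen ad hoc):

    import numpy as np
    from scipy.special import sph_harm

    # argument order: (order m, degree n, azimuthal theta, polar phi)
    assert np.isclose(sph_harm(0, 0, 0.0, 0.0), 0.5 / np.sqrt(np.pi))
    assert np.isclose(sph_harm(-2, 2, 0.0, np.pi / 2),
                      0.25 * np.sqrt(15.0 / (2.0 * np.pi)))

    # |Y_n^m|^2 integrated over the sphere should come out close to 1 (crude Riemann sum)
    theta = np.linspace(0.0, 2.0 * np.pi, 400)   # azimuthal angle
    phi = np.linspace(0.0, np.pi, 400)           # polar angle
    T, P = np.meshgrid(theta, phi, indexing="ij")
    Y = sph_harm(1, 3, T, P)
    total = (np.abs(Y) ** 2 * np.sin(P)).sum() * (theta[1] - theta[0]) * (phi[1] - phi[0])
    assert abs(total - 1.0) < 1e-2
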
- def test_sh_legendre(self): - # P*_n(x) = P_n(2x-1) - psub = poly1d([2,-1]) - Ps0 = sh_legendre(0) - Ps1 = sh_legendre(1) - Ps2 = sh_legendre(2) - Ps3 = sh_legendre(3) - Ps4 = sh_legendre(4) - Ps5 = sh_legendre(5) - pse0 = legendre(0)(psub) - pse1 = legendre(1)(psub) - pse2 = legendre(2)(psub) - pse3 = legendre(3)(psub) - pse4 = legendre(4)(psub) - pse5 = legendre(5)(psub) - assert_array_almost_equal(Ps0.c,pse0.c,13) - assert_array_almost_equal(Ps1.c,pse1.c,13) - assert_array_almost_equal(Ps2.c,pse2.c,13) - assert_array_almost_equal(Ps3.c,pse3.c,13) - assert_array_almost_equal(Ps4.c,pse4.c,12) - assert_array_almost_equal(Ps5.c,pse5.c,12) - -class _test_sh_chebyt(TestCase): - - def test_sh_chebyt(self): - # T*_n(x) = T_n(2x-1) - psub = poly1d([2,-1]) - Ts0 = sh_chebyt(0) - Ts1 = sh_chebyt(1) - Ts2 = sh_chebyt(2) - Ts3 = sh_chebyt(3) - Ts4 = sh_chebyt(4) - Ts5 = sh_chebyt(5) - tse0 = chebyt(0)(psub) - tse1 = chebyt(1)(psub) - tse2 = chebyt(2)(psub) - tse3 = chebyt(3)(psub) - tse4 = chebyt(4)(psub) - tse5 = chebyt(5)(psub) - assert_array_almost_equal(Ts0.c,tse0.c,13) - assert_array_almost_equal(Ts1.c,tse1.c,13) - assert_array_almost_equal(Ts2.c,tse2.c,13) - assert_array_almost_equal(Ts3.c,tse3.c,13) - assert_array_almost_equal(Ts4.c,tse4.c,12) - assert_array_almost_equal(Ts5.c,tse5.c,12) - - -class _test_sh_chebyu(TestCase): - - def test_sh_chebyu(self): - # U*_n(x) = U_n(2x-1) - psub = poly1d([2,-1]) - Us0 = sh_chebyu(0) - Us1 = sh_chebyu(1) - Us2 = sh_chebyu(2) - Us3 = sh_chebyu(3) - Us4 = sh_chebyu(4) - Us5 = sh_chebyu(5) - use0 = chebyu(0)(psub) - use1 = chebyu(1)(psub) - use2 = chebyu(2)(psub) - use3 = chebyu(3)(psub) - use4 = chebyu(4)(psub) - use5 = chebyu(5)(psub) - assert_array_almost_equal(Us0.c,use0.c,13) - assert_array_almost_equal(Us1.c,use1.c,13) - assert_array_almost_equal(Us2.c,use2.c,13) - assert_array_almost_equal(Us3.c,use3.c,13) - assert_array_almost_equal(Us4.c,use4.c,12) - assert_array_almost_equal(Us5.c,use5.c,11) - -class _test_sh_jacobi(TestCase): - - def test_sh_jacobi(self): - # G^(p,q)_n(x) = n! 
gamma(n+p)/gamma(2*n+p) * P^(p-q,q-1)_n(2*x-1) - conv = lambda n,p: _gam(n+1)*_gam(n+p)/_gam(2*n+p) - psub = poly1d([2,-1]) - q = 4*rand() - p = q-1 + 2*rand() - #print "shifted jacobi p,q = ", p, q - G0 = sh_jacobi(0,p,q) - G1 = sh_jacobi(1,p,q) - G2 = sh_jacobi(2,p,q) - G3 = sh_jacobi(3,p,q) - G4 = sh_jacobi(4,p,q) - G5 = sh_jacobi(5,p,q) - ge0 = jacobi(0,p-q,q-1)(psub) * conv(0,p) - ge1 = jacobi(1,p-q,q-1)(psub) * conv(1,p) - ge2 = jacobi(2,p-q,q-1)(psub) * conv(2,p) - ge3 = jacobi(3,p-q,q-1)(psub) * conv(3,p) - ge4 = jacobi(4,p-q,q-1)(psub) * conv(4,p) - ge5 = jacobi(5,p-q,q-1)(psub) * conv(5,p) - - assert_array_almost_equal(G0.c,ge0.c,13) - assert_array_almost_equal(G1.c,ge1.c,13) - assert_array_almost_equal(G2.c,ge2.c,13) - assert_array_almost_equal(G3.c,ge3.c,13) - assert_array_almost_equal(G4.c,ge4.c,13) - assert_array_almost_equal(G5.c,ge5.c,13) class TestSpherical(TestCase): def test_sph_harm(self): + # see test_sph_harm function pass - + def test_sph_in(self): i1n = sph_in(1,.2) inp0 = (i1n[0][1]) @@ -2300,5 +2138,15 @@ assert_tol_equal(struve(-2.0, 20 - 1e-8), struve(-2.0, 20 + 1e-8)) assert_tol_equal(struve(-4.3, 20 - 1e-8), struve(-4.3, 20 + 1e-8)) +def test_chi2_smalldf(): + assert_almost_equal(chdtr(0.6,3), 0.957890536704110) + +def test_chi2c_smalldf(): + assert_almost_equal(chdtrc(0.6,3), 1-0.957890536704110) + +def test_chi2_inv_smalldf(): + assert_almost_equal(chdtri(0.6,1-0.957890536704110), 3) + + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/test_data.py python-scipy-0.8.0+dfsg1/scipy/special/tests/test_data.py --- python-scipy-0.7.2+dfsg1/scipy/special/tests/test_data.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/test_data.py 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,205 @@ +import os + +import numpy as np +from numpy.testing import * +from scipy.special import ( + arccosh, arcsinh, arctanh, erf, erfc, log1p, expm1, + jn, jv, yn, yv, iv, kv, kn, gamma, gammaln, digamma, beta, cbrt, + ellipe, ellipeinc, ellipk, ellipj, erfinv, erfcinv, exp1, expi, expn, + zeta, gammaincinv, +) + +from testutils import * + +DATASETS = np.load(os.path.join(os.path.dirname(__file__), + "data", "boost.npz")) + +def data(func, dataname, *a, **kw): + kw.setdefault('dataname', dataname) + return FuncData(func, DATASETS[dataname], *a, **kw) + +def ellipk_(k): + return ellipk(k*k) +def ellipe_(k): + return ellipe(k*k) +def ellipeinc_(f, k): + return ellipeinc(f, k*k) +def ellipj_(k): + return ellipj(k*k) +def zeta_(x): + return zeta(x, 1.) 
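
The small wrappers above (ellipk_, ellipe_, ellipeinc_, ellipj_, zeta_) adapt scipy's calling conventions to the parameterisation used by the boost.npz data: the elliptic integrals take the parameter m = k**2 rather than the modulus k, and zeta is called in its two-argument Hurwitz form with q pinned to 1. A quick illustration of the convention gap, checked only against well-known limiting values:

    import numpy as np
    from scipy.special import ellipk, zeta

    def ellipk_(k):
        # the data files tabulate K(k); scipy.special.ellipk expects the parameter m = k**2
        return ellipk(k * k)

    # limiting case K(k=0) = pi/2, identical in both conventions
    assert np.isclose(ellipk_(0.0), np.pi / 2)

    # the two-argument (Hurwitz) zeta reduces to the Riemann zeta function at q = 1
    assert np.isclose(zeta(2.0, 1.0), np.pi ** 2 / 6)
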
+ +def test_boost(): + TESTS = [ + data(arccosh, 'acosh_data_ipp-acosh_data', 0, 1, rtol=5e-13), + data(arccosh, 'acosh_data_ipp-acosh_data', 0j, 1, rtol=5e-14), + + data(arcsinh, 'asinh_data_ipp-asinh_data', 0, 1, rtol=1e-11), + data(arcsinh, 'asinh_data_ipp-asinh_data', 0j, 1, rtol=1e-11), + + data(arctanh, 'atanh_data_ipp-atanh_data', 0, 1, rtol=1e-11), + data(arctanh, 'atanh_data_ipp-atanh_data', 0j, 1, rtol=1e-11), + + data(beta, 'beta_exp_data_ipp-beta_exp_data', (0,1), 2, rtol=1e-13), + data(beta, 'beta_exp_data_ipp-beta_exp_data', (0,1), 2, rtol=1e-13), + data(beta, 'beta_small_data_ipp-beta_small_data', (0,1), 2), + + data(cbrt, 'cbrt_data_ipp-cbrt_data', 1, 0), + + data(digamma, 'digamma_data_ipp-digamma_data', 0, 1), + data(digamma, 'digamma_data_ipp-digamma_data', 0j, 1), + data(digamma, 'digamma_neg_data_ipp-digamma_neg_data', 0, 1, rtol=1e-13), + data(digamma, 'digamma_neg_data_ipp-digamma_neg_data', 0j, 1, rtol=1e-13), + data(digamma, 'digamma_root_data_ipp-digamma_root_data', 0, 1, rtol=1e-11), + data(digamma, 'digamma_root_data_ipp-digamma_root_data', 0j, 1, rtol=1e-11), + data(digamma, 'digamma_small_data_ipp-digamma_small_data', 0, 1), + data(digamma, 'digamma_small_data_ipp-digamma_small_data', 0j, 1), + + data(ellipk_, 'ellint_k_data_ipp-ellint_k_data', 0, 1), + data(ellipe_, 'ellint_e_data_ipp-ellint_e_data', 0, 1), + data(ellipeinc_, 'ellint_e2_data_ipp-ellint_e2_data', (0,1), 2, rtol=1e-14), + + data(erf, 'erf_data_ipp-erf_data', 0, 1), + data(erf, 'erf_data_ipp-erf_data', 0j, 1, rtol=1e-14), + data(erfc, 'erf_data_ipp-erf_data', 0, 2), + data(erf, 'erf_large_data_ipp-erf_large_data', 0, 1), + data(erf, 'erf_large_data_ipp-erf_large_data', 0j, 1), + data(erfc, 'erf_large_data_ipp-erf_large_data', 0, 2), + data(erf, 'erf_small_data_ipp-erf_small_data', 0, 1), + data(erf, 'erf_small_data_ipp-erf_small_data', 0j, 1), + data(erfc, 'erf_small_data_ipp-erf_small_data', 0, 2), + + data(erfinv, 'erf_inv_data_ipp-erf_inv_data', 0, 1), + data(erfcinv, 'erfc_inv_data_ipp-erfc_inv_data', 0, 1), + #data(erfcinv, 'erfc_inv_big_data_ipp-erfc_inv_big_data', 0, 1), + + data(exp1, 'expint_1_data_ipp-expint_1_data', 1, 2), + data(exp1, 'expint_1_data_ipp-expint_1_data', 1j, 2, rtol=5e-9), + data(expi, 'expinti_data_ipp-expinti_data', 0, 1, rtol=1e-13), + data(expi, 'expinti_data_double_ipp-expinti_data_double', 0, 1), + + data(expn, 'expint_small_data_ipp-expint_small_data', (0,1), 2), + data(expn, 'expint_data_ipp-expint_data', (0,1), 2, rtol=1e-14), + + data(gamma, 'test_gamma_data_ipp-near_0', 0, 1), + data(gamma, 'test_gamma_data_ipp-near_1', 0, 1), + data(gamma, 'test_gamma_data_ipp-near_2', 0, 1), + data(gamma, 'test_gamma_data_ipp-near_m10', 0, 1), + data(gamma, 'test_gamma_data_ipp-near_m55', 0, 1), + data(gamma, 'test_gamma_data_ipp-near_0', 0j, 1, rtol=2e-9), + data(gamma, 'test_gamma_data_ipp-near_1', 0j, 1, rtol=2e-9), + data(gamma, 'test_gamma_data_ipp-near_2', 0j, 1, rtol=2e-9), + data(gamma, 'test_gamma_data_ipp-near_m10', 0j, 1, rtol=2e-9), + data(gamma, 'test_gamma_data_ipp-near_m55', 0j, 1, rtol=2e-9), + data(gammaln, 'test_gamma_data_ipp-near_0', 0, 2, rtol=5e-11), + data(gammaln, 'test_gamma_data_ipp-near_1', 0, 2, rtol=5e-11), + data(gammaln, 'test_gamma_data_ipp-near_2', 0, 2, rtol=2e-10), + data(gammaln, 'test_gamma_data_ipp-near_m10', 0, 2, rtol=5e-11), + data(gammaln, 'test_gamma_data_ipp-near_m55', 0, 2, rtol=5e-11), + + data(log1p, 'log1p_expm1_data_ipp-log1p_expm1_data', 0, 1), + data(expm1, 'log1p_expm1_data_ipp-log1p_expm1_data', 0, 2), + + data(iv, 
'bessel_i_data_ipp-bessel_i_data', (0,1), 2, rtol=1e-12), + data(iv, 'bessel_i_data_ipp-bessel_i_data', (0,1j), 2, rtol=2e-10, atol=1e-306), + data(iv, 'bessel_i_int_data_ipp-bessel_i_int_data', (0,1), 2, rtol=1e-9), + data(iv, 'bessel_i_int_data_ipp-bessel_i_int_data', (0,1j), 2, rtol=2e-10), + + data(jn, 'bessel_j_int_data_ipp-bessel_j_int_data', (0,1), 2, rtol=1e-12), + data(jn, 'bessel_j_int_data_ipp-bessel_j_int_data', (0,1j), 2, rtol=1e-12), + data(jn, 'bessel_j_large_data_ipp-bessel_j_large_data', (0,1), 2, rtol=6e-11), + data(jn, 'bessel_j_large_data_ipp-bessel_j_large_data', (0,1j), 2, rtol=6e-11), + + data(jv, 'bessel_j_int_data_ipp-bessel_j_int_data', (0,1), 2, rtol=1e-12), + data(jv, 'bessel_j_int_data_ipp-bessel_j_int_data', (0,1j), 2, rtol=1e-12), + data(jv, 'bessel_j_data_ipp-bessel_j_data', (0,1), 2, rtol=1e-12), + data(jv, 'bessel_j_data_ipp-bessel_j_data', (0,1j), 2, rtol=1e-12), + + data(kn, 'bessel_k_int_data_ipp-bessel_k_int_data', (0,1), 2, rtol=1e-12, + knownfailure="Known bug in Cephes kn implementation"), + + data(kv, 'bessel_k_int_data_ipp-bessel_k_int_data', (0,1), 2, rtol=1e-12), + data(kv, 'bessel_k_int_data_ipp-bessel_k_int_data', (0,1j), 2, rtol=1e-12), + data(kv, 'bessel_k_data_ipp-bessel_k_data', (0,1), 2, rtol=1e-12), + data(kv, 'bessel_k_data_ipp-bessel_k_data', (0,1j), 2, rtol=1e-12), + + data(yn, 'bessel_y01_data_ipp-bessel_y01_data', (0,1), 2, rtol=1e-12), + data(yn, 'bessel_yn_data_ipp-bessel_yn_data', (0,1), 2, rtol=1e-12), + + data(yv, 'bessel_yn_data_ipp-bessel_yn_data', (0,1), 2, rtol=1e-12), + data(yv, 'bessel_yn_data_ipp-bessel_yn_data', (0,1j), 2, rtol=1e-12), + data(yv, 'bessel_yv_data_ipp-bessel_yv_data', (0,1), 2, rtol=1e-12, + knownfailure="Known bug in Cephes yv implementation"), + data(yv, 'bessel_yv_data_ipp-bessel_yv_data', (0,1j), 2, rtol=1e-10), + + data(zeta_, 'zeta_data_ipp-zeta_data', 0, 1, param_filter=(lambda s: s > 1)), + data(zeta_, 'zeta_neg_data_ipp-zeta_neg_data', 0, 1, param_filter=(lambda s: s > 1)), + data(zeta_, 'zeta_1_up_data_ipp-zeta_1_up_data', 0, 1, param_filter=(lambda s: s > 1)), + data(zeta_, 'zeta_1_below_data_ipp-zeta_1_below_data', 0, 1, param_filter=(lambda s: s > 1)), + + data(gammaincinv, 'gamma_inv_data_ipp-gamma_inv_data', (0,1), 2, + rtol=1e-12), + data(gammaincinv, 'gamma_inv_big_data_ipp-gamma_inv_big_data', + (0,1), 2, rtol=1e-11), + + # XXX: the data file needs reformatting... 
+ #data(gammaincinv, 'gamma_inv_small_data_ipp-gamma_inv_small_data', + # (0,1), 2), + + # -- not used yet: + # assoc_legendre_p.txt + # binomial_data.txt + # binomial_large_data.txt + # binomial_quantile_data.txt + # ellint_f_data.txt + # ellint_pi2_data.txt + # ellint_pi3_data.txt + # ellint_pi3_large_data.txt + # ellint_rc_data.txt + # ellint_rd_data.txt + # ellint_rf_data.txt + # ellint_rj_data.txt + # expinti_data_long.txt + # factorials.txt + # gammap1m1_data.txt + # hermite.txt + # ibeta_data.txt + # ibeta_int_data.txt + # ibeta_inv_data.txt + # ibeta_inva_data.txt + # ibeta_large_data.txt + # ibeta_small_data.txt + # igamma_big_data.txt + # igamma_int_data.txt + # igamma_inva_data.txt + # igamma_med_data.txt + # igamma_small_data.txt + # laguerre2.txt + # laguerre3.txt + # legendre_p.txt + # legendre_p_large.txt + # ncbeta.txt + # ncbeta_big.txt + # nccs.txt + # near_0.txt + # near_1.txt + # near_2.txt + # near_m10.txt + # near_m55.txt + # negative_binomial_quantile_data.txt + # poisson_quantile_data.txt + # sph_bessel_data.txt + # sph_neumann_data.txt + # spherical_harmonic.txt + # tgamma_delta_ratio_data.txt + # tgamma_delta_ratio_int.txt + # tgamma_delta_ratio_int2.txt + # tgamma_ratio_data.txt + ] + + for test in TESTS: + yield _test_factory, test + +def _test_factory(test, dtype=np.double): + """Boost test""" + test.check(dtype=dtype) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/test_lambertw.py python-scipy-0.8.0+dfsg1/scipy/special/tests/test_lambertw.py --- python-scipy-0.7.2+dfsg1/scipy/special/tests/test_lambertw.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/test_lambertw.py 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,84 @@ +# +# Tests for the lambertw function, +# Adapted from the MPMath tests [1] by Yosef Meller, mellerf@netvision.net.il +# Distributed under the same license as SciPy itself. 
+# +# [1] mpmath source code, Subversion revision 992 +# http://code.google.com/p/mpmath/source/browse/trunk/mpmath/tests/test_functions2.py?spec=svn994&r=992 + +from numpy.testing import * +from scipy.special import lambertw +from numpy import nan, inf, pi, e, isnan, log, r_, array, complex_ + +from testutils import * + +def test_values(): + assert isnan(lambertw(nan)) + assert_equal(lambertw(inf,1).real, inf) + assert_equal(lambertw(inf,1).imag, 2*pi) + assert_equal(lambertw(-inf,1).real, inf) + assert_equal(lambertw(-inf,1).imag, 3*pi) + + assert_equal(lambertw(1.), lambertw(1., 0)) + + data = [ + (0,0, 0), + (0+0j,0, 0), + (inf,0, inf), + (0,-1, -inf), + (0,1, -inf), + (0,3, -inf), + (e,0, 1), + (1,0, 0.567143290409783873), + (-pi/2,0, 1j*pi/2), + (-log(2)/2,0, -log(2)), + (0.25,0, 0.203888354702240164), + (-0.25,0, -0.357402956181388903), + (-1./10000,0, -0.000100010001500266719), + (-0.25,-1, -2.15329236411034965), + (0.25,-1, -3.00899800997004620-4.07652978899159763j), + (-0.25,-1, -2.15329236411034965), + (0.25,1, -3.00899800997004620+4.07652978899159763j), + (-0.25,1, -3.48973228422959210+7.41405453009603664j), + (-4,0, 0.67881197132094523+1.91195078174339937j), + (-4,1, -0.66743107129800988+7.76827456802783084j), + (-4,-1, 0.67881197132094523-1.91195078174339937j), + (1000,0, 5.24960285240159623), + (1000,1, 4.91492239981054535+5.44652615979447070j), + (1000,-1, 4.91492239981054535-5.44652615979447070j), + (1000,5, 3.5010625305312892+29.9614548941181328j), + (3+4j,0, 1.281561806123775878+0.533095222020971071j), + (-0.4+0.4j,0, -0.10396515323290657+0.61899273315171632j), + (3+4j,1, -0.11691092896595324+5.61888039871282334j), + (3+4j,-1, 0.25856740686699742-3.85211668616143559j), + (-0.5,-1, -0.794023632344689368-0.770111750510379110j), + (-1./10000,1, -11.82350837248724344+6.80546081842002101j), + (-1./10000,-1, -11.6671145325663544), + (-1./10000,-2, -11.82350837248724344-6.80546081842002101j), + (-1./100000,4, -14.9186890769540539+26.1856750178782046j), + (-1./100000,5, -15.0931437726379218666+32.5525721210262290086j), + ((2+1j)/10,0, 0.173704503762911669+0.071781336752835511j), + ((2+1j)/10,1, -3.21746028349820063+4.56175438896292539j), + ((2+1j)/10,-1, -3.03781405002993088-3.53946629633505737j), + ((2+1j)/10,4, -4.6878509692773249+23.8313630697683291j), + (-(2+1j)/10,0, -0.226933772515757933-0.164986470020154580j), + (-(2+1j)/10,1, -2.43569517046110001+0.76974067544756289j), + (-(2+1j)/10,-1, -3.54858738151989450-6.91627921869943589j), + (-(2+1j)/10,4, -4.5500846928118151+20.6672982215434637j), + (pi,0, 1.073658194796149172092178407024821347547745350410314531), + + # Former bug in generated branch, + (-0.5+0.002j,0, -0.78917138132659918344 + 0.76743539379990327749j), + (-0.5-0.002j,0, -0.78917138132659918344 - 0.76743539379990327749j), + (-0.448+0.4j,0, -0.11855133765652382241 + 0.66570534313583423116j), + (-0.448-0.4j,0, -0.11855133765652382241 - 0.66570534313583423116j), + ] + data = array(data, dtype=complex_) + + def w(x, y): + return lambertw(x, y.real.astype(int)) + FuncData(w, data, (0,1), 2, rtol=1e-10, atol=1e-13).check() + +def test_ufunc(): + assert_array_almost_equal( + lambertw(r_[0., e, 1.]), r_[0., 1., 0.567143290409783873]) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/test_mpmath.py python-scipy-0.8.0+dfsg1/scipy/special/tests/test_mpmath.py --- python-scipy-0.7.2+dfsg1/scipy/special/tests/test_mpmath.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/test_mpmath.py 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 
+1,175 @@ +""" +Test Scipy functions versus mpmath, if available. + +""" +import re +import numpy as np +from numpy.testing import * +import scipy.special as sc + +from testutils import * + +try: + import mpmath +except ImportError: + try: + import sympy.mpmath as mpmath + except ImportError: + mpmath = None + +def mpmath_check(min_ver): + if mpmath is None: + return dec.skipif(True, "mpmath library is not present") + + def try_int(v): + try: return int(v) + except ValueError: return v + + def get_version(v): + return map(try_int, re.split('[^0-9]', v)) + + return dec.skipif(get_version(min_ver) > get_version(mpmath.__version__), + "mpmath %s required" % min_ver) + + +#------------------------------------------------------------------------------ +# expi +#------------------------------------------------------------------------------ + +@mpmath_check('0.10') +def test_expi_complex(): + dataset = [] + for r in np.logspace(-99, 2, 10): + for p in np.linspace(0, 2*np.pi, 30): + z = r*np.exp(1j*p) + dataset.append((z, complex(mpmath.ei(z)))) + dataset = np.array(dataset, dtype=np.complex_) + + FuncData(sc.expi, dataset, 0, 1).check() + + +#------------------------------------------------------------------------------ +# hyp2f1 +#------------------------------------------------------------------------------ + +@mpmath_check('0.12') +@dec.knownfailureif(True, + "Currently, special.hyp2f1 uses a *different* convention from mpmath and Mathematica for the cases a=c or b=c negative integers") +def test_hyp2f1_strange_points(): + pts = [ + (2,-1,-1,3), + (2,-2,-2,3), + ] + dataset = [p + (float(mpmath.hyp2f1(*p)),) for p in pts] + dataset = np.array(dataset, dtype=np.float_) + + FuncData(sc.hyp2f1, dataset, (0,1,2,3), 4, rtol=1e-10).check() + +@mpmath_check('0.13') +def test_hyp2f1_real_some_points(): + pts = [ + (1,2,3,0), + (1./3, 2./3, 5./6, 27./32), + (1./4, 1./2, 3./4, 80./81), + (2,-2,-3,3), + (2,-3,-2,3), + (2,-1.5,-1.5,3), + (1,2,3,0), + (0.7235, -1, -5, 0.3), + (0.25, 1./3, 2, 0.999), + (0.25, 1./3, 2, -1), + (2,3,5,0.99), + (3./2,-0.5,3,0.99), + (2,2.5,-3.25,0.999), + (-8, 18.016500331508873, 10.805295997850628, 0.90875647507000001), + (-10,900,-10.5,0.99), + (-10,900,10.5,0.99), + ] + dataset = [p + (float(mpmath.hyp2f1(*p)),) for p in pts] + dataset = np.array(dataset, dtype=np.float_) + + FuncData(sc.hyp2f1, dataset, (0,1,2,3), 4, rtol=1e-10).check() + +@mpmath_check('0.14') +def test_hyp2f1_some_points_2(): + # Taken from mpmath unit tests -- this point failed for mpmath 0.13 but + # was fixed in their SVN since then + pts = [ + (112, (51,10), (-9,10), -0.99999), + (10,-900,10.5,0.99), + (10,-900,-10.5,0.99), + ] + + def fev(x): + if isinstance(x, tuple): + return float(x[0]) / x[1] + else: + return x + + dataset = [tuple(map(fev, p)) + (float(mpmath.hyp2f1(*p)),) for p in pts] + dataset = np.array(dataset, dtype=np.float_) + + FuncData(sc.hyp2f1, dataset, (0,1,2,3), 4, rtol=1e-10).check() + +@mpmath_check('0.13') +def test_hyp2f1_real_some(): + dataset = [] + for a in [-10, -5, -1.8, 1.8, 5, 10]: + for b in [-2.5, -1, 1, 7.4]: + for c in [-9, -1.8, 5, 20.4]: + for z in [-10, -1.01, -0.99, 0, 0.6, 0.95, 1.5, 10]: + try: + v = float(mpmath.hyp2f1(a, b, c, z)) + except: + continue + dataset.append((a, b, c, z, v)) + dataset = np.array(dataset, dtype=np.float_) + FuncData(sc.hyp2f1, dataset, (0,1,2,3), 4, rtol=1e-9).check() + +@mpmath_check('0.12') +@dec.slow +def test_hyp2f1_real_random(): + dataset = [] + + npoints = 500 + dataset = np.zeros((npoints, 5), np.float_) + + 
np.random.seed(1234) + dataset[:,0] = np.random.pareto(1.5, npoints) + dataset[:,1] = np.random.pareto(1.5, npoints) + dataset[:,2] = np.random.pareto(1.5, npoints) + dataset[:,3] = 2*np.random.rand(npoints) - 1 + + dataset[:,0] *= (-1)**np.random.randint(2, npoints) + dataset[:,1] *= (-1)**np.random.randint(2, npoints) + dataset[:,2] *= (-1)**np.random.randint(2, npoints) + + for ds in dataset: + if mpmath.__version__ < '0.14': + # mpmath < 0.14 fails for c too much smaller than a, b + if abs(ds[:2]).max() > abs(ds[2]): + ds[2] = abs(ds[:2]).max() + ds[4] = float(mpmath.hyp2f1(*tuple(ds[:4]))) + + FuncData(sc.hyp2f1, dataset, (0,1,2,3), 4, rtol=1e-9).check() + +#------------------------------------------------------------------------------ +# erf (complex) +#------------------------------------------------------------------------------ + +@mpmath_check('0.14') +def test_erf_complex(): + # need to increase mpmath precision for this test + old_dps, old_prec = mpmath.mp.dps, mpmath.mp.prec + try: + mpmath.mp.dps = 70 + x1, y1 = np.meshgrid(np.linspace(-10, 1, 11), np.linspace(-10, 1, 11)) + x2, y2 = np.meshgrid(np.logspace(-10, .8, 11), np.logspace(-10, .8, 11)) + points = np.r_[x1.ravel(),x2.ravel()] + 1j*np.r_[y1.ravel(),y2.ravel()] + + # note that the global accuracy of our complex erf algorithm is limited + # roughly to 2e-8 + assert_func_equal(sc.erf, lambda x: complex(mpmath.erf(x)), points, + vectorized=False, rtol=2e-8) + finally: + mpmath.mp.dps, mpmath.mp.prec = old_dps, old_prec diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/test_orthogonal_eval.py python-scipy-0.8.0+dfsg1/scipy/special/tests/test_orthogonal_eval.py --- python-scipy-0.7.2+dfsg1/scipy/special/tests/test_orthogonal_eval.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/test_orthogonal_eval.py 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,113 @@ +from numpy.testing import * +import numpy as np +from numpy import array, sqrt +from scipy.special.orthogonal import * +from scipy.special import gamma + +from testutils import * + +def test_eval_chebyt(): + n = np.arange(0, 10000, 7) + x = 2*np.random.rand() - 1 + v1 = np.cos(n*np.arccos(x)) + v2 = eval_chebyt(n, x) + assert np.allclose(v1, v2, rtol=1e-15) + +class TestPolys(object): + """ + Check that the eval_* functions agree with the constructed polynomials + + """ + + def check_poly(self, func, cls, param_ranges=[], x_range=[], nn=10, + nparam=10, nx=10, rtol=1e-8): + np.random.seed(1234) + + dataset = [] + for n in np.arange(nn): + params = [a + (b-a)*np.random.rand(nparam) for a,b in param_ranges] + params = np.asarray(params).T + if not param_ranges: + params = [0] + for p in params: + if param_ranges: + p = (n,) + tuple(p) + else: + p = (n,) + x = x_range[0] + (x_range[1] - x_range[0])*np.random.rand(nx) + poly = np.poly1d(cls(*p)) + z = np.c_[np.tile(p, (nx,1)), x, poly(x)] + dataset.append(z) + + dataset = np.concatenate(dataset, axis=0) + + def polyfunc(*p): + p = (p[0].astype(int),) + p[1:] + return func(*p) + + ds = FuncData(polyfunc, dataset, range(len(param_ranges)+2), -1, + rtol=rtol) + ds.check() + + def test_jacobi(self): + self.check_poly(eval_jacobi, jacobi, + param_ranges=[(-0.99, 10), (-0.99, 10)], x_range=[-1, 1], + rtol=1e-5) + + def test_sh_jacobi(self): + self.check_poly(eval_sh_jacobi, sh_jacobi, + param_ranges=[(1, 10), (0, 1)], x_range=[0, 1], + rtol=1e-5) + + def test_gegenbauer(self): + self.check_poly(eval_gegenbauer, gegenbauer, + param_ranges=[(-0.499, 10)], x_range=[-1, 1], + rtol=1e-7) + + 
def test_chebyt(self): + self.check_poly(eval_chebyt, chebyt, + param_ranges=[], x_range=[-1, 1]) + + def test_chebyu(self): + self.check_poly(eval_chebyu, chebyu, + param_ranges=[], x_range=[-1, 1]) + + def test_chebys(self): + self.check_poly(eval_chebys, chebys, + param_ranges=[], x_range=[-2, 2]) + + def test_chebyc(self): + self.check_poly(eval_chebyc, chebyc, + param_ranges=[], x_range=[-2, 2]) + + def test_sh_chebyt(self): + self.check_poly(eval_sh_chebyt, sh_chebyt, + param_ranges=[], x_range=[0, 1]) + + def test_sh_chebyu(self): + self.check_poly(eval_sh_chebyu, sh_chebyu, + param_ranges=[], x_range=[0, 1]) + + def test_legendre(self): + self.check_poly(eval_legendre, legendre, + param_ranges=[], x_range=[-1, 1]) + + def test_sh_legendre(self): + self.check_poly(eval_sh_legendre, sh_legendre, + param_ranges=[], x_range=[0, 1]) + + def test_genlaguerre(self): + self.check_poly(eval_genlaguerre, genlaguerre, + param_ranges=[(-0.99, 10)], x_range=[0, 100]) + + def test_laguerre(self): + self.check_poly(eval_laguerre, laguerre, + param_ranges=[], x_range=[0, 100]) + + def test_hermite(self): + self.check_poly(eval_hermite, hermite, + param_ranges=[], x_range=[-100, 100]) + + def test_hermitenorm(self): + self.check_poly(eval_hermitenorm, hermitenorm, + param_ranges=[], x_range=[-100, 100]) diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/test_orthogonal.py python-scipy-0.8.0+dfsg1/scipy/special/tests/test_orthogonal.py --- python-scipy-0.7.2+dfsg1/scipy/special/tests/test_orthogonal.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/test_orthogonal.py 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,254 @@ +from numpy.testing import * +import numpy as np +from numpy import array, sqrt +from scipy.special.orthogonal import * +from scipy.special import gamma + +from testutils import * + + +class TestCheby(TestCase): + def test_chebyc(self): + C0 = chebyc(0) + C1 = chebyc(1) + C2 = chebyc(2) + C3 = chebyc(3) + C4 = chebyc(4) + C5 = chebyc(5) + + assert_array_almost_equal(C0.c,[2],13) + assert_array_almost_equal(C1.c,[1,0],13) + assert_array_almost_equal(C2.c,[1,0,-2],13) + assert_array_almost_equal(C3.c,[1,0,-3,0],13) + assert_array_almost_equal(C4.c,[1,0,-4,0,2],13) + assert_array_almost_equal(C5.c,[1,0,-5,0,5,0],13) + + def test_chebys(self): + S0 = chebys(0) + S1 = chebys(1) + S2 = chebys(2) + S3 = chebys(3) + S4 = chebys(4) + S5 = chebys(5) + assert_array_almost_equal(S0.c,[1],13) + assert_array_almost_equal(S1.c,[1,0],13) + assert_array_almost_equal(S2.c,[1,0,-1],13) + assert_array_almost_equal(S3.c,[1,0,-2,0],13) + assert_array_almost_equal(S4.c,[1,0,-3,0,1],13) + assert_array_almost_equal(S5.c,[1,0,-4,0,3,0],13) + + def test_chebyt(self): + T0 = chebyt(0) + T1 = chebyt(1) + T2 = chebyt(2) + T3 = chebyt(3) + T4 = chebyt(4) + T5 = chebyt(5) + assert_array_almost_equal(T0.c,[1],13) + assert_array_almost_equal(T1.c,[1,0],13) + assert_array_almost_equal(T2.c,[2,0,-1],13) + assert_array_almost_equal(T3.c,[4,0,-3,0],13) + assert_array_almost_equal(T4.c,[8,0,-8,0,1],13) + assert_array_almost_equal(T5.c,[16,0,-20,0,5,0],13) + + def test_chebyu(self): + U0 = chebyu(0) + U1 = chebyu(1) + U2 = chebyu(2) + U3 = chebyu(3) + U4 = chebyu(4) + U5 = chebyu(5) + assert_array_almost_equal(U0.c,[1],13) + assert_array_almost_equal(U1.c,[2,0],13) + assert_array_almost_equal(U2.c,[4,0,-1],13) + assert_array_almost_equal(U3.c,[8,0,-4,0],13) + assert_array_almost_equal(U4.c,[16,0,-12,0,1],13) + assert_array_almost_equal(U5.c,[32,0,-32,0,6,0],13) + +class 
TestGegenbauer(TestCase): + + def test_gegenbauer(self): + a = 5*rand()-0.5 + if np.any(a==0): a = -0.2 + Ca0 = gegenbauer(0,a) + Ca1 = gegenbauer(1,a) + Ca2 = gegenbauer(2,a) + Ca3 = gegenbauer(3,a) + Ca4 = gegenbauer(4,a) + Ca5 = gegenbauer(5,a) + + assert_array_almost_equal(Ca0.c,array([1]),13) + assert_array_almost_equal(Ca1.c,array([2*a,0]),13) + assert_array_almost_equal(Ca2.c,array([2*a*(a+1),0,-a]),13) + assert_array_almost_equal(Ca3.c,array([4*poch(a,3),0,-6*a*(a+1), + 0])/3.0,11) + assert_array_almost_equal(Ca4.c,array([4*poch(a,4),0,-12*poch(a,3), + 0,3*a*(a+1)])/6.0,11) + assert_array_almost_equal(Ca5.c,array([4*poch(a,5),0,-20*poch(a,4), + 0,15*poch(a,3),0])/15.0,11) + +class TestHermite(TestCase): + def test_hermite(self): + H0 = hermite(0) + H1 = hermite(1) + H2 = hermite(2) + H3 = hermite(3) + H4 = hermite(4) + H5 = hermite(5) + assert_array_almost_equal(H0.c,[1],13) + assert_array_almost_equal(H1.c,[2,0],13) + assert_array_almost_equal(H2.c,[4,0,-2],13) + assert_array_almost_equal(H3.c,[8,0,-12,0],13) + assert_array_almost_equal(H4.c,[16,0,-48,0,12],12) + assert_array_almost_equal(H5.c,[32,0,-160,0,120,0],12) + + def test_hermitenorm(self): + # He_n(x) = 2**(-n/2) H_n(x/sqrt(2)) + psub = np.poly1d([1.0/sqrt(2),0]) + H0 = hermitenorm(0) + H1 = hermitenorm(1) + H2 = hermitenorm(2) + H3 = hermitenorm(3) + H4 = hermitenorm(4) + H5 = hermitenorm(5) + he0 = hermite(0)(psub) + he1 = hermite(1)(psub) / sqrt(2) + he2 = hermite(2)(psub) / 2.0 + he3 = hermite(3)(psub) / (2*sqrt(2)) + he4 = hermite(4)(psub) / 4.0 + he5 = hermite(5)(psub) / (4.0*sqrt(2)) + + assert_array_almost_equal(H0.c,he0.c,13) + assert_array_almost_equal(H1.c,he1.c,13) + assert_array_almost_equal(H2.c,he2.c,13) + assert_array_almost_equal(H3.c,he3.c,13) + assert_array_almost_equal(H4.c,he4.c,13) + assert_array_almost_equal(H5.c,he5.c,13) + +class _test_sh_legendre(TestCase): + + def test_sh_legendre(self): + # P*_n(x) = P_n(2x-1) + psub = np.poly1d([2,-1]) + Ps0 = sh_legendre(0) + Ps1 = sh_legendre(1) + Ps2 = sh_legendre(2) + Ps3 = sh_legendre(3) + Ps4 = sh_legendre(4) + Ps5 = sh_legendre(5) + pse0 = legendre(0)(psub) + pse1 = legendre(1)(psub) + pse2 = legendre(2)(psub) + pse3 = legendre(3)(psub) + pse4 = legendre(4)(psub) + pse5 = legendre(5)(psub) + assert_array_almost_equal(Ps0.c,pse0.c,13) + assert_array_almost_equal(Ps1.c,pse1.c,13) + assert_array_almost_equal(Ps2.c,pse2.c,13) + assert_array_almost_equal(Ps3.c,pse3.c,13) + assert_array_almost_equal(Ps4.c,pse4.c,12) + assert_array_almost_equal(Ps5.c,pse5.c,12) + +class _test_sh_chebyt(TestCase): + + def test_sh_chebyt(self): + # T*_n(x) = T_n(2x-1) + psub = np.poly1d([2,-1]) + Ts0 = sh_chebyt(0) + Ts1 = sh_chebyt(1) + Ts2 = sh_chebyt(2) + Ts3 = sh_chebyt(3) + Ts4 = sh_chebyt(4) + Ts5 = sh_chebyt(5) + tse0 = chebyt(0)(psub) + tse1 = chebyt(1)(psub) + tse2 = chebyt(2)(psub) + tse3 = chebyt(3)(psub) + tse4 = chebyt(4)(psub) + tse5 = chebyt(5)(psub) + assert_array_almost_equal(Ts0.c,tse0.c,13) + assert_array_almost_equal(Ts1.c,tse1.c,13) + assert_array_almost_equal(Ts2.c,tse2.c,13) + assert_array_almost_equal(Ts3.c,tse3.c,13) + assert_array_almost_equal(Ts4.c,tse4.c,12) + assert_array_almost_equal(Ts5.c,tse5.c,12) + +class _test_sh_chebyu(TestCase): + + def test_sh_chebyu(self): + # U*_n(x) = U_n(2x-1) + psub = np.poly1d([2,-1]) + Us0 = sh_chebyu(0) + Us1 = sh_chebyu(1) + Us2 = sh_chebyu(2) + Us3 = sh_chebyu(3) + Us4 = sh_chebyu(4) + Us5 = sh_chebyu(5) + use0 = chebyu(0)(psub) + use1 = chebyu(1)(psub) + use2 = chebyu(2)(psub) + use3 = chebyu(3)(psub) + use4 = 
chebyu(4)(psub) + use5 = chebyu(5)(psub) + assert_array_almost_equal(Us0.c,use0.c,13) + assert_array_almost_equal(Us1.c,use1.c,13) + assert_array_almost_equal(Us2.c,use2.c,13) + assert_array_almost_equal(Us3.c,use3.c,13) + assert_array_almost_equal(Us4.c,use4.c,12) + assert_array_almost_equal(Us5.c,use5.c,11) + +class _test_sh_jacobi(TestCase): + def test_sh_jacobi(self): + # G^(p,q)_n(x) = n! gamma(n+p)/gamma(2*n+p) * P^(p-q,q-1)_n(2*x-1) + conv = lambda n,p: gamma(n+1)*gamma(n+p)/gamma(2*n+p) + psub = np.poly1d([2,-1]) + q = 4*rand() + p = q-1 + 2*rand() + #print "shifted jacobi p,q = ", p, q + G0 = sh_jacobi(0,p,q) + G1 = sh_jacobi(1,p,q) + G2 = sh_jacobi(2,p,q) + G3 = sh_jacobi(3,p,q) + G4 = sh_jacobi(4,p,q) + G5 = sh_jacobi(5,p,q) + ge0 = jacobi(0,p-q,q-1)(psub) * conv(0,p) + ge1 = jacobi(1,p-q,q-1)(psub) * conv(1,p) + ge2 = jacobi(2,p-q,q-1)(psub) * conv(2,p) + ge3 = jacobi(3,p-q,q-1)(psub) * conv(3,p) + ge4 = jacobi(4,p-q,q-1)(psub) * conv(4,p) + ge5 = jacobi(5,p-q,q-1)(psub) * conv(5,p) + + assert_array_almost_equal(G0.c,ge0.c,13) + assert_array_almost_equal(G1.c,ge1.c,13) + assert_array_almost_equal(G2.c,ge2.c,13) + assert_array_almost_equal(G3.c,ge3.c,13) + assert_array_almost_equal(G4.c,ge4.c,13) + assert_array_almost_equal(G5.c,ge5.c,13) + +class TestCall(object): + def test_call(self): + poly = [] + for n in xrange(5): + poly.extend([x.strip() for x in + (""" + jacobi(%(n)d,0.3,0.9) + sh_jacobi(%(n)d,0.3,0.9) + genlaguerre(%(n)d,0.3) + laguerre(%(n)d) + hermite(%(n)d) + hermitenorm(%(n)d) + gegenbauer(%(n)d,0.3) + chebyt(%(n)d) + chebyu(%(n)d) + chebyc(%(n)d) + chebys(%(n)d) + sh_chebyt(%(n)d) + sh_chebyu(%(n)d) + legendre(%(n)d) + sh_legendre(%(n)d) + """ % dict(n=n)).split() + ]) + for pstr in poly: + p = eval(pstr) + assert_almost_equal(p(0.315), np.poly1d(p)(0.315), err_msg=pstr) + diff -Nru python-scipy-0.7.2+dfsg1/scipy/special/tests/testutils.py python-scipy-0.8.0+dfsg1/scipy/special/tests/testutils.py --- python-scipy-0.7.2+dfsg1/scipy/special/tests/testutils.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/special/tests/testutils.py 2010-07-26 15:48:36.000000000 +0100 @@ -0,0 +1,235 @@ +import os +import warnings + +import numpy as np +from numpy.testing.noseclasses import KnownFailureTest + +import scipy.special as sc + +__all__ = ['with_special_errors', 'assert_tol_equal', 'assert_func_equal', + 'FuncData'] + +#------------------------------------------------------------------------------ +# Enable convergence and loss of precision warnings -- turn off one by one +#------------------------------------------------------------------------------ + +def with_special_errors(func): + """ + Enable special function errors (such as underflow, overflow, + loss of precision, etc.) 
+ """ + def wrapper(*a, **kw): + old_filters = list(getattr(warnings, 'filters', [])) + old_errprint = sc.errprint(1) + warnings.filterwarnings("error", category=sc.SpecialFunctionWarning) + try: + return func(*a, **kw) + finally: + sc.errprint(old_errprint) + setattr(warnings, 'filters', old_filters) + wrapper.__name__ = func.__name__ + wrapper.__doc__ = func.__doc__ + return wrapper + +#------------------------------------------------------------------------------ +# Comparing function values at many data points at once, with helpful +#------------------------------------------------------------------------------ + +def assert_tol_equal(a, b, rtol=1e-7, atol=0, err_msg='', verbose=True): + """Assert that `a` and `b` are equal to tolerance ``atol + rtol*abs(b)``""" + def compare(x, y): + return np.allclose(x, y, rtol=rtol, atol=atol) + a, b = np.asanyarray(a), np.asanyarray(b) + header = 'Not equal to tolerance rtol=%g, atol=%g' % (rtol, atol) + np.testing.utils.assert_array_compare(compare, a, b, err_msg=str(err_msg), + verbose=verbose, header=header) + +#------------------------------------------------------------------------------ +# Comparing function values at many data points at once, with helpful +# error reports +#------------------------------------------------------------------------------ + +def assert_func_equal(func, results, points, rtol=None, atol=None, + param_filter=None, knownfailure=None, + vectorized=True, dtype=None): + if hasattr(points, 'next'): + # it's a generator + points = list(points) + + points = np.asarray(points) + if points.ndim == 1: + points = points[:,None] + + if hasattr(results, '__name__'): + # function + if vectorized: + results = results(*tuple(points.T)) + else: + results = np.array([results(*tuple(p)) for p in points]) + if results.dtype == object: + try: + results = results.astype(float) + except TypeError: + results = results.astype(complex) + else: + results = np.asarray(results) + + npoints = points.shape[1] + + data = np.c_[points, results] + fdata = FuncData(func, data, range(npoints), range(npoints, data.shape[1]), + rtol=rtol, atol=atol, param_filter=param_filter, + knownfailure=knownfailure) + fdata.check() + +class FuncData(object): + """ + Data set for checking a special function. + + Parameters + ---------- + func : function + Function to test + filename : str + Input file name + param_columns : int or tuple of ints + Columns indices in which the parameters to `func` lie. + Can be imaginary integers to indicate that the parameter + should be cast to complex. + result_columns : int or tuple of ints + Column indices for expected results from `func`. + rtol : float, optional + Required relative tolerance. Default is 5*eps. + atol : float, optional + Required absolute tolerance. Default is 5*tiny. + param_filter : function, or tuple of functions/Nones, optional + Filter functions to exclude some parameter ranges. + If omitted, no filtering is done. + knownfailure : str, optional + Known failure error message to raise when the test is run. + If omitted, no exception is raised. 
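[Editor's note] The FuncData helper documented above drives the table tests added earlier in this patch (the hyp2f1, expi and Bessel checks). As a rough, editor-added sketch of the column convention it expects -- not part of the patch -- the comparison that check() performs for a plain real-valued function is essentially:

    import numpy as np
    import scipy.special as sc

    # Columns 0..k-1 hold parameters, the remaining columns expected values --
    # the layout FuncData(func, data, param_columns, result_columns) expects.
    data = np.array([[1.0, 1.0],
                     [4.0, 6.0],
                     [5.5, 52.34277778455352]])   # gamma(x) at a few points
    got = sc.gamma(data[:, 0])
    # FuncData.check() does this comparison with default tolerances of
    # 5*eps (relative) and 5*tiny (absolute), plus separate nan/+-inf matching.
    print(np.allclose(got, data[:, 1], rtol=1e-12))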
+ + """ + + def __init__(self, func, data, param_columns, result_columns, + rtol=None, atol=None, param_filter=None, knownfailure=None, + dataname=None): + self.func = func + self.data = data + self.dataname = dataname + if not hasattr(param_columns, '__len__'): + param_columns = (param_columns,) + if not hasattr(result_columns, '__len__'): + result_columns = (result_columns,) + self.param_columns = tuple(param_columns) + self.result_columns = tuple(result_columns) + self.rtol = rtol + self.atol = atol + if not hasattr(param_filter, '__len__'): + param_filter = (param_filter,) + self.param_filter = param_filter + self.knownfailure = knownfailure + + def get_tolerances(self, dtype): + info = np.finfo(dtype) + rtol, atol = self.rtol, self.atol + if rtol is None: + rtol = 5*info.eps + if atol is None: + atol = 5*info.tiny + return rtol, atol + + def check(self, data=None, dtype=None): + """Check the special function against the data.""" + + if self.knownfailure: + raise KnownFailureTest(self.knownfailure) + + if data is None: + data = self.data + + if dtype is None: + dtype = data.dtype + else: + data = data.astype(dtype) + + rtol, atol = self.get_tolerances(dtype) + + # Apply given filter functions + if self.param_filter: + param_mask = np.ones((data.shape[0],), np.bool_) + for j, filter in zip(self.param_columns, self.param_filter): + if filter: + param_mask &= filter(data[:,j]) + data = data[param_mask] + + # Pick parameters and results from the correct columns + params = [] + for j in self.param_columns: + if np.iscomplexobj(j): + j = int(j.imag) + params.append(data[:,j].astype(np.complex)) + else: + params.append(data[:,j]) + wanted = tuple([data[:,j] for j in self.result_columns]) + + # Evaluate + got = self.func(*params) + if not isinstance(got, tuple): + got = (got,) + + # Check the validity of each output returned + + assert len(got) == len(wanted) + + for output_num, (x, y) in enumerate(zip(got, wanted)): + pinf_x = np.isinf(x) & (x > 0) + pinf_y = np.isinf(y) & (x > 0) + minf_x = np.isinf(x) & (x < 0) + minf_y = np.isinf(y) & (x < 0) + nan_x = np.isnan(x) + nan_y = np.isnan(y) + + abs_y = np.absolute(y) + abs_y[~np.isfinite(abs_y)] = 0 + diff = np.absolute(x - y) + diff[~np.isfinite(diff)] = 0 + + rdiff = diff / np.absolute(y) + rdiff[~np.isfinite(rdiff)] = 0 + + tol_mask = (diff < atol + rtol*abs_y) + pinf_mask = (pinf_x == pinf_y) + minf_mask = (minf_x == minf_y) + nan_mask = (nan_x == nan_y) + + bad_j = ~(tol_mask & pinf_mask & minf_mask & nan_mask) + + if np.any(bad_j): + # Some bad results: inform what, where, and how bad + msg = [""] + msg.append("Max |adiff|: %g" % diff.max()) + msg.append("Max |rdiff|: %g" % rdiff.max()) + msg.append("Bad results for the following points (in output %d):" + % output_num) + for j in np.where(bad_j)[0]: + j = int(j) + fmt = lambda x: "%30s" % np.array2string(x[j], precision=18) + a = " ".join(map(fmt, params)) + b = " ".join(map(fmt, got)) + c = " ".join(map(fmt, wanted)) + d = fmt(rdiff) + msg.append("%s => %s != %s (rdiff %s)" % (a, b, c, d)) + assert False, "\n".join(msg) + + def __repr__(self): + """Pretty-printing, esp. 
for Nose output""" + if np.any(map(np.iscomplexobj, self.param_columns)): + is_complex = " (complex)" + else: + is_complex = "" + if self.dataname: + return "" % (self.func.__name__, is_complex, + os.path.basename(self.dataname)) + else: + return "" % (self.func.__name__, is_complex) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/distributions.py python-scipy-0.8.0+dfsg1/scipy/stats/distributions.py --- python-scipy-0.7.2+dfsg1/scipy/stats/distributions.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/distributions.py 2010-07-26 15:48:36.000000000 +0100 @@ -25,7 +25,7 @@ from scipy.special import gammaln as gamln from copy import copy import vonmises_cython -import textwrap + __all__ = [ 'rv_continuous', @@ -48,6 +48,7 @@ 'entropy', 'rv_discrete', 'binom', 'bernoulli', 'nbinom', 'geom', 'hypergeom', 'logser', 'poisson', 'planck', 'boltzmann', 'randint', 'zipf', 'dlaplace', + 'skellam' ] floatinfo = numpy.finfo(float) @@ -55,15 +56,204 @@ errp = special.errprint arr = asarray gam = special.gamma +lgam = special.gammaln + import types import stats as st +from scipy.misc import doccer all = alltrue sgf = vectorize import new + +# These are the docstring parts used for substitution in specific +# distribution docstrings. + +docheaders = {'methods':"""\nMethods\n-------\n""", + 'parameters':"""\nParameters\n---------\n""", + 'notes':"""\nNotes\n-----\n""", + 'examples':"""\nExamples\n--------\n"""} + +_doc_rvs = \ +"""rvs(%(shapes)s, loc=0, scale=1, size=1) + Random variates. +""" +_doc_pdf = \ +"""pdf(x, %(shapes)s, loc=0, scale=1) + Probability density function. +""" +_doc_pmf = \ +"""pmf(x, %(shapes)s, loc=0, scale=1) + Probability mass function. +""" +_doc_cdf = \ +"""cdf(x, %(shapes)s, loc=0, scale=1) + Cumulative density function. +""" +_doc_sf = \ +"""sf(x, %(shapes)s, loc=0, scale=1) + Survival function (1-cdf --- sometimes more accurate). +""" +_doc_ppf = \ +"""ppf(q, %(shapes)s, loc=0, scale=1) + Percent point function (inverse of cdf --- percentiles). +""" +_doc_isf = \ +"""isf(q, %(shapes)s, loc=0, scale=1) + Inverse survival function (inverse of sf). +""" +_doc_stats = \ +"""stats(%(shapes)s, loc=0, scale=1, moments='mv') + Mean('m'), variance('v'), skew('s'), and/or kurtosis('k'). +""" +_doc_entropy = \ +"""entropy(%(shapes)s, loc=0, scale=1) + (Differential) entropy of the RV. +""" +_doc_fit = \ +"""fit(data, %(shapes)s, loc=0, scale=1) + Parameter estimates for generic data. +""" +_doc_allmethods = ''.join([docheaders['methods'], _doc_rvs, _doc_pdf, + _doc_cdf, _doc_sf, _doc_ppf, _doc_isf, + _doc_stats, _doc_entropy, _doc_fit]) + +# Note that the two lines for %(shapes) are searched for and replaced in +# rv_continuous and rv_discrete - update there if the exact string changes +_doc_default_callparams = \ +""" +Parameters +---------- +x : array-like + quantiles +q : array-like + lower or upper tail probability +%(shapes)s : array-like + shape parameters +loc : array-like, optional + location parameter (default=0) +scale : array-like, optional + scale parameter (default=1) +size : int or tuple of ints, optional + shape of random variates (default computed from input arguments ) +moments : str, optional + composed of letters ['mvsk'] specifying which moments to compute where + 'm' = mean, 'v' = variance, 's' = (Fisher's) skew and + 'k' = (Fisher's) kurtosis. (default='mv') +""" +_doc_default_longsummary = \ +"""Continuous random variables are defined from a standard form and may +require some shape parameters to complete its specification. 
Any +optional keyword parameters can be passed to the methods of the RV +object as given below: +""" +_doc_default_frozen_note = \ +""" +Alternatively, the object may be called (as a function) to fix the shape, +location, and scale parameters returning a "frozen" continuous RV object: + +rv = %(name)s(%(shapes)s, loc=0, scale=1) + - Frozen RV object with the same methods but holding the given shape, + location, and scale fixed. +""" +_doc_default_example = \ +"""Examples +-------- +>>> import matplotlib.pyplot as plt +>>> numargs = %(name)s.numargs +>>> [ %(shapes)s ] = [0.9,] * numargs +>>> rv = %(name)s(%(shapes)s) + +Display frozen pdf + +>>> x = np.linspace(0, np.minimum(rv.dist.b, 3)) +>>> h = plt.plot(x, rv.pdf(x)) + +Check accuracy of cdf and ppf + +>>> prb = %(name)s.cdf(x, %(shapes)s) +>>> h = plt.semilogy(np.abs(x - %(name)s.ppf(prb, %(shapes)s)) + 1e-20) + +Random number generation + +>>> R = %(name)s.rvs(%(shapes)s, size=100) +""" + +_doc_default = ''.join([_doc_default_longsummary, + _doc_allmethods, + _doc_default_callparams, + _doc_default_frozen_note, + _doc_default_example]) + +_doc_default_before_notes = ''.join([_doc_default_longsummary, + _doc_allmethods, + _doc_default_callparams, + _doc_default_frozen_note]) + +docdict = {'rvs':_doc_rvs, + 'pdf':_doc_pdf, + 'cdf':_doc_cdf, + 'sf':_doc_sf, + 'ppf':_doc_ppf, + 'isf':_doc_isf, + 'stats':_doc_stats, + 'entropy':_doc_entropy, + 'fit':_doc_fit, + 'allmethods':_doc_allmethods, + 'callparams':_doc_default_callparams, + 'longsummary':_doc_default_longsummary, + 'frozennote':_doc_default_frozen_note, + 'example':_doc_default_example, + 'default':_doc_default, + 'before_notes':_doc_default_before_notes} + +# Reuse common content between continous and discrete docs, change some +# minor bits. +docdict_discrete = docdict.copy() + +docdict_discrete['pmf'] = _doc_pmf +_doc_disc_methods = ['rvs', 'pmf', 'cdf', 'sf', 'ppf', 'isf', 'stats', + 'entropy', 'fit'] +for obj in _doc_disc_methods: + docdict_discrete[obj] = docdict_discrete[obj].replace(', scale=1', '') +docdict_discrete.pop('pdf') + +_doc_allmethods = ''.join([docdict_discrete[obj] for obj in + _doc_disc_methods]) +docdict_discrete['allmethods'] = docheaders['methods'] + _doc_allmethods + +docdict_discrete['longsummary'] = _doc_default_longsummary.replace(\ + 'Continuous', 'Discrete') +_doc_default_frozen_note = \ +""" +Alternatively, the object may be called (as a function) to fix the shape and +location parameters returning a "frozen" continuous RV object: + +rv = %(name)s(%(shapes)s, loc=0) + - Frozen RV object with the same methods but holding the given shape and + location fixed. 
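[Editor's note] These %(...)s fragments are filled in per distribution by scipy.misc.doccer (see _construct_doc further down in this hunk). A minimal, editor-added illustration of the substitution step, assuming the scipy 0.8-era tree this patch targets, where doccer is importable from scipy.misc:

    from scipy.misc import doccer   # location used by this patch

    docdict = {'name': 'beta', 'shapes': 'a, b'}
    template = """rvs(%(shapes)s, loc=0, scale=1)
        Random variates of the %(name)s distribution.
    """
    # docformat performs the %-substitution (re-indenting multi-line values)
    print(doccer.docformat(template, docdict))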
+""" +docdict_discrete['frozennote'] = _doc_default_frozen_note + +docdict_discrete['example'] = _doc_default_example.replace('[0.9,]', + 'Replace with reasonable value') + +_doc_default_disc = ''.join([docdict_discrete['longsummary'], + docdict_discrete['allmethods'], + docdict_discrete['frozennote'], + docdict_discrete['example']]) +docdict_discrete['default'] = _doc_default_disc + + +# clean up all the separate docstring elements, we do not need them anymore +for obj in [s for s in dir() if s.startswith('_doc_')]: + exec('del ' + obj) +del s, obj + + def _build_random_array(fun, args, size=None): # Build an array by applying function fun to # the arguments in args, creating an array with @@ -238,15 +428,31 @@ # This should be rewritten def argsreduce(cond, *args): - """Return a sequence of arguments converted to the dimensions of cond + """Return the sequence of ravel(args[i]) where ravel(condition) is + True in 1D. + + Examples + -------- + >>> import numpy as np + >>> rand = np.random.random_sample + >>> A = rand((4,5)) + >>> B = 2 + >>> C = rand((1,5)) + >>> cond = np.ones(A.shape) + >>> [A1,B1,C1] = argsreduce(cond,A,B,C) + >>> B1.shape + (20,) + >>> cond[2,:] = 0 + >>> [A2,B2,C2] = argsreduce(cond,A,B,C) + >>> B2.shape + (15,) + """ - newargs = list(args) + newargs = atleast_1d(*args) + if not isinstance(newargs, list): + newargs = [newargs,] expand_arr = (cond==cond) - for k in range(len(args)): - # make sure newarr is not a scalar - newarr = atleast_1d(args[k]) - newargs[k] = extract(cond,newarr*expand_arr) - return newargs + return [extract(cond, arr1 * expand_arr) for arr1 in newargs] class rv_generic(object): """Class which encapsulates common functionality between rv_discrete @@ -311,7 +517,7 @@ if self._size > 1: size = numpy.array(size, ndmin=1) - if scale == 0: + if np.all(scale == 0): return loc*ones(size, 'd') vals = self._rvs(*args) @@ -332,50 +538,86 @@ class rv_continuous(rv_generic): """ - A Generic continuous random variable. + A generic continuous random variable class meant for subclassing. - Continuous random variables are defined from a standard form and may - require some shape parameters to complete its specification. Any - optional keyword parameters can be passed to the methods of the RV - object as given below: + `rv_continuous` is a base class to construct specific distribution classes + and instances from for continuous random variables. It cannot be used + directly as a distribution. + + Parameters + ---------- + momtype : int, optional + The type of generic moment calculation to use (check this). + a : float, optional + Lower bound of the support of the distribution, default is minus + infinity. + b : float, optional + Upper bound of the support of the distribution, default is plus + infinity. + xa : float, optional + Lower bound for fixed point calculation for generic ppf. + xb : float, optional + Upper bound for fixed point calculation for generic ppf. + xtol : float, optional + The tolerance for fixed point calculation for generic ppf. + badvalue : object, optional + The value in a result arrays that indicates a value that for which + some argument restriction is violated, default is np.nan. + name : str, optional + The name of the instance. This string is used to construct the default + example for distributions. + longname : str, optional + This string is used as part of the first line of the docstring returned + when a subclass has no docstring of its own. Note: `longname` exists + for backwards compatibility, do not use for new subclasses. 
+ shapes : str, optional + The shape of the distribution. For example ``"m, n"`` for a + distribution that takes two integers as the two shape arguments for all + its methods. + extradoc : str, optional + This string is used as the last part of the docstring returned when a + subclass has no docstring of its own. Note: `extradoc` exists for + backwards compatibility, do not use for new subclasses. Methods ------- - generic.rvs(,loc=0,scale=1,size=1) - - random variates + rvs(, loc=0, scale=1, size=1) + random variates - generic.pdf(x,,loc=0,scale=1) - - probability density function + pdf(x, , loc=0, scale=1) + probability density function - generic.cdf(x,,loc=0,scale=1) - - cumulative density function + cdf(x, , loc=0, scale=1) + cumulative density function - generic.sf(x,,loc=0,scale=1) - - survival function (1-cdf --- sometimes more accurate) + sf(x, , loc=0, scale=1) + survival function (1-cdf --- sometimes more accurate) - generic.ppf(q,,loc=0,scale=1) - - percent point function (inverse of cdf --- percentiles) + ppf(q, , loc=0, scale=1) + percent point function (inverse of cdf --- quantiles) - generic.isf(q,,loc=0,scale=1) - - inverse survival function (inverse of sf) + isf(q, , loc=0, scale=1) + inverse survival function (inverse of sf) - generic.stats(,loc=0,scale=1,moments='mv') - - mean('m'), variance('v'), skew('s'), and/or kurtosis('k') + moments(n, ) + non-central n-th moment of the standard distribution (oc=0, scale=1) - generic.entropy(,loc=0,scale=1) - - (differential) entropy of the RV. + stats(, loc=0, scale=1, moments='mv') + mean('m'), variance('v'), skew('s'), and/or kurtosis('k') - generic.fit(data,,loc=0,scale=1) - - Parameter estimates for generic data + entropy(, loc=0, scale=1) + (differential) entropy of the RV. - Alternatively, the object may be called (as a function) to fix the shape, - location, and scale parameters returning a "frozen" continuous RV object: + fit(data, , loc=0, scale=1) + Parameter estimates for generic data - rv = generic(,loc=0,scale=1) - - frozen RV object with the same methods but holding the given shape, location, and scale fixed + __call__(, loc=0, scale=1) + calling a distribution instance creates a frozen RV object with the + same methods but holding the given shape, location, and scale fixed. + see Notes section + + **Parameters for Methods** - Parameters - ---------- x : array-like quantiles q : array-like @@ -392,30 +634,94 @@ composed of letters ['mvsk'] specifying which moments to compute where 'm' = mean, 'v' = variance, 's' = (Fisher's) skew and 'k' = (Fisher's) kurtosis. (default='mv') + n : int + order of moment to calculate in method moments - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> numargs = generic.numargs - >>> [ ] = [0.9,]*numargs - >>> rv = generic() + **Methods that can be overwritten by subclasses** + :: + + _rvs + _pdf + _cdf + _sf + _ppf + _isf + _stats + _munp + _entropy + _argcheck + + There are additional (internal and private) generic methods that can + be useful for cross-checking and for debugging, but might work in all + cases when directly called. 
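[Editor's note] The __call__ entry in the method table above is the "frozen distribution" mechanism described in the Notes that follow. A small editor-added usage sketch with an existing distribution (values approximate, not part of the patch):

    from scipy import stats

    rv = stats.gamma(2.0, loc=0.0, scale=3.0)   # shape, loc, scale fixed once
    print(rv.pdf(1.0))   # ~0.0796, same as stats.gamma.pdf(1.0, 2.0, scale=3.0)
    print(rv.cdf(6.0))   # ~0.594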
+ + + Notes + ----- + + **Frozen Distribution** + + Alternatively, the object may be called (as a function) to fix the shape, + location, and scale parameters returning a "frozen" continuous RV object: + + rv = generic(, loc=0, scale=1) + frozen RV object with the same methods but holding the given shape, + location, and scale fixed + + **Subclassing** + + New random variables can be defined by subclassing rv_continuous class + and re-defining at least the + + _pdf or the cdf method which will be given clean arguments (in between a + and b) and passing the argument check method + + If postive argument checking is not correct for your RV + then you will also need to re-define :: + + _argcheck - Display frozen pdf + Correct, but potentially slow defaults exist for the remaining + methods but for speed and/or accuracy you can over-ride :: - >>> x = np.linspace(0,np.minimum(rv.dist.b,3)) - >>> h=plt.plot(x,rv.pdf(x)) + _cdf, _ppf, _rvs, _isf, _sf - Check accuracy of cdf and ppf + Rarely would you override _isf and _sf but you could. - >>> prb = generic.cdf(x,) - >>> h=plt.semilogy(np.abs(x-generic.ppf(prb,c))+1e-20) + Statistics are computed using numerical integration by default. + For speed you can redefine this using - Random number generation + _stats + - take shape parameters and return mu, mu2, g1, g2 + - If you can't compute one of these, return it as None + - Can also be defined with a keyword argument moments= + where is a string composed of 'm', 'v', 's', + and/or 'k'. Only the components appearing in string + should be computed and returned in the order 'm', 'v', + 's', or 'k' with missing values returned as None - >>> R = generic.rvs(,size=100) + OR + + You can override + + _munp + takes n and shape parameters and returns + the nth non-central moment of the distribution. + + + Examples + -------- + To create a new Gaussian distribution, we would do the following:: + + class gaussian_gen(rv_continuous): + "Gaussian distribution" + def _pdf: + ... + ... """ + def __init__(self, momtype=1, a=None, b=None, xa=-10.0, xb=10.0, xtol=1e-14, badvalue=None, name=None, longname=None, shapes=None, extradoc=None): @@ -454,7 +760,7 @@ self.vecentropy = sgf(self._entropy,otypes='d') self.vecentropy.nin = self.numargs + 1 self.veccdf = sgf(self._cdf_single_call,otypes='d') - self.veccdf.nin = self.numargs+1 + self.veccdf.nin = self.numargs + 1 self.shapes = shapes self.extradoc = extradoc if momtype == 0: @@ -468,20 +774,41 @@ if name[0] in ['aeiouAEIOU']: hstr = "An " else: hstr = "A " longname = hstr + name + + # generate docstring for subclass instances if self.__doc__ is None: - self.__doc__ = rv_continuous.__doc__ - if self.__doc__ is not None: - self.__doc__ = textwrap.dedent(self.__doc__) - if longname is not None: - self.__doc__ = self.__doc__.replace("A Generic",longname) - if name is not None: - self.__doc__ = self.__doc__.replace("generic",name) - if shapes is None: - self.__doc__ = self.__doc__.replace(",","") - else: - self.__doc__ = self.__doc__.replace("",shapes) - if extradoc is not None: - self.__doc__ += textwrap.dedent(extradoc) + self._construct_default_doc(longname=longname, extradoc=extradoc) + else: + self._construct_doc() + + ## This only works for old-style classes... 
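[Editor's note] The truncated gaussian_gen sketch in the docstring above can be completed as follows; this is an editor-added illustration of the subclassing contract (only _pdf is defined, everything else falls back to the numerical generics):

    import numpy as np
    from scipy import stats

    class gaussian_gen(stats.rv_continuous):
        """Standard normal defined only through _pdf."""
        def _pdf(self, x):
            return np.exp(-x**2 / 2.0) / np.sqrt(2 * np.pi)

    gaussian = gaussian_gen(name='gaussian')
    print(gaussian.cdf(0.0))     # ~0.5, from numerical integration of _pdf
    print(gaussian.ppf(0.975))   # ~1.96, solved from the generic cdf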
+ # self.__class__.__doc__ = self.__doc__ + + def _construct_default_doc(self, longname=None, extradoc=None): + """Construct instance docstring from the default template.""" + if extradoc.startswith('\n\n'): + extradoc = extradoc[2:] + self.__doc__ = ''.join(['%s continuous random variable.'%longname, + '\n\n%(before_notes)s\n', docheaders['notes'], + extradoc, '\n%(example)s']) + self._construct_doc() + + def _construct_doc(self): + """Construct the instance docstring with string substitutions.""" + tempdict = docdict.copy() + tempdict['name'] = self.name or 'distname' + tempdict['shapes'] = self.shapes or '' + + if self.shapes is None: + # remove shapes from call parameters if there are none + for item in ['callparams', 'default', 'before_notes']: + tempdict[item] = tempdict[item].replace(\ + "\n%(shapes)s : array-like\n shape parameters", "") + for i in range(2): + if self.shapes is None: + # necessary because we use %(shapes)s in two forms (w w/o ", ") + self.__doc__ = self.__doc__.replace("%(shapes)s, ", "") + self.__doc__ = doccer.docformat(self.__doc__, tempdict) def _ppf_to_solve(self, x, q,*args): return apply(self.cdf, (x, )+args)-q @@ -924,6 +1251,24 @@ return -sum(log(self._pdf(x, *args)),axis=0) def nnlf(self, theta, x): + """Negative log likelihood function. + + This function should be minimized to produce maximum likelihood estimates (MLE). + + Paramters + --------- + theta : array-like + Parameters that the log-likelihood function depends on (shape, loc, scale) + where loc and scale are always the last two parameters. + x : array-like + The value of x to evaluate the log-likelihood function at (the observed data). + + Returns + ------- + nnlf : float + For an array of x values, this reeturns the sum (along axis=0) of the log-likelihood + (i.e. assumes independent observations). + """ # - sum (log pdf(x, theta),axis=0) # where theta are the parameters (including loc and scale) # @@ -1070,6 +1415,8 @@ return _norm_pdf(x) def _cdf(self,x): return _norm_cdf(x) + def _sf(self, x): + return _norm_cdf(-x) def _ppf(self,q): return _norm_ppf(q) def _isf(self,q): @@ -1176,7 +1523,7 @@ g2 = 6.0*(a**3 + a**2*(1-2*b) + b**2*(1+b) - 2*a*b*(2+b)) g2 /= a*b*(a+b+2)*(a+b+3) return mn, var, g1, g2 -beta = beta_gen(a=0.0, b=1.0, name='beta',shapes='a,b',extradoc=""" +beta = beta_gen(a=0.0, b=1.0, name='beta',shapes='a, b',extradoc=""" Beta distribution @@ -1210,7 +1557,7 @@ *(b-2.0)*(b-1.0)), inf) else: raise NotImplementedError -betaprime = betaprime_gen(a=0.0, b=500.0, name='betaprime', shapes='a,b', +betaprime = betaprime_gen(a=0.0, b=500.0, name='betaprime', shapes='a, b', extradoc=""" Beta prime distribution @@ -1290,7 +1637,7 @@ g2 -= 3*g1c**4 * g1cd**4 -4*gd**2*g3c*g1c*g1cd*g3cd return mu, mu2, g1, g2 burr = burr_gen(a=0.0, name='burr', longname="Burr", - shapes="c,d", extradoc=""" + shapes="c, d", extradoc=""" Burr distribution @@ -1387,9 +1734,10 @@ def _rvs(self, df): return mtrand.chisquare(df,self._size) def _pdf(self, x, df): - Px = x**(df/2.0-1)*exp(-x/2.0) - Px /= special.gamma(df/2.0)* 2**(df/2.0) - return Px + return exp((df/2.-1)*log(x)-x/2.-gamln(df/2.)-(log(2)*df)/2.) 
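[Editor's note] The rewritten chi-squared _pdf above evaluates everything in log space via gammaln before a single exp; the direct form, kept commented out just below, overflows for moderate df. A quick editor-added comparison, not part of the patch:

    import numpy as np
    from scipy.special import gamma, gammaln

    def chi2_pdf_direct(x, df):
        # old form: gamma(df/2), x**(df/2-1) overflow for large df
        return x**(df/2.0 - 1) * np.exp(-x/2.0) / (gamma(df/2.0) * 2**(df/2.0))

    def chi2_pdf_log(x, df):
        # patched form: log-space evaluation, one exp at the end
        return np.exp((df/2.0 - 1)*np.log(x) - x/2.0
                      - gammaln(df/2.0) - (np.log(2.0)*df)/2.0)

    x, df = np.float64(400.0), 400
    with np.errstate(over='ignore', invalid='ignore'):
        print(chi2_pdf_direct(x, df))   # nan: inf/inf after overflow
    print(chi2_pdf_log(x, df))          # ~0.0141, finite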
+## Px = x**(df/2.0-1)*exp(-x/2.0) +## Px /= special.gamma(df/2.0)* 2**(df/2.0) +## return Px def _cdf(self, x, df): return special.chdtr(df, x) def _sf(self, x, df): @@ -1566,7 +1914,7 @@ return (-log1p(-q**(1.0/a)))**arr(1.0/c) exponweib = exponweib_gen(a=0.0,name='exponweib', longname="An exponentiated Weibull", - shapes="a,c",extradoc=""" + shapes="a, c",extradoc=""" Exponentiated Weibull distribution @@ -1687,7 +2035,7 @@ g2 = 3/(2*v2-16)*(8+g1*g1*(v2-6)) g2 = where(v2 > 8, g2, nan) return mu, mu2, g1, g2 -f = f_gen(a=0.0,name='f',longname='An F',shapes="dfn,dfd", +f = f_gen(a=0.0,name='f',longname='An F',shapes="dfn, dfd", extradoc=""" F distribution @@ -1884,7 +2232,7 @@ return -expm1((-a-b)*x + b*(-expm1(-c*x))/c) genexpon = genexpon_gen(a=0.0,name='genexpon', longname='A generalized exponential', - shapes='a,b,c',extradoc=""" + shapes='a, b, c',extradoc=""" Generalized exponential distribution (Ryu 1993) @@ -2041,7 +2389,7 @@ return a*(1-val) + 1.0/c*val + special.gammaln(a)-log(abs(c)) gengamma = gengamma_gen(a=0.0, name='gengamma', longname='A generalized gamma', - shapes="a,c", extradoc=""" + shapes="a, c", extradoc=""" Generalized gamma distribution @@ -2272,7 +2620,7 @@ return fac*num / den gausshyper = gausshyper_gen(a=0.0, b=1.0, name='gausshyper', longname="A Gauss hypergeometric", - shapes="a,b,c,z", + shapes="a, b, c, z", extradoc=""" Gauss hypergeometric distribution @@ -2375,7 +2723,7 @@ return 1.0/(1+exp(-1.0/b*(norm.ppf(q)-a))) johnsonsb = johnsonsb_gen(a=0.0,b=1.0,name='johnsonb', longname="A Johnson SB", - shapes="a,b",extradoc=""" + shapes="a, b",extradoc=""" Johnson SB distribution @@ -2397,7 +2745,7 @@ def _ppf(self, q, a, b): return sinh((norm.ppf(q)-a)/b) johnsonsu = johnsonsu_gen(name='johnsonsu',longname="A Johnson SU", - shapes="a,b", extradoc=""" + shapes="a, b", extradoc=""" Johnson SU distribution @@ -2660,10 +3008,27 @@ # MAXWELL -# a special case of chi with df = 3, loc=0.0, and given scale = 1.0/sqrt(a) -# where a is the parameter used in mathworld description class maxwell_gen(rv_continuous): + """A Maxwell continuous random variable. + + %(before_notes)s + + Notes + ----- + A special case of a `chi` distribution, with ``df = 3``, ``loc = 0.0``, + and given ``scale = 1.0 / sqrt(a)``, where a is the parameter used in + the Mathworld description [1]_. + + Probability density function. Given by :math:`\sqrt(2/\pi)x^2 exp(-x^2/2)` + for ``x > 0``. + + References + ---------- + .. 
[1] http://mathworld.wolfram.com/MaxwellDistribution.html + + %(example)s + """ def _rvs(self): return chi.rvs(3.0,size=self._size) def _pdf(self, x): @@ -2678,8 +3043,7 @@ (-12*pi*pi + 160*pi - 384) / val**2.0 def _entropy(self): return _EULER + 0.5*log(2*pi)-0.5 -maxwell = maxwell_gen(a=0.0, name='maxwell', longname="A Maxwell", - extradoc=""" +maxwell = maxwell_gen(a=0.0, name='maxwell', extradoc=""" Maxwell distribution @@ -2688,6 +3052,7 @@ """ ) + # Mielke's Beta-Kappa class mielke_gen(rv_continuous): @@ -2699,7 +3064,7 @@ qsk = pow(q,s*1.0/k) return pow(qsk/(1.0-qsk),1.0/s) mielke = mielke_gen(a=0.0, name='mielke', longname="A Mielke's Beta-Kappa", - shapes="k,s", extradoc=""" + shapes="k, s", extradoc=""" Mielke's Beta-Kappa distribution @@ -2755,7 +3120,7 @@ return df + nc, 2*val, sqrt(8)*(val+nc)/val**1.5, \ 12.0*(val+2*nc)/val**2.0 ncx2 = ncx2_gen(a=0.0, name='ncx2', longname="A non-central chi-squared", - shapes="df,nc", extradoc=""" + shapes="df, nc", extradoc=""" Non-central chi-squared distribution @@ -2798,7 +3163,7 @@ ((dfd-2.0)**2.0 * (dfd-4.0))) return mu, mu2, None, None ncf = ncf_gen(a=0.0, name='ncf', longname="A non-central F distribution", - shapes="dfn,dfd,nc", extradoc=""" + shapes="dfn, dfd, nc", extradoc=""" Non-central F distribution @@ -2827,8 +3192,12 @@ return Px def _cdf(self, x, df): return special.stdtr(df, x) + def _sf(self, x, df): + return special.stdtr(df, -x) def _ppf(self, q, df): return special.stdtrit(df, q) + def _isf(self, q, df): + return -special.stdtrit(df, q) def _stats(self, df): mu2 = where(df > 2, df / (df-2.0), inf) g1 = where(df > 3, 0.0, nan) @@ -2899,7 +3268,7 @@ g2 = g2n / g2d return mu, mu2, g1, g2 nct = nct_gen(name="nct", longname="A Noncentral T", - shapes="df,nc", extradoc=""" + shapes="df, nc", extradoc=""" Non-central Student T distribution @@ -3004,7 +3373,7 @@ Power-function distribution -powerlaw.pdf(x,a) = a**x**(a-1) +powerlaw.pdf(x,a) = a*x**(a-1) for 0 <= x <= 1, a > 0. """ ) @@ -3021,7 +3390,7 @@ return exp(-s*norm.ppf(pow(1.0-q,1.0/c))) powerlognorm = powerlognorm_gen(a=0.0, name="powerlognorm", longname="A power log-normal", - shapes="c,s", extradoc=""" + shapes="c, s", extradoc=""" Power log-normal distribution @@ -3122,7 +3491,7 @@ return 0.5*log(a*b)+log(log(b/a)) reciprocal = reciprocal_gen(name="reciprocal", longname="A reciprocal", - shapes="a,b", extradoc=""" + shapes="a, b", extradoc=""" Reciprocal distribution @@ -3295,7 +3664,7 @@ mu2 = 1 + (a*pA - b*pB) / d - mu*mu return mu, mu2, None, None truncnorm = truncnorm_gen(name='truncnorm', longname="A truncated normal", - shapes="a,b", extradoc=""" + shapes="a, b", extradoc=""" Truncated Normal distribution. @@ -3415,6 +3784,17 @@ ## Wald distribution (Inverse Normal with shape parameter mu=1.0) class wald_gen(invnorm_gen): + """A Wald continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function, `pdf`, is defined by + ``1/sqrt(2*pi*x**3) * exp(-(x-1)**2/(2*x))``, for ``x > 0``. 
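[Editor's note] The Wald density quoted in the new docstring above (standard form, mu = 1) can be spot-checked directly against the distribution object; an editor-added check, not part of the patch:

    import numpy as np
    from scipy import stats

    x = np.array([0.5, 1.0, 2.0, 5.0])
    manual = 1.0/np.sqrt(2*np.pi*x**3) * np.exp(-(x - 1)**2 / (2*x))
    print(np.allclose(manual, stats.wald.pdf(x)))   # True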
+ + %(example)s + """ def _rvs(self): return invnorm_gen._rvs(self, 1.0) def _pdf(self, x): @@ -3423,8 +3803,7 @@ return invnorm.cdf(x,1,0) def _stats(self): return 1.0, 1.0, 3.0, 15.0 -wald = wald_gen(a=0.0, name="wald", longname="A Wald", - extradoc=""" +wald = wald_gen(a=0.0, name="wald", extradoc=""" Wald distribution @@ -3433,6 +3812,8 @@ """ ) + + ## Weibull ## See Frechet @@ -3449,15 +3830,17 @@ c1 = x self.moment_tol)): - diff = pos**n * self.pmf(pos,*args) + diff = np.power(pos, n) * self.pmf(pos,*args) # use pmf because _pmf does not check support in randint # and there might be problems ? with correct self.a, self.b at this stage tot += diff @@ -3566,7 +3949,8 @@ pos = -self.inc while (pos >= self.a) and ((pos >= llimit) or \ (diff > self.moment_tol)): - diff = pos**n * self.pmf(pos,*args) #using pmf instead of _pmf + diff = np.power(pos, n) * self.pmf(pos,*args) + #using pmf instead of _pmf, see above tot += diff pos -= self.inc count += 1 @@ -3638,79 +4022,137 @@ class rv_discrete(rv_generic): """ - A Generic discrete random variable. + A generic discrete random variable class meant for subclassing. + + `rv_discrete` is a base class to construct specific distribution classes + and instances from for discrete random variables. rv_discrete can be used + to construct an arbitrary distribution with defined by a list of support + points and the corresponding probabilities. + + Parameters + ---------- + a : float, optional + Lower bound of the support of the distribution, default: 0 + b : float, optional + Upper bound of the support of the distribution, default: plus infinity + moment_tol : float, optional + The tolerance for the generic calculation of moments + values : tuple of two array_like + (xk, pk) where xk are points (integers) with positive probability pk + with sum(pk) = 1 + inc : integer + increment for the support of the distribution, default: 1 + other values have not been tested + badvalue : object, optional + The value in (masked) arrays that indicates a value that should be + ignored. + name : str, optional + The name of the instance. This string is used to construct the default + example for distributions. + longname : str, optional + This string is used as part of the first line of the docstring returned + when a subclass has no docstring of its own. Note: `longname` exists + for backwards compatibility, do not use for new subclasses. + shapes : str, optional + The shape of the distribution. For example ``"m, n"`` for a + distribution that takes two integers as the first two arguments for all + its methods. + extradoc : str, optional + This string is used as the last part of the docstring returned when a + subclass has no docstring of its own. Note: `extradoc` exists for + backwards compatibility, do not use for new subclasses. - Discrete random variables are defined from a standard form and may require - some shape parameters to complete its specification. 
Any optional keyword - parameters can be passed to the methods of the RV object as given below: Methods ------- - generic.rvs(,loc=0,size=1) - - random variates - generic.pmf(x,,loc=0) - - probability mass function + generic.rvs(, loc=0, size=1) + random variates + + generic.pmf(x, , loc=0) + probability mass function - generic.cdf(x,,loc=0) - - cumulative density function + generic.cdf(x, , loc=0) + cumulative density function - generic.sf(x,,loc=0) - - survival function (1-cdf --- sometimes more accurate) + generic.sf(x, , loc=0) + survival function (1-cdf --- sometimes more accurate) - generic.ppf(q,,loc=0) - - percent point function (inverse of cdf --- percentiles) + generic.ppf(q, , loc=0) + percent point function (inverse of cdf --- percentiles) - generic.isf(q,,loc=0) - - inverse survival function (inverse of sf) + generic.isf(q, , loc=0) + inverse survival function (inverse of sf) - generic.stats(,loc=0,moments='mv') - - mean('m',axis=0), variance('v'), skew('s'), and/or kurtosis('k') + generic.stats(, loc=0, moments='mv') + mean('m', axis=0), variance('v'), skew('s'), and/or kurtosis('k') - generic.entropy(,loc=0) - - entropy of the RV + generic.entropy(, loc=0) + entropy of the RV + + generic(, loc=0) + calling a distribution instance returns a frozen distribution + + Notes + ----- Alternatively, the object may be called (as a function) to fix the shape and location parameters returning a "frozen" discrete RV object: - myrv = generic(,loc=0) - - frozen RV object with the same methods but holding the given shape and location fixed. + myrv = generic(, loc=0) + - frozen RV object with the same methods but holding the given shape + and location fixed. You can construct an aribtrary discrete rv where P{X=xk} = pk - by passing to the rv_discrete initialization method (through the values= - keyword) a tuple of sequences (xk,pk) which describes only those values of - X (xk) that occur with nonzero probability (pk). + by passing to the rv_discrete initialization method (through the + values=keyword) a tuple of sequences (xk, pk) which describes only those + values of X (xk) that occur with nonzero probability (pk). + + To create a new discrete distribution, we would do the following:: + + class poisson_gen(rv_continuous): + #"Poisson distribution" + def _pmf(self, k, mu): + ... + + and create an instance + + poisson = poisson_gen(name="poisson", shapes="mu", longname='A Poisson') + + The docstring can be created from a template. 
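[Editor's note] The poisson_gen outline above can be completed roughly as follows (note it subclasses rv_discrete, even though the docstring sketch says rv_continuous); an editor-added sketch, not part of the patch:

    import numpy as np
    from scipy import stats
    from scipy.special import gammaln

    class poisson_gen(stats.rv_discrete):
        """Poisson distribution defined only through _pmf."""
        def _pmf(self, k, mu):
            # exp(k*log(mu) - mu - log(k!)), evaluated in log space
            return np.exp(k*np.log(mu) - mu - gammaln(k + 1))

    poisson = poisson_gen(name="poisson", shapes="mu", longname='A Poisson')
    print(poisson.pmf(3, 2.0))   # ~0.180
    print(poisson.cdf(3, 2.0))   # ~0.857, summed from the pmf by the generic cdf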
+ Examples -------- >>> import matplotlib.pyplot as plt >>> numargs = generic.numargs - >>> [ ] = ['Replace with resonable value',]*numargs + >>> [ ] = ['Replace with resonable value', ]*numargs Display frozen pmf: >>> rv = generic() - >>> x = np.arange(0,np.min(rv.dist.b,3)+1) - >>> h = plt.plot(x,rv.pmf(x)) + >>> x = np.arange(0, np.min(rv.dist.b, 3)+1) + >>> h = plt.plot(x, rv.pmf(x)) Check accuracy of cdf and ppf: - >>> prb = generic.cdf(x,) - >>> h = plt.semilogy(np.abs(x-generic.ppf(prb,))+1e-20) + >>> prb = generic.cdf(x, ) + >>> h = plt.semilogy(np.abs(x-generic.ppf(prb, ))+1e-20) Random number generation: - >>> R = generic.rvs(,size=100) + >>> R = generic.rvs(, size=100) Custom made discrete distribution: - >>> vals = [arange(7),(0.1,0.2,0.3,0.1,0.1,0.1,0.1)] - >>> custm = rv_discrete(name='custm',values=vals) - >>> h = plt.plot(vals[0],custm.pmf(vals[0])) + >>> vals = [arange(7), (0.1, 0.2, 0.3, 0.1, 0.1, 0.1, 0.1)] + >>> custm = rv_discrete(name='custm', values=vals) + >>> h = plt.plot(vals[0], custm.pmf(vals[0])) """ + def __init__(self, a=0, b=inf, name=None, badvalue=None, moment_tol=1e-8,values=None,inc=1,longname=None, shapes=None, extradoc=None): @@ -3777,30 +4219,44 @@ self._vecppf = new.instancemethod(_vppf, self, rv_discrete) - - #now that self.numargs is defined, we can adjust nin self._cdfvec.nin = self.numargs + 1 - if longname is None: - if name[0] in ['aeiouAEIOU']: hstr = "An " - else: hstr = "A " - longname = hstr + name + # generate docstring for subclass instances if self.__doc__ is None: - self.__doc__ = rv_discrete.__doc__ - if self.__doc__ is not None: - self.__doc__ = textwrap.dedent(self.__doc__) - self.__doc__ = self.__doc__.replace("A Generic",longname) - if name is not None: - self.__doc__ = self.__doc__.replace("generic",name) - if shapes is None: - self.__doc__ = self.__doc__.replace(",","") - else: - self.__doc__ = self.__doc__.replace("",shapes) - ind = self.__doc__.find("You can construct an arbitrary") - self.__doc__ = self.__doc__[:ind].strip() - if extradoc is not None: - self.__doc__ += textwrap.dedent(extradoc) + self._construct_default_doc(longname=longname, extradoc=extradoc) + else: + self._construct_doc() + + ## This only works for old-style classes... 
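[Editor's note] A self-contained, editor-added version of the "custom made discrete distribution" doctest above, with the imports spelled out and no plotting:

    import numpy as np
    from scipy import stats

    xk = np.arange(7)
    pk = (0.1, 0.2, 0.3, 0.1, 0.1, 0.1, 0.1)
    custm = stats.rv_discrete(name='custm', values=(xk, pk))
    print(custm.pmf(2))          # 0.3
    print(custm.rvs(size=5))     # five draws from {0, ..., 6}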
+ # self.__class__.__doc__ = self.__doc__ + + def _construct_default_doc(self, longname=None, extradoc=None): + """Construct instance docstring from the rv_discrete template.""" + if extradoc.startswith('\n\n'): + extradoc = extradoc[2:] + self.__doc__ = ''.join(['%s discrete random variable.'%longname, + '\n\n%(before_notes)s\n', docheaders['notes'], + extradoc, '\n%(example)s']) + self._construct_doc() + + def _construct_doc(self): + """Construct the instance docstring with string substitutions.""" + tempdict = docdict_discrete.copy() + tempdict['name'] = self.name or 'distname' + tempdict['shapes'] = self.shapes or '' + + if self.shapes is None: + # remove shapes from call parameters if there are none + for item in ['callparams', 'default', 'before_notes']: + tempdict[item] = tempdict[item].replace(\ + "\n%(shapes)s : array-like\n shape parameters", "") + for i in range(2): + if self.shapes is None: + # necessary because we use %(shapes)s in two forms (w w/o ", ") + self.__doc__ = self.__doc__.replace("%(shapes)s, ", "") + self.__doc__ = doccer.docformat(self.__doc__, tempdict) + def _rvs(self, *args): return self._ppf(mtrand.random_sample(self._size),*args) @@ -4273,6 +4729,11 @@ def _argcheck(self, n, pr): self.b = n return (n>=0) & (pr >= 0) & (pr <= 1) + def _pmf(self, x, n, pr): + k = floor(x) + combiln = (special.gammaln(n+1) - (special.gammaln(k+1) + + special.gammaln(n-k+1))) + return np.exp(combiln + k*np.log(pr) + (n-k)*np.log(1-pr)) def _cdf(self, x, n, pr): k = floor(x) vals = special.bdtr(k,n,pr) @@ -4297,7 +4758,7 @@ vals = self._pmf(k,n,pr) lvals = where(vals==0,0.0,log(vals)) return -sum(vals*lvals,axis=0) -binom = binom_gen(name='binom',shapes="n,pr",extradoc=""" +binom = binom_gen(name='binom',shapes="n, pr",extradoc=""" Binomial distribution @@ -4315,6 +4776,8 @@ return binom_gen._rvs(self, 1, pr) def _argcheck(self, pr): return (pr >=0 ) & (pr <= 1) + def _pmf(self, x, pr): + return binom_gen._pmf(self, x, 1, pr) def _cdf(self, x, pr): return binom_gen._cdf(self, x, 1, pr) def _sf(self, x, pr): @@ -4340,6 +4803,17 @@ # Negative binomial class nbinom_gen(rv_discrete): + """A negative binomial discrete random variable. + + %(before_notes)s + + Notes + ----- + Probability mass function, given by + ``np.choose(k+n-1, n-1) * p**n * (1-p)**k`` for ``k >= 0``. 
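
The binom._pmf added in this hunk evaluates the mass function through gammaln in log space instead of through comb. A minimal sketch of the same idea applied to the negative binomial formula quoted in these Notes, reading ``np.choose(k+n-1, n-1)`` as the binomial coefficient C(k+n-1, n-1):

    import numpy as np
    from scipy import special

    def nbinom_pmf_sketch(k, n, p):
        # log C(k+n-1, n-1) = gammaln(k+n) - gammaln(n) - gammaln(k+1)
        combiln = (special.gammaln(k + n) - special.gammaln(n)
                   - special.gammaln(k + 1))
        # sum the logs first, exponentiate once: avoids overflowing comb()
        return np.exp(combiln + n * np.log(p) + k * np.log(1 - p))

    print nbinom_pmf_sketch(np.arange(5), 10, 0.4)
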
+ + %(example)s + """ def _rvs(self, n, pr): return mtrand.negative_binomial(n, pr, self._size) def _argcheck(self, n, pr): @@ -4356,8 +4830,8 @@ return special.nbdtrc(k,n,pr) def _ppf(self, q, n, pr): vals = ceil(special.nbdtrik(q,n,pr)) - vals1 = vals-1 - temp = special.nbdtr(vals1,n,pr) + vals1 = (vals-1).clip(0.0, np.inf) + temp = self._cdf(vals1,n,pr) return where(temp >= q, vals1, vals) def _stats(self, n, pr): Q = 1.0 / pr @@ -4367,8 +4841,7 @@ g1 = (Q+P)/sqrt(n*P*Q) g2 = (1.0 + 6*P*Q) / (n*P*Q) return mu, var, g1, g2 -nbinom = nbinom_gen(name='nbinom', longname="A negative binomial", - shapes="n,pr", extradoc=""" +nbinom = nbinom_gen(name='nbinom', shapes="n, pr", extradoc=""" Negative binomial distribution @@ -4377,6 +4850,7 @@ """ ) + ## Geometric distribution class geom_gen(rv_discrete): @@ -4427,7 +4901,11 @@ def _pmf(self, k, M, n, N): tot, good = M, n bad = tot - good - return comb(good,k) * comb(bad,N-k) / comb(tot,N) + return np.exp(lgam(good+1) - lgam(good-k+1) - lgam(k+1) + lgam(bad+1) + - lgam(bad-N+k+1) - lgam(N-k+1) - lgam(tot+1) + lgam(tot-N+1) + + lgam(N+1)) + #same as the following but numerically more precise + #return comb(good,k) * comb(bad,N-k) / comb(tot,N) def _stats(self, M, n, N): tot, good = M, n n = good*1.0 @@ -4452,7 +4930,7 @@ lvals = where(vals==0.0,0.0,log(vals)) return -sum(vals*lvals,axis=0) hypergeom = hypergeom_gen(name='hypergeom',longname="A hypergeometric", - shapes="M,n,N", extradoc=""" + shapes="M, n, N", extradoc=""" Hypergeometric distribution @@ -4555,8 +5033,10 @@ k = floor(x) return 1-exp(-lambda_*(k+1)) def _ppf(self, q, lambda_): - val = ceil(-1.0/lambda_ * log1p(-q)-1) - return val + vals = ceil(-1.0/lambda_ * log1p(-q)-1) + vals1 = (vals-1).clip(self.a, np.inf) + temp = self._cdf(vals1, lambda_) + return where(temp >= q, vals1, vals) def _stats(self, lambda_): mu = 1/(exp(lambda_)-1) var = exp(-lambda_)/(expm1(-lambda_))**2 @@ -4568,7 +5048,7 @@ C = (1-exp(-l)) return l*exp(-l)/C - log(C) planck = planck_gen(name='planck',longname='A discrete exponential ', - shapes="lambda_", + shapes="lamda", extradoc=""" Planck (Discrete Exponential) @@ -4587,8 +5067,10 @@ return (1-exp(-lambda_*(k+1)))/(1-exp(-lambda_*N)) def _ppf(self, q, lambda_, N): qnew = q*(1-exp(-lambda_*N)) - val = ceil(-1.0/lambda_ * log(1-qnew)-1) - return val + vals = ceil(-1.0/lambda_ * log(1-qnew)-1) + vals1 = (vals-1).clip(0.0, np.inf) + temp = self._cdf(vals1, lambda_, N) + return where(temp >= q, vals1, vals) def _stats(self, lambda_, N): z = exp(-lambda_) zN = exp(-lambda_*N) @@ -4603,7 +5085,7 @@ return mu, var, g1, g2 boltzmann = boltzmann_gen(name='boltzmann',longname='A truncated discrete exponential ', - shapes="lambda_,N", + shapes="lamda, N", extradoc=""" Boltzmann (Truncated Discrete Exponential) @@ -4630,8 +5112,10 @@ k = floor(x) return (k-min+1)*1.0/(max-min) def _ppf(self, q, min, max): - val = ceil(q*(max-min)+min)-1 - return val + vals = ceil(q*(max-min)+min)-1 + vals1 = (vals-1).clip(min, max) + temp = self._cdf(vals1, min, max) + return where(temp >= q, vals1, vals) def _stats(self, min, max): m2, m1 = arr(max), arr(min) mu = (m2 + m1 - 1.0) / 2 @@ -4650,7 +5134,7 @@ def _entropy(self, min, max): return log(max-min) randint = randint_gen(name='randint',longname='A discrete uniform '\ - '(random integer)', shapes="min,max", + '(random integer)', shapes="min, max", extradoc=""" Discrete Uniform @@ -4716,7 +5200,10 @@ const = 1.0/(1+exp(-a)) cons2 = 1+exp(a) ind = q < const - return ceil(where(ind, log(q*cons2)/a-1, -log((1-q)*cons2)/a)) + vals = 
ceil(where(ind, log(q*cons2)/a-1, -log((1-q)*cons2)/a)) + vals1 = (vals-1) + temp = self._cdf(vals1, a) + return where(temp >= q, vals1, vals) def _stats_skip(self, a): # variance mu2 does not aggree with sample variance, @@ -4742,3 +5229,53 @@ for a > 0. """ ) + + +class skellam_gen(rv_discrete): + def _rvs(self, mu1, mu2): + n = self._size + return np.random.poisson(mu1, n)-np.random.poisson(mu2, n) + def _pmf(self, x, mu1, mu2): + px = np.where(x < 0, ncx2.pdf(2*mu2, 2*(1-x), 2*mu1)*2, + ncx2.pdf(2*mu1, 2*(x+1), 2*mu2)*2) + #ncx2.pdf() returns nan's for extremely low probabilities + return px + def _cdf(self, x, mu1, mu2): + x = np.floor(x) + px = np.where(x < 0, ncx2.cdf(2*mu2, -2*x, 2*mu1), + 1-ncx2.cdf(2*mu1, 2*(x+1), 2*mu2)) + return px + +# enable later +## def _cf(self, w, mu1, mu2): +## # characteristic function +## poisscf = poisson._cf +## return poisscf(w, mu1) * poisscf(-w, mu2) + + def _stats(self, mu1, mu2): + mean = mu1 - mu2 + var = mu1 + mu2 + g1 = mean / np.sqrt((var)**3) + g2 = 1 / var + return mean, var, g1, g2 +skellam = skellam_gen(a=-np.inf, name="skellam", longname='A Skellam', + shapes="mu1,mu2", extradoc=""" + +Skellam distribution + + Probability distribution of the difference of two correlated or + uncorrelated Poisson random variables. + + Let k1 and k2 be two Poisson-distributed r.v. with expected values + lam1 and lam2. Then, k1-k2 follows a Skellam distribution with + parameters mu1 = lam1 - rho*sqrt(lam1*lam2) and + mu2 = lam2 - rho*sqrt(lam1*lam2), where rho is the correlation + coefficient between k1 and k2. If the two Poisson-distributed r.v. + are independent then rho = 0. + + Parameters mu1 and mu2 must be strictly positive. + + For details see: http://en.wikipedia.org/wiki/Skellam_distribution + +""" + ) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/kde.py python-scipy-0.8.0+dfsg1/scipy/stats/kde.py --- python-scipy-0.7.2+dfsg1/scipy/stats/kde.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/kde.py 2010-07-26 15:48:37.000000000 +0100 @@ -24,6 +24,7 @@ from scipy import linalg, special from numpy import atleast_2d, reshape, zeros, newaxis, dot, exp, pi, sqrt, \ ravel, power, atleast_1d, squeeze, sum, transpose +import numpy as np from numpy.random import randint, multivariate_normal # Local imports. @@ -35,15 +36,13 @@ class gaussian_kde(object): - """ Representation of a kernel-density estimate using Gaussian kernels. + """ + Representation of a kernel-density estimate using Gaussian kernels. - Parameters - ---------- - dataset : (# of dims, # of data)-array - datapoints to estimate from - Members - ------- + + Attributes + ---------- d : int number of dimensions n : int @@ -63,14 +62,22 @@ integrate pdf over a rectangular space between low_bounds and high_bounds kde.integrate_kde(other_kde) : float integrate two kernel density estimates multiplied together + kde.resample(size=None) : array + randomly sample a dataset from the estimated pdf. - Internal Methods - ---------------- + Internal Methods + ---------------- kde.covariance_factor() : float computes the coefficient that multiplies the data covariance matrix to obtain the kernel covariance matrix. Set this method to kde.scotts_factor or kde.silverman_factor (or subclass to provide your own). The default is scotts_factor. 
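
A short sketch of the bandwidth hook described above: overriding covariance_factor in a subclass so that construction picks up Silverman's rule instead of the default scotts_factor. The data shape follows the (# of dims, # of data) convention given in the Parameters section below; the sample data are made up for illustration.

    import numpy as np
    from scipy import stats

    data = np.random.randn(1, 200)        # (# of dims, # of data)
    kde = stats.gaussian_kde(data)        # default bandwidth: scotts_factor

    class silverman_kde(stats.gaussian_kde):
        # covariance_factor is called while the estimate is being built,
        # so overriding it switches the kernel bandwidth rule.
        def covariance_factor(self):
            return self.silverman_factor()

    kde2 = silverman_kde(data)
    grid = np.linspace(-3, 3, 11)
    print kde.evaluate(grid)
    print kde2.evaluate(grid)
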
+ + Parameters + ---------- + dataset : (# of dims, # of data)-array + datapoints to estimate from + """ def __init__(self, dataset): @@ -207,8 +214,8 @@ normalized_low = ravel((low - self.dataset)/stdev) normalized_high = ravel((high - self.dataset)/stdev) - value = stats.mean(special.ndtr(normalized_high) - - special.ndtr(normalized_low)) + value = np.mean(special.ndtr(normalized_high) - + special.ndtr(normalized_low)) return value @@ -329,7 +336,7 @@ covariance_factor """ self.factor = self.covariance_factor() - self.covariance = atleast_2d(stats.cov(self.dataset, rowvar=1) * + self.covariance = atleast_2d(np.cov(self.dataset, rowvar=1, bias=False) * self.factor * self.factor) self.inv_cov = linalg.inv(self.covariance) self._norm_factor = sqrt(linalg.det(2*pi*self.covariance)) * self.n diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/morestats.py python-scipy-0.8.0+dfsg1/scipy/stats/morestats.py --- python-scipy-0.7.2+dfsg1/scipy/stats/morestats.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/morestats.py 2010-07-26 15:48:37.000000000 +0100 @@ -332,7 +332,7 @@ """ N = len(data) y = boxcox(data,lmb) - my = stats.mean(y) + my = np.mean(y, axis=0) f = (lmb-1)*sum(log(data),axis=0) f -= N/2.0*log(sum((y-my)**2.0/N,axis=0)) return f @@ -431,20 +431,41 @@ return svals, ppcc def shapiro(x,a=None,reta=0): - """Shapiro and Wilk test for normality. - - Given random variates x, compute the W statistic and its p-value - for a normality test. - - If p-value is high, one cannot reject the null hypothesis of normality - with this test. P-value is probability that the W statistic is - as low as it is if the samples are actually from a normal distribution. + """ + Perform the Shapiro-Wilk test for normality. - Output: W statistic and its p-value + The Shapiro-Wilk test tests the null hypothesis that the + data was drawn from a normal distribution. - if reta is nonzero then also return the computed "a" values - as the third output. If these are known for a given size - they can be given as input instead of computed internally. + Parameters + ---------- + x : array_like + array of sample data + a : array_like, optional + array of internal parameters used in the calculation. If these + are not given, they will be computed internally. If x has length + n, then a must have length n/2. + reta : {True, False} + whether or not to return the internally computed a values. The + default is False. + + Returns + ------- + W : float + The test statistic + p-value : float + The p-value for the hypothesis test + a : array_like, optional + If `reta` is True, then these are the internally computed "a" + values that may be passed into this function on future calls. + + See Also + -------- + anderson : The Anderson-Darling test for normality + + References + ---------- + .. [1] http://www.itl.nist.gov/div898/handbook/prc/section2/prc213.htm """ N = len(x) @@ -480,29 +501,78 @@ # Vol. 66, Issue 3, Dec. 1979, pp 591-595. _Avals_logistic = array([0.426, 0.563, 0.660, 0.769, 0.906, 1.010]) def anderson(x,dist='norm'): - """Anderson and Darling test for normal, exponential, or Gumbel - (Extreme Value Type I) distribution. + """ + Anderson-Darling test for data coming from a particular distribution + + The Anderson-Darling test is a modification of the Kolmogorov- + Smirnov test kstest_ for the null hypothesis that a sample is + drawn from a population that follows a particular distribution. + For the Anderson-Darling test, the critical values depend on + which distribution is being tested against. 
This function works + for normal, exponential, logistic, or Gumbel (Extreme Value + Type I) distributions. + + Parameters + ---------- + x : array_like + array of sample data + dist : {'norm','expon','logistic','gumbel','extreme1'}, optional + the type of distribution to test against. The default is 'norm' + and 'extreme1' is a synonym for 'gumbel' + + Returns + ------- + A2 : float + The Anderson-Darling test statistic + critical : list + The critical values for this distribution + sig : list + The significance levels for the corresponding critical values + in percents. The function returns critical values for a + differing set of significance levels depending on the + distribution that is being tested against. + + Notes + ----- + Critical values provided are for the following significance levels: + + normal/exponenential + 15%, 10%, 5%, 2.5%, 1% + logistic + 25%, 10%, 5%, 2.5%, 1%, 0.5% + Gumbel + 25%, 10%, 5%, 2.5%, 1% + + If A2 is larger than these critical values then for the corresponding + significance level, the null hypothesis that the data come from the + chosen distribution can be rejected. + + References + ---------- + .. [1] http://www.itl.nist.gov/div898/handbook/prc/section2/prc213.htm + .. [2] Stephens, M. A. (1974). EDF Statistics for Goodness of Fit and + Some Comparisons, Journal of the American Statistical Association, + Vol. 69, pp. 730-737. + .. [3] Stephens, M. A. (1976). Asymptotic Results for Goodness-of-Fit + Statistics with Unknown Parameters, Annals of Statistics, Vol. 4, + pp. 357-369. + .. [4] Stephens, M. A. (1977). Goodness of Fit for the Extreme Value + Distribution, Biometrika, Vol. 64, pp. 583-588. + .. [5] Stephens, M. A. (1977). Goodness of Fit with Special Reference + to Tests for Exponentiality , Technical Report No. 262, + Department of Statistics, Stanford University, Stanford, CA. + .. [6] Stephens, M. A. (1979). Tests of Fit for the Logistic Distribution + Based on the Empirical Distribution Function, Biometrika, Vol. 66, + pp. 591-595. - Given samples x, return A2, the Anderson-Darling statistic, - the significance levels in percentages, and the corresponding - critical values. - - Critical values provided are for the following significance levels - norm/expon: 15%, 10%, 5%, 2.5%, 1% - Gumbel: 25%, 10%, 5%, 2.5%, 1% - logistic: 25%, 10%, 5%, 2.5%, 1%, 0.5% - - If A2 is larger than these critical values then for that significance - level, the hypothesis that the data come from a normal (exponential) - can be rejected. """ if not dist in ['norm','expon','gumbel','extreme1','logistic']: raise ValueError, "Invalid distribution." 
y = sort(x) - xbar = stats.mean(x) + xbar = np.mean(x, axis=0) N = len(y) if dist == 'norm': - s = stats.std(x) + s = np.std(x, ddof=1, axis=0) w = (y-xbar)/s z = distributions.norm.cdf(w) sig = array([15,10,5,2.5,1]) @@ -520,25 +590,30 @@ val = [sum(1.0/(1+tmp2),axis=0)-0.5*N, sum(tmp*(1.0-tmp2)/(1+tmp2),axis=0)+N] return array(val) - sol0=array([xbar,stats.std(x)]) + sol0=array([xbar,np.std(x, ddof=1, axis=0)]) sol = optimize.fsolve(rootfunc,sol0,args=(x,N),xtol=1e-5) w = (y-sol[0])/sol[1] z = distributions.logistic.cdf(w) sig = array([25,10,5,2.5,1,0.5]) critical = around(_Avals_logistic / (1.0+0.25/N),3) - else: - def fixedsolve(th,xj,N): - val = stats.sum(xj)*1.0/N - tmp = exp(-xj/th) - term = sum(xj*tmp,axis=0) - term /= sum(tmp,axis=0) - return val - term - s = optimize.fixed_point(fixedsolve, 1.0, args=(x,N),xtol=1e-5) - xbar = -s*log(sum(exp(-x/s),axis=0)*1.0/N) + elif (dist == 'gumbel') or (dist == 'extreme1'): + #the following is incorrect, see ticket:1097 +## def fixedsolve(th,xj,N): +## val = stats.sum(xj)*1.0/N +## tmp = exp(-xj/th) +## term = sum(xj*tmp,axis=0) +## term /= sum(tmp,axis=0) +## return val - term +## s = optimize.fixed_point(fixedsolve, 1.0, args=(x,N),xtol=1e-5) +## xbar = -s*log(sum(exp(-x/s),axis=0)*1.0/N) + xbar, s = distributions.gumbel_l.fit(x) w = (y-xbar)/s z = distributions.gumbel_l.cdf(w) sig = array([25,10,5,2.5,1]) critical = around(_Avals_gumbel / (1.0 + 0.2/sqrt(N)),3) + else: + raise ValueError("dist has to be one of 'norm','expon','logistic'", + "'gumbel','extreme1'") i = arange(1,N+1) S = sum((2*i-1.0)/N*(log(z)+log(1-z[::-1])),axis=0) A2 = -N-S @@ -574,15 +649,39 @@ return replist, repnum def ansari(x,y): - """Determine if the scale parameter for two distributions with equal - medians is the same using the Ansari-Bradley statistic. + """ + Perform the Ansari-Bradley test for equal scale parameters - Specifically, compute the AB statistic and the probability of error - that the null hypothesis is true but rejected with the computed - statistic as the critical value. + The Ansari-Bradley test is a non-parametric test for the equality + of the scale parameter of the distributions from which two + samples were drawn. + + Parameters + ---------- + x, y : array_like + arrays of sample data + + Returns + ------- + p-value : float + The p-value of the hypothesis test + + See Also + -------- + fligner : A non-parametric test for the equality of k variances + mood : A non-parametric test for the equality of two scale parameters + + Notes + ----- + The p-value given is exact when the sample sizes are both less than + 55 and there are no ties, otherwise a normal approximation for the + p-value is used. + + References + ---------- + .. [1] Sprent, Peter and N.C. Smeeton. Applied nonparametric statistical + methods. 3rd ed. Chapman and Hall/CRC. 2001. Section 5.8.2. - One can reject the null hypothesis that the ratio of variances is 1 if - returned probability of error is small (say < 0.05) """ x,y = asarray(x),asarray(y) n = len(x) @@ -634,31 +733,38 @@ else: # N even varAB = m*n*(16*fac-N*(N+2)**2)/(16.0 * N * (N-1)) z = (AB - mnAB)/sqrt(varAB) - pval = (1-distributions.norm.cdf(abs(z)))*2.0 + pval = distributions.norm.sf(abs(z)) * 2.0 return AB, pval def bartlett(*args): - """Perform Bartlett test with the null hypothesis that all input samples - have equal variances. - - Inputs are sample vectors: bartlett(x,y,z,...) 
- - Outputs: (T, pval) + """ + Perform Bartlett's test for equal variances - T -- the Test statistic - pval -- significance level if null is rejected with this value of T - (prob. that null is true but rejected with this p-value.) + Bartlett's test tests the null hypothesis that all input samples + are from populations with equal variances. For samples + from significantly non-normal populations, Levene's test + `levene`_ is more robust. + + Parameters + ---------- + sample1, sample2,... : array_like + arrays of sample data. May be different lengths. + + Returns + ------- + T : float + the test statistic + p-value : float + the p-value of the test - Sensitive to departures from normality. The Levene test is - an alternative that is less sensitive to departures from - normality. + References + ---------- - References: + .. [1] http://www.itl.nist.gov/div898/handbook/eda/section3/eda357.htm - http://www.itl.nist.gov/div898/handbook/eda/section3/eda357.htm + .. [2] Snedecor, George W. and Cochran, William G. (1989), Statistical + Methods, Eighth Edition, Iowa State University Press. - Snedecor, George W. and Cochran, William G. (1989), Statistical - Methods, Eighth Edition, Iowa State University Press. """ k = len(args) if k < 2: @@ -667,7 +773,7 @@ ssq = zeros(k,'d') for j in range(k): Ni[j] = len(args[j]) - ssq[j] = stats.var(args[j]) + ssq[j] = np.var(args[j], ddof=1) Ntot = sum(Ni,axis=0) spsq = sum((Ni-1)*ssq,axis=0)/(1.0*(Ntot-k)) numer = (Ntot*1.0-k)*log(spsq) - sum((Ni-1.0)*log(ssq),axis=0) @@ -678,33 +784,50 @@ def levene(*args,**kwds): - """Perform Levene test with the null hypothesis that all input samples - have equal variances. - - Inputs are sample vectors: bartlett(x,y,z,...) - - One keyword input, center, can be used with values - center = 'mean', center='median' (default), center='trimmed' - - center='median' is recommended for skewed (non-normal) distributions - center='mean' is recommended for symmetric, moderate-tailed, dist. - center='trimmed' is recommended for heavy-tailed distributions. - - Outputs: (W, pval) - - W -- the Test statistic - pval -- significance level if null is rejected with this value of W - (prob. that null is true but rejected with this p-value.) - - References: + """ + Perform Levene test for equal variances - http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm + The Levene test tests the null hypothesis that all input samples + are from populations with equal variances. Levene's test is an + alternative to Bartlett's test `bartlett`_ in the case where + there are significant deviations from normality. + + Parameters + ---------- + sample1, sample2, ... : array_like + The sample data, possibly with different lengths + center : {'mean', 'median', 'trimmed'}, optional + Which function of the data to use in the test. The default + is 'median'. + + Returns + ------- + W : float + the test statistic + p-value : float + the p-value for the test + + Notes + ----- + Three variations of Levene's test are possible. The possibilities + and their recommended usages are: + + 'median' + Recommended for skewed (non-normal) distributions + 'mean' + Recommended for symmetric, moderate-tailed distributions + 'trimmed' + Recommended for heavy-tailed distributions + + References + ---------- + .. [1] http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm + .. [2] Levene, H. (1960). In Contributions to Probability and Statistics: + Essays in Honor of Harold Hotelling, I. Olkin et al. eds., + Stanford University Press, pp. 278-292. + .. [3] Brown, M. 
B. and Forsythe, A. B. (1974), Journal of the American + Statistical Association, 69, 364-367 - Levene, H. (1960). In Contributions to Probability and Statistics: - Essays in Honor of Harold Hotelling, I. Olkin et al. eds., - Stanford University Press, pp. 278-292. - Brown, M. B. and Forsythe, A. B. (1974), Journal of the American - Statistical Association, 69, 364-367 """ k = len(args) if k < 2: @@ -719,9 +842,9 @@ raise ValueError, "Keyword argument
must be 'mean', 'median'"\ + "or 'trimmed'." if center == 'median': - func = stats.median + func = lambda x: np.median(x, axis=0) elif center == 'mean': - func = stats.mean + func = lambda x: np.mean(x, axis=0) else: func = stats.trim_mean for j in range(k): @@ -737,7 +860,7 @@ Zbari = zeros(k,'d') Zbar = 0.0 for i in range(k): - Zbari[i] = stats.mean(Zij[i]) + Zbari[i] = np.mean(Zij[i], axis=0) Zbar += Zbari[i]*Ni[i] Zbar /= Ntot @@ -756,18 +879,34 @@ @setastest(False) def binom_test(x,n=None,p=0.5): - """An exact (two-sided) test of the null hypothesis that the - probability of success in a Bernoulli experiment is p. - - Inputs: + """ + Perform a test that the probability of success is p. - x -- Number of successes (or a vector of length 2 giving the - number of successes and number of failures respectively) - n -- Number of trials (ignored if x has length 2) - p -- Hypothesized probability of success + This is an exact, two-sided test of the null hypothesis + that the probability of success in a Bernoulli experiment + is `p`. + + Parameters + ---------- + x : integer or array_like + the number of successes, or if x has length 2, it is the + number of successes and the number of failures. + n : integer + the number of trials. This is ignored if x gives both the + number of successes and failures + p : float, optional + The hypothesized probability of success. 0 <= p <= 1. The + default value is p = 0.5 + + Returns + ------- + p-value : float + The p-value of the hypothesis test + + References + ---------- + .. [1] http://en.wikipedia.org/wiki/Binomial_test - Returns pval -- Probability that null test is rejected for this set - of x and n even though it is true. """ x = atleast_1d(x).astype(np.integer) if len(x) == 2: @@ -787,12 +926,12 @@ d = distributions.binom.pmf(x,n,p) rerr = 1+1e-7 if (x < p*n): - i = arange(x+1,n+1) - y = sum(distributions.binom.pmf(i,n,p) <= d*rerr,axis=0) + i = np.arange(np.ceil(p*n),n+1) + y = np.sum(distributions.binom.pmf(i,n,p) <= d*rerr,axis=0) pval = distributions.binom.cdf(x,n,p) + distributions.binom.sf(n-y,n,p) else: - i = arange(0,x) - y = sum(distributions.binom.pmf(i,n,p) <= d*rerr,axis=0) + i = np.arange(np.floor(p*n)) + y = np.sum(distributions.binom.pmf(i,n,p) <= d*rerr,axis=0) pval = distributions.binom.cdf(y-1,n,p) + distributions.binom.sf(x-1,n,p) return min(1.0,pval) @@ -808,27 +947,45 @@ return asarray(output) def fligner(*args,**kwds): - """Perform Levene test with the null hypothesis that all input samples - have equal variances. - - Inputs are sample vectors: bartlett(x,y,z,...) - - One keyword input, center, can be used with values - center = 'mean', center='median' (default), center='trimmed' - - Outputs: (Xsq, pval) - - Xsq -- the Test statistic - pval -- significance level if null is rejected with this value of X - (prob. that null is true but rejected with this p-value.) - - References: + """ + Perform Fligner's test for equal variances - http://www.stat.psu.edu/~bgl/center/tr/TR993.ps + Fligner's test tests the null hypothesis that all input samples + are from populations with equal variances. Fligner's test is + non-parametric in contrast to Bartlett's test bartlett_ and + Levene's test levene_. + + Parameters + ---------- + sample1, sample2, ... : array_like + arrays of sample data. Need not be the same length + center : {'mean', 'median', 'trimmed'}, optional + keyword argument controlling which function of the data + is used in computing the test statistic. The default + is 'median'. 
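
A hedged usage sketch of this center keyword, which levene above and fligner here share ('trimmed' being the third documented choice); the return values are the ones listed in the Returns section that follows.

    import numpy as np
    from scipy import stats

    a = np.random.randn(50)
    b = 1.5 * np.random.randn(60)      # same mean, larger spread

    # Null hypothesis for both tests: all samples have equal variances.
    W, p_lev = stats.levene(a, b, center='median')   # robust to skewed data
    Xsq, p_fl = stats.fligner(a, b, center='mean')
    print W, p_lev
    print Xsq, p_fl
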
+ + Returns + ------- + Xsq : float + the test statistic + p-value : float + the p-value for the hypothesis test + + Notes + ----- + As with Levene's test there are three variants + of Fligner's test that differ by the measure of central + tendency used in the test. See levene_ for more information. + + References + ---------- + + .. [1] http://www.stat.psu.edu/~bgl/center/tr/TR993.ps + + .. [2] Fligner, M.A. and Killeen, T.J. (1976). Distribution-free two-sample + tests for scale. 'Journal of the American Statistical Association.' + 71(353), 210-213. - Fligner, M.A. and Killeen, T.J. (1976). Distribution-free two-sample - tests for scale. 'Journal of the American Statistical Association.' - 71(353), 210-213. """ k = len(args) if k < 2: @@ -841,9 +998,9 @@ raise ValueError, "Keyword argument
must be 'mean', 'median'"\ + "or 'trimmed'." if center == 'median': - func = stats.median + func = lambda x: np.median(x, axis=0) elif center == 'mean': - func = stats.mean + func = lambda x: np.mean(x, axis=0) else: func = stats.trim_mean @@ -862,8 +1019,8 @@ # compute Aibar Aibar = _apply_func(a,g,sum) / Ni - anbar = stats.mean(a) - varsq = stats.var(a) + anbar = np.mean(a, axis=0) + varsq = np.var(a,axis=0, ddof=1) Xsq = sum(Ni*(asarray(Aibar)-anbar)**2.0,axis=0)/varsq @@ -872,15 +1029,36 @@ def mood(x,y): - """Determine if the scale parameter for two distributions with equal - medians is the same using a Mood test. + """ + Perform Mood's test for equal scale parameters - Specifically, compute the z statistic and the probability of error - that the null hypothesis is true but rejected with the computed - statistic as the critical value. + Mood's two-sample test for scale parameters is a non-parametric + test for the null hypothesis that two samples are drawn from the + same distribution with the same scale parameter. + + Parameters + ---------- + x, y : array_like + arrays of sample data + + Returns + ------- + p-value : float + The p-value for the hypothesis test + + See Also + -------- + fligner : A non-parametric test for the equality of k variances + ansari : A non-parametric test for the equality of 2 variances + bartlett : A parametric test for equality of k variances in normal samples + levene : A parametric test for equality of k variances + + Notes + ----- + The data are assumed to be drawn from probability distributions f(x) and + f(x/s)/s respectively, for some probability density function f. The + null hypothesis is that s = 1. - One can reject the null hypothesis that the ratio of scale parameters is - 1 if the returned probability of error is small (say < 0.05) """ n = len(x) m = len(y) @@ -895,8 +1073,15 @@ mnM = n*(N*N-1.0)/12 varM = m*n*(N+1.0)*(N+2)*(N-2)/180 z = (M-mnM)/sqrt(varM) - p = distributions.norm.cdf(z) - pval = 2*min(p,1-p) + + # Numerically better than p = norm.cdf(x); p = min(p, 1 - p) + if z > 0: + pval = distributions.norm.sf(z) + else: + pval = distributions.norm.cdf(z) + + # Account for two-sidedness + pval *= 2. return z, pval @@ -921,8 +1106,8 @@ evar = 0 Ni = array([len(args[i]) for i in range(k)]) - Mi = array([stats.mean(args[i]) for i in range(k)]) - Vi = array([stats.var(args[i]) for i in range(k)]) + Mi = array([np.mean(args[i], axis=0) for i in range(k)]) + Vi = array([np.var(args[i]) for i in range(k)]) Wi = Ni / Vi swi = sum(Wi,axis=0) N = sum(Ni,axis=0) @@ -942,12 +1127,40 @@ def wilcoxon(x,y=None): """ -Calculates the Wilcoxon signed-rank test for the null hypothesis that two -samples come from the same distribution. A non-parametric T-test. -(need N > 20) + Calculate the Wilcoxon signed-rank test + + The Wilcoxon signed-rank test tests the null hypothesis that two + related samples come from the same distribution. It is a a + non-parametric version of the paired T-test. + + Parameters + ---------- + x : array_like + The first set of measurements + y : array_like, optional, default None + The second set of measurements. If y is not given, then the x array + is considered to be the differences between the two sets of + measurements. + + Returns + ------- + z-statistic : float + The test statistic under the large-sample approximation that the + signed-rank statistic is normally distributed. 
+ p-value : float + The two-sided p-value for the test + + Notes + ----- + Because the normal approximation is used for the calculations, the + samples used should be large. A typical rule is to require that + n > 20. + + References + ---------- + .. [1] http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test -Returns: t-statistic, two-tailed p-value -""" + """ if y is None: d = x else: @@ -974,7 +1187,7 @@ V = se*se - corr se = sqrt((count*V - T*T)/(count-1.0)) z = (T - mn)/se - prob = 2*(1.0 -stats.zprob(abs(z))) + prob = 2 * distributions.norm.sf(abs(z)) return T, prob def _hermnorm(N): @@ -989,6 +1202,10 @@ plist[n] = plist[n-1].deriv() - poly1d([1,0])*plist[n-1] return plist +@np.lib.deprecate(message=""" +scipy.stats.pdf_moments is broken. It will be removed from scipy in 0.9 +unless it is fixed. +""") def pdf_moments(cnt): """Return the Gaussian expanded pdf function given the list of central moments (first one is mean). @@ -1044,6 +1261,10 @@ return totp(xn)*exp(-xn*xn/2.0) return thefunc +@np.lib.deprecate(message=""" +scipy.stats.pdfapprox is broken. It will be removed from scipy in 0.9 +unless it is fixed. +""") def pdfapprox(samples): """Return a function that approximates the pdf of a set of samples using a Gaussian expansion computed from the mean, variance, skewness @@ -1068,7 +1289,7 @@ """Compute the circular mean for samples assumed to be in the range [low to high] """ ang = (samples - low)*2*pi / (high-low) - res = angle(stats.mean(exp(1j*ang))) + res = angle(np.mean(exp(1j*ang), axis=0)) if (res < 0): res = res + 2*pi return res*(high-low)/2.0/pi + low @@ -1077,7 +1298,7 @@ """Compute the circular variance for samples assumed to be in the range [low to high] """ ang = (samples - low)*2*pi / (high-low) - res = stats.mean(exp(1j*ang)) + res = np.mean(exp(1j*ang), axis=0) V = 1-abs(res) return ((high-low)/2.0/pi)**2 * V @@ -1085,7 +1306,7 @@ """Compute the circular standard deviation for samples assumed to be in the range [low to high] """ ang = (samples - low)*2*pi / (high-low) - res = stats.mean(exp(1j*ang)) + res = np.mean(exp(1j*ang), axis=0) V = 1-abs(res) return ((high-low)/2.0/pi) * sqrt(V) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/mstats_basic.py python-scipy-0.8.0+dfsg1/scipy/stats/mstats_basic.py --- python-scipy-0.7.2+dfsg1/scipy/stats/mstats_basic.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/mstats_basic.py 2010-07-26 15:48:37.000000000 +0100 @@ -145,30 +145,32 @@ def count_tied_groups(x, use_missing=False): - """Counts the number of tied values in x, and returns a dictionary - (nb of ties: nb of groups). + """ + Counts the number of tied values in x, and returns a dictionary + (nb of ties: nb of groups). -Parameters ----------- + Parameters + ---------- x : sequence Sequence of data on which to counts the ties use_missing : boolean Whether to consider missing values as tied. 
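
The circular statistics updated a little further up all reduce to one formula: map the samples onto the unit circle, average the complex phases, and map the result back. A tiny worked sketch of that formula in plain numpy, with made-up sample values:

    import numpy as np

    samples = np.array([355., 5., 15.])   # degrees, wrapping through 0
    low, high = 0., 360.
    ang = (samples - low) * 2 * np.pi / (high - low)
    res = np.angle(np.mean(np.exp(1j * ang)))
    if res < 0:
        res = res + 2 * np.pi
    # ~5 degrees, not the naive arithmetic mean of 125
    print res * (high - low) / 2. / np.pi + low
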
-Example -------- - >>>z = [0, 0, 0, 2, 2, 2, 3, 3, 4, 5, 6] - >>>count_tied_groups(z) - >>>{2:1, 3:2} - >>># The ties were 0 (3x), 2 (3x) and 3 (2x) - >>>z = ma.array([0, 0, 1, 2, 2, 2, 3, 3, 4, 5, 6]) - >>>count_tied_groups(z) - >>>{2:2, 3:1} - >>># The ties were 0 (2x), 2 (3x) and 3 (2x) - >>>z[[1,-1]] = masked - >>>count_tied_groups(z) - >>>{2:2, 3:1} - >>># The ties were 2 (3x), 3 (2x) and masked (2x) + Examples + -------- + >>> z = [0, 0, 0, 2, 2, 2, 3, 3, 4, 5, 6] + >>> count_tied_groups(z) + >>> {2:1, 3:2} + >>> # The ties were 0 (3x), 2 (3x) and 3 (2x) + >>> z = ma.array([0, 0, 1, 2, 2, 2, 3, 3, 4, 5, 6]) + >>> count_tied_groups(z) + >>> {2:2, 3:1} + >>> # The ties were 0 (2x), 2 (3x) and 3 (2x) + >>> z[[1,-1]] = masked + >>> count_tied_groups(z, use_missing=True) + >>> {2:2, 3:1} + >>> # The ties were 2 (3x), 3 (2x) and masked (2x) + """ nmasked = ma.getmask(x).sum() # We need the copy as find_repeats will overwrite the initial data @@ -1581,46 +1583,90 @@ #####-------------------------------------------------------------------------- -def mquantiles(data, prob=list([.25,.5,.75]), alphap=.4, betap=.4, axis=None, +def mquantiles(a, prob=list([.25,.5,.75]), alphap=.4, betap=.4, axis=None, limit=()): - """Computes empirical quantiles for a *1xN* data array. -Samples quantile are defined by: -*Q(p) = (1-g).x[i] +g.x[i+1]* -where *x[j]* is the jth order statistic, -with *i = (floor(n*p+m))*, *m=alpha+p*(1-alpha-beta)* and *g = n*p + m - i)*. - -Typical values of (alpha,beta) are: - - - (0,1) : *p(k) = k/n* : linear interpolation of cdf (R, type 4) - - (.5,.5) : *p(k) = (k+1/2.)/n* : piecewise linear function (R, type 5) - - (0,0) : *p(k) = k/(n+1)* : (R type 6) - - (1,1) : *p(k) = (k-1)/(n-1)*. In this case, p(k) = mode[F(x[k])]. - That's R default (R type 7) - - (1/3,1/3): *p(k) = (k-1/3)/(n+1/3)*. Then p(k) ~ median[F(x[k])]. - The resulting quantile estimates are approximately median-unbiased - regardless of the distribution of x. (R type 8) - - (3/8,3/8): *p(k) = (k-3/8)/(n+1/4)*. Blom. - The resulting quantile estimates are approximately unbiased - if x is normally distributed (R type 9) - - (.4,.4) : approximately quantile unbiased (Cunnane) - - (.35,.35): APL, used with PWM + """ + Computes empirical quantiles for a data array. -Parameters ----------- - x : sequence + Samples quantile are defined by :math:`Q(p) = (1-g).x[i] +g.x[i+1]`, + where :math:`x[j]` is the *j*th order statistic, and + `i = (floor(n*p+m))`, `m=alpha+p*(1-alpha-beta)` and `g = n*p + m - i`. + + Typical values of (alpha,beta) are: + - (0,1) : *p(k) = k/n* : linear interpolation of cdf (R, type 4) + - (.5,.5) : *p(k) = (k+1/2.)/n* : piecewise linear + function (R, type 5) + - (0,0) : *p(k) = k/(n+1)* : (R type 6) + - (1,1) : *p(k) = (k-1)/(n-1)*. In this case, p(k) = mode[F(x[k])]. + That's R default (R type 7) + - (1/3,1/3): *p(k) = (k-1/3)/(n+1/3)*. Then p(k) ~ median[F(x[k])]. + The resulting quantile estimates are approximately median-unbiased + regardless of the distribution of x. (R type 8) + - (3/8,3/8): *p(k) = (k-3/8)/(n+1/4)*. Blom. + The resulting quantile estimates are approximately unbiased + if x is normally distributed (R type 9) + - (.4,.4) : approximately quantile unbiased (Cunnane) + - (.35,.35): APL, used with PWM + + Parameters + ---------- + a : array-like Input data, as a sequence or array of dimension at most 2. - prob : sequence + prob : array-like, optional List of quantiles to compute. - alpha : {0.4, float} optional - Plotting positions parameter. 
- beta : {0.4, float} optional - Plotting positions parameter. - axis : {None, int} optional + alpha : float, optional + Plotting positions parameter, default is 0.4. + beta : float, optional + Plotting positions parameter, default is 0.4. + axis : int, optional Axis along which to perform the trimming. - If None, the input array is first flattened. + If None (default), the input array is first flattened. limit : tuple - Tuple of (lower, upper) values. Values of a outside this closed interval - are ignored. + Tuple of (lower, upper) values. + Values of `a` outside this closed interval are ignored. + + Returns + ------- + quants : MaskedArray + An array containing the calculated quantiles. + + Examples + -------- + >>> from scipy.stats.mstats import mquantiles + >>> a = np.array([6., 47., 49., 15., 42., 41., 7., 39., 43., 40., 36.]) + >>> mquantiles(a) + array([ 19.2, 40. , 42.8]) + + Using a 2D array, specifying axis and limit. + + >>> data = np.array([[ 6., 7., 1.], + [ 47., 15., 2.], + [ 49., 36., 3.], + [ 15., 39., 4.], + [ 42., 40., -999.], + [ 41., 41., -999.], + [ 7., -999., -999.], + [ 39., -999., -999.], + [ 43., -999., -999.], + [ 40., -999., -999.], + [ 36., -999., -999.]]) + >>> mquantiles(data, axis=0, limit=(0, 50)) + array([[ 19.2 , 14.6 , 1.45], + [ 40. , 37.5 , 2.5 ], + [ 42.8 , 40.05, 3.55]]) + + >>> data[:, 2] = -999. + >>> mquantiles(data, axis=0, limit=(0, 50)) + masked_array(data = + [[19.2 14.6 --] + [40.0 37.5 --] + [42.8 40.05 --]], + mask = + [[False False True] + [False False True] + [False False True]], + fill_value = 1e+20) + """ def _quantiles1D(data,m,p): x = np.sort(data.compressed()) @@ -1635,18 +1681,20 @@ return (1.-gamma)*x[(k-1).tolist()] + gamma*x[k.tolist()] # Initialization & checks --------- - data = ma.array(data, copy=False) + data = ma.array(a, copy=False) + if data.ndim > 2: + raise TypeError("Array should be 2D at most !") + # if limit: condition = (limit[0]0): # Harmonic mean only defined if greater than zero + if isinstance(a, np.ma.MaskedArray): + size = a.count(axis) + else: + if axis == None: + a=a.ravel() + size = a.shape[0] + else: + size = a.shape[axis] + return size / np.sum(1.0/a, axis=axis, dtype=dtype) + else: + raise ValueError("Harmonic mean only defined if all elements greater than zero") + + def mean(a, axis=0): - # fixme: This seems to be redundant with numpy.mean(,axis=0) or even - # the ndarray.mean() method. - """Returns the arithmetic mean of m along the given dimension. + """ + Returns the arithmetic mean of m along the given dimension. That is: (x1 + x2 + .. + xn) / n @@ -410,16 +478,30 @@ The arithmetic mean computed over a single dimension of the input array or all values in the array if axis=None. The return value will have a floating point dtype even if the input data are integers. + + + Notes + ----- + scipy.stats.mean is deprecated; please update your code to use numpy.mean. + + Please note that: + - numpy.mean axis argument defaults to None, not 0 + - numpy.mean has a ddof argument to replace bias in a more general + manner. + - scipy.stats.mean(a, bias=True) can be replaced by :: + + numpy.mean(x, axis=0, ddof=1) + + removed in scipy 0.8.0 + """ - warnings.warn("""\ + raise DeprecationWarning("""\ scipy.stats.mean is deprecated; please update your code to use numpy.mean. Please note that: - numpy.mean axis argument defaults to None, not 0 - numpy.mean has a ddof argument to replace bias in a more general manner. 
scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, -axis=0, ddof=1).""", DeprecationWarning) - a, axis = _chk_asarray(a, axis) - return a.mean(axis) +axis=0, ddof=1).""") def cmedian(a, numbins=1000): # fixme: numpy.median() always seems to be a better choice. @@ -446,7 +528,12 @@ References ---------- - [CRCProbStat2000] Section 2.2.6 + [CRCProbStat2000]_ Section 2.2.6 + + .. [CRCProbStat2000] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probablity and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + """ a = np.ravel(a) n = float(len(a)) @@ -489,14 +576,13 @@ The median of each remaining axis, or of all of the values in the array if axis is None. """ - warnings.warn("""\ + raise DeprecationWarning("""\ scipy.stats.median is deprecated; please update your code to use numpy.median. Please note that: - numpy.median axis argument defaults to None, not 0 - numpy.median has a ddof argument to replace bias in a more general manner. scipy.stats.median(a, bias=True) can be replaced by numpy.median(x, -axis=0, ddof=1).""", DeprecationWarning) - return np.median(a, axis) +axis=0, ddof=1).""") def mode(a, axis=0): """Returns an array of the modal (most common) value in the passed array. @@ -569,24 +655,30 @@ return am def tmean(a, limits=None, inclusive=(True, True)): - """Returns the arithmetic mean of all values in an array, ignoring values - strictly outside given limits. + """ + Compute the trimmed mean + + This function finds the arithmetic mean of given values, ignoring values + outside the given `limits`. Parameters ---------- - a : array - limits : None or (lower limit, upper limit) + a : array_like + array of values + limits : None or (lower limit, upper limit), optional Values in the input array less than the lower limit or greater than the - upper limit will be masked out. When limits is None, then all values are + upper limit will be ignored. When limits is None, then all values are used. Either of the limit values in the tuple can also be None - representing a half-open interval. - inclusive : (bool, bool) + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional A tuple consisting of the (lower flag, upper flag). These flags - determine whether values exactly equal to lower or upper are allowed. + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). Returns ------- - A float. + tmean : float + """ a = asarray(a) @@ -597,7 +689,7 @@ # No trimming. if limits is None: - return mean(a,None) + return np.mean(a,None) am = mask_to_limits(a.ravel(), limits, inclusive) return am.mean() @@ -609,12 +701,30 @@ return s / n def tvar(a, limits=None, inclusive=(1,1)): - """Returns the sample variance of values in an array, (i.e., using - N-1), ignoring values strictly outside the sequence passed to - 'limits'. Note: either limit in the sequence, or the value of - limits itself, can be set to None. The inclusive list/tuple - determines whether the lower and upper limiting bounds - (respectively) are open/exclusive (0) or closed/inclusive (1). + """ + Compute the trimmed variance + + This function computes the sample variance of an array of values, + while ignoring values which are outside of given `limits`. + + Parameters + ---------- + a : array_like + array of values + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. 
When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + + Returns + ------- + tvar : float + """ a = asarray(a) a = a.astype(float).ravel() @@ -625,42 +735,122 @@ return masked_var(am) def tmin(a, lowerlimit=None, axis=0, inclusive=True): - """Returns the minimum value of a, along axis, including only values - less than (or equal to, if inclusive is True) lowerlimit. If the - limit is set to None, all values in the array are used. + """ + Compute the trimmed minimum + + This function finds the miminum value of an array `a` along the + specified axis, but only considering values greater than a specified + lower limit. + + Parameters + ---------- + a : array_like + array of values + lowerlimit : None or float, optional + Values in the input array less than the given limit will be ignored. + When lowerlimit is None, then all values are used. The default value + is None. + axis : None or int, optional + Operate along this axis. None means to use the flattened array and + the default is zero + inclusive : {True, False}, optional + This flag determines whether values exactly equal to the lower limit + are included. The default value is True. + + Returns + ------- + tmin: float + """ a, axis = _chk_asarray(a, axis) am = mask_to_limits(a, (lowerlimit, None), (inclusive, False)) return ma.minimum.reduce(am, axis) def tmax(a, upperlimit, axis=0, inclusive=True): - """Returns the maximum value of a, along axis, including only values - greater than (or equal to, if inclusive is True) upperlimit. If the limit - is set to None, a limit larger than the max value in the array is - used. + """ + Compute the trimmed maximum + + This function computes the maximum value of an array along a given axis, + while ignoring values larger than a specified upper limit. + + Parameters + ---------- + a : array_like + array of values + upperlimit : None or float, optional + Values in the input array greater than the given limit will be ignored. + When upperlimit is None, then all values are used. The default value + is None. + axis : None or int, optional + Operate along this axis. None means to use the flattened array and + the default is zero. + inclusive : {True, False}, optional + This flag determines whether values exactly equal to the upper limit + are included. The default value is True. + + Returns + ------- + tmax : float + """ a, axis = _chk_asarray(a, axis) am = mask_to_limits(a, (None, upperlimit), (False, inclusive)) return ma.maximum.reduce(am, axis) def tstd(a, limits=None, inclusive=(1,1)): - """Returns the standard deviation of all values in an array, - ignoring values strictly outside the sequence passed to 'limits'. - Note: either limit in the sequence, or the value of limits itself, - can be set to None. The inclusive list/tuple determines whether the - lower and upper limiting bounds (respectively) are open/exclusive - (0) or closed/inclusive (1). + """ + Compute the trimmed sample standard deviation + + This function finds the sample standard deviation of given values, + ignoring values outside the given `limits`. 
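
A brief usage sketch of the limits/inclusive convention shared by these trimmed statistics (tmean, tvar, tmin and tmax above, tstd and tsem below), assuming they are exposed from scipy.stats with the parameters documented here:

    import numpy as np
    from scipy import stats

    x = np.arange(20.)
    # Only values inside the closed interval [2, 17] enter the estimates.
    print stats.tmean(x, limits=(2, 17))
    print stats.tvar(x, limits=(2, 17))
    # Open the lower bound: values equal to 3 are now excluded.
    print stats.tmin(x, lowerlimit=3, inclusive=False)
    print stats.tmax(x, upperlimit=15)
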
+ + Parameters + ---------- + a : array_like + array of values + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + + Returns + ------- + tstd : float + """ return np.sqrt(tvar(a,limits,inclusive)) def tsem(a, limits=None, inclusive=(True,True)): - """Returns the standard error of the mean for the values in an array, - (i.e., using N for the denominator), ignoring values strictly outside - the sequence passed to 'limits'. Note: either limit in the - sequence, or the value of limits itself, can be set to None. The - inclusive list/tuple determines whether the lower and upper limiting - bounds (respectively) are open/exclusive (0) or closed/inclusive (1). + """ + Compute the trimmed standard error of the mean + + This function finds the standard error of the mean for given + values, ignoring values outside the given `limits`. + + Parameters + ---------- + a : array_like + array of values + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + + Returns + ------- + tsem : float + """ a = np.asarray(a).ravel() if limits is None: @@ -676,21 +866,29 @@ ##################################### def moment(a, moment=1, axis=0): - """Calculates the nth moment about the mean for a sample. + """ + Calculates the nth moment about the mean for a sample. Generally used to calculate coefficients of skewness and kurtosis. Parameters ---------- - a : array + a : array_like + data moment : int + order of central moment that is returned axis : int or None + Axis along which the central moment is computed. If None, then the data + array is raveled. The default axis is zero. Returns ------- - The appropriate moment along the given axis or over all values if axis is - None. + n-th central moment : ndarray or float + The appropriate moment along the given axis or over all values if axis + is None. The denominator for the moment calculation is the number of + observations, no degrees of freedom correction is done. + """ a, axis = _chk_asarray(a, axis) if moment == 1: @@ -720,7 +918,12 @@ References ---------- - [CRCProbStat2000] section 2.2.20 + [CRCProbStat2000]_ Section 2.2.20 + + .. [CRCProbStat2000] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probablity and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + """ a, axis = _chk_asarray(a, axis) n = a.shape[axis] @@ -728,7 +931,8 @@ def skew(a, axis=0, bias=True): - """Computes the skewness of a data set. + """ + Computes the skewness of a data set. For normally distributed data, the skewness should be about 0. 
A skewness value > 0 means that there is more weight in the left tail of the @@ -737,19 +941,27 @@ Parameters ---------- - a : array + a : ndarray + data axis : int or None + axis along which skewness is calculated bias : bool If False, then the calculations are corrected for statistical bias. Returns ------- - The skewness of values along an axis, returning 0 where all values are - equal. + skewness : ndarray + The skewness of values along an axis, returning 0 where all values are + equal. References ---------- - [CRCProbStat2000] section 2.2.24.1 + [CRCProbStat2000]_ Section 2.2.24.1 + + .. [CRCProbStat2000] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probablity and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + """ a, axis = _chk_asarray(a,axis) n = a.shape[axis] @@ -769,21 +981,24 @@ return vals def kurtosis(a, axis=0, fisher=True, bias=True): - """Computes the kurtosis (Fisher or Pearson) of a dataset. + """ + Computes the kurtosis (Fisher or Pearson) of a dataset. - Kurtosis is the fourth central moment divided by the square of the variance. - If Fisher's definition is used, then 3.0 is subtracted from the result to - give 0.0 for a normal distribution. + Kurtosis is the fourth central moment divided by the square of the + variance. If Fisher's definition is used, then 3.0 is subtracted from + the result to give 0.0 for a normal distribution. If bias is False then the kurtosis is calculated using k statistics to - eliminate bias comming from biased moment estimators + eliminate bias coming from biased moment estimators Use kurtosistest() to see if result is close enough to normal. Parameters ---------- a : array + data for which the kurtosis is calculated axis : int or None + Axis along which the kurtosis is calculated fisher : bool If True, Fisher's definition is used (normal ==> 0.0). If False, Pearson's definition is used (normal ==> 3.0). @@ -792,13 +1007,19 @@ Returns ------- - The kurtosis of values along an axis. If all values are equal, return -3 for Fisher's - definition and 0 for Pearson's definition. + kurtosis : array + The kurtosis of values along an axis. If all values are equal, + return -3 for Fisher's definition and 0 for Pearson's definition. References ---------- - [CRCProbStat2000] section 2.2.25 + [CRCProbStat2000]_ Section 2.2.25 + + .. [CRCProbStat2000] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probablity and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + """ a, axis = _chk_asarray(a, axis) n = a.shape[axis] @@ -823,21 +1044,40 @@ return vals def describe(a, axis=0): - """Computes several descriptive statistics of the passed array. + """ + Computes several descriptive statistics of the passed array. Parameters ---------- - a : array + a : array_like + data axis : int or None + axis along which statistics are calculated. If axis is None, then data + array is raveled. The default axis is zero. Returns ------- - (size of the data, - (min, max), - arithmetic mean, - unbiased variance, - biased skewness, - biased kurtosis) + size of the data : int + length of data along axis + (min, max): tuple of ndarrays or floats + minimum and maximum value of data array + arithmetic mean : ndarray or float + mean of data along axis + unbiased variance : ndarray or float + variance of the data along axis, denominator is number of observations + minus one. + biased skewness : ndarray or float + skewness, based on moment calculations with denominator equal to the + number of observations, i.e. 
no degrees of freedom correction + biased kurtosis : ndarray or float + kurtosis (Fisher), the kurtosis is normalized so that it is zero for the + normal distribution. No degrees of freedom or bias correction is used. + + See Also + -------- + skew + kurtosis + """ a, axis = _chk_asarray(a, axis) n = a.shape[axis] @@ -854,10 +1094,12 @@ ##################################### def skewtest(a, axis=0): - """Tests whether the skew is significantly different from a normal - distribution. + """ + Tests whether the skew is different from the normal distribution. - The size of the dataset should be >= 8. + This function tests the null hypothesis that the skewness of + the population that the sample was drawn from is the same + as that of a corresponding normal distribution. Parameters ---------- @@ -866,9 +1108,13 @@ Returns ------- - (Z-score, - 2-tail Z-probability, - ) + p-value : float + a 2-sided p-value for the hypothesis test + + Notes + ----- + The sample size should be at least 8. + """ a, axis = _chk_asarray(a, axis) if axis is None: @@ -887,25 +1133,33 @@ alpha = math.sqrt(2.0/(W2-1)) y = np.where(y==0, 1, y) Z = delta*np.log(y/alpha + np.sqrt((y/alpha)**2+1)) - return Z, (1.0 - zprob(np.abs(Z)))*2 - + return Z, 2 * distributions.norm.sf(np.abs(Z)) def kurtosistest(a, axis=0): - """Tests whether a dataset has normal kurtosis (i.e., - kurtosis=3(n-1)/(n+1)). + """ + Tests whether a dataset has normal kurtosis - Valid only for n>20. + This function tests the null hypothesis that the kurtosis + of the population from which the sample was drawn is that + of the normal distribution: kurtosis=3(n-1)/(n+1). Parameters ---------- a : array + array of the sample data axis : int or None + the axis to operate along, or None to work on the whole array. + The default is the first axis. Returns ------- - (Z-score, - 2-tail Z-probability) - The Z-score is set to 0 for bad entries. + p-value : float + The 2-sided p-value for the hypothesis test + + Notes + ----- + Valid only for n>20. The Z-score is set to 0 for bad entries. + """ a, axis = _chk_asarray(a, axis) n = float(a.shape[axis]) @@ -930,11 +1184,18 @@ Z = Z[()] #JPNote: p-value sometimes larger than 1 #zprob uses upper tail, so Z needs to be positive - return Z, (1.0-zprob(np.abs(Z)))*2 + return Z, 2 * distributions.norm.sf(np.abs(Z)) def normaltest(a, axis=0): - """Tests whether skew and/or kurtosis of dataset differs from normal curve. + """ + Tests whether a sample differs from a normal distribution + + This function tests the null hypothesis that a sample comes + from a normal distribution. It is based on D'Agostino and + Pearson's [1]_, [2]_ test that combines skew and kurtosis to + produce an omnibus test of normality. + Parameters ---------- @@ -943,17 +1204,17 @@ Returns ------- - (Chi^2 score, - 2-tail probability) - - Based on the D'Agostino and Pearson's test that combines skew and - kurtosis to produce an omnibus test of normality. + p-value : float + A 2-sided chi squared probability for the hypothesis test - D'Agostino, R. B. and Pearson, E. S. (1971), "An Omnibus Test of - Normality for Moderate and Large Sample Size," Biometrika, 58, 341-348 + References + ---------- + .. [1] D'Agostino, R. B. and Pearson, E. S. (1971), "An Omnibus Test of + Normality for Moderate and Large Sample Size," + Biometrika, 58, 341-348 - D'Agostino, R. B. and Pearson, E. S. (1973), "Testing for departures from - Normality," Biometrika, 60, 613-622 + .. [2] D'Agostino, R. B. and Pearson, E. S. 
(1973), "Testing for + departures from Normality," Biometrika, 60, 613-622 """ a, axis = _chk_asarray(a, axis) @@ -1146,49 +1407,72 @@ return n[ 1:]-n[:-1] +def histogram(a, numbins=10, defaultlimits=None, weights=None, printextras=False): + """ + Separates the range into several bins and returns the number of instances + of a in each bin. This histogram is based on numpy's histogram but has a + larger range by default if default limits is not set. + Parameters + ---------- + a: array like + Array of scores which will be put into bins. + numbins: integer, optional + The number of bins to use for the histogram. Default is 10. + defaultlimits: tuple (lower, upper), optional + The lower and upper values for the range of the histogram. + If no value is given, a range slightly larger then the range of the + values in a is used. Specifically (a.min() - s, a.max() + s), + where s is (1/2)(a.max() - a.min()) / (numbins - 1) + weights: array like, same length as a, optional + The weights for each value in a. Default is None, which gives each + value a weight of 1.0 + printextras: boolean, optional + If True, the number of extra points is printed to standard output. + Default is False -def histogram(a, numbins=10, defaultlimits=None, printextras=True): - # fixme: use numpy.histogram() to implement - """ -Returns (i) an array of histogram bin counts, (ii) the smallest value -of the histogram binning, and (iii) the bin width (the last 2 are not -necessarily integers). Default number of bins is 10. Defaultlimits -can be None (the routine picks bins spanning all the numbers in the -a) or a 2-sequence (lowerlimit, upperlimit). Returns all of the -following: array of bin values, lowerreallimit, binsize, extrapoints. + Returns + ------- + histogram: array + Number of points (or sum of weights) in each bin + low_range: float + Lowest value of histogram, the lower limit of the first bin. + binsize: float + The size of the bins (all bins have the same size). + extrapoints: integer + The number of points outside the range of the histogram -Returns: (array of bin counts, bin-minimum, min-width, #-points-outside-range) -""" + See Also + -------- + numpy.histogram + + """ a = np.ravel(a) # flatten any >1D arrays - if (defaultlimits is not None): - lowerreallimit = defaultlimits[0] - upperreallimit = defaultlimits[1] - binsize = (upperreallimit-lowerreallimit) / float(numbins) - else: - Min = a.min() - Max = a.max() - estbinwidth = float(Max - Min)/float(numbins - 1) - binsize = (Max-Min+estbinwidth)/float(numbins) - lowerreallimit = Min - binsize/2.0 #lower real limit,1st bin - bins = zeros(numbins) - extrapoints = 0 - for num in a: - try: - if (num-lowerreallimit) < 0: - extrapoints += 1 - else: - bintoincrement = int((num-lowerreallimit) / float(binsize)) - bins[bintoincrement] = bins[bintoincrement] + 1 - except: # point outside lower/upper limits - extrapoints += 1 + if defaultlimits is None: + # no range given, so use values in a + data_min = a.min() + data_max = a.max() + # Have bins extend past min and max values slightly + s = (data_max - data_min) / (2. 
* (numbins - 1.)) + defaultlimits = (data_min - s, data_max + s) + # use numpy's histogram method to compute bins + hist, bin_edges = np.histogram(a, bins=numbins, range=defaultlimits, + weights=weights) + # hist are not always floats, convert to keep with old output + hist = np.array(hist, dtype=float) + # fixed width for bins is assumed, as numpy's histogram gives + # fixed width bins for int values for 'bins' + binsize = bin_edges[1] - bin_edges[0] + # calculate number of extra points + extrapoints = len([v for v in a + if defaultlimits[0] > v or v > defaultlimits[1]]) if extrapoints > 0 and printextras: # fixme: warnings.warn() print '\nPoints outside given histogram range =',extrapoints - return (bins, lowerreallimit, binsize, extrapoints) + return (hist, defaultlimits[0], binsize, extrapoints) -def cumfreq(a, numbins=10, defaultreallimits=None): +def cumfreq(a, numbins=10, defaultreallimits=None, weights=None): """ Returns a cumulative frequency histogram, using the histogram function. Defaultreallimits can be None (use all data), or a 2-sequence containing @@ -1196,12 +1480,12 @@ Returns: array of cumfreq bin values, lowerreallimit, binsize, extrapoints """ - h,l,b,e = histogram(a,numbins,defaultreallimits) + h,l,b,e = histogram(a, numbins, defaultreallimits, weights=weights) cumhist = np.cumsum(h*1, axis=0) return cumhist,l,b,e -def relfreq(a, numbins=10, defaultreallimits=None): +def relfreq(a, numbins=10, defaultreallimits=None, weights=None): """ Returns a relative frequency histogram, using the histogram function. Defaultreallimits can be None (use all data), or a 2-sequence containing @@ -1209,7 +1493,7 @@ Returns: array of cumfreq bin values, lowerreallimit, binsize, extrapoints """ - h,l,b,e = histogram(a,numbins,defaultreallimits) + h,l,b,e = histogram(a,numbins,defaultreallimits, weights=weights) h = array(h/float(a.shape[0])) return h,l,b,e @@ -1237,8 +1521,8 @@ for i in range(k): nargs.append(args[i].astype(float)) n[i] = float(len(nargs[i])) - v[i] = var(nargs[i]) - m[i] = mean(nargs[i],None) + v[i] = np.var(nargs[i], ddof=1) + m[i] = np.mean(nargs[i]) for j in range(k): for i in range(int(n[j])): t1 = (n[j]-1.5)*n[j]*(nargs[j][i]-m[j])**2 @@ -1247,7 +1531,7 @@ nargs[j][i] = (t1-t2) / float(t3) check = 1 for j in range(k): - if v[j] - mean(nargs[j],None) > TINY: + if v[j] - np.mean(nargs[j]) > TINY: check = 0 if check != 1: raise ValueError, 'Lack of convergence in obrientransform.' @@ -1255,20 +1539,31 @@ return array(nargs) +@np.lib.deprecate(message=""" +scipy.stats.samplevar is deprecated; please update your code to use +numpy.var. + +Please note that `numpy.var` axis argument defaults to None, not 0. +""") def samplevar(a, axis=0): """ -Returns the sample standard deviation of the values in the passed -array (i.e., using N). Axis can equal None (ravel array first), -an integer (the axis over which to operate) -""" + Returns the sample standard deviation of the values in the passed + array (i.e., using N). Axis can equal None (ravel array first), + an integer (the axis over which to operate) + """ a, axis = _chk_asarray(a, axis) - mn = np.expand_dims(mean(a, axis), axis) + mn = np.expand_dims(np.mean(a, axis), axis) deviations = a - mn n = a.shape[axis] svar = ss(deviations,axis) / float(n) return svar +@np.lib.deprecate(message=""" +scipy.stats.samplestd is deprecated; please update your code to use +numpy.std. +Please note that `numpy.std` axis argument defaults to None, not 0. 
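A minimal sketch of the migration this deprecation message suggests (illustrative only, assuming NumPy's documented defaults of axis=None and ddof=0): the old samplestd defaulted to axis=0 and an N denominator, so both have to be spelled out with numpy.

    >>> import numpy as np
    >>> a = np.arange(12.).reshape(3, 4)
    >>> sd = np.std(a, axis=0, ddof=0)   # matches the old samplestd(a): axis=0, N in the denominator
    >>> sd_flat = np.std(a)              # numpy's own default operates on the flattened array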
+""") def samplestd(a, axis=0): """Returns the sample standard deviation of the values in the passed array (i.e., using N). Axis can equal None (ravel array first), @@ -1277,16 +1572,31 @@ return np.sqrt(samplevar(a,axis)) -def signaltonoise(instack, axis=0): +def signaltonoise(a, axis=0, ddof=0): """ -Calculates signal-to-noise. Axis can equal None (ravel array -first), an integer (the axis over which to operate). + Calculates the signal-to-noise ratio, defined as the ratio between the mean + and the standard deviation. + -Returns: array containing the value of (mean/stdev) along axis, - or 0 when stdev=0 -""" - m = mean(instack,axis) - sd = samplestd(instack,axis) + Parameters + ---------- + a: array-like + An array like object containing the sample data + axis: int or None, optional + If axis is equal to None, the array is first ravel'd. If axis is an + integer, this is the axis over which to operate. Defaults to None???0 + ddof : integer, optional, default 0 + degrees of freedom correction for standard deviation + + + Returns + ------- + array containing the value of the ratio of the mean to the standard + deviation along axis, or 0, when the standard deviation is equal to 0 + """ + a = np.asanyarray(a) + m = a.mean(axis) + sd = a.std(axis=axis, ddof=ddof) return np.where(sd == 0, 0, m/sd) def var(a, axis=0, bias=False): @@ -1295,23 +1605,14 @@ array (i.e., N-1). Axis can equal None (ravel array first), or an integer (the axis over which to operate). """ - warnings.warn("""\ + raise DeprecationWarning("""\ scipy.stats.var is deprecated; please update your code to use numpy.var. Please note that: - numpy.var axis argument defaults to None, not 0 - numpy.var has a ddof argument to replace bias in a more general manner. scipy.stats.var(a, bias=True) can be replaced by numpy.var(x, axis=0, ddof=0), scipy.stats.var(a, bias=False) by var(x, axis=0, - ddof=1).""", DeprecationWarning) - a, axis = _chk_asarray(a, axis) - mn = np.expand_dims(mean(a,axis),axis) - deviations = a - mn - n = a.shape[axis] - vals = sum(abs(deviations)**2,axis)/(n-1.0) - if bias: - return vals * (n-1.0)/n - else: - return vals + ddof=1).""") def std(a, axis=0, bias=False): """ @@ -1319,39 +1620,60 @@ the passed array (i.e., N-1). Axis can equal None (ravel array first), or an integer (the axis over which to operate). """ - warnings.warn("""\ + raise DeprecationWarning("""\ scipy.stats.std is deprecated; please update your code to use numpy.std. Please note that: - numpy.std axis argument defaults to None, not 0 - numpy.std has a ddof argument to replace bias in a more general manner. scipy.stats.std(a, bias=True) can be replaced by numpy.std(x, axis=0, ddof=0), scipy.stats.std(a, bias=False) by numpy.std(x, axis=0, - ddof=1).""", DeprecationWarning) - return np.sqrt(var(a,axis,bias)) + ddof=1).""") - -def stderr(a, axis=0): +@np.lib.deprecate(message=""" +scipy.stats.stderr is deprecated; please update your code to use +scipy.stats.sem. +""") +def stderr(a, axis=0, ddof=1): """ Returns the estimated population standard error of the values in the passed array (i.e., N-1). Axis can equal None (ravel array first), or an integer (the axis over which to operate). """ a, axis = _chk_asarray(a, axis) - return std(a,axis) / float(np.sqrt(a.shape[axis])) + return np.std(a,axis,ddof=1) / float(np.sqrt(a.shape[axis])) -def sem(a, axis=0): +def sem(a, axis=0, ddof=1): """ -Returns the standard error of the mean (i.e., using N) of the values -in the passed array. 
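A quick sketch of the rewritten sem under its new signature (assuming the default ddof=1 shown in this hunk): the result is the unbiased standard deviation divided by the square root of the sample size.

    >>> import numpy as np
    >>> from scipy import stats
    >>> x = np.array([1., 2., 3., 4., 5.])
    >>> se = stats.sem(x)                 # default ddof=1
    >>> np.allclose(se, np.std(x, ddof=1) / np.sqrt(len(x)))
    True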
Axis can equal None (ravel array first), or an -integer (the axis over which to operate) + Calculates the standard error of the mean (or standard error of + measurement) of the values in the passed array. + + Parameters + ---------- + a: array like + An array containing the values for which + axis: int or None, optional. + if equal None, ravel array first. If equal to an integer, this will be + the axis over which to operate. Defaults to 0. + ddof: int + Delta degrees-of-freedom. How many degrees of freedom to adjust for + bias in limited samples relative to the population estimate of variance + + Returns + ------- + The standard error of the mean in the sample(s), along the input axis + """ a, axis = _chk_asarray(a, axis) n = a.shape[axis] - s = samplestd(a,axis) / np.sqrt(n-1) + #s = samplestd(a,axis) / np.sqrt(n-1) + s = np.std(a,axis=axis, ddof=ddof) / np.sqrt(n) #JP check normalization return s - +@np.lib.deprecate(message=""" +scipy.stats.z is deprecated; please update your code to use +scipy.stats.zscore_compare. +""") def z(a, score): """ Returns the z-score of a given input score, given thearray from which @@ -1359,20 +1681,102 @@ arrays > 1D. """ - z = (score-mean(a,None)) / samplestd(a) + z = (score-np.mean(a,None)) / samplestd(a) return z - +@np.lib.deprecate(message=""" +scipy.stats.zs is deprecated; please update your code to use +scipy.stats.zscore. +""") def zs(a): """ Returns a 1D array of z-scores, one for each score in the passed array, computed relative to the passed array. """ - mu = mean(a,None) + mu = np.mean(a,None) sigma = samplestd(a) return (array(a)-mu)/sigma + +def zscore(a, axis=0, ddof=0): + """ + Calculates the z score of each value in the sample, relative to the sample + mean and standard deviation. + + Parameters + ---------- + a: array_like + An array like object containing the sample data + axis: int or None, optional + If axis is equal to None, the array is first ravel'd. If axis is an + integer, this is the axis over which to operate. Defaults to 0. + + Returns + ------- + zscore: array_like + the z-scores, standardized by mean and standard deviation of input + array + + Notes + ----- + This function does not convert array classes, and works also with + matrices and masked arrays. + + """ + a = np.asanyarray(a) + mns = a.mean(axis=axis) + sstd = a.std(axis=axis, ddof=ddof) + if axis and mns.ndim < a.ndim: + return ((a - np.expand_dims(mns, axis=axis) / + np.expand_dims(sstd,axis=axis))) + else: + return (a - mns) / sstd + + + +def zmap(scores, compare, axis=0, ddof=0): + """ + Calculates the zscores relative to the mean and standard deviation + of second input. + + Returns an array of z-scores, i.e. scores that are standardized to zero + mean and unit variance, where mean and variance are calculated from the + comparison array. + + Parameters + ---------- + scores : array-like + The input for which z scores are calculated + compare : array-like + The input from which the mean and standard deviation of the + normalization are taken, assumed to have same dimension as scores + axis : integer or None, {optional, default 0) + axis over which mean and std of compare array are calculated + + Returns + ------- + zscore : array_like + zscore in the same shape as scores + + Notes + ----- + This function does not convert array classes, and works also with + matrices and masked arrays. 
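A short sketch of the zscore/zmap pair along the default axis, assuming the signatures introduced in this hunk: zscore standardizes against the sample's own mean and standard deviation, zmap against those of a second array.

    >>> import numpy as np
    >>> from scipy import stats
    >>> sample = np.array([0.4, 1.2, 2.5, 0.9, 3.1])
    >>> compare = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    >>> z = stats.zscore(sample)          # standardized by sample's own mean/std (ddof=0)
    >>> zm = stats.zmap(sample, compare)  # standardized by compare's mean/std
    >>> np.allclose(z, (sample - sample.mean()) / sample.std())
    True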
+ + """ + scores, compare = map(np.asanyarray, [scores, compare]) + mns = compare.mean(axis=axis) + sstd = compare.std(axis=axis, ddof=ddof) + if axis and mns.ndim < compare.ndim: + return ((scores - np.expand_dims(mns, axis=axis) / + np.expand_dims(sstd,axis=axis))) + else: + return (scores - mns) / sstd + + + + def zmap(scores, compare, axis=0): """ Returns an array of z-scores the shape of scores (e.g., [x,y]), compared to @@ -1380,7 +1784,7 @@ of the compare array. """ - mns = mean(compare,axis) + mns = np.mean(compare,axis) sstd = samplestd(compare,0) return (scores - mns) / sstd @@ -1410,6 +1814,68 @@ return a + +def sigmaclip(a, low=4., high=4.): + """Iterative sigma-clipping of array elements. + + The output array contains only those elements of the input array `c` + that satisfy the conditions :: + + mean(c) - std(c)*low < c < mean(c) + std(c)*high + + Parameters + ---------- + a : array_like + data array, will be raveled if not 1d + low : float + lower bound factor of sigma clipping + high : float + upper bound factor of sigma clipping + + Returns + ------- + c : array + input array with clipped elements removed + critlower : float + lower threshold value use for clipping + critlupper : float + upper threshold value use for clipping + + + Examples + -------- + >>> a = np.concatenate((np.linspace(9.5,10.5,31),np.linspace(0,20,5))) + >>> fact = 1.5 + >>> c, low, upp = sigmaclip(a, fact, fact) + >>> c + array([ 9.96666667, 10. , 10.03333333, 10. ]) + >>> c.var(), c.std() + (0.00055555555555555165, 0.023570226039551501) + >>> low, c.mean() - fact*c.std(), c.min() + (9.9646446609406727, 9.9646446609406727, 9.9666666666666668) + >>> upp, c.mean() + fact*c.std(), c.max() + (10.035355339059327, 10.035355339059327, 10.033333333333333) + + >>> a = np.concatenate((np.linspace(9.5,10.5,11), + np.linspace(-100,-50,3))) + >>> c, low, upp = sigmaclip(a, 1.8, 1.8) + >>> (c == np.linspace(9.5,10.5,11)).all() + True + + """ + c = np.asarray(a).ravel() + delta = 1 + while delta: + c_std = c.std() + c_mean = c.mean() + size = c.size + critlower = c_mean - c_std*low + critupper = c_mean + c_std*high + c = c[(c>critlower) & (c 2. + a, b : 1D or 2D array_like, b is optional + One or two 1-D or 2-D arrays containing multiple variables and + observations. Each column of m represents a variable, and each row + entry a single observation of those variables. Also see axis below. + Both arrays need to have the same length in the `axis` dimension. + + axis : int or None, optional + If axis=0 (default), then each column represents a variable, with + observations in the rows. If axis=0, the relationship is transposed: + each row represents a variable, while the columns contain observations. + If axis=None, then both arrays will be raveled Returns ------- - (Spearman correlation coefficient, - 2-tailed p-value) + rho: float or array (2D square) + Spearman correlation matrix or correlation coefficient (if only 2 variables + are given as parameters. 
Correlation matrix is square with length + equal to total number of variables (columns or rows) in a and b + combined + p-value : float + The two-sided p-value for a hypothesis test whose null hypothesis is + that two sets of data are uncorrelated, has same dimension as rho + + Notes + ----- + changes in scipy 0.8: rewrite to add tie-handling, and axis References ---------- - [CRCProbStat2000] section 14.7 - """ - x = np.asanyarray(x) - y = np.asanyarray(y) - n = len(x) - m = len(y) - if n != m: - raise ValueError("lengths of x and y must match: %s != %s" % (n, m)) - if n <= 2: - raise ValueError("length must be > 2") - rankx = rankdata(x) - ranky = rankdata(y) - dsq = np.add.reduce((rankx-ranky)**2) - rs = 1 - 6*dsq / float(n*(n**2-1)) - df = n-2 + [CRCProbStat2000]_ Section 14.7 - try: - t = rs * np.sqrt((n-2) / ((rs+1.0)*(1.0-rs))) - probrs = betai(0.5*df, 0.5, df/(df+t*t)) - except ZeroDivisionError: - probrs = 0.0 + .. [CRCProbStat2000] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probablity and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. - return rs, probrs + Examples + -------- + + >>> spearmanr([1,2,3,4,5],[5,6,7,8,7]) + (0.82078268166812329, 0.088587005313543798) + >>> np.random.seed(1234321) + >>> x2n=np.random.randn(100,2) + >>> y2n=np.random.randn(100,2) + >>> spearmanr(x2n) + (0.059969996999699973, 0.55338590803773591) + >>> spearmanr(x2n[:,0], x2n[:,1]) + (0.059969996999699973, 0.55338590803773591) + >>> rho, pval = spearmanr(x2n,y2n) + >>> rho + array([[ 1. , 0.05997 , 0.18569457, 0.06258626], + [ 0.05997 , 1. , 0.110003 , 0.02534653], + [ 0.18569457, 0.110003 , 1. , 0.03488749], + [ 0.06258626, 0.02534653, 0.03488749, 1. ]]) + >>> pval + array([[ 0. , 0.55338591, 0.06435364, 0.53617935], + [ 0.55338591, 0. , 0.27592895, 0.80234077], + [ 0.06435364, 0.27592895, 0. , 0.73039992], + [ 0.53617935, 0.80234077, 0.73039992, 0. ]]) + >>> rho, pval = spearmanr(x2n.T, y2n.T, axis=1) + >>> rho + array([[ 1. , 0.05997 , 0.18569457, 0.06258626], + [ 0.05997 , 1. , 0.110003 , 0.02534653], + [ 0.18569457, 0.110003 , 1. , 0.03488749], + [ 0.06258626, 0.02534653, 0.03488749, 1. ]]) + >>> spearmanr(x2n, y2n, axis=None) + (0.10816770419260482, 0.1273562188027364) + >>> spearmanr(x2n.ravel(), y2n.ravel()) + (0.10816770419260482, 0.1273562188027364) + + >>> xint = np.random.randint(10,size=(100,2)) + >>> spearmanr(xint) + (0.052760927029710199, 0.60213045837062351) + + """ + a, axisout = _chk_asarray(a, axis) + ar = np.apply_along_axis(rankdata,axisout,a) + + br = None + if not b is None: + b, axisout = _chk_asarray(b, axis) + br = np.apply_along_axis(rankdata,axisout,b) + n = a.shape[axisout] + rs = np.corrcoef(ar,br,rowvar=axisout) + + t = rs * np.sqrt((n-2) / ((rs+1.0)*(1.0-rs))) + prob = distributions.t.sf(np.abs(t),n-2)*2 + + if rs.shape == (2,2): + return rs[1,0], prob[1,0] + else: + return rs, prob def pointbiserialr(x, y): @@ -1718,7 +2281,8 @@ y0m = y0.mean() y1m = y1.mean() - rpb = (y1m - y0m)*np.sqrt(phat * (1-phat)) / y.std() + # phat - phat**2 is more stable than phat*(1-phat) + rpb = (y1m - y0m) * np.sqrt(phat - phat**2) / y.std() df = n-2 # fixme: see comment about TINY in pearsonr() @@ -1729,10 +2293,29 @@ def kendalltau(x, y): - """Calculates Kendall's tau, a correlation measure for ordinal data, and an - associated p-value. + """ + Calculates Kendall's tau, a correlation measure for ordinal data + + Kendall's tau is a measure of the correspondence between two rankings. 
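A minimal usage sketch for the documented kendalltau signature, using toy data without ties (so tau-b reduces to the plain tau):

    >>> from scipy import stats
    >>> x = [1, 2, 3, 4, 5]
    >>> y = [2, 1, 4, 3, 5]
    >>> tau, p_value = stats.kendalltau(x, y)   # 8 concordant vs 2 discordant pairs, so tau = 0.6
    >>> # p_value is the two-sided p-value under the null hypothesis of no association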
+ Values close to 1 indicate strong agreement, values close to -1 indicate + strong disagreement. This is the tau-b version of Kendall's tau which + accounts for ties. + + Parameters + ---------- + x : array_like + array of rankings + y : array_like + second array of rankings, must be the same length as x + + Returns + ------- + Kendall's tau : float + The tau statistic + p-value : float + The two-sided p-value for a hypothesis test whose null hypothesis is + an absence of association, tau = 0. - Returns: Kendall's tau, two-tailed p-value """ n1 = 0 n2 = 0 @@ -1757,16 +2340,52 @@ tau = iss / np.sqrt(float(n1*n2)) svar = (4.0*len(x)+10.0) / (9.0*len(x)*(len(x)-1)) z = tau / np.sqrt(svar) - prob = erfc(abs(z)/1.4142136) + prob = special.erfc(abs(z)/1.4142136) return tau, prob def linregress(*args): - """Calculates a regression line on two arrays, x and y, corresponding to - x,y pairs. If a single 2D array is passed, linregress finds dim with 2 - levels and splits data into x,y pairs along that dim. + """ + Calculate a regression line + + This computes a least-squares regression for two sets of measurements. + + Parameters + ---------- + x, y : array_like + two sets of measurements. Both arrays should have the same length. + If only x is given, then it must be a two-dimensional array where one + dimension has length 2. The two sets of measurements are then found + by splitting the array along the length-2 dimension. + + Returns + ------- + slope : float + slope of the regression line + intercept : float + intercept of the regression line + r-value : float + correlation coefficient + p-value : float + two-sided p-value for a hypothesis test whose null hypothesis is + that the slope is zero. + stderr : float + Standard error of the estimate + + + Examples + -------- + >>> from scipy import stats + >>> import numpy as np + >>> x = np.random.random(10) + >>> y = np.random.random(10) + >>> slope, intercept, r_value, p_value, std_err = stats.linregress(x,y) + + # To get coefficient of determination (r_squared) + + >>> print "r-squared:", r_value**2 + r-squared: 0.15286643777 - Returns: slope, intercept, r, two-tailed prob, stderr-of-the-estimate """ TINY = 1.0e-20 if len(args) == 1: # more than 1D array? @@ -1959,7 +2578,7 @@ n2 = b.shape[axis] df = n1+n2-2 - d = mean(a,axis) - mean(b,axis) + d = np.mean(a,axis) - np.mean(b,axis) svar = ((n1-1)*v1+(n2-1)*v2) / float(df) t = d/np.sqrt(svar*(1.0/n1 + 1.0/n2)) @@ -1979,7 +2598,8 @@ def ttest_rel(a,b,axis=0): - """Calculates the T-test on TWO RELATED samples of scores, a and b. + """ + Calculates the T-test on TWO RELATED samples of scores, a and b. This is a two-sided test for the null hypothesis that 2 related or repeated samples have identical average (expected) values. @@ -1999,40 +2619,35 @@ prob : float or array two-tailed p-value - Notes ----- - Examples for the use are scores of the same set of student in different exams, or repeated sampling from the same units. The test measures whether the average score differs significantly across samples (e.g. exams). If we observe a large p-value, for - example greater than 0.5 or 0.1 then we cannot reject the null + example greater than 0.05 or 0.1 then we cannot reject the null hypothesis of identical average scores. If the p-value is smaller than the threshold, e.g. 1%, 5% or 10%, then we reject the null hypothesis of equal averages. Small p-values are associated with large t-statistics. 
- References - ---------- + References + ---------- - http://en.wikipedia.org/wiki/T-test#Dependent_t-test + http://en.wikipedia.org/wiki/T-test#Dependent_t-test Examples -------- >>> from scipy import stats - >>> import numpy as np - - >>> #fix random seed to get the same result - >>> np.random.seed(12345678) + >>> np.random.seed(12345678) # fix random seed to get same numbers >>> rvs1 = stats.norm.rvs(loc=5,scale=10,size=500) - >>> rvs2 = stats.norm.rvs(loc=5,scale=10,size=500) + \ - stats.norm.rvs(scale=0.2,size=500) + >>> rvs2 = (stats.norm.rvs(loc=5,scale=10,size=500) + + ... stats.norm.rvs(scale=0.2,size=500)) >>> stats.ttest_rel(rvs1,rvs2) (0.24101764965300962, 0.80964043445811562) - >>> rvs3 = stats.norm.rvs(loc=8,scale=10,size=500) + \ - stats.norm.rvs(scale=0.2,size=500) + >>> rvs3 = (stats.norm.rvs(loc=8,scale=10,size=500) + + ... stats.norm.rvs(scale=0.2,size=500)) >>> stats.ttest_rel(rvs1,rvs3) (-3.9995108708727933, 7.3082402191726459e-005) @@ -2070,7 +2685,7 @@ #import distributions def kstest(rvs, cdf, args=(), N=20, alternative = 'two_sided', mode='approx',**kwds): """ - Return the D-value and the p-value for a Kolmogorov-Smirnov test + Perform the Kolmogorov-Smirnov test for goodness of fit This performs a test of the distribution G(x) of an observed random variable against a given distribution F(x). Under the null @@ -2118,15 +2733,11 @@ Notes ----- - In the two one-sided test, the alternative is that the empirical + In the one-sided test, the alternative is that the empirical cumulative distribution function of the random variable is "less" - or "greater" then the cumulative distribution function F(x) of the + or "greater" than the cumulative distribution function F(x) of the hypothesis, G(x)<=F(x), resp. G(x)>=F(x). - If the p-value is greater than the significance level (say 5%), then we - cannot reject the hypothesis that the data come from the given - distribution. - Examples -------- @@ -2228,12 +2839,49 @@ else: return D, distributions.ksone.sf(D,N)*2 -def chisquare(f_obs, f_exp=None): - """ Calculates a one-way chi square for array of observed frequencies - and returns the result. If no expected frequencies are given, the total - N is assumed to be equally distributed across all groups. +def chisquare(f_obs, f_exp=None, ddof=0): + """ + Calculates a one-way chi square test. + + The chi square test tests the null hypothesis that the categorical data + has the given frequencies. + + Parameters + ---------- + f_obs : array + observed frequencies in each category + f_exp : array, optional + expected frequencies in each category. By default the categories are + assumed to be equally likely. + ddof : int, optional + adjustment to the degrees of freedom for the p-value + + Returns + ------- + chisquare statistic : float + The chisquare test statistic + p : float + The p-value of the test. + + Notes + ----- + This test is invalid when the observed or expected frequencies in each + category are too small. A typical rule is that all of the observed + and expected frequencies should be at least 5. + The default degrees of freedom, k-1, are for the case when no parameters + of the distribution are estimated. If p parameters are estimated by + efficient maximum likelihood then the correct degrees of freedom are + k-1-p. If the parameters are estimated in a different way, then then + the dof can be between k-1-p and k-1. However, it is also possible that + the asymptotic distributions is not a chisquare, in which case this + test is notappropriate. 
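A brief sketch of the new ddof keyword, assuming uniform expected frequencies when f_exp is omitted as documented above: ddof reduces the degrees of freedom used for the p-value when parameters of the expected distribution were estimated from the data.

    >>> from scipy import stats
    >>> observed = [16, 18, 16, 14, 12, 12]
    >>> chi2, p = stats.chisquare(observed)              # k - 1 = 5 degrees of freedom
    >>> chi2_f, p_f = stats.chisquare(observed, ddof=1)  # one fitted parameter: k - 1 - 1 = 4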
+ + References + ---------- + + .. [1] Lowry, Richard. "Concepts and Applications of Inferential + Statistics". Chapter 8. http://faculty.vassar.edu/lowry/ch8pt1.html - Returns: chisquare-statistic, associated p-value """ f_obs = asarray(f_obs) @@ -2242,11 +2890,12 @@ f_exp = array([np.sum(f_obs,axis=0)/float(k)] * len(f_obs),float) f_exp = f_exp.astype(float) chisq = np.add.reduce((f_obs-f_exp)**2 / f_exp) - return chisq, chisqprob(chisq, k-1) + return chisq, chisqprob(chisq, k-1-ddof) def ks_2samp(data1, data2): - """ Computes the Kolmogorov-Smirnof statistic on 2 samples. + """ + Computes the Kolmogorov-Smirnof statistic on 2 samples. This is a two-sided test for the null hypothesis that 2 independent samples are drawn from the same continuous distribution. @@ -2280,8 +2929,8 @@ reject the hypothesis that the distributions of the two samples are the same. - Examples: - --------- + Examples + -------- >>> from scipy import stats >>> import numpy as np @@ -2338,22 +2987,23 @@ def mannwhitneyu(x, y, use_continuity=True): - """Computes the Mann-Whitney rank test on samples x and y. - + """ + Computes the Mann-Whitney rank test on samples x and y. Parameters ---------- - x : array_like 1d - y : array_like 1d - use_continuity : {True, False} optional, default True - Whether a continuity correction (1/2.) should be taken into account. + x, y : array_like + Array of samples, should be one-dimensional. + use_continuity : bool, optional + Whether a continuity correction (1/2.) should be taken into + account. Default is True. Returns ------- - u : float - The Mann-Whitney statistics - prob : float - one-sided p-value assuming a asymptotic normal distribution. + u : float + The Mann-Whitney statistics. + prob : float + One-sided p-value assuming a asymptotic normal distribution. Notes ----- @@ -2365,7 +3015,7 @@ This test corrects for ties and by default uses a continuity correction. The reported p-value is for a one-sided hypothesis, to get the two-sided p-value multiply the returned p-value by 2. - + """ x = asarray(x) y = asarray(y) @@ -2417,10 +3067,36 @@ def ranksums(x, y): - """Calculates the rank sums statistic on the provided scores and - returns the result. + """ + Compute the Wilcoxon rank-sum statistic for two samples. + + The Wilcoxon rank-sum test tests the null hypothesis that two sets + of measurements are drawn from the same distribution. The alternative + hypothesis is that values in one sample are more likely to be + larger than the values in the other sample. + + This test should be used to compare two samples from continuous + distributions. It does not handle ties between measurements + in x and y. For tie-handling and an optional continuity correction + see `stats.mannwhitneyu`_ + + Parameters + ---------- + x,y : array_like + The data from the two samples + + Returns + ------- + z-statistic : float + The test statistic under the large-sample approximation that the + rank sum statistic is normally distributed + p-value : float + The two-sided p-value of the test + + References + ---------- + .. 
[1] http://en.wikipedia.org/wiki/Wilcoxon_rank-sum_test - Returns: z-statistic, two-tailed p-value """ x,y = map(np.asarray, (x, y)) n1 = len(x) @@ -2432,18 +3108,46 @@ s = np.sum(x,axis=0) expected = n1*(n1+n2+1) / 2.0 z = (s - expected) / np.sqrt(n1*n2*(n1+n2+1)/12.0) - prob = 2*(1.0 -zprob(abs(z))) + prob = 2 * distributions.norm.sf(abs(z)) return z, prob def kruskal(*args): - """The Kruskal-Wallis H-test is a non-parametric ANOVA for 2 or more - groups, requiring at least 5 subjects in each group. This function - calculates the Kruskal-Wallis H and associated p-value for 2 or more - independent samples. + """ + Compute the Kruskal-Wallis H-test for independent samples + + The Kruskal-Wallis H-test tests the null hypothesis that the population + median of all of the groups are equal. It is a non-parametric version of + ANOVA. The test works on 2 or more independent samples, which may have + different sizes. Note that rejecting the null hypothesis does not + indicate which of the groups differs. Post-hoc comparisons between + groups are required to determine which groups are different. + + Parameters + ---------- + sample1, sample2, ... : array_like + Two or more arrays with the sample measurements can be given as + arguments. + + Returns + ------- + H-statistic : float + The Kruskal-Wallis H statistic, corrected for ties + p-value : float + The p-value for the test using the assumption that H has a chi + square distribution + + Notes + ----- + Due to the assumption that H has a chi square distribution, the number + of samples in each group must not be too small. A typical rule is + that each sample must have at least 5 measurements. + + References + ---------- + .. [1] http://en.wikipedia.org/wiki/Kruskal-Wallis_one-way_analysis_of_variance - Returns: H-statistic (corrected for ties), associated p-value """ assert len(args) >= 2, "Need at least 2 groups in stats.kruskal()" n = map(len,args) @@ -2471,15 +3175,40 @@ def friedmanchisquare(*args): - """Friedman Chi-Square is a non-parametric, one-way within-subjects - ANOVA. This function calculates the Friedman Chi-square test for - repeated measures and returns the result, along with the associated - probability value. + """ + Computes the Friedman test for repeated measurements + + The Friedman test tests the null hypothesis that repeated measurements of + the same individuals have the same distribution. It is often used + to test for consistency among measurements obtained in different ways. + For example, if two measurement techniques are used on the same set of + individuals, the Friedman test can be used to determine if the two + measurement techniques are consistent. - This function uses Chisquared aproximation of Friedman Chisquared - distribution. This is exact only if n > 10 and factor levels > 6. + Parameters + ---------- + measurements1, measurements2, measurements3... : array_like + Arrays of measurements. All of the arrays must have the same number + of elements. At least 3 sets of measurements must be given. + + Returns + ------- + friedman chi-square statistic : float + the test statistic, correcting for ties + p-value : float + the associated p-value assuming that the test statistic has a chi + squared distribution + + Notes + ----- + Due to the assumption that the test statistic has a chi squared + distribution, the p-vale is only reliable for n > 10 and more than + 6 repeated measurements. + + References + ---------- + .. 
[1] http://en.wikipedia.org/wiki/Friedman_test - Returns: friedman chi-square statistic, associated p-valueIt assumes 3 or more repeated measures. Only 3 """ k = len(args) if k < 3: @@ -2488,8 +3217,6 @@ for i in range(1,k): if len(args[i]) <> n: raise ValueError, 'Unequal N in friedmanchisquare. Aborting.' - if n < 10 and k < 6: - print 'Warning: friedmanchisquare test using Chisquared aproximation' # Rank data data = apply(_support.abut,args) @@ -2515,7 +3242,8 @@ ##################################### zprob = special.ndtr -erfc = special.erfc +erfc = np.lib.deprecate(special.erfc, old_name="scipy.stats.erfc", + new_name="scipy.special.erfc") def chisqprob(chisq, df): """Returns the (1-tail) probability value associated with the provided @@ -2619,11 +3347,29 @@ return n_um / d_en def f_value(ER, EF, dfR, dfF): - """Returns an F-statistic given the following: - ER = error associated with the null hypothesis (the Restricted model) - EF = error associated with the alternate hypothesis (the Full model) - dfR = degrees of freedom the Restricted model - dfF = degrees of freedom associated with the Restricted model + """ + Returns an F-statistic for a restricted vs. unrestricted model. + + Parameters + ---------- + ER : float + `ER` is the sum of squared residuals for the restricted model + or null hypothesis + + EF : float + `EF` is the sum of squared residuals for the unrestricted model + or alternate hypothesis + + dfR : int + `dfR` is the degrees of freedom in the restricted model + + dfF : int + `dfF` is the degrees of freedom in the unrestricted model + + Returns + ------- + F-statistic : float + """ return ((ER-EF)/float(dfR-dfF) / (EF/float(dfF))) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_continuous_basic.py python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_continuous_basic.py --- python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_continuous_basic.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_continuous_basic.py 2010-07-26 15:48:37.000000000 +0100 @@ -23,7 +23,7 @@ """ #currently not used -DECIMAL = 0 # specify the precision of the tests +DECIMAL = 5 # specify the precision of the tests # increased from 0 to 5 DECIMAL_kurt = 0 distcont = [ @@ -73,7 +73,7 @@ ['invweibull', (10.58,)], # sample mean test fails at(0.58847112119264788,)] ['johnsonsb', (4.3172675099141058, 3.1837781130785063)], ['johnsonsu', (2.554395574161155, 2.2482281679651965)], - ['ksone', (22,)], # new added + ['ksone', (1000,)], #replace 22 by 100 to avoid failing range, ticket 956 ['kstwobign', ()], ['laplace', ()], ['levy', ()], @@ -285,7 +285,7 @@ ## assert abs(sm) > 10000, 'infinite moment, sm = ' + str(sm) def check_cdf_ppf(distfn,arg,msg): - npt.assert_almost_equal(distfn.cdf(distfn.ppf([0.001,0.5,0.990], *arg), *arg), + npt.assert_almost_equal(distfn.cdf(distfn.ppf([0.001,0.5,0.999], *arg), *arg), [0.001,0.5,0.999], decimal=DECIMAL, err_msg= msg + \ ' - cdf-ppf roundtrip') diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_discrete_basic.py python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_discrete_basic.py --- python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_discrete_basic.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_discrete_basic.py 2010-07-26 15:48:37.000000000 +0100 @@ -13,12 +13,15 @@ ['dlaplace', (0.8,)], #0.5 ['geom', (0.5,)], ['hypergeom',(30, 12, 6)], - ['logser', (0.6,)], + ['hypergeom',(21,3,12)], #numpy.random (3,18,12) numpy ticket:921 + ['hypergeom',(21,18,11)], #numpy.random (18,3,11) numpy 
ticket:921 + ['logser', (0.6,)], # reenabled, numpy ticket:921 ['nbinom', (5, 0.5)], ['nbinom', (0.4, 0.4)], #from tickets: 583 ['planck', (0.51,)], #4.1 ['poisson', (0.6,)], ['randint', (7, 31)], + ['skellam', (15, 8)], ['zipf', (4,)] ] # arg=4 is ok, # Zipf broken for arg = 2, e.g. weird .stats # looking closer, mean, var should be inf for arg=2 @@ -31,19 +34,21 @@ #assert stats.dlaplace.rvs(0.8) is not None np.random.seed(9765456) rvs = distfn.rvs(size=2000,*arg) + supp = np.unique(rvs) m,v = distfn.stats(*arg) #yield npt.assert_almost_equal(rvs.mean(), m, decimal=4,err_msg='mean') #yield npt.assert_almost_equal, rvs.mean(), m, 2, 'mean' # does not work yield check_sample_meanvar, rvs.mean(), m, distname + ' sample mean test' yield check_sample_meanvar, rvs.var(), v, distname + ' sample var test' yield check_cdf_ppf, distfn, arg, distname + ' cdf_ppf' + yield check_cdf_ppf2, distfn, arg, supp, distname + ' cdf_ppf' yield check_pmf_cdf, distfn, arg, distname + ' pmf_cdf' yield check_oth, distfn, arg, distname + ' oth' skurt = stats.kurtosis(rvs) sskew = stats.skew(rvs) yield check_sample_skew_kurt, distfn, arg, skurt, sskew, \ distname + ' skew_kurt' - if not distname in ['logser']: #known failure + if not distname in ['']:#['logser']: #known failure, fixed alpha = 0.01 yield check_discrete_chisquare, distfn, arg, rvs, alpha, \ distname + ' chisquare' @@ -94,6 +99,14 @@ err_msg=msg + 'ppf-cdf-median') assert (distfn.ppf(cdf05+1e-4,*arg)>ppf05), msg + 'ppf-cdf-next' +def check_cdf_ppf2(distfn,arg,supp,msg): + npt.assert_array_equal(distfn.ppf(distfn.cdf(supp,*arg),*arg), + supp, msg + '-roundtrip') + npt.assert_array_equal(distfn.ppf(distfn.cdf(supp,*arg)-1e-8,*arg), + supp, msg + '-roundtrip') + # -1e-8 could cause an error if pmf < 1e-8 + + def check_cdf_ppf_private(distfn,arg,msg): ppf05 = distfn._ppf(0.5,*arg) cdf05 = distfn.cdf(ppf05,*arg) @@ -236,7 +249,7 @@ histsupp[0] = distfn.a # find sample frequencies and perform chisquare test - freq,hsupp = np.histogram(rvs,histsupp,new=True) + freq,hsupp = np.histogram(rvs,histsupp) cdfs = distfn.cdf(distsupp,*arg) (chis,pval) = stats.chisquare(np.array(freq),n*distmass) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_distributions.py python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_distributions.py --- python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_distributions.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_distributions.py 2010-07-26 15:48:37.000000000 +0100 @@ -6,8 +6,10 @@ import numpy +import numpy as np from numpy import typecodes, array import scipy.stats as stats +from scipy.stats.distributions import argsreduce def kolmogorov_check(diststr,args=(),N=20,significance=0.01): qtest = stats.ksoneisf(significance,N) @@ -65,12 +67,30 @@ vals[1] = vals[0] + 1.0 args = tuple(vals) elif dist == 'vonmises': - yield check_distribution, dist, (100,), alpha + yield check_distribution, dist, (10,), alpha + yield check_distribution, dist, (101,), alpha args = tuple(1.0+rand(nargs)) else: args = tuple(1.0+rand(nargs)) yield check_distribution, dist, args, alpha +def check_vonmises_pdf_periodic(k,l,s,x): + vm = stats.vonmises(k,loc=l,scale=s) + assert_almost_equal(vm.pdf(x),vm.pdf(x%(2*numpy.pi*s))) +def check_vonmises_cdf_periodic(k,l,s,x): + vm = stats.vonmises(k,loc=l,scale=s) + assert_almost_equal(vm.cdf(x)%1,vm.cdf(x%(2*numpy.pi*s))%1) + +def test_vonmises_pdf_periodic(): + for k in [0.1, 1, 101]: + for x in [0,1,numpy.pi,10,100]: + yield check_vonmises_pdf_periodic, k, 0, 1, x + yield 
check_vonmises_pdf_periodic, k, 1, 1, x + yield check_vonmises_pdf_periodic, k, 0, 10, x + + yield check_vonmises_cdf_periodic, k, 0, 1, x + yield check_vonmises_cdf_periodic, k, 1, 1, x + yield check_vonmises_cdf_periodic, k, 0, 10, x class TestRandInt(TestCase): def test_rvs(self): @@ -261,6 +281,72 @@ assert_almost_equal(stats.exponpow.cdf(1e-10, 2.), 1e-20) assert_almost_equal(stats.exponpow.isf(stats.exponpow.sf(5, .8), .8), 5) + +class TestSkellam(TestCase): + def test_pmf(self): + #comparison to R + k = numpy.arange(-10, 15) + mu1, mu2 = 10, 5 + skpmfR = numpy.array( + [4.2254582961926893e-005, 1.1404838449648488e-004, + 2.8979625801752660e-004, 6.9177078182101231e-004, + 1.5480716105844708e-003, 3.2412274963433889e-003, + 6.3373707175123292e-003, 1.1552351566696643e-002, + 1.9606152375042644e-002, 3.0947164083410337e-002, + 4.5401737566767360e-002, 6.1894328166820688e-002, + 7.8424609500170578e-002, 9.2418812533573133e-002, + 1.0139793148019728e-001, 1.0371927988298846e-001, + 9.9076583077406091e-002, 8.8546660073089561e-002, + 7.4187842052486810e-002, 5.8392772862200251e-002, + 4.3268692953013159e-002, 3.0248159818374226e-002, + 1.9991434305603021e-002, 1.2516877303301180e-002, + 7.4389876226229707e-003]) + + assert_almost_equal(stats.skellam.pmf(k, mu1, mu2), skpmfR, decimal=15) + + def test_cdf(self): + #comparison to R, only 5 decimals + k = numpy.arange(-10, 15) + mu1, mu2 = 10, 5 + skcdfR = numpy.array( + [6.4061475386192104e-005, 1.7810985988267694e-004, + 4.6790611790020336e-004, 1.1596768997212152e-003, + 2.7077485103056847e-003, 5.9489760066490718e-003, + 1.2286346724161398e-002, 2.3838698290858034e-002, + 4.3444850665900668e-002, 7.4392014749310995e-002, + 1.1979375231607835e-001, 1.8168808048289900e-001, + 2.6011268998306952e-001, 3.5253150251664261e-001, + 4.5392943399683988e-001, 5.5764871387982828e-001, + 6.5672529695723436e-001, 7.4527195703032389e-001, + 8.1945979908281064e-001, 8.7785257194501087e-001, + 9.2112126489802404e-001, 9.5136942471639818e-001, + 9.7136085902200120e-001, 9.8387773632530240e-001, + 9.9131672394792536e-001]) + + assert_almost_equal(stats.skellam.cdf(k, mu1, mu2), skcdfR, decimal=5) + + +class TestHypergeom(TestCase): + def test_precision(self): + # comparison number from mpmath + + M,n,N = 2500,50,500 + tot=M;good=n;bad=tot-good + hgpmf = stats.hypergeom.pmf(2,tot,good,N) + + assert_almost_equal(hgpmf, 0.0010114963068932233, 11) + +class TestChi2(TestCase): + # regression tests after precision improvements, ticket:1041, not verified + def test_precision(self): + assert_almost_equal(stats.chi2.pdf(1000, 1000), 8.919133934753128e-003, 14) + assert_almost_equal(stats.chi2.pdf(100, 100), 0.028162503162596778, 14) + +class TestArrayArgument(TestCase): #test for ticket:992 + def test_noexception(self): + rvs = stats.norm.rvs(loc=(np.arange(5)), scale=np.ones(5), size=(10,5)) + assert_equal(rvs.shape, (10,5)) + class TestDocstring(TestCase): def test_docstrings(self): """See ticket #761""" @@ -279,5 +365,21 @@ assert(0.0 == eself) assert(edouble >= 0.0) +def TestArgsreduce(): + a = array([1,3,2,1,2,3,3]) + b,c = argsreduce(a > 1, a, 2) + + assert_array_equal(b, [3,2,2,3,3]) + assert_array_equal(c, [2,2,2,2,2]) + + b,c = argsreduce(2 > 1, a, 2) + assert_array_equal(b, a[0]) + assert_array_equal(c, [2]) + + b,c = argsreduce(a > 0, a, 2) + assert_array_equal(b, a) + assert_array_equal(c, [2] * numpy.size(a)) + + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_kdeoth.py 
python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_kdeoth.py --- python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_kdeoth.py 1970-01-01 01:00:00.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_kdeoth.py 2010-07-26 15:48:37.000000000 +0100 @@ -0,0 +1,36 @@ + + + +from scipy import stats +import numpy as np +from numpy.testing import assert_almost_equal, assert_ + +def test_kde_1d(): + #some basic tests comparing to normal distribution + np.random.seed(8765678) + n_basesample = 500 + xn = np.random.randn(n_basesample) + xnmean = xn.mean() + xnstd = xn.std(ddof=1) + + # get kde for original sample + gkde = stats.gaussian_kde(xn) + + # evaluate the density funtion for the kde for some points + xs = np.linspace(-7,7,501) + kdepdf = gkde.evaluate(xs) + normpdf = stats.norm.pdf(xs, loc=xnmean, scale=xnstd) + intervall = xs[1] - xs[0] + + assert_(np.sum((kdepdf - normpdf)**2)*intervall < 0.01) + prob1 = gkde.integrate_box_1d(xnmean, np.inf) + prob2 = gkde.integrate_box_1d(-np.inf, xnmean) + assert_almost_equal(prob1, 0.5, decimal=1) + assert_almost_equal(prob2, 0.5, decimal=1) + assert_almost_equal(gkde.integrate_box(xnmean, np.inf), prob1, decimal=13) + assert_almost_equal(gkde.integrate_box(-np.inf, xnmean), prob2, decimal=13) + + assert_almost_equal(gkde.integrate_kde(gkde), + (kdepdf**2).sum()*intervall, decimal=2) + assert_almost_equal(gkde.integrate_gaussian(xnmean, xnstd**2), + (kdepdf*normpdf).sum()*intervall, decimal=2) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_morestats.py python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_morestats.py --- python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_morestats.py 2010-03-03 14:34:13.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_morestats.py 2010-07-26 15:48:37.000000000 +0100 @@ -1,8 +1,9 @@ # Author: Travis Oliphant, 2002 # -from numpy.testing import * +import warnings +from numpy.testing import * import scipy.stats as stats @@ -122,5 +123,8 @@ assert_array_almost_equal(stats.mood(x1,x1**2), (-1.3830857299399906, 0.16663858066771478), 11) +# First Anssari test yields this warning +warnings.filterwarnings("ignore", "Ties preclude use of exact statistic") + if __name__ == "__main__": run_module_suite() diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_mstats_basic.py python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_mstats_basic.py --- python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_mstats_basic.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_mstats_basic.py 2010-07-26 15:48:37.000000000 +0100 @@ -14,6 +14,30 @@ assert_array_almost_equal + +class TestMquantiles(TestCase): + """Regression tests for mstats module.""" + def test_mquantiles_limit_keyword(self): + """Ticket #867""" + data = np.array([[ 6., 7., 1.], + [ 47., 15., 2.], + [ 49., 36., 3.], + [ 15., 39., 4.], + [ 42., 40., -999.], + [ 41., 41., -999.], + [ 7., -999., -999.], + [ 39., -999., -999.], + [ 43., -999., -999.], + [ 40., -999., -999.], + [ 36., -999., -999.]]) + desired = [[19.2, 14.6, 1.45], + [40.0, 37.5, 2.5 ], + [42.8, 40.05, 3.55]] + quants = mstats.mquantiles(data, axis=0, limit=(0, 50)) + assert_almost_equal(quants, desired) + + + class TestGMean(TestCase): def test_1D(self): a = (1,2,3,4) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_stats.py python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_stats.py --- python-scipy-0.7.2+dfsg1/scipy/stats/tests/test_stats.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/tests/test_stats.py 2010-07-26 
15:48:37.000000000 +0100 @@ -105,11 +105,11 @@ """ def test_meanX(self): - y = stats.mean(X) + y = np.mean(X) assert_almost_equal(y, 5.0) def test_stdX(self): - y = stats.std(X) + y = np.std(X, ddof=1) assert_almost_equal(y, 2.738612788) def test_tmeanX(self): @@ -125,16 +125,16 @@ assert_almost_equal(y, 2.1602468994692865) def test_meanZERO(self): - y = stats.mean(ZERO) + y = np.mean(ZERO) assert_almost_equal(y, 0.0) def test_stdZERO(self): - y = stats.std(ZERO) + y = np.std(ZERO, ddof=1) assert_almost_equal(y, 0.0) ## Really need to write these tests to handle missing values properly ## def test_meanMISS(self): -## y = stats.mean(MISS) +## y = np.mean(MISS) ## assert_almost_equal(y, 0.0) ## ## def test_stdMISS(self): @@ -142,44 +142,44 @@ ## assert_almost_equal(y, 0.0) def test_meanBIG(self): - y = stats.mean(BIG) + y = np.mean(BIG) assert_almost_equal(y, 99999995.00) def test_stdBIG(self): - y = stats.std(BIG) + y = np.std(BIG, ddof=1) assert_almost_equal(y, 2.738612788) def test_meanLITTLE(self): - y = stats.mean(LITTLE) + y = np.mean(LITTLE) assert_approx_equal(y, 0.999999950) def test_stdLITTLE(self): - y = stats.std(LITTLE) + y = np.std(LITTLE, ddof=1) assert_approx_equal(y, 2.738612788e-8) def test_meanHUGE(self): - y = stats.mean(HUGE) + y = np.mean(HUGE) assert_approx_equal(y, 5.00000e+12) def test_stdHUGE(self): - y = stats.std(HUGE) + y = np.std(HUGE, ddof=1) assert_approx_equal(y, 2.738612788e12) def test_meanTINY(self): - y = stats.mean(TINY) + y = np.mean(TINY) assert_almost_equal(y, 0.0) def test_stdTINY(self): - y = stats.std(TINY) + y = np.std(TINY, ddof=1) assert_almost_equal(y, 0.0) def test_meanROUND(self): - y = stats.mean(ROUND) + y = np.mean(ROUND) assert_approx_equal(y, 4.500000000) def test_stdROUND(self): - y = stats.std(ROUND) + y = np.std(ROUND, ddof=1) assert_approx_equal(y, 2.738612788) class TestNanFunc(TestCase): @@ -213,27 +213,31 @@ def test_nanstd_none(self): """Check nanstd when no values are nan.""" s = stats.nanstd(self.X) - assert_approx_equal(s, stats.std(self.X)) + assert_approx_equal(s, np.std(self.X, ddof=1)) def test_nanstd_some(self): """Check nanstd when some values only are nan.""" s = stats.nanstd(self.Xsome) - assert_approx_equal(s, stats.std(self.Xsomet)) + assert_approx_equal(s, np.std(self.Xsomet, ddof=1)) def test_nanstd_all(self): """Check nanstd when all values are nan.""" s = stats.nanstd(self.Xall) assert np.isnan(s) + def test_nanstd_negative_axis(self): + x = np.array([1, 2, 3]) + assert_equal(stats.nanstd(x, -1), 1) + def test_nanmedian_none(self): """Check nanmedian when no values are nan.""" m = stats.nanmedian(self.X) - assert_approx_equal(m, stats.median(self.X)) + assert_approx_equal(m, np.median(self.X)) def test_nanmedian_some(self): """Check nanmedian when some values only are nan.""" m = stats.nanmedian(self.Xsome) - assert_approx_equal(m, stats.median(self.Xsomet)) + assert_approx_equal(m, np.median(self.Xsomet)) def test_nanmedian_all(self): """Check nanmedian when all values are nan.""" @@ -501,6 +505,140 @@ res = (1.0, 5.0, 0.98229948625750, 7.45259691e-008, 0.063564172616372733) assert_array_almost_equal(stats.linregress(x,y),res,decimal=14) +class TestHistogram(TestCase): + """ Tests that histogram works as it should, and keeps old behaviour + """ + # what is untested: + # - multidimensional arrays (since 'a' is ravel'd as the first line in the method) + # - very large arrays + # - Nans, Infs, empty and otherwise bad inputs + + # sample arrays to test the histogram with + low_values = np.array([0.2, 0.3, 0.4, 0.5, 
0.5, 0.6, 0.7, 0.8, 0.9, 1.1, 1.2], + dtype=float) # 11 values + high_range = np.array([2, 3, 4, 2, 21, 32, 78, 95, 65, 66, 66, 66, 66, 4], + dtype=float) # 14 values + low_range = np.array([2, 3, 3, 2, 3, 2.4, 2.1, 3.1, 2.9, 2.6, 2.7, 2.8, 2.2, 2.001], + dtype=float) # 14 values + few_values = np.array([2.0, 3.0, -1.0, 0.0], dtype=float) # 4 values + + def test_simple(self): + """ Tests that each of the tests works as expected with default params + """ + # basic tests, with expected results (no weighting) + # results taken from the previous (slower) version of histogram + basic_tests = ((self.low_values, (np.array([ 1., 1., 1., 2., 2., + 1., 1., 0., 1., 1.]), + 0.14444444444444446, 0.11111111111111112, 0)), + (self.high_range, (np.array([ 5., 0., 1., 1., 0., + 0., 5., 1., 0., 1.]), + -3.1666666666666661, 10.333333333333332, 0)), + (self.low_range, (np.array([ 3., 1., 1., 1., 0., 1., + 1., 2., 3., 1.]), + 1.9388888888888889, 0.12222222222222223, 0)), + (self.few_values, (np.array([ 1., 0., 1., 0., 0., 0., + 0., 1., 0., 1.]), + -1.2222222222222223, 0.44444444444444448, 0)), + ) + for inputs, expected_results in basic_tests: + given_results = stats.histogram(inputs) + assert_array_almost_equal(expected_results[0], given_results[0], + decimal=2) + for i in range(1, 4): + assert_almost_equal(expected_results[i], given_results[i], + decimal=2) + + def test_weighting(self): + """ Tests that weights give expected histograms + """ + # basic tests, with expected results, given a set of weights + # weights used (first n are used for each test, where n is len of array) (14 values) + weights = np.array([1., 3., 4.5, 0.1, -1.0, 0.0, 0.3, 7.0, 103.2, 2, 40, 0, 0, 1]) + # results taken from the numpy version of histogram + basic_tests = ((self.low_values, (np.array([ 4.0, 0.0, 4.5, -0.9, 0.0, + 0.3,110.2, 0.0, 0.0, 42.0]), + 0.2, 0.1, 0)), + (self.high_range, (np.array([ 9.6, 0. , -1. , 0. , 0. , + 0. ,145.2, 0. , 0.3, 7. ]), + 2.0, 9.3, 0)), + (self.low_range, (np.array([ 2.4, 0. , 0. , 0. , 0. , + 2. , 40. , 0. , 103.2, 13.5]), + 2.0, 0.11, 0)), + (self.few_values, (np.array([ 4.5, 0. , 0.1, 0. , 0. , 0. , + 0. , 1. , 0. , 3. 
]), + -1., 0.4, 0)), + + ) + for inputs, expected_results in basic_tests: + # use the first lot of weights for test + # default limits given to reproduce output of numpy's test better + given_results = stats.histogram(inputs, defaultlimits=(inputs.min(), + inputs.max()), + weights=weights[:len(inputs)]) + assert_array_almost_equal(expected_results[0], given_results[0], + decimal=2) + for i in range(1, 4): + assert_almost_equal(expected_results[i], given_results[i], + decimal=2) + + def test_reduced_bins(self): + """ Tests that reducing the number of bins produces expected results + """ + # basic tests, with expected results (no weighting), + # except number of bins is halved to 5 + # results taken from the previous (slower) version of histogram + basic_tests = ((self.low_values, (np.array([ 2., 3., 3., 1., 2.]), + 0.075000000000000011, 0.25, 0)), + (self.high_range, (np.array([ 5., 2., 0., 6., 1.]), + -9.625, 23.25, 0)), + (self.low_range, (np.array([ 4., 2., 1., 3., 4.]), + 1.8625, 0.27500000000000002, 0)), + (self.few_values, (np.array([ 1., 1., 0., 1., 1.]), + -1.5, 1.0, 0)), + ) + for inputs, expected_results in basic_tests: + given_results = stats.histogram(inputs, numbins=5) + assert_array_almost_equal(expected_results[0], given_results[0], + decimal=2) + for i in range(1, 4): + assert_almost_equal(expected_results[i], given_results[i], + decimal=2) + + def test_increased_bins(self): + """ Tests that increasing the number of bins produces expected results + """ + # basic tests, with expected results (no weighting), + # except number of bins is double to 20 + # results taken from the previous (slower) version of histogram + basic_tests = ((self.low_values, (np.array([ 1., 0., 1., 0., 1., + 0., 2., 0., 1., 0., + 1., 1., 0., 1., 0., + 0., 0., 1., 0., 1.]), + 0.1736842105263158, 0.052631578947368418, 0)), + (self.high_range, (np.array([ 5., 0., 0., 0., 1., + 0., 1., 0., 0., 0., + 0., 0., 0., 5., 0., + 0., 1., 0., 0., 1.]), + -0.44736842105263142, 4.8947368421052628, 0)), + (self.low_range, (np.array([ 3., 0., 1., 1., 0., 0., + 0., 1., 0., 0., 1., 0., + 1., 0., 1., 0., 1., 3., + 0., 1.]), + 1.9710526315789474, 0.057894736842105263, 0)), + (self.few_values, (np.array([ 1., 0., 0., 0., 0., 1., + 0., 0., 0., 0., 0., 0., + 0., 0., 1., 0., 0., 0., + 0., 1.]), + -1.1052631578947367, 0.21052631578947367, 0)), + ) + for inputs, expected_results in basic_tests: + given_results = stats.histogram(inputs, numbins=20) + assert_array_almost_equal(expected_results[0], given_results[0], + decimal=2) + for i in range(1, 4): + assert_almost_equal(expected_results[i], given_results[i], + decimal=2) + # Utility @@ -606,11 +744,11 @@ mn1 = 0.0 for el in a: mn1 += el / float(Na) - assert_almost_equal(stats.mean(a),mn1,11) + assert_almost_equal(np.mean(a),mn1,11) mn2 = 0.0 for el in af: mn2 += el / float(Naf) - assert_almost_equal(stats.mean(af),mn2,11) + assert_almost_equal(np.mean(af),mn2,11) def test_2d(self): a = [[1.0, 2.0, 3.0], @@ -621,20 +759,19 @@ mn1 = zeros(N2, dtype=float) for k in range(N1): mn1 += A[k,:] / N1 - assert_almost_equal(stats.mean(a, axis=0), mn1, decimal=13) - assert_almost_equal(stats.mean(a), mn1, decimal=13) + assert_almost_equal(np.mean(a, axis=0), mn1, decimal=13) mn2 = zeros(N1, dtype=float) for k in range(N2): mn2 += A[:,k] mn2 /= N2 - assert_almost_equal(stats.mean(a, axis=1), mn2, decimal=13) + assert_almost_equal(np.mean(a, axis=1), mn2, decimal=13) def test_ravel(self): a = rand(5,3,5) A = 0 for val in ravel(a): A += val - 
assert_almost_equal(stats.mean(a,axis=None),A/(5*3.0*5)) + assert_almost_equal(np.mean(a,axis=None),A/(5*3.0*5)) class TestPercentile(TestCase): def setUp(self): @@ -643,9 +780,9 @@ self.a3 = [3.,4,5,10,-3,-5,-6,7.0] def test_median(self): - assert_equal(stats.median(self.a1), 4) - assert_equal(stats.median(self.a2), 2.5) - assert_equal(stats.median(self.a3), 3.5) + assert_equal(np.median(self.a1), 4) + assert_equal(np.median(self.a2), 2.5) + assert_equal(np.median(self.a3), 3.5) def test_percentile(self): x = arange(8) * 0.5 @@ -667,8 +804,8 @@ def test_basic(self): a = [3,4,5,10,-3,-5,6] b = [3,4,5,10,-3,-5,-6] - assert_almost_equal(stats.std(a),5.2098807225172772,11) - assert_almost_equal(stats.std(b),5.9281411203561225,11) + assert_almost_equal(np.std(a, ddof=1),5.2098807225172772,11) + assert_almost_equal(np.std(b, ddof=1),5.9281411203561225,11) def test_2d(self): a = [[1.0, 2.0, 3.0], @@ -677,9 +814,8 @@ b1 = array((3.7859388972001824, 5.2915026221291814, 2.0816659994661335)) b2 = array((1.0,2.0,2.64575131106)) - assert_array_almost_equal(stats.std(a),b1,11) - assert_array_almost_equal(stats.std(a,axis=0),b1,11) - assert_array_almost_equal(stats.std(a,axis=1),b2,11) + assert_array_almost_equal(np.std(a,ddof=1,axis=0),b1,11) + assert_array_almost_equal(np.std(a,ddof=1,axis=1),b2,11) class TestCMedian(TestCase): @@ -693,22 +829,23 @@ def test_basic(self): data1 = [1,3,5,2,3,1,19,-10,2,4.0] data2 = [3,5,1,10,23,-10,3,-2,6,8,15] - assert_almost_equal(stats.median(data1),2.5) - assert_almost_equal(stats.median(data2),5) + assert_almost_equal(np.median(data1),2.5) + assert_almost_equal(np.median(data2),5) def test_basic2(self): a1 = [3,4,5,10,-3,-5,6] a2 = [3,-6,-2,8,7,4,2,1] a3 = [3.,4,5,10,-3,-5,-6,7.0] - assert_equal(stats.median(a1),4) - assert_equal(stats.median(a2),2.5) - assert_equal(stats.median(a3),3.5) + assert_equal(np.median(a1),4) + assert_equal(np.median(a2),2.5) + assert_equal(np.median(a3),3.5) def test_axis(self): """Regression test for #760.""" a1 = np.array([[3,4,5], [10,-3,-5]]) - assert_equal(stats.median(a1), np.array([6.5, 0.5, 0.])) - assert_equal(stats.median(a1, axis=-1), np.array([4., -3])) + assert_equal(np.median(a1), 3.5) + assert_equal(np.median(a1, axis=0), np.array([6.5, 0.5, 0.])) + assert_equal(np.median(a1, axis=-1), np.array([4., -3])) class TestMode(TestCase): def test_basic(self): @@ -724,7 +861,7 @@ """ testcase = [1,2,3,4] def test_std(self): - y = stats.std(self.testcase) + y = np.std(self.testcase, ddof=1) assert_approx_equal(y,1.290994449) def test_var(self): @@ -732,7 +869,9 @@ var(testcase) = 1.666666667 """ #y = stats.var(self.shoes[0]) #assert_approx_equal(y,6.009) - y = stats.var(self.testcase) + y = np.var(self.testcase) + assert_approx_equal(y,1.25) + y = np.var(self.testcase, ddof=1) assert_approx_equal(y,1.666666667) def test_samplevar(self): @@ -784,7 +923,7 @@ not in R, so used (10-mean(testcase,axis=0))/sqrt(var(testcase)*3/4) """ - y = stats.z(self.testcase,stats.mean(self.testcase)) + y = stats.z(self.testcase,np.mean(self.testcase, axis=0)) assert_almost_equal(y,0.0) def test_zs(self): @@ -796,7 +935,25 @@ desired = ([-1.3416407864999, -0.44721359549996 , 0.44721359549996 , 1.3416407864999]) assert_array_almost_equal(desired,y,decimal=12) + def test_zmap(self): + """ + not in R, so tested by using + (testcase[i]-mean(testcase,axis=0))/sqrt(var(testcase)*3/4) + copied from test_zs + """ + y = stats.zmap(self.testcase,self.testcase) + desired = ([-1.3416407864999, -0.44721359549996 , 0.44721359549996 , 1.3416407864999]) + 
assert_array_almost_equal(desired,y,decimal=12) + def test_zscore(self): + """ + not in R, so tested by using + (testcase[i]-mean(testcase,axis=0))/sqrt(var(testcase)*3/4) + copied from test_zs as regression test for new function + """ + y = stats.zscore(self.testcase) + desired = ([-1.3416407864999, -0.44721359549996 , 0.44721359549996 , 1.3416407864999]) + assert_array_almost_equal(desired,y,decimal=12) class TestMoments(TestCase): """ @@ -1074,8 +1231,9 @@ np.array((0.12464329735846891, 0.089444888711820769)), 15) assert_almost_equal( np.array(stats.kstest(x,'norm', alternative = 'less')), np.array((0.12464329735846891, 0.040989164077641749)), 15) + # this 'greater' test fails with precision of decimal=14 assert_almost_equal( np.array(stats.kstest(x,'norm', alternative = 'greater')), - np.array((0.0072115233216310994, 0.98531158590396228)), 14) + np.array((0.0072115233216310994, 0.98531158590396228)), 12) #missing: no test that uses *args @@ -1140,7 +1298,7 @@ #test zero division problem t,p = stats.ttest_rel([0,0,0],[1,1,1]) assert_equal((np.abs(t),p), (np.inf, 0)) - assert_equal(stats.ttest_rel([0,0,0], [0,0,0]), (1.0, 0.42264973081037427)) + assert_almost_equal(stats.ttest_rel([0,0,0], [0,0,0]), (1.0, 0.42264973081037421)) #check that nan in input array result in nan output anan = np.array([[1,np.nan],[-1,1]]) @@ -1181,7 +1339,7 @@ #test zero division problem t,p = stats.ttest_ind([0,0,0],[1,1,1]) assert_equal((np.abs(t),p), (np.inf, 0)) - assert_equal(stats.ttest_ind([0,0,0], [0,0,0]), (1.0, 0.37390096630005898)) + assert_almost_equal(stats.ttest_ind([0,0,0], [0,0,0]), (1.0, 0.37390096630005898)) #check that nan in input array result in nan output anan = np.array([[1,np.nan],[-1,1]]) @@ -1221,7 +1379,7 @@ #test zero division problem t,p = stats.ttest_1samp([0,0,0], 1) assert_equal((np.abs(t),p), (np.inf, 0)) - assert_equal(stats.ttest_1samp([0,0,0], 0), (1.0, 0.42264973081037427)) + assert_almost_equal(stats.ttest_1samp([0,0,0], 0), (1.0, 0.42264973081037421)) #check that nan in input array result in nan output anan = np.array([[1,np.nan],[-1,1]]) @@ -1314,7 +1472,251 @@ assert_array_almost_equal(stats.obrientransform(x1, 2*x1), result, decimal=8) - +class HarMeanTestCase: + def test_1dlist(self): + ''' Test a 1d list''' + a=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100] + b = 34.1417152147 + self.do(a, b) + def test_1darray(self): + ''' Test a 1d array''' + a=np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) + b = 34.1417152147 + self.do(a, b) + def test_1dma(self): + ''' Test a 1d masked array''' + a=np.ma.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) + b = 34.1417152147 + self.do(a, b) + def test_1dmavalue(self): + ''' Test a 1d masked array with a masked value''' + a=np.ma.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], + mask=[0,0,0,0,0,0,0,0,0,1]) + b = 31.8137186141 + self.do(a, b) + + # Note the next tests use axis=None as default, not axis=0 + def test_2dlist(self): + ''' Test a 2d list''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = 38.6696271841 + self.do(a, b) + def test_2darray(self): + ''' Test a 2d array''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = 38.6696271841 + self.do(np.array(a), b) + def test_2dma(self): + ''' Test a 2d masked array''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = 38.6696271841 + self.do(np.ma.array(a), b) + def test_2daxis0(self): + ''' Test a 2d list with axis=0''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.array([ 22.88135593, 39.13043478, 
52.90076336, 65.45454545]) + self.do(a, b, axis=0) + def test_2daxis1(self): + ''' Test a 2d list with axis=1''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.array([ 19.2 , 63.03939962, 103.80078637]) + self.do(a, b, axis=1) + def test_2dmatrixdaxis0(self): + ''' Test a 2d list with axis=0''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.matrix([[ 22.88135593, 39.13043478, 52.90076336, 65.45454545]]) + self.do(np.matrix(a), b, axis=0) + def test_2dmatrixaxis1(self): + ''' Test a 2d list with axis=1''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.matrix([[ 19.2 , 63.03939962, 103.80078637]]).T + self.do(np.matrix(a), b, axis=1) +## def test_dtype(self): +## ''' Test a 1d list with a new dtype''' +## a=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100] +## b = 34.1417152147 +## self.do(a, b, dtype=np.float128) # does not work on Win32 + +class TestHarMean(HarMeanTestCase, TestCase): + def do(self, a, b, axis=None, dtype=None): + x = stats.hmean(a, axis=axis, dtype=dtype) + assert_almost_equal(b, x) + assert_equal(x.dtype, dtype) + +class GeoMeanTestCase: + def test_1dlist(self): + ''' Test a 1d list''' + a=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100] + b = 45.2872868812 + self.do(a, b) + def test_1darray(self): + ''' Test a 1d array''' + a=np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) + b = 45.2872868812 + self.do(a, b) + def test_1dma(self): + ''' Test a 1d masked array''' + a=np.ma.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) + b = 45.2872868812 + self.do(a, b) + def test_1dmavalue(self): + ''' Test a 1d masked array with a masked value''' + a=np.ma.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], mask=[0,0,0,0,0,0,0,0,0,1]) + b = 41.4716627439 + self.do(a, b) + + # Note the next tests use axis=None as default, not axis=0 + def test_2dlist(self): + ''' Test a 2d list''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = 52.8885199 + self.do(a, b) + def test_2darray(self): + ''' Test a 2d array''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = 52.8885199 + self.do(np.array(a), b) + def test_2dma(self): + ''' Test a 2d masked array''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = 52.8885199 + self.do(np.ma.array(a), b) + def test_2daxis0(self): + ''' Test a 2d list with axis=0''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.array([35.56893304, 49.32424149, 61.3579244 , 72.68482371]) + self.do(a, b, axis=0) + def test_2daxis1(self): + ''' Test a 2d list with axis=1''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.array([ 22.13363839, 64.02171746, 104.40086817]) + self.do(a, b, axis=1) + def test_2dmatrixdaxis0(self): + ''' Test a 2d list with axis=0''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.matrix([[35.56893304, 49.32424149, 61.3579244 , 72.68482371]]) + self.do(np.matrix(a), b, axis=0) + def test_2dmatrixaxis1(self): + ''' Test a 2d list with axis=1''' + a=[[10, 20, 30, 40], [50, 60, 70, 80], [90, 100, 110, 120]] + b = np.matrix([[ 22.13363839, 64.02171746, 104.40086817]]).T + self.do(np.matrix(a), b, axis=1) +## def test_dtype(self): +## ''' Test a 1d list with a new dtype''' +## a=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100] +## b = 45.2872868812 +## self.do(a, b, dtype=np.float128) # does not exist on win32 + def test_1dlist0(self): + ''' Test a 1d list with zero element''' + a=[10, 20, 30, 40, 50, 60, 70, 80, 90, 0] + b = 0.0 # due to exp(-inf)=0 + self.do(a, b) + def 
test_1darray0(self): + ''' Test a 1d array with zero element''' + a=np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 0]) + b = 0.0 # due to exp(-inf)=0 + self.do(a, b) + def test_1dma0(self): + ''' Test a 1d masked array with zero element''' + a=np.ma.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 0]) + b = 41.4716627439 + self.do(a, b) + def test_1dmainf(self): + ''' Test a 1d masked array with negative element''' + a=np.ma.array([10, 20, 30, 40, 50, 60, 70, 80, 90, -1]) + b = 41.4716627439 + self.do(a, b) + +class TestGeoMean(GeoMeanTestCase, TestCase): + def do(self, a, b, axis=None, dtype=None): + #Note this doesn't test when axis is not specified + x = stats.gmean(a, axis=axis, dtype=dtype) + assert_almost_equal(b, x) + assert_equal(x.dtype, dtype) + + +def test_binomtest(): + # precision tests compared to R for ticket:986 + pp = np.concatenate(( np.linspace(0.1,0.2,5), np.linspace(0.45,0.65,5), + np.linspace(0.85,0.95,5))) + n = 501 + x = 450 + results = [0.0, 0.0, 1.0159969301994141e-304, + 2.9752418572150531e-275, 7.7668382922535275e-250, + 2.3381250925167094e-099, 7.8284591587323951e-081, + 9.9155947819961383e-065, 2.8729390725176308e-050, + 1.7175066298388421e-037, 0.0021070691951093692, + 0.12044570587262322, 0.88154763174802508, 0.027120993063129286, + 2.6102587134694721e-006] + + for p, res in zip(pp,results): + assert_approx_equal(stats.binom_test(x, n, p), res, + significant=12, err_msg='fail forp=%f'%p) + + assert_approx_equal(stats.binom_test(50,100,0.1), 5.8320387857343647e-024, + significant=12, err_msg='fail forp=%f'%p) + +class Test_Trim(object): + # test trim functions + def test_trim1(self): + a = np.arange(11) + assert_equal(stats.trim1(a, 0.1), np.arange(10)) + assert_equal(stats.trim1(a, 0.2), np.arange(9)) + assert_equal(stats.trim1(a, 0.2, tail='left'), np.arange(2,11)) + assert_equal(stats.trim1(a, 3/11., tail='left'), np.arange(3,11)) + + def test_trimboth(self): + a = np.arange(11) + assert_equal(stats.trimboth(a, 3/11.), np.arange(3,8)) + assert_equal(stats.trimboth(a, 0.2), np.array([2, 3, 4, 5, 6, 7, 8])) + assert_equal(stats.trimboth(np.arange(24).reshape(6,4), 0.2), + np.arange(4,20).reshape(4,4)) + assert_equal(stats.trimboth(np.arange(24).reshape(4,6).T, 2/6.), + np.array([[ 2, 8, 14, 20],[ 3, 9, 15, 21]])) + assert_raises(ValueError, stats.trimboth, + np.arange(24).reshape(4,6).T, 4/6.) 
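The Test_Trim cases above (continued just below) pin down how stats.trim1/trimboth/trim_mean cut a fixed proportion of values from one or both tails; for the already-ordered arrays used in these tests the effect is simply a slice. The following NumPy-only sketch reproduces the expected values but is only an approximation of the real scipy.stats code (which may differ in details such as whether it sorts its input); trimboth_sketch is a hypothetical helper name:

    import numpy as np

    def trimboth_sketch(a, proportiontocut):
        # hypothetical stand-in, not the scipy.stats implementation:
        # drop the same number of values from each end of the sorted data
        a = np.sort(np.asarray(a))
        lowercut = int(proportiontocut * a.shape[0])
        uppercut = a.shape[0] - lowercut
        if lowercut >= uppercut:
            raise ValueError("proportion too big")
        return a[lowercut:uppercut]

    a = np.arange(11)
    print(trimboth_sketch(a, 0.2))                        # [2 3 4 5 6 7 8], as asserted above
    print(trimboth_sketch(a, 3 / 11.))                    # [3 4 5 6 7]
    print(trimboth_sketch(np.arange(24), 2 / 6.).mean())  # 11.5, the trim_mean expectation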
+ + def test_trim_mean(self): + a = np.arange(11) + assert_equal(stats.trim_mean(np.arange(24).reshape(4,6).T, 2/6.), + np.array([ 2.5, 8.5, 14.5, 20.5])) + assert_equal(stats.trim_mean(np.arange(24).reshape(4,6), 2/6.), + np.array([ 9., 10., 11., 12., 13., 14.])) + assert_equal(stats.trim_mean(np.arange(24), 2/6.), 11.5) + assert_equal(stats.trim_mean([5,4,3,1,2,0], 2/6.), 2.5) + + +class TestSigamClip(object): + def test_sigmaclip1(self): + a = np.concatenate((np.linspace(9.5,10.5,31),np.linspace(0,20,5))) + fact = 4 #default + c, low, upp = stats.sigmaclip(a) + assert_(c.min()>low) + assert_(c.max()<upp) + assert_(c.min()>low) + assert_(c.max()<upp) + assert_(c.min()>low) + assert_(c.max()<upp) (1+a1+a2*temp_ks[i]-a3/(temp_ks[i]+a4)) */ - __pyx_t_2 = PyObject_GetItem(__pyx_v_bk, __pyx_v_c_small_k); if (!__pyx_t_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__astype); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_1 = PyObject_GetItem(__pyx_v_bk, __pyx_v_c_small_k); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_2 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__astype); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_2); - __pyx_t_10 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__float); if (unlikely(!__pyx_t_10)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_10 = PyObject_GetAttr(__pyx_t_1, __pyx_n_s__float); if (unlikely(!__pyx_t_10)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_10); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_10); __Pyx_GIVEREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = PyObject_Call(__pyx_t_1, __pyx_t_2, NULL); if (unlikely(!__pyx_t_10)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_10 = PyObject_Call(__pyx_t_2, __pyx_t_1, NULL); if (unlikely(!__pyx_t_10)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; if (!(likely(((__pyx_t_10) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_10, 
__pyx_ptype_5numpy_ndarray))))) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 62; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __pyx_t_11 = ((PyArrayObject *)__pyx_t_10); { @@ -1989,7 +2001,7 @@ __pyx_v_temp_ks = ((PyArrayObject *)__pyx_t_10); __pyx_t_10 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":63 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":63 * temp_xs = bx[c_small_k].astype(np.float) * temp_ks = bk[c_small_k].astype(np.float) * for i in range(len(temp)): # <<<<<<<<<<<<<< @@ -2000,7 +2012,7 @@ for (__pyx_t_17 = 0; __pyx_t_17 < __pyx_t_16; __pyx_t_17+=1) { __pyx_v_i = __pyx_t_17; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":64 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":64 * temp_ks = bk[c_small_k].astype(np.float) * for i in range(len(temp)): * p = (1+a1+a2*temp_ks[i]-a3/(temp_ks[i]+a4)) # <<<<<<<<<<<<<< @@ -2016,7 +2028,7 @@ } __pyx_v_p = ((int)(((1 + __pyx_v_a1) + (__pyx_v_a2 * (*__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_temp_ks.buf, __pyx_t_18, __pyx_bstride_0_temp_ks)))) - (__pyx_v_a3 / __pyx_t_8))); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":65 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":65 * for i in range(len(temp)): * p = (1+a1+a2*temp_ks[i]-a3/(temp_ks[i]+a4)) * temp[i] = von_mises_cdf_series(temp_ks[i],temp_xs[i],p) # <<<<<<<<<<<<<< @@ -2028,7 +2040,7 @@ __pyx_t_22 = __pyx_v_i; *__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_temp.buf, __pyx_t_22, __pyx_bstride_0_temp) = __pyx_f_5scipy_5stats_15vonmises_cython_von_mises_cdf_series((*__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_temp_ks.buf, __pyx_t_20, __pyx_bstride_0_temp_ks)), (*__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_temp_xs.buf, __pyx_t_21, __pyx_bstride_0_temp_xs)), __pyx_v_p); - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":66 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":66 * p = (1+a1+a2*temp_ks[i]-a3/(temp_ks[i]+a4)) * temp[i] = von_mises_cdf_series(temp_ks[i],temp_xs[i],p) * if temp[i]<0: # <<<<<<<<<<<<<< @@ -2039,7 +2051,7 @@ __pyx_t_4 = ((*__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_temp.buf, __pyx_t_23, __pyx_bstride_0_temp)) < 0); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":67 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":67 * temp[i] = von_mises_cdf_series(temp_ks[i],temp_xs[i],p) * if temp[i]<0: * temp[i]=0 # <<<<<<<<<<<<<< @@ -2051,7 +2063,7 @@ goto __pyx_L8; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":68 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":68 * if temp[i]<0: * temp[i]=0 * elif temp[i]>1: # <<<<<<<<<<<<<< @@ -2062,7 +2074,7 @@ __pyx_t_4 = ((*__Pyx_BufPtrStrided1d(double *, __pyx_bstruct_temp.buf, __pyx_t_25, __pyx_bstride_0_temp)) > 1); if (__pyx_t_4) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":69 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":69 * temp[i]=0 * elif temp[i]>1: * temp[i]=1 # <<<<<<<<<<<<<< @@ -2076,7 +2088,7 @@ __pyx_L8:; } - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":70 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":70 * elif temp[i]>1: * temp[i]=1 * result[c_small_k] = temp # <<<<<<<<<<<<<< @@ -2085,7 +2097,7 @@ */ if (PyObject_SetItem(__pyx_v_result, 
__pyx_v_c_small_k, ((PyObject *)__pyx_v_temp)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 70; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":71 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":71 * temp[i]=1 * result[c_small_k] = temp * result[~c_small_k] = von_mises_cdf_normalapprox(bk[~c_small_k],bx[~c_small_k],C1) # <<<<<<<<<<<<<< @@ -2094,102 +2106,78 @@ */ __pyx_t_10 = __Pyx_GetName(__pyx_m, __pyx_n_s_1); if (unlikely(!__pyx_t_10)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_10); - __pyx_t_2 = PyNumber_Invert(__pyx_v_c_small_k); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_GetItem(__pyx_v_bk, __pyx_t_2); if (!__pyx_t_1) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_1 = PyNumber_Invert(__pyx_v_c_small_k); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Invert(__pyx_v_c_small_k); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_2 = PyObject_GetItem(__pyx_v_bk, __pyx_t_1); if (!__pyx_t_2) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyObject_GetItem(__pyx_v_bx, __pyx_t_2); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyNumber_Invert(__pyx_v_c_small_k); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_3 = PyObject_GetItem(__pyx_v_bx, __pyx_t_1); if (!__pyx_t_3) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyFloat_FromDouble(__pyx_v_C1); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; + __pyx_t_1 = PyFloat_FromDouble(__pyx_v_C1); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); __pyx_t_9 = PyTuple_New(3); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_9); - PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); + PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_2); + __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_3); __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_9, 2, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_3 = 0; + PyTuple_SET_ITEM(__pyx_t_9, 2, __pyx_t_1); + __Pyx_GIVEREF(__pyx_t_1); __pyx_t_2 = 0; - __pyx_t_2 = PyObject_Call(__pyx_t_10, __pyx_t_9, NULL); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); + __pyx_t_3 = 0; + __pyx_t_1 = 
0; + __pyx_t_1 = PyObject_Call(__pyx_t_10, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; __pyx_t_9 = PyNumber_Invert(__pyx_v_c_small_k); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_9); - if (PyObject_SetItem(__pyx_v_result, __pyx_t_9, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + if (PyObject_SetItem(__pyx_v_result, __pyx_t_9, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 71; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":73 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":73 * result[~c_small_k] = von_mises_cdf_normalapprox(bk[~c_small_k],bx[~c_small_k],C1) * * if not zerodim: # <<<<<<<<<<<<<< - * return result+(2*np.pi)*ix + * return result+ix * else: */ __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_v_zerodim); if (unlikely(__pyx_t_4 < 0)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 73; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __pyx_t_27 = (!__pyx_t_4); if (__pyx_t_27) { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":74 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":74 * * if not zerodim: - * return result+(2*np.pi)*ix # <<<<<<<<<<<<<< + * return result+ix # <<<<<<<<<<<<<< * else: - * return (result+(2*np.pi)*ix)[0] + * return (result+ix)[0] */ __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 74; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_9 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__pi); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 74; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_t_9); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 74; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyNumber_Multiply(__pyx_t_2, __pyx_v_ix); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 74; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_v_result, __pyx_t_9); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 74; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; + __pyx_t_1 = PyNumber_Add(__pyx_v_result, __pyx_v_ix); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 74; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_r = __pyx_t_1; + __pyx_t_1 = 0; goto __pyx_L0; goto __pyx_L9; } /*else*/ { - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":76 - * return result+(2*np.pi)*ix + /* 
"/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":76 + * return result+ix * else: - * return (result+(2*np.pi)*ix)[0] # <<<<<<<<<<<<<< + * return (result+ix)[0] # <<<<<<<<<<<<<< */ __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_GetName(__pyx_m, __pyx_n_s__np); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_9 = PyObject_GetAttr(__pyx_t_2, __pyx_n_s__pi); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Multiply(__pyx_int_2, __pyx_t_9); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = PyNumber_Multiply(__pyx_t_2, __pyx_v_ix); if (unlikely(!__pyx_t_9)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyNumber_Add(__pyx_v_result, __pyx_t_9); if (unlikely(!__pyx_t_2)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_GetItemInt(__pyx_t_2, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_9) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __pyx_t_1 = PyNumber_Add(__pyx_v_result, __pyx_v_ix); if (unlikely(!__pyx_t_1)) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} + __Pyx_GOTREF(__pyx_t_1); + __pyx_t_9 = __Pyx_GetItemInt(__pyx_t_1, 0, sizeof(long), PyInt_FromLong); if (!__pyx_t_9) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 76; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; + __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __pyx_r = __pyx_t_9; __pyx_t_9 = 0; goto __pyx_L0; @@ -2234,7 +2222,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":187 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":187 * # experimental exception made for __getbuffer__ and __releasebuffer__ * # -- the details of this may change. 
* def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<< @@ -2270,7 +2258,7 @@ __Pyx_GIVEREF(__pyx_v_info->obj); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":193 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":193 * # of flags * cdef int copy_shape, i, ndim * cdef int endian_detector = 1 # <<<<<<<<<<<<<< @@ -2279,7 +2267,7 @@ */ __pyx_v_endian_detector = 1; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":194 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":194 * cdef int copy_shape, i, ndim * cdef int endian_detector = 1 * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< @@ -2288,7 +2276,7 @@ */ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":196 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":196 * cdef bint little_endian = ((&endian_detector)[0] != 0) * * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<< @@ -2297,7 +2285,7 @@ */ __pyx_v_ndim = PyArray_NDIM(((PyArrayObject *)__pyx_v_self)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":198 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":198 * ndim = PyArray_NDIM(self) * * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< @@ -2307,7 +2295,7 @@ __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":199 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":199 * * if sizeof(npy_intp) != sizeof(Py_ssize_t): * copy_shape = 1 # <<<<<<<<<<<<<< @@ -2319,7 +2307,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":201 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":201 * copy_shape = 1 * else: * copy_shape = 0 # <<<<<<<<<<<<<< @@ -2330,7 +2318,7 @@ } __pyx_L5:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":203 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":203 * copy_shape = 0 * * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<< @@ -2340,7 +2328,7 @@ __pyx_t_1 = ((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":204 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":204 * * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<< @@ -2354,7 +2342,7 @@ } if (__pyx_t_3) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":205 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":205 * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<< @@ -2376,7 +2364,7 @@ } __pyx_L6:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":207 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":207 * raise ValueError(u"ndarray is not C contiguous") * * if ((flags & 
pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<< @@ -2386,7 +2374,7 @@ __pyx_t_3 = ((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS); if (__pyx_t_3) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":208 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":208 * * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<< @@ -2400,7 +2388,7 @@ } if (__pyx_t_2) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":209 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":209 * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<< @@ -2422,7 +2410,7 @@ } __pyx_L7:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":211 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":211 * raise ValueError(u"ndarray is not Fortran contiguous") * * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<< @@ -2431,7 +2419,7 @@ */ __pyx_v_info->buf = PyArray_DATA(((PyArrayObject *)__pyx_v_self)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":212 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":212 * * info.buf = PyArray_DATA(self) * info.ndim = ndim # <<<<<<<<<<<<<< @@ -2440,7 +2428,7 @@ */ __pyx_v_info->ndim = __pyx_v_ndim; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":213 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":213 * info.buf = PyArray_DATA(self) * info.ndim = ndim * if copy_shape: # <<<<<<<<<<<<<< @@ -2450,7 +2438,7 @@ __pyx_t_6 = __pyx_v_copy_shape; if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":216 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":216 * # Allocate new buffer for strides and shape info. This is allocated * # as one block, strides first. * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) # <<<<<<<<<<<<<< @@ -2459,7 +2447,7 @@ */ __pyx_v_info->strides = ((Py_ssize_t *)malloc((((sizeof(Py_ssize_t)) * __pyx_v_ndim) * 2))); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":217 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":217 * # as one block, strides first. 
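The surrounding numpy.pxd hunks only re-emit the Cython-generated buffer glue; the substantive change is the embedded build path (/home/david/... becomes /Users/mb312/...). The quoted __getbuffer__ code fills the Py_buffer fields (buf, ndim, shape, strides, format, readonly) that Python exposes through memoryview, including the NPY_* to format-character table that appears a little further down. A small stand-alone illustration, not tied to this patch:

    import numpy as np

    a = np.arange(12, dtype=np.int32).reshape(3, 4)
    view = memoryview(a)           # goes through the same buffer interface
    print(view.ndim, view.shape)   # 2 (3, 4)
    print(view.strides)            # (16, 4) for this C-contiguous int32 array
    print(view.format)             # typically 'i' -- the NPY_INT -> "i" mapping
    print(view.readonly)           # False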
* info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) * info.shape = info.strides + ndim # <<<<<<<<<<<<<< @@ -2468,7 +2456,7 @@ */ __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":218 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":218 * info.strides = stdlib.malloc(sizeof(Py_ssize_t) * ndim * 2) * info.shape = info.strides + ndim * for i in range(ndim): # <<<<<<<<<<<<<< @@ -2479,7 +2467,7 @@ for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) { __pyx_v_i = __pyx_t_7; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":219 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":219 * info.shape = info.strides + ndim * for i in range(ndim): * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<< @@ -2488,7 +2476,7 @@ */ (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))[__pyx_v_i]); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":220 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":220 * for i in range(ndim): * info.strides[i] = PyArray_STRIDES(self)[i] * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<< @@ -2501,7 +2489,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":222 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":222 * info.shape[i] = PyArray_DIMS(self)[i] * else: * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<< @@ -2510,7 +2498,7 @@ */ __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(((PyArrayObject *)__pyx_v_self))); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":223 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":223 * else: * info.strides = PyArray_STRIDES(self) * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<< @@ -2521,7 +2509,7 @@ } __pyx_L8:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":224 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":224 * info.strides = PyArray_STRIDES(self) * info.shape = PyArray_DIMS(self) * info.suboffsets = NULL # <<<<<<<<<<<<<< @@ -2530,7 +2518,7 @@ */ __pyx_v_info->suboffsets = NULL; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":225 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":225 * info.shape = PyArray_DIMS(self) * info.suboffsets = NULL * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<< @@ -2539,7 +2527,7 @@ */ __pyx_v_info->itemsize = PyArray_ITEMSIZE(((PyArrayObject *)__pyx_v_self)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":226 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":226 * info.suboffsets = NULL * info.itemsize = PyArray_ITEMSIZE(self) * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<< @@ -2548,7 +2536,7 @@ */ __pyx_v_info->readonly = (!PyArray_ISWRITEABLE(((PyArrayObject *)__pyx_v_self))); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":229 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":229 * * cdef int t * cdef char* f = NULL # <<<<<<<<<<<<<< @@ -2557,7 +2545,7 @@ */ __pyx_v_f = NULL; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":230 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":230 * cdef int t * cdef char* f = NULL * cdef dtype descr = self.descr # <<<<<<<<<<<<<< @@ -2567,7 +2555,7 @@ __Pyx_INCREF(((PyObject *)((PyArrayObject *)__pyx_v_self)->descr)); __pyx_v_descr = ((PyArrayObject *)__pyx_v_self)->descr; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":234 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":234 * cdef int offset * * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<< @@ -2576,7 +2564,7 @@ */ __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":236 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":236 * cdef bint hasfields = PyDataType_HASFIELDS(descr) * * if not hasfields and not copy_shape: # <<<<<<<<<<<<<< @@ -2592,7 +2580,7 @@ } if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":238 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":238 * if not hasfields and not copy_shape: * # do not call releasebuffer * info.obj = None # <<<<<<<<<<<<<< @@ -2608,7 +2596,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":241 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":241 * else: * # need to call releasebuffer * info.obj = self # <<<<<<<<<<<<<< @@ -2623,7 +2611,7 @@ } __pyx_L11:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":243 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":243 * info.obj = self * * if not hasfields: # <<<<<<<<<<<<<< @@ -2633,7 +2621,7 @@ __pyx_t_1 = (!__pyx_v_hasfields); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":244 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":244 * * if not hasfields: * t = descr.type_num # <<<<<<<<<<<<<< @@ -2642,7 +2630,7 @@ */ __pyx_v_t = __pyx_v_descr->type_num; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":245 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":245 * if not hasfields: * t = descr.type_num * if ((descr.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< @@ -2657,7 +2645,7 @@ } if (!__pyx_t_2) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":246 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":246 * t = descr.type_num * if ((descr.byteorder == '>' and little_endian) or * (descr.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< @@ -2677,7 +2665,7 @@ } if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":247 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":247 * if ((descr.byteorder == '>' and little_endian) or * (descr.byteorder == '<' and not little_endian)): * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< @@ -2699,7 +2687,7 @@ } __pyx_L13:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":248 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":248 * (descr.byteorder == '<' and not little_endian)): * raise ValueError(u"Non-native byte order not supported") * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<< @@ 
-2712,7 +2700,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":249 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":249 * raise ValueError(u"Non-native byte order not supported") * if t == NPY_BYTE: f = "b" * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<< @@ -2725,7 +2713,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":250 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":250 * if t == NPY_BYTE: f = "b" * elif t == NPY_UBYTE: f = "B" * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<< @@ -2738,7 +2726,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":251 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":251 * elif t == NPY_UBYTE: f = "B" * elif t == NPY_SHORT: f = "h" * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<< @@ -2751,7 +2739,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":252 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":252 * elif t == NPY_SHORT: f = "h" * elif t == NPY_USHORT: f = "H" * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<< @@ -2764,7 +2752,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":253 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":253 * elif t == NPY_USHORT: f = "H" * elif t == NPY_INT: f = "i" * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<< @@ -2777,7 +2765,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":254 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":254 * elif t == NPY_INT: f = "i" * elif t == NPY_UINT: f = "I" * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<< @@ -2790,7 +2778,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":255 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":255 * elif t == NPY_UINT: f = "I" * elif t == NPY_LONG: f = "l" * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<< @@ -2803,7 +2791,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":256 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":256 * elif t == NPY_LONG: f = "l" * elif t == NPY_ULONG: f = "L" * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<< @@ -2816,7 +2804,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":257 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":257 * elif t == NPY_ULONG: f = "L" * elif t == NPY_LONGLONG: f = "q" * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<< @@ -2829,7 +2817,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":258 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":258 * elif t == NPY_LONGLONG: f = "q" * elif t == NPY_ULONGLONG: f = "Q" * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<< @@ -2842,7 +2830,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":259 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":259 * elif t == NPY_ULONGLONG: f = "Q" * elif t == NPY_FLOAT: f = "f" * elif t == NPY_DOUBLE: f = "d" # 
<<<<<<<<<<<<<< @@ -2855,7 +2843,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":260 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":260 * elif t == NPY_FLOAT: f = "f" * elif t == NPY_DOUBLE: f = "d" * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<< @@ -2868,7 +2856,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":261 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":261 * elif t == NPY_DOUBLE: f = "d" * elif t == NPY_LONGDOUBLE: f = "g" * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<< @@ -2881,7 +2869,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":262 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":262 * elif t == NPY_LONGDOUBLE: f = "g" * elif t == NPY_CFLOAT: f = "Zf" * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<< @@ -2894,7 +2882,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":263 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":263 * elif t == NPY_CFLOAT: f = "Zf" * elif t == NPY_CDOUBLE: f = "Zd" * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<< @@ -2907,7 +2895,7 @@ goto __pyx_L14; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":264 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":264 * elif t == NPY_CDOUBLE: f = "Zd" * elif t == NPY_CLONGDOUBLE: f = "Zg" * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<< @@ -2921,7 +2909,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":266 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":266 * elif t == NPY_OBJECT: f = "O" * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< @@ -2947,7 +2935,7 @@ } __pyx_L14:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":267 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":267 * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) * info.format = f # <<<<<<<<<<<<<< @@ -2956,7 +2944,7 @@ */ __pyx_v_info->format = __pyx_v_f; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":268 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":268 * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) * info.format = f * return # <<<<<<<<<<<<<< @@ -2969,7 +2957,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":270 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":270 * return * else: * info.format = stdlib.malloc(_buffer_format_string_len) # <<<<<<<<<<<<<< @@ -2978,7 +2966,7 @@ */ __pyx_v_info->format = ((char *)malloc(255)); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":271 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":271 * else: * info.format = stdlib.malloc(_buffer_format_string_len) * info.format[0] = '^' # Native data types, manual alignment # <<<<<<<<<<<<<< @@ -2987,7 +2975,7 @@ */ (__pyx_v_info->format[0]) = '^'; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":272 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":272 * info.format = stdlib.malloc(_buffer_format_string_len) * info.format[0] = '^' # Native data types, manual alignment * offset = 0 # <<<<<<<<<<<<<< @@ -2996,7 +2984,7 @@ */ __pyx_v_offset = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":275 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":275 * f = _util_dtypestring(descr, info.format + 1, * info.format + _buffer_format_string_len, * &offset) # <<<<<<<<<<<<<< @@ -3006,7 +2994,7 @@ __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 255), (&__pyx_v_offset)); if (unlikely(__pyx_t_9 == NULL)) {__pyx_filename = __pyx_f[1]; __pyx_lineno = 273; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __pyx_v_f = __pyx_t_9; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":276 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":276 * info.format + _buffer_format_string_len, * &offset) * f[0] = 0 # Terminate format string # <<<<<<<<<<<<<< @@ -3039,7 +3027,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":278 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":278 * f[0] = 0 # Terminate format string * * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<< @@ -3053,7 +3041,7 @@ __Pyx_RefNannySetupContext("__releasebuffer__"); __Pyx_INCREF((PyObject *)__pyx_v_self); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":279 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":279 * * def __releasebuffer__(ndarray self, Py_buffer* info): * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<< @@ -3063,7 +3051,7 @@ __pyx_t_1 = PyArray_HASFIELDS(((PyArrayObject *)__pyx_v_self)); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":280 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":280 * def __releasebuffer__(ndarray self, Py_buffer* info): * if PyArray_HASFIELDS(self): * stdlib.free(info.format) # <<<<<<<<<<<<<< @@ -3075,7 +3063,7 @@ } __pyx_L5:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":281 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":281 * if PyArray_HASFIELDS(self): * stdlib.free(info.format) * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<< @@ -3085,7 +3073,7 @@ __pyx_t_1 = ((sizeof(npy_intp)) != (sizeof(Py_ssize_t))); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":282 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":282 * stdlib.free(info.format) * if sizeof(npy_intp) != sizeof(Py_ssize_t): * stdlib.free(info.strides) # <<<<<<<<<<<<<< @@ -3101,7 +3089,7 @@ __Pyx_RefNannyFinishContext(); } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":755 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":755 * ctypedef npy_cdouble complex_t * * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<< @@ -3114,7 +3102,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew1"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":756 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":756 * * cdef inline object PyArray_MultiIterNew1(a): * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<< @@ -3140,7 +3128,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":758 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":758 * return PyArray_MultiIterNew(1, a) * * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<< @@ -3153,7 +3141,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew2"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":759 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":759 * * cdef inline object PyArray_MultiIterNew2(a, b): * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<< @@ -3179,7 +3167,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":761 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":761 * return PyArray_MultiIterNew(2, a, b) * * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<< @@ -3192,7 +3180,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew3"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":762 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":762 * * cdef inline object PyArray_MultiIterNew3(a, b, c): * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<< @@ -3218,7 +3206,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":764 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":764 * return PyArray_MultiIterNew(3, a, b, c) * * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<< @@ -3231,7 +3219,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew4"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":765 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":765 * * cdef inline object PyArray_MultiIterNew4(a, b, c, d): * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<< @@ -3257,7 +3245,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":767 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":767 * return PyArray_MultiIterNew(4, a, b, c, d) * * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<< @@ -3270,7 +3258,7 @@ PyObject *__pyx_t_1 = NULL; __Pyx_RefNannySetupContext("PyArray_MultiIterNew5"); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":768 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":768 * * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<< @@ -3296,7 +3284,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":770 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":770 * return PyArray_MultiIterNew(5, a, b, c, d, e) * * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<< @@ -3331,7 +3319,7 @@ __pyx_v_new_offset = Py_None; __Pyx_INCREF(Py_None); __pyx_v_t = Py_None; __Pyx_INCREF(Py_None); - /* 
"/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":777 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":777 * cdef int delta_offset * cdef tuple i * cdef int endian_detector = 1 # <<<<<<<<<<<<<< @@ -3340,7 +3328,7 @@ */ __pyx_v_endian_detector = 1; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":778 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":778 * cdef tuple i * cdef int endian_detector = 1 * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<< @@ -3349,7 +3337,7 @@ */ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":781 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":781 * cdef tuple fields * * for childname in descr.names: # <<<<<<<<<<<<<< @@ -3368,7 +3356,7 @@ __pyx_v_childname = __pyx_t_3; __pyx_t_3 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":782 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":782 * * for childname in descr.names: * fields = descr.fields[childname] # <<<<<<<<<<<<<< @@ -3382,7 +3370,7 @@ __pyx_v_fields = ((PyObject *)__pyx_t_3); __pyx_t_3 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":783 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":783 * for childname in descr.names: * fields = descr.fields[childname] * child, new_offset = fields # <<<<<<<<<<<<<< @@ -3405,7 +3393,7 @@ {__pyx_filename = __pyx_f[1]; __pyx_lineno = 783; __pyx_clineno = __LINE__; goto __pyx_L1_error;} } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":785 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":785 * child, new_offset = fields * * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<< @@ -3430,7 +3418,7 @@ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":786 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":786 * * if (end - f) - (new_offset - offset[0]) < 15: * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<< @@ -3452,7 +3440,7 @@ } __pyx_L5:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":788 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":788 * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") * * if ((child.byteorder == '>' and little_endian) or # <<<<<<<<<<<<<< @@ -3467,7 +3455,7 @@ } if (!__pyx_t_7) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":789 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":789 * * if ((child.byteorder == '>' and little_endian) or * (child.byteorder == '<' and not little_endian)): # <<<<<<<<<<<<<< @@ -3487,7 +3475,7 @@ } if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":790 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":790 * if ((child.byteorder == '>' and little_endian) or * (child.byteorder == '<' and not little_endian)): * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<< @@ -3509,7 +3497,7 @@ } __pyx_L6:; - /* 
"/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":800 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":800 * * # Output padding bytes * while offset[0] < new_offset: # <<<<<<<<<<<<<< @@ -3526,7 +3514,7 @@ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; if (!__pyx_t_6) break; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":801 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":801 * # Output padding bytes * while offset[0] < new_offset: * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<< @@ -3535,7 +3523,7 @@ */ (__pyx_v_f[0]) = 120; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":802 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":802 * while offset[0] < new_offset: * f[0] = 120 # "x"; pad byte * f += 1 # <<<<<<<<<<<<<< @@ -3544,7 +3532,7 @@ */ __pyx_v_f += 1; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":803 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":803 * f[0] = 120 # "x"; pad byte * f += 1 * offset[0] += 1 # <<<<<<<<<<<<<< @@ -3554,7 +3542,7 @@ (__pyx_v_offset[0]) += 1; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":805 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":805 * offset[0] += 1 * * offset[0] += child.itemsize # <<<<<<<<<<<<<< @@ -3563,7 +3551,7 @@ */ (__pyx_v_offset[0]) += __pyx_v_child->elsize; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":807 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":807 * offset[0] += child.itemsize * * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<< @@ -3573,7 +3561,7 @@ __pyx_t_6 = (!PyDataType_HASFIELDS(__pyx_v_child)); if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":808 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":808 * * if not PyDataType_HASFIELDS(child): * t = child.type_num # <<<<<<<<<<<<<< @@ -3586,7 +3574,7 @@ __pyx_v_t = __pyx_t_3; __pyx_t_3 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":809 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":809 * if not PyDataType_HASFIELDS(child): * t = child.type_num * if end - f < 5: # <<<<<<<<<<<<<< @@ -3596,7 +3584,7 @@ __pyx_t_6 = ((__pyx_v_end - __pyx_v_f) < 5); if (__pyx_t_6) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":810 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":810 * t = child.type_num * if end - f < 5: * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<< @@ -3618,7 +3606,7 @@ } __pyx_L10:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":813 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":813 * * # Until ticket #99 is fixed, use integers to avoid warnings * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<< @@ -3637,7 +3625,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":814 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":814 * # Until ticket #99 is fixed, use integers to avoid warnings * if t == NPY_BYTE: f[0] = 98 #"b" * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<< @@ -3656,7 +3644,7 @@ goto 
__pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":815 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":815 * if t == NPY_BYTE: f[0] = 98 #"b" * elif t == NPY_UBYTE: f[0] = 66 #"B" * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<< @@ -3675,7 +3663,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":816 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":816 * elif t == NPY_UBYTE: f[0] = 66 #"B" * elif t == NPY_SHORT: f[0] = 104 #"h" * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<< @@ -3694,7 +3682,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":817 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":817 * elif t == NPY_SHORT: f[0] = 104 #"h" * elif t == NPY_USHORT: f[0] = 72 #"H" * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<< @@ -3713,7 +3701,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":818 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":818 * elif t == NPY_USHORT: f[0] = 72 #"H" * elif t == NPY_INT: f[0] = 105 #"i" * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<< @@ -3732,7 +3720,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":819 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":819 * elif t == NPY_INT: f[0] = 105 #"i" * elif t == NPY_UINT: f[0] = 73 #"I" * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<< @@ -3751,7 +3739,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":820 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":820 * elif t == NPY_UINT: f[0] = 73 #"I" * elif t == NPY_LONG: f[0] = 108 #"l" * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<< @@ -3770,7 +3758,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":821 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":821 * elif t == NPY_LONG: f[0] = 108 #"l" * elif t == NPY_ULONG: f[0] = 76 #"L" * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<< @@ -3789,7 +3777,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":822 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":822 * elif t == NPY_ULONG: f[0] = 76 #"L" * elif t == NPY_LONGLONG: f[0] = 113 #"q" * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<< @@ -3808,7 +3796,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":823 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":823 * elif t == NPY_LONGLONG: f[0] = 113 #"q" * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<< @@ -3827,7 +3815,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":824 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":824 * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" * elif t == NPY_FLOAT: f[0] = 102 #"f" * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<< @@ -3846,7 +3834,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":825 + /* 
"/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":825 * elif t == NPY_FLOAT: f[0] = 102 #"f" * elif t == NPY_DOUBLE: f[0] = 100 #"d" * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<< @@ -3865,7 +3853,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":826 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":826 * elif t == NPY_DOUBLE: f[0] = 100 #"d" * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<< @@ -3886,7 +3874,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":827 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":827 * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<< @@ -3907,7 +3895,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":828 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":828 * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<< @@ -3928,7 +3916,7 @@ goto __pyx_L11; } - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":829 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":829 * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<< @@ -3948,7 +3936,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":831 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":831 * elif t == NPY_OBJECT: f[0] = 79 #"O" * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<< @@ -3971,7 +3959,7 @@ } __pyx_L11:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":832 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":832 * else: * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) * f += 1 # <<<<<<<<<<<<<< @@ -3983,7 +3971,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":836 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":836 * # Cython ignores struct boundary information ("T{...}"), * # so don't output it * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<< @@ -3997,7 +3985,7 @@ } __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":837 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":837 * # so don't output it * f = _util_dtypestring(child, f, end, offset) * return f # <<<<<<<<<<<<<< @@ -4027,7 +4015,7 @@ return __pyx_r; } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":952 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":952 * * * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<< @@ -4042,7 +4030,7 @@ __Pyx_INCREF((PyObject *)__pyx_v_arr); __Pyx_INCREF(__pyx_v_base); - /* 
"/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":954 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":954 * cdef inline void set_array_base(ndarray arr, object base): * cdef PyObject* baseptr * if base is None: # <<<<<<<<<<<<<< @@ -4052,7 +4040,7 @@ __pyx_t_1 = (__pyx_v_base == Py_None); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":955 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":955 * cdef PyObject* baseptr * if base is None: * baseptr = NULL # <<<<<<<<<<<<<< @@ -4064,7 +4052,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":957 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":957 * baseptr = NULL * else: * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<< @@ -4073,7 +4061,7 @@ */ Py_INCREF(__pyx_v_base); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":958 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":958 * else: * Py_INCREF(base) # important to do this before decref below! * baseptr = base # <<<<<<<<<<<<<< @@ -4084,7 +4072,7 @@ } __pyx_L3:; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":959 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":959 * Py_INCREF(base) # important to do this before decref below! * baseptr = base * Py_XDECREF(arr.base) # <<<<<<<<<<<<<< @@ -4093,7 +4081,7 @@ */ Py_XDECREF(__pyx_v_arr->base); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":960 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":960 * baseptr = base * Py_XDECREF(arr.base) * arr.base = baseptr # <<<<<<<<<<<<<< @@ -4107,7 +4095,7 @@ __Pyx_RefNannyFinishContext(); } -/* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":962 +/* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":962 * arr.base = baseptr * * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<< @@ -4121,7 +4109,7 @@ __Pyx_RefNannySetupContext("get_array_base"); __Pyx_INCREF((PyObject *)__pyx_v_arr); - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":963 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":963 * * cdef inline object get_array_base(ndarray arr): * if arr.base is NULL: # <<<<<<<<<<<<<< @@ -4131,7 +4119,7 @@ __pyx_t_1 = (__pyx_v_arr->base == NULL); if (__pyx_t_1) { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":964 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":964 * cdef inline object get_array_base(ndarray arr): * if arr.base is NULL: * return None # <<<<<<<<<<<<<< @@ -4146,7 +4134,7 @@ } /*else*/ { - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":966 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/numpy.pxd":966 * return None * else: * return arr.base # <<<<<<<<<<<<<< @@ -4334,7 +4322,7 @@ /*--- Function import code ---*/ /*--- Execution code ---*/ - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":1 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":1 * import numpy as np # <<<<<<<<<<<<<< * import scipy.stats * from scipy.special import i0 @@ -4344,7 +4332,7 @@ if 
(PyObject_SetAttr(__pyx_m, __pyx_n_s__np, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":2 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":2 * import numpy as np * import scipy.stats # <<<<<<<<<<<<<< * from scipy.special import i0 @@ -4355,7 +4343,7 @@ if (PyObject_SetAttr(__pyx_m, __pyx_n_s__scipy, __pyx_t_1) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 2; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":3 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":3 * import numpy as np * import scipy.stats * from scipy.special import i0 # <<<<<<<<<<<<<< @@ -4376,7 +4364,7 @@ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":4 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":4 * import scipy.stats * from scipy.special import i0 * import numpy.testing # <<<<<<<<<<<<<< @@ -4388,7 +4376,7 @@ if (PyObject_SetAttr(__pyx_m, __pyx_n_s__numpy, __pyx_t_2) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 4; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - /* "/home/david/src/numeric/scipy/scipy-git/scipy/stats/vonmises_cython.pyx":1 + /* "/Users/mb312/dev_trees/scipy-work/scipy/stats/vonmises_cython.pyx":1 * import numpy as np # <<<<<<<<<<<<<< * import scipy.stats * from scipy.special import i0 @@ -4398,7 +4386,7 @@ if (PyObject_SetAttr(__pyx_m, __pyx_n_s____test__, ((PyObject *)__pyx_t_2)) < 0) {__pyx_filename = __pyx_f[0]; __pyx_lineno = 1; __pyx_clineno = __LINE__; goto __pyx_L1_error;} __Pyx_DECREF(((PyObject *)__pyx_t_2)); __pyx_t_2 = 0; - /* "/home/david/local/lib/python2.6/site-packages/Cython/Includes/stdlib.pxd":2 + /* "/Users/mb312/usr/local/lib/python2.6/site-packages/Cython/Includes/stdlib.pxd":2 * * cdef extern from "stdlib.h" nogil: # <<<<<<<<<<<<<< * void free(void *ptr) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stats/vonmises_cython.pyx python-scipy-0.8.0+dfsg1/scipy/stats/vonmises_cython.pyx --- python-scipy-0.7.2+dfsg1/scipy/stats/vonmises_cython.pyx 2008-10-09 08:29:54.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stats/vonmises_cython.pyx 2009-11-17 21:15:19.000000000 +0000 @@ -46,7 +46,7 @@ k = np.atleast_1d(k) x = np.atleast_1d(x) ix = np.round(x/(2*np.pi)) - x = x-ix + x = x-ix*2*np.pi # These values should give 12 decimal digits CK=50 @@ -71,6 +71,6 @@ result[~c_small_k] = von_mises_cdf_normalapprox(bk[~c_small_k],bx[~c_small_k],C1) if not zerodim: - return result+(2*np.pi)*ix + return result+ix else: - return (result+(2*np.pi)*ix)[0] + return (result+ix)[0] diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/Convolve.py python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/Convolve.py --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/Convolve.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/Convolve.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,391 +0,0 @@ -import numpy as np -import _correlate -import numpy.fft as dft -import iraf_frame - -VALID = 0 -SAME = 1 -FULL = 2 -PASS = 3 - -convolution_modes = { - "valid":0, - "same":1, - "full":2, - "pass":3, - } - -def _condition_inputs(data, kernel): - data, 
kernel = np.asarray(data), np.asarray(kernel) - if np.rank(data) == 0: - data.shape = (1,) - if np.rank(kernel) == 0: - kernel.shape = (1,) - if np.rank(data) > 1 or np.rank(kernel) > 1: - raise ValueError("arrays must be 1D") - if len(data) < len(kernel): - data, kernel = kernel, data - return data, kernel - -def correlate(data, kernel, mode=FULL): - """correlate(data, kernel, mode=FULL) - - >>> correlate(np.arange(8), [1, 2], mode=VALID) - array([ 2, 5, 8, 11, 14, 17, 20]) - >>> correlate(np.arange(8), [1, 2], mode=SAME) - array([ 0, 2, 5, 8, 11, 14, 17, 20]) - >>> correlate(np.arange(8), [1, 2], mode=FULL) - array([ 0, 2, 5, 8, 11, 14, 17, 20, 7]) - >>> correlate(np.arange(8), [1, 2, 3], mode=VALID) - array([ 8, 14, 20, 26, 32, 38]) - >>> correlate(np.arange(8), [1, 2, 3], mode=SAME) - array([ 3, 8, 14, 20, 26, 32, 38, 20]) - >>> correlate(np.arange(8), [1, 2, 3], mode=FULL) - array([ 0, 3, 8, 14, 20, 26, 32, 38, 20, 7]) - >>> correlate(np.arange(8), [1, 2, 3, 4, 5, 6], mode=VALID) - array([ 70, 91, 112]) - >>> correlate(np.arange(8), [1, 2, 3, 4, 5, 6], mode=SAME) - array([ 17, 32, 50, 70, 91, 112, 85, 60]) - >>> correlate(np.arange(8), [1, 2, 3, 4, 5, 6], mode=FULL) - array([ 0, 6, 17, 32, 50, 70, 91, 112, 85, 60, 38, 20, 7]) - >>> correlate(np.arange(8), 1+1j) - Traceback (most recent call last): - ... - TypeError: array cannot be safely cast to required type - - """ - data, kernel = _condition_inputs(data, kernel) - lenk = len(kernel) - halfk = int(lenk/2) - even = (lenk % 2 == 0) - kdata = [0] * lenk - - if mode in convolution_modes.keys(): - mode = convolution_modes[ mode ] - - result_type = max(kernel.dtype.name, data.dtype.name) - - if mode == VALID: - wdata = np.concatenate((kdata, data, kdata)) - result = wdata.astype(result_type) - _correlate.Correlate1d(kernel, wdata, result) - return result[lenk+halfk:-lenk-halfk+even] - elif mode == SAME: - wdata = np.concatenate((kdata, data, kdata)) - result = wdata.astype(result_type) - _correlate.Correlate1d(kernel, wdata, result) - return result[lenk:-lenk] - elif mode == FULL: - wdata = np.concatenate((kdata, data, kdata)) - result = wdata.astype(result_type) - _correlate.Correlate1d(kernel, wdata, result) - return result[halfk+1:-halfk-1+even] - elif mode == PASS: - result = data.astype(result_type) - _correlate.Correlate1d(kernel, data, result) - return result - else: - raise ValueError("Invalid convolution mode.") - -cross_correlate = correlate - -pix_modes = { - "nearest" : 0, - "reflect": 1, - "wrap" : 2, - "constant": 3 - } - -def convolve(data, kernel, mode=FULL): - """convolve(data, kernel, mode=FULL) - Returns the discrete, linear convolution of 1-D - sequences a and v; mode can be 0 (VALID), 1 (SAME), or 2 (FULL) - to specify size of the resulting sequence. 
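For reference only (not part of this patch): the three size modes of the 1-D correlate()/convolve() being removed here correspond to NumPy's own 'valid'/'same'/'full' strings, and the outputs below match the doctests quoted above. A minimal sketch, assuming NumPy's alignment agrees with the old code (it does for these examples; even-length kernels may differ at the edges):

import numpy as np

data = np.arange(8)
kernel = np.array([1, 2, 3])

# Cross-correlation at the three output sizes (cf. VALID / SAME / FULL above).
print(np.correlate(data, kernel, mode='valid'))  # [ 8 14 20 26 32 38]
print(np.correlate(data, kernel, mode='same'))   # [ 3  8 14 20 26 32 38 20]
print(np.correlate(data, kernel, mode='full'))   # [ 0  3  8 14 20 26 32 38 20  7]

# Convolution is correlation with a reversed kernel, which is also how the
# removed convolve() is implemented.
assert np.array_equal(np.convolve(data, kernel, mode='full'),
                      np.correlate(data, kernel[::-1], mode='full'))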
- - >>> convolve(np.arange(8), [1, 2], mode=VALID) - array([ 1, 4, 7, 10, 13, 16, 19]) - >>> convolve(np.arange(8), [1, 2], mode=SAME) - array([ 0, 1, 4, 7, 10, 13, 16, 19]) - >>> convolve(np.arange(8), [1, 2], mode=FULL) - array([ 0, 1, 4, 7, 10, 13, 16, 19, 14]) - >>> convolve(np.arange(8), [1, 2, 3], mode=VALID) - array([ 4, 10, 16, 22, 28, 34]) - >>> convolve(np.arange(8), [1, 2, 3], mode=SAME) - array([ 1, 4, 10, 16, 22, 28, 34, 32]) - >>> convolve(np.arange(8), [1, 2, 3], mode=FULL) - array([ 0, 1, 4, 10, 16, 22, 28, 34, 32, 21]) - >>> convolve(np.arange(8), [1, 2, 3, 4, 5, 6], mode=VALID) - array([35, 56, 77]) - >>> convolve(np.arange(8), [1, 2, 3, 4, 5, 6], mode=SAME) - array([ 4, 10, 20, 35, 56, 77, 90, 94]) - >>> convolve(np.arange(8), [1, 2, 3, 4, 5, 6], mode=FULL) - array([ 0, 1, 4, 10, 20, 35, 56, 77, 90, 94, 88, 71, 42]) - >>> convolve([1.,2.], np.arange(10.)) - array([ 0., 1., 4., 7., 10., 13., 16., 19., 22., 25., 18.]) - """ - data, kernel = _condition_inputs(data, kernel) - if len(data) >= len(kernel): - return correlate(data, kernel[::-1], mode) - else: - return correlate(kernel, data[::-1], mode) - - -def _gaussian(sigma, mew, npoints, sigmas): - ox = np.arange(mew-sigmas*sigma, - mew+sigmas*sigma, - 2*sigmas*sigma/npoints, type=np.float64) - x = ox-mew - x /= sigma - x = x * x - x *= -1/2 - x = np.exp(x) - return ox, 1/(sigma * np.sqrt(2*np.pi)) * x - -def _correlate2d_fft(data0, kernel0, output=None, mode="nearest", cval=0.0): - """_correlate2d_fft does 2d correlation of 'data' with 'kernel', storing - the result in 'output' using the FFT to perform the correlation. - - supported 'mode's include: - 'nearest' elements beyond boundary come from nearest edge pixel. - 'wrap' elements beyond boundary come from the opposite array edge. - 'reflect' elements beyond boundary come from reflection on same array edge. - 'constant' elements beyond boundary are set to 'cval' - """ - shape = data0.shape - kshape = kernel0.shape - oversized = (np.array(shape) + np.array(kshape)) - - dy = kshape[0] // 2 - dx = kshape[1] // 2 - - kernel = np.zeros(oversized, dtype=np.float64) - kernel[:kshape[0], :kshape[1]] = kernel0[::-1,::-1] # convolution <-> correlation - data = iraf_frame.frame(data0, oversized, mode=mode, cval=cval) - - complex_result = (isinstance(data, np.complexfloating) or - isinstance(kernel, np.complexfloating)) - - Fdata = dft.fft2(data) - del data - - Fkernel = dft.fft2(kernel) - del kernel - - np.multiply(Fdata, Fkernel, Fdata) - del Fkernel - - if complex_result: - convolved = dft.irfft2( Fdata, s=oversized) - else: - convolved = dft.irfft2( Fdata, s=oversized) - - result = convolved[ kshape[0]-1:shape[0]+kshape[0]-1, kshape[1]-1:shape[1]+kshape[1]-1 ] - - if output is not None: - output._copyFrom( result ) - else: - return result - - -def _correlate2d_naive(data, kernel, output=None, mode="nearest", cval=0.0): - return _correlate.Correlate2d(kernel, data, output, pix_modes[mode], cval) - -def _fix_data_kernel(data, kernel): - """The _correlate.Correlate2d C-code can only handle kernels which - fit inside the data array. Since convolution and correlation are - commutative, _fix_data_kernel reverses kernel and data if necessary - and panics if there's no good order. 
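The _correlate2d_fft() helper above rests on the convolution theorem: pad the kernel to the data shape, multiply the transforms, invert, and you get a circular correlation (the removed code reaches the same result by flipping the kernel before transforming, which is equivalent up to an index shift). A minimal sketch of that identity in plain NumPy, illustrative only and ignoring the iraf_frame padding the real code applies first:

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((6, 6))
kernel = rng.random((3, 3))

# Zero-pad the kernel to the data shape, then use
#   corr = IFFT2( FFT2(data) * conj(FFT2(kernel)) )
kpad = np.zeros_like(data)
kpad[:kernel.shape[0], :kernel.shape[1]] = kernel
fft_corr = np.fft.ifft2(np.fft.fft2(data) * np.conj(np.fft.fft2(kpad))).real

# Direct circular correlation, for comparison.
direct = np.zeros_like(data)
for r in range(data.shape[0]):
    for c in range(data.shape[1]):
        s = 0.0
        for i in range(kernel.shape[0]):
            for j in range(kernel.shape[1]):
                s += kernel[i, j] * data[(r + i) % data.shape[0],
                                         (c + j) % data.shape[1]]
        direct[r, c] = s

assert np.allclose(fft_corr, direct)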
- """ - data, kernel = map(np.asarray, [data, kernel]) - if np.rank(data) == 0: - data.shape = (1,1) - elif np.rank(data) == 1: - data.shape = (1,) + data.shape - if np.rank(kernel) == 0: - kernel.shape = (1,1) - elif np.rank(kernel) == 1: - kernel.shape = (1,) + kernel.shape - if (kernel.shape[0] > data.shape[0] and - kernel.shape[1] > data.shape[1]): - kernel, data = data, kernel - elif (kernel.shape[0] <= data.shape[0] and - kernel.shape[1] <= data.shape[1]): - pass - return data, kernel - -def correlate2d(data, kernel, output=None, mode="nearest", cval=0.0, fft=0): - """correlate2d does 2d correlation of 'data' with 'kernel', storing - the result in 'output'. - - supported 'mode's include: - 'nearest' elements beyond boundary come from nearest edge pixel. - 'wrap' elements beyond boundary come from the opposite array edge. - 'reflect' elements beyond boundary come from reflection on same array edge. - 'constant' elements beyond boundary are set to 'cval' - - If fft is True, the correlation is performed using the FFT, else the - correlation is performed using the naive approach. - - >>> a = np.arange(20*20) - >>> a = a.reshape((20,20)) - >>> b = np.ones((5,5), dtype=np.float64) - >>> rn = correlate2d(a, b, fft=0) - >>> rf = correlate2d(a, b, fft=1) - >>> np.alltrue(np.ravel(rn-rf<1e-10)) - True - """ - data, kernel = _fix_data_kernel(data, kernel) - if fft: - return _correlate2d_fft(data, kernel, output, mode, cval) - else: - a = _correlate2d_naive(data, kernel, output, mode, cval) - #a = a.byteswap() - return a - -def convolve2d(data, kernel, output=None, mode="nearest", cval=0.0, fft=0): - """convolve2d does 2d convolution of 'data' with 'kernel', storing - the result in 'output'. - - supported 'mode's include: - 'nearest' elements beyond boundary come from nearest edge pixel. - 'wrap' elements beyond boundary come from the opposite array edge. - 'reflect' elements beyond boundary come from reflection on same array edge. - 'constant' elements beyond boundary are set to 'cval' - - >>> a = np.arange(20*20) - >>> a = a.reshape((20,20)) - >>> b = np.ones((5,5), dtype=np.float64) - >>> rn = convolve2d(a, b, fft=0) - >>> rf = convolve2d(a, b, fft=1) - >>> np.alltrue(np.ravel(rn-rf<1e-10)) - True - """ - data, kernel = _fix_data_kernel(data, kernel) - kernel = kernel[::-1,::-1] # convolution -> correlation - if fft: - return _correlate2d_fft(data, kernel, output, mode, cval) - else: - return _correlate2d_naive(data, kernel, output, mode, cval) - -def _boxcar(data, output, boxshape, mode, cval): - if len(boxshape) == 1: - _correlate.Boxcar2d(data[np.newaxis,...], 1, boxshape[0], - output[np.newaxis,...], mode, cval) - elif len(boxshape) == 2: - _correlate.Boxcar2d(data, boxshape[0], boxshape[1], output, mode, cval) - else: - raise ValueError("boxshape must be a 1D or 2D shape.") - -def boxcar(data, boxshape, output=None, mode="nearest", cval=0.0): - """boxcar computes a 1D or 2D boxcar filter on every 1D or 2D subarray of data. - - 'boxshape' is a tuple of integers specifying the dimensions of the filter: e.g. (3,3) - - if 'output' is specified, it should be the same shape as 'data' and - None will be returned. - - supported 'mode's include: - 'nearest' elements beyond boundary come from nearest edge pixel. - 'wrap' elements beyond boundary come from the opposite array edge. - 'reflect' elements beyond boundary come from reflection on same array edge. 
- 'constant' elements beyond boundary are set to 'cval' - - >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="nearest").astype(np.longlong) - array([ 6, 3, 0, 0, 0, 333, 666], dtype=int64) - >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="wrap").astype(np.longlong) - array([336, 3, 0, 0, 0, 333, 336], dtype=int64) - >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="reflect").astype(np.longlong) - array([ 6, 3, 0, 0, 0, 333, 666], dtype=int64) - >>> boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="constant").astype(np.longlong) - array([ 3, 3, 0, 0, 0, 333, 333], dtype=int64) - >>> a = np.zeros((10,10)) - >>> a[0,0] = 100 - >>> a[5,5] = 1000 - >>> a[9,9] = 10000 - >>> boxcar(a, (3,3)).astype(np.longlong) - array([[ 44, 22, 0, 0, 0, 0, 0, 0, 0, 0], - [ 22, 11, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 2222], - [ 0, 0, 0, 0, 0, 0, 0, 0, 2222, 4444]], dtype=int64) - >>> boxcar(a, (3,3), mode="wrap").astype(np.longlong) - array([[1122, 11, 0, 0, 0, 0, 0, 0, 1111, 1122], - [ 11, 11, 0, 0, 0, 0, 0, 0, 0, 11], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1111, 0, 0, 0, 0, 0, 0, 0, 1111, 1111], - [1122, 11, 0, 0, 0, 0, 0, 0, 1111, 1122]], dtype=int64) - >>> boxcar(a, (3,3), mode="reflect").astype(np.longlong) - array([[ 44, 22, 0, 0, 0, 0, 0, 0, 0, 0], - [ 22, 11, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 2222], - [ 0, 0, 0, 0, 0, 0, 0, 0, 2222, 4444]], dtype=int64) - >>> boxcar(a, (3,3), mode="constant").astype(np.longlong) - array([[ 11, 11, 0, 0, 0, 0, 0, 0, 0, 0], - [ 11, 11, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 111, 111, 111, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 1111], - [ 0, 0, 0, 0, 0, 0, 0, 0, 1111, 1111]], dtype=int64) - - >>> a = np.zeros((10,10)) - >>> a[3:6,3:6] = 111 - >>> boxcar(a, (3,3)).astype(np.longlong) - array([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 12, 24, 37, 24, 12, 0, 0, 0], - [ 0, 0, 24, 49, 74, 49, 24, 0, 0, 0], - [ 0, 0, 37, 74, 111, 74, 37, 0, 0, 0], - [ 0, 0, 24, 49, 74, 49, 24, 0, 0, 0], - [ 0, 0, 12, 24, 37, 24, 12, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int64) - """ - mode = pix_modes[ mode ] - if output is None: - woutput = data.astype(np.float64) - else: - woutput = output - _fbroadcast(_boxcar, len(boxshape), data.shape, - (data, woutput), (boxshape, mode, cval)) - if output is None: - return woutput - -def _fbroadcast(f, N, shape, args, params=()): - """_fbroadcast(f, N, args, shape, params=()) calls 'f' for each of the - 'N'-dimensional inner subnumarray of 'args'. Each subarray has - .shape == 'shape'[-N:]. 
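For present-day equivalents of the removed boxcar(): it is a sliding mean, which scipy.ndimage.uniform_filter / uniform_filter1d also compute; ndimage accepts the same 'nearest'/'wrap'/'reflect'/'constant' boundary names but defaults to 'reflect' rather than 'nearest'. A short sketch reproducing two of the doctests above (illustrative only, not part of this patch):

import numpy as np
from scipy import ndimage

# 1-D case, cf. boxcar(np.array([10, 0, 0, 0, 0, 0, 1000]), (3,), mode="nearest")
x = np.array([10., 0, 0, 0, 0, 0, 1000])
print(ndimage.uniform_filter1d(x, size=3, mode='nearest').astype(np.int64))
# -> [  6   3   0   0   0 333 666]

# 2-D case, cf. boxcar(a, (3,3)) with the 100 / 1000 / 10000 spikes
a = np.zeros((10, 10))
a[0, 0] = 100
a[5, 5] = 1000
a[9, 9] = 10000
smoothed = ndimage.uniform_filter(a, size=3, mode='nearest')
print(smoothed[:2, :2].astype(np.int64))
# -> [[44 22]
#     [22 11]]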
There are a total of product(shape[:-N],axis=0) - calls to 'f'. - """ - if len(shape) == N: - apply(f, tuple(args)+params) - else: - for i in range(shape[0]): - _fbroadcast(f, N, shape[1:], [x[i] for x in args], params) - -def test(): - import doctest, Convolve - return doctest.testmod(Convolve) - -if __name__ == "__main__": - print test() diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/__init__.py python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/__init__.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/__init__.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,3 +0,0 @@ -__version__ = '2.0' -from Convolve import * -import iraf_frame diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/iraf_frame.py python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/iraf_frame.py --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/iraf_frame.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/iraf_frame.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,195 +0,0 @@ -import numpy as np - -"""This module defines the function frame() which creates -a framed copy of an input array with the boundary pixels -defined according to the IRAF boundary modes: 'nearest', -'reflect', 'wrap', and 'constant.' -""" - -def frame_nearest(a, shape, cval=None): - - """frame_nearest creates an oversized copy of 'a' with new 'shape' - and the contents of 'a' in the center. The boundary pixels are - copied from the nearest edge pixel in 'a'. - - >>> a = np.arange(16) - >>> a.shape=(4,4) - >>> frame_nearest(a, (8,8)) - array([[ 0, 0, 0, 1, 2, 3, 3, 3], - [ 0, 0, 0, 1, 2, 3, 3, 3], - [ 0, 0, 0, 1, 2, 3, 3, 3], - [ 4, 4, 4, 5, 6, 7, 7, 7], - [ 8, 8, 8, 9, 10, 11, 11, 11], - [12, 12, 12, 13, 14, 15, 15, 15], - [12, 12, 12, 13, 14, 15, 15, 15], - [12, 12, 12, 13, 14, 15, 15, 15]]) - - """ - - b = np.zeros(shape, dtype=a.dtype) - delta = (np.array(b.shape) - np.array(a.shape)) - dy = delta[0] // 2 - dx = delta[1] // 2 - my = a.shape[0] + dy - mx = a.shape[1] + dx - - b[dy:my, dx:mx] = a # center - b[:dy,dx:mx] = a[0:1,:] # top - b[my:,dx:mx] = a[-1:,:] # bottom - b[dy:my, :dx] = a[:, 0:1] # left - b[dy:my, mx:] = a[:, -1:] # right - b[:dy, :dx] = a[0,0] # topleft - b[:dy, mx:] = a[0,-1] # topright - b[my:, :dx] = a[-1, 0] # bottomleft - b[my:, mx:] = a[-1, -1] # bottomright - - return b - -def frame_reflect(a, shape, cval=None): - - """frame_reflect creates an oversized copy of 'a' with new 'shape' - and the contents of 'a' in the center. The boundary pixels are - reflected from the nearest edge pixels in 'a'. 
- - >>> a = np.arange(16) - >>> a.shape = (4,4) - >>> frame_reflect(a, (8,8)) - array([[ 5, 4, 4, 5, 6, 7, 7, 6], - [ 1, 0, 0, 1, 2, 3, 3, 2], - [ 1, 0, 0, 1, 2, 3, 3, 2], - [ 5, 4, 4, 5, 6, 7, 7, 6], - [ 9, 8, 8, 9, 10, 11, 11, 10], - [13, 12, 12, 13, 14, 15, 15, 14], - [13, 12, 12, 13, 14, 15, 15, 14], - [ 9, 8, 8, 9, 10, 11, 11, 10]]) - """ - - b = np.zeros(shape, dtype=a.dtype) - delta = (np.array(b.shape) - np.array(a.shape)) - dy = delta[0] // 2 - dx = delta[1] // 2 - my = a.shape[0] + dy - mx = a.shape[1] + dx - sy = delta[0] - dy - sx = delta[1] - dx - - b[dy:my, dx:mx] = a # center - b[:dy,dx:mx] = a[:dy,:][::-1,:] # top - b[my:,dx:mx] = a[-sy:,:][::-1,:] # bottom - b[dy:my,:dx] = a[:,:dx][:,::-1] # left - b[dy:my,mx:] = a[:,-sx:][:,::-1] # right - b[:dy,:dx] = a[:dy,:dx][::-1,::-1] # topleft - b[:dy,mx:] = a[:dy,-sx:][::-1,::-1] # topright - b[my:,:dx] = a[-sy:,:dx][::-1,::-1] # bottomleft - b[my:,mx:] = a[-sy:,-sx:][::-1,::-1] # bottomright - return b - -def frame_wrap(a, shape, cval=None): - """frame_wrap creates an oversized copy of 'a' with new 'shape' - and the contents of 'a' in the center. The boundary pixels are - wrapped around to the opposite edge pixels in 'a'. - - >>> a = np.arange(16) - >>> a.shape=(4,4) - >>> frame_wrap(a, (8,8)) - array([[10, 11, 8, 9, 10, 11, 8, 9], - [14, 15, 12, 13, 14, 15, 12, 13], - [ 2, 3, 0, 1, 2, 3, 0, 1], - [ 6, 7, 4, 5, 6, 7, 4, 5], - [10, 11, 8, 9, 10, 11, 8, 9], - [14, 15, 12, 13, 14, 15, 12, 13], - [ 2, 3, 0, 1, 2, 3, 0, 1], - [ 6, 7, 4, 5, 6, 7, 4, 5]]) - - """ - - b = np.zeros(shape, dtype=a.dtype) - delta = (np.array(b.shape) - np.array(a.shape)) - dy = delta[0] // 2 - dx = delta[1] // 2 - my = a.shape[0] + dy - mx = a.shape[1] + dx - sy = delta[0] - dy - sx = delta[1] - dx - - b[dy:my, dx:mx] = a # center - b[:dy,dx:mx] = a[-dy:,:] # top - b[my:,dx:mx] = a[:sy,:] # bottom - b[dy:my,:dx] = a[:,-dx:] # left - b[dy:my,mx:] = a[:, :sx] # right - b[:dy,:dx] = a[-dy:,-dx:] # topleft - b[:dy,mx:] = a[-dy:,:sx ] # topright - b[my:,:dx] = a[:sy, -dx:] # bottomleft - b[my:,mx:] = a[:sy, :sx] # bottomright - return b - -def frame_constant(a, shape, cval=0): - """frame_nearest creates an oversized copy of 'a' with new 'shape' - and the contents of 'a' in the center. The boundary pixels are - copied from the nearest edge pixel in 'a'. - - >>> a = np.arange(16) - >>> a.shape=(4,4) - >>> frame_constant(a, (8,8), cval=42) - array([[42, 42, 42, 42, 42, 42, 42, 42], - [42, 42, 42, 42, 42, 42, 42, 42], - [42, 42, 0, 1, 2, 3, 42, 42], - [42, 42, 4, 5, 6, 7, 42, 42], - [42, 42, 8, 9, 10, 11, 42, 42], - [42, 42, 12, 13, 14, 15, 42, 42], - [42, 42, 42, 42, 42, 42, 42, 42], - [42, 42, 42, 42, 42, 42, 42, 42]]) - - """ - - b = np.zeros(shape, dtype=a.dtype) - delta = (np.array(b.shape) - np.array(a.shape)) - dy = delta[0] // 2 - dx = delta[1] // 2 - my = a.shape[0] + dy - mx = a.shape[1] + dx - - b[dy:my, dx:mx] = a # center - b[:dy,dx:mx] = cval # top - b[my:,dx:mx] = cval # bottom - b[dy:my, :dx] = cval # left - b[dy:my, mx:] = cval # right - b[:dy, :dx] = cval # topleft - b[:dy, mx:] = cval # topright - b[my:, :dx] = cval # bottomleft - b[my:, mx:] = cval # bottomright - return b - -_frame_dispatch = { "nearest": frame_nearest, - "reflect": frame_reflect, - "wrap": frame_wrap, - "constant" : frame_constant } - -def frame(a, shape, mode="nearest", cval=0.0): - - """frame creates an oversized copy of 'a' with new 'shape', with - extra pixels being supplied according to IRAF boundary mode, - 'mode'. 
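The four IRAF boundary fills implemented by the removed iraf_frame module map onto numpy.pad modes ('nearest' -> 'edge', 'reflect' -> 'symmetric', 'wrap' -> 'wrap', 'constant' -> 'constant'); the difference is that frame() takes a target shape and splits the margin itself, while np.pad takes explicit pad widths. A sketch reproducing the doctests above, for reference only:

import numpy as np

a = np.arange(16).reshape(4, 4)

# frame_nearest(a, (8, 8)): replicate the closest edge pixel
print(np.pad(a, 2, mode='edge')[0])       # [0 0 0 1 2 3 3 3]

# frame_reflect(a, (8, 8)): reflection that repeats the edge sample,
# i.e. np.pad's 'symmetric' (numpy's 'reflect' would skip the edge sample)
print(np.pad(a, 2, mode='symmetric')[0])  # [5 4 4 5 6 7 7 6]

# frame_wrap and frame_constant
print(np.pad(a, 2, mode='wrap')[0])       # [10 11  8  9 10 11  8  9]
print(np.pad(a, 2, mode='constant', constant_values=42)[0])
                                          # [42 42 42 42 42 42 42 42]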
""" - - try: - f = _frame_dispatch[mode] - except KeyError: - raise ValueError('invalid IRAF boundary mode: "%s"' % mode) - - return f(a, shape, cval) - -def unframe(a, shape): - - """unframe extracts the center slice of framed array 'a' which had - 'shape' prior to framing.""" - - delta = np.array(a.shape) - np.array(shape) - dy = delta[0]//2 - dx = delta[1]//2 - my = shape[0] + dy - mx = shape[1] + dx - return a[dy:my, dx:mx] - -def test(): - import doctest, iraf_frame - return doctest.testmod(iraf_frame) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/lineshape.py python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/lineshape.py --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/lib/lineshape.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/lib/lineshape.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,99 +0,0 @@ -# lineshape functors -# -*- coding: iso-8859-1 -*- -# -# Copyright (C) 2002 Jochen Küpper -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# -# 1. Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# 3. The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED -# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO -# EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, -# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; -# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR -# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF -# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - - -__doc__ = """Lineshape functors. - -The objects defined in this module can be used to calculate numarrays containing -common lineshape-profiles. - -For the *Profile classes the profile is only evaluated once for each object and -then reused for each call. If you only use a profile once and you are concerned -about memory consumption you could call the underlying functions directly. 
-""" - -__author__ = "Jochen Küpper " -__date__ = "$Date: 2007/03/14 16:35:57 $"[7:-11] -__version__ = "$Revision: 1.1 $"[11:-2] - -from convolve._lineshape import * - -class Profile(object): - """An base object to provide a convolution kernel.""" - - def __init__(self, x, w, x0=0.0): - # call init for all superclasses - super(Profile, self).__init__(x, w, x0) - self._recalculate(x, w, x0) - - def __call__(self): - return self._kernel - - def _recalculate(self, x, w, x0): - self._kernel = self._profile(x, w, x0) - - -class GaussProfile(Profile): - """An object for Gauss-folding.""" - - def __init__(self, x, w, x0=0.0): - self._profile = gauss - # call init for all superclasses - super(GaussProfile, self).__init__(x, w, x0) - - -class LorentzProfile(Profile): - """An object for Lorentz-folding.""" - - def __init__(self, x, w, x0=0.0): - self._profile = lorentz - # call init for all superclasses - super(LorentzProfile, self).__init__(x, w, x0) - - - -class VoigtProfile(Profile): - """An object for Voigt-folding. - - The constructor takes the following parameter: - |x| Scalar or numarray with values to calculate profile at. - |w| Tuple of Gaussian and Lorentzian linewidth contribution - |x0| Center frequency - """ - - def __init__(self, x, w, x0=0.0): - self._profile = voigt - # call init for all superclasses - super(VoigtProfile, self).__init__(x, w, x0) - - - -## Local Variables: -## mode: python -## mode: auto-fill -## fill-column: 80 -## End: diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/SConscript python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/SConscript --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/SConscript 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/SConscript 1970-01-01 01:00:00.000000000 +0100 @@ -1,13 +0,0 @@ -# Last Change: Wed Mar 05 09:00 PM 2008 J -from numpy.distutils.misc_util import get_numpy_include_dirs -from numpy import get_numarray_include -from numscons import GetNumpyEnvironment - -env = GetNumpyEnvironment(ARGUMENTS) - -env.AppendUnique(CPPPATH = [get_numpy_include_dirs(), get_numarray_include()]) -env.AppendUnique(CPPDEFINES = {'NUMPY': '1'}) - -# _correlate extension -env.DistutilsPythonExtension('_correlate', source = 'src/_correlatemodule.c') -env.DistutilsPythonExtension('_lineshape', source = 'src/_lineshapemodule.c') diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/SConstruct python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/SConstruct --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/SConstruct 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/SConstruct 1970-01-01 01:00:00.000000000 +0100 @@ -1,2 +0,0 @@ -from numscons import GetInitEnvironment -GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/src/_correlatemodule.c python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/src/_correlatemodule.c --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/src/_correlatemodule.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/src/_correlatemodule.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,682 +0,0 @@ -#include "Python.h" - -#include -#include -#include -#include - -#include "numpy/libnumarray.h" - -typedef enum -{ - PIX_NEAREST, - PIX_REFLECT, - PIX_WRAP, - PIX_CONSTANT -} PixMode; - -typedef struct -{ - PixMode mode; - long rows, cols; - Float64 constval; - Float64 *data; -} PixData; - -static long -SlowCoord(long x, long maxx, PixMode m) -{ - switch(m) { - case PIX_NEAREST: - if 
(x < 0) x = 0; - if (x >= maxx) x = maxx-1; - return x; - case PIX_REFLECT: - if (x < 0) x = -x-1; - if (x >= maxx) x = maxx - (x - maxx) - 1; - return x; - case PIX_WRAP: - if (x < 0) x += maxx; - if (x >= maxx) x -= maxx; - return x; - case PIX_CONSTANT: /* handled in SlowPix, suppress warnings */ - break; - } - return x; -} - -static Float64 -SlowPix(long r, long c, PixData *p) -{ - long fr, fc; - if (p->mode == PIX_CONSTANT) { - if ((r < 0) || (r >= p->rows) || (c < 0) || (c >= p->cols)) - return p->constval; - else { - fr = r; - fc = c; - } - } else { - fr = SlowCoord(r, p->rows, p->mode); - fc = SlowCoord(c, p->cols, p->mode); - } - return p->data[fr*p->cols + fc]; -} - -static int -_reject_complex(PyObject *a) -{ - NumarrayType t; - if ((a == Py_None) || (a == NULL)) - return 0; - t = NA_NumarrayType(a); - if (t < 0) { - PyErr_Clear(); - return 0; - } - if (t == tComplex32 || t == tComplex64) { - PyErr_Format(PyExc_TypeError, - "function doesn't support complex arrays."); - return 1; - } - return 0; -} - -static void -Correlate1d(long ksizex, Float64 *kernel, - long dsizex, Float64 *data, - Float64 *correlated) -{ - long xc; - long halfk = ksizex/2; - - for(xc=0; xcnd != 1) || (data->nd != 1)) { - PyErr_Format(PyExc_ValueError, - "Correlate1d: numarray must have exactly 1 dimension."); - goto _fail; - } - - if (!NA_ShapeEqual(data, correlated)) { - PyErr_Format(PyExc_ValueError, - "Correlate1d: data and output must have identical length."); - goto _fail; - } - - Correlate1d(kernel->dimensions[0], NA_OFFSETDATA(kernel), - data->dimensions[0], NA_OFFSETDATA(data), - NA_OFFSETDATA(correlated)); - - Py_DECREF(kernel); - Py_DECREF(data); - - /* Align, Byteswap, Contiguous, Typeconvert */ - return NA_ReturnOutput(ocorrelated, correlated); - - _fail: - Py_XDECREF(kernel); - Py_XDECREF(data); - Py_XDECREF(correlated); - return NULL; -} - -/* SlowCorrelate computes 2D correlation near the boundaries of an array. -The output array shares the same dimensions as the input array, the latter -fully described by PixData. - -The region defined by rmin,rmax,cmin,cmax is assumed to contain only valid -coordinates. However, access to the input array is performed using SlowPix -because pixels reachable via "kernel offsets" may be at invalid coordinates. 
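The SlowCoord()/SlowPix() pair above folds an out-of-range index back into the array according to the boundary mode before reading a pixel. The same index arithmetic in a few lines of Python (edge_index is a hypothetical helper, shown only to restate the quoted logic):

def edge_index(x, n, mode):
    # map index x onto range(n) the way SlowCoord does for a one-window overshoot
    if mode == 'nearest':              # clamp to the closest edge
        return min(max(x, 0), n - 1)
    if mode == 'reflect':              # mirror about the edge, edge sample included
        if x < 0:
            return -x - 1
        if x >= n:
            return n - (x - n) - 1
        return x
    if mode == 'wrap':                 # come in from the opposite edge
        return x % n
    raise ValueError("'constant' is handled by returning cval instead of indexing")

print([edge_index(i, 5, 'reflect') for i in range(-2, 7)])
# -> [1, 0, 0, 1, 2, 3, 4, 4, 3]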
-*/ -static void -SlowCorrelate2d(long rmin, long rmax, long cmin, long cmax, - long krows, long kcols, Float64 *kernel, - PixData *pix, Float64 *output) -{ - long kr, kc, r, c; - long halfkrows = krows/2; - long halfkcols = kcols/2; - - for(r=rmin; rcols+c] = temp; - } - } -} - -static void -Correlate2d(long krows, long kcols, Float64 *kernel, - long drows, long dcols, Float64 *data, Float64 *correlated, - PixMode mode, Float64 cval) -{ - long ki, kj, di, dj; - long halfkrows = krows/2; - long halfkcols = kcols/2; - - PixData pix; - pix.mode = mode; - pix.data = data; - pix.constval = cval; - pix.rows = drows; - pix.cols = dcols; - - /* Compute the boundaries using SlowPix */ - - SlowCorrelate2d(0, halfkrows, 0, dcols, - krows, kcols, kernel, &pix, correlated); /* top */ - SlowCorrelate2d(drows-halfkrows, drows, 0, dcols, - krows, kcols, kernel, &pix, correlated); /* bottom */ - SlowCorrelate2d(halfkrows, drows-halfkrows, 0, halfkcols, - krows, kcols, kernel, &pix, correlated); /* left */ - SlowCorrelate2d(halfkrows, drows-halfkrows, dcols-halfkcols, dcols, - krows, kcols, kernel, &pix, correlated); /* right */ - - /* Correlate the center data using unchecked array access */ - for(di=halfkrows; di PIX_CONSTANT)) - return PyErr_Format(PyExc_ValueError, - "Correlate2d: mode value not in range(%d,%d)", - PIX_NEAREST, PIX_CONSTANT); - - /* Align, Byteswap, Contiguous, Typeconvert */ - kernel = NA_InputArray(okernel, tFloat64, C_ARRAY); - data = NA_InputArray(odata, tFloat64, C_ARRAY); - correlated = NA_OptionalOutputArray(ocorrelated, tFloat64, C_ARRAY, - data); - - if (!kernel || !data || !correlated) - goto _fail; - - if ((kernel->nd != 2) || (data->nd != 2) || (correlated->nd != 2)) { - PyErr_Format(PyExc_ValueError, "Correlate2d: inputs must have 2 dimensions."); - goto _fail; - } - - if (!NA_ShapeEqual(data, correlated)) { - PyErr_Format(PyExc_ValueError, - "Correlate2d: data and output numarray need identical shapes."); - goto _fail; - } - - if (_reject_complex(okernel) || _reject_complex(odata) || - _reject_complex(ocorrelated)) - goto _fail; - - Correlate2d(kernel->dimensions[0], kernel->dimensions[1], - NA_OFFSETDATA(kernel), - data->dimensions[0], data->dimensions[1], - NA_OFFSETDATA(data), - NA_OFFSETDATA(correlated), - mode, cval); - - Py_DECREF(kernel); - Py_DECREF(data); - - /* Align, Byteswap, Contiguous, Typeconvert */ - return NA_ReturnOutput(ocorrelated, correlated); - - _fail: - Py_XDECREF(kernel); - Py_XDECREF(data); - Py_XDECREF(correlated); - return NULL; -} - -void Shift2d( long rows, long cols, Float64 *data, long dx, long dy, Float64 *output, int mode, Float64 cval) -{ - long r, c; - PixData pix; - pix.mode = mode; - pix.constval = cval; - pix.rows = rows; - pix.cols = cols; - pix.data = data; - - for(r=0; r PIX_CONSTANT)) - return PyErr_Format(PyExc_ValueError, - "Shift2d: mode value not in range(%d,%d)", - PIX_NEAREST, PIX_CONSTANT); - - /* Align, Byteswap, Contiguous, Typeconvert */ - data = NA_InputArray(odata, tFloat64, C_ARRAY); - output = NA_OptionalOutputArray(ooutput, tFloat64, C_ARRAY, - data); - - if (!data || !output) - goto _fail; - - if (_reject_complex(odata) || _reject_complex(ooutput)) - goto _fail; - - if ((data->nd != 2)) { - PyErr_Format(PyExc_ValueError, - "Shift2d: numarray must have 2 dimensions."); - goto _fail; - } - - if (!NA_ShapeEqual(data, output)) { - PyErr_Format(PyExc_ValueError, - "Shift2d: data and output numarray need identical shapes."); - goto _fail; - } - - /* Invert sign of deltas to match sense of 2x2 correlation. 
*/ - Shift2d( data->dimensions[0], data->dimensions[1], NA_OFFSETDATA(data), - -dx, -dy, NA_OFFSETDATA(output), mode, cval); - - Py_XDECREF(data); - - /* Align, Byteswap, Contiguous, Typeconvert */ - return NA_ReturnOutput(ooutput, output); - _fail: - Py_XDECREF(data); - Py_XDECREF(output); - return NULL; -} - -typedef struct s_BoxData BoxData; - -typedef Float64 (*SumColFunc)(long,long,BoxData*); -typedef Float64 (*SumBoxFunc)(long,long,BoxData*); - -struct s_BoxData { - PixData pix; - long krows, kcols; - SumColFunc sumcol; - SumBoxFunc sumbox; -}; - -static Float64 -SlowSumCol(long r, long c, BoxData *D) -{ - Float64 sum = 0; - long i, krows = D->krows; - for(i=0; ipix); - } - return sum; -} - -static Float64 -SlowSumBox(long r, long c, BoxData *D) -{ - long i, j; - Float64 sum = 0; - for(i=0; ikrows; i++) - for(j=0; jkcols; j++) - sum += SlowPix(r+i, c+j, &D->pix); - return sum; -} - -static Float64 -FastSumCol(long r, long c, BoxData *D) -{ - Float64 sum = 0; - long krows = D->krows; - long cols = D->pix.cols; - Float64 *data = D->pix.data; - - data += r*cols + c; - for(; krows--; data += cols) { - sum += *data; - } - return sum; -} - -static Float64 -FastSumBox(long r, long c, BoxData *D) -{ - long i, j; - Float64 sum = 0; - long cols = D->pix.cols; - Float64 *data = D->pix.data; - data += r*cols + c; - for(i=0; ikrows; i++, data += cols-D->kcols) - for(j=0; jkcols; j++, data++) - sum += *data; - return sum; -} - -static long bound(long x, long max) -{ - if (x < 0) return 0; - else if (x > max) return max; - else return x; -} - -static void -BoxFunc(long rmin, long rmax, long cmin, long cmax, Float64 *output, BoxData *D) -{ - long r, c; - long krows2 = D->krows/2; - long kcols2 = D->kcols/2; - long kcolseven = !(D->kcols & 1); - long rows = D->pix.rows; - long cols = D->pix.cols; - - rmin = bound(rmin, rows); - rmax = bound(rmax, rows); - cmin = bound(cmin, cols); - cmax = bound(cmax, cols); - - for(r=rmin; rsumbox(r - krows2, cmin - kcols2, D); - for(c=cmin; csumcol(r - krows2, c - kcols2, D); - sum += D->sumcol(r - krows2, c + kcols2 - kcolseven + 1, D); - } - } -} - -/* BoxFuncI computes a boxcar incrementally, using a formula independent of - the size of the boxcar. Each incremental step is based on dropping a - whole column of the "back" of the boxcar, and adding in a new column in - the "front". The sums of these columns are further optimized by realizing - they can be computed from their counterparts one element above by adding in - bottom corners and subtracting out top corners. 
- - incremental pixel layout: B C where S is the unknown, and A, B, C are known neighbors - A S each of these refers to the output array - - S = A + a1 - a0 where a0 and a1 are column vectors with *bottom* elements { a, d } - C - B = b1 - b0 where b0 and b1 are column vectors with *top* elements { b, g } - column vectors and corner elements refer to the input array - - offset matrix layout: b g where b is actually in b0 - [b0] [b1] g " b1 - [a0] S [a1] a " a0 - a d d is actually in a1 - - a0 = b0 - b + a column vector a0 is b0 dropping top element b and adding bottom a - a1 = b1 - g + d column vector a1 is b1 dropping top element g and adding bottom d - - S = A + (b1 - g + f) - (b0 - b + a) by substitution - S = A + (b1 - b0) - g + d + b - a rearranging additions - S = A + C - B - g + d + b - a by substitution -*/ -static void -BoxFuncI(long rmin, long rmax, long cmin, long cmax, Float64 *output, BoxData *D) -{ - long r, c; - long krows2 = D->krows/2; - long kcols2 = D->kcols/2; - long krowseven = !(D->krows & 1); - long kcolseven = !(D->kcols & 1); - long rows = D->pix.rows; - long cols = D->pix.cols; - Float64 *input = D->pix.data; - - rmin = bound(rmin, rows); - rmax = bound(rmax, rows); - cmin = bound(cmin, cols); - cmax = bound(cmax, cols); - - for(r=rmin; r 0."); - goto _fail; - } - - if ((mode < PIX_NEAREST) || (mode > PIX_CONSTANT)) { - PyErr_Format(PyExc_ValueError, - "Boxcar2d: mode value not in range(%d,%d)", - PIX_NEAREST, PIX_CONSTANT); - goto _fail; - } - - if ((data->nd != 2)|| (output->nd != 2)) { - PyErr_Format(PyExc_ValueError, - "Boxcar2d: numarray must have 2 dimensions."); - goto _fail; - } - - if (!NA_ShapeEqual(data, output)) { - PyErr_Format(PyExc_ValueError, - "Boxcar2d: data and output numarray need identical shapes."); - goto _fail; - } - - if ((kcols <=0) || (krows <= 0)) { - PyErr_Format(PyExc_ValueError, - "Boxcar2d: invalid data shape."); - goto _fail; - } - if ((kcols > data->dimensions[1]) || (krows > data->dimensions[0])) { - PyErr_Format(PyExc_ValueError, "Boxcar2d: boxcar shape incompatible with" - " data shape."); - goto _fail; - } - - Boxcar2d(krows, kcols, data->dimensions[0], data->dimensions[1], - NA_OFFSETDATA(data), NA_OFFSETDATA(output), mode, cval); - - Py_XDECREF(data); - - /* Align, Byteswap, Contiguous, Typeconvert */ - return NA_ReturnOutput(ooutput, output); - _fail: - Py_XDECREF(data); - Py_XDECREF(output); - return NULL; -} - -static PyMethodDef _correlateMethods[] = { - {"Correlate1d", Py_Correlate1d, METH_VARARGS}, - {"Correlate2d", (PyCFunction) Py_Correlate2d, METH_VARARGS | METH_KEYWORDS}, - {"Shift2d", (PyCFunction) Py_Shift2d, METH_VARARGS | METH_KEYWORDS, - "Shift2d shifts and image by an integer number of pixels, and uses IRAF compatible modes for the boundary pixels."}, - {"Boxcar2d", (PyCFunction) Py_Boxcar2d, METH_VARARGS | METH_KEYWORDS, - "Boxcar2d computes a sliding 2D boxcar average on a 2D array"}, - {NULL, NULL} /* Sentinel */ -}; - -PyMODINIT_FUNC init_correlate(void) -{ - PyObject *m, *d; - m = Py_InitModule("_correlate", _correlateMethods); - d = PyModule_GetDict(m); - import_libnumarray(); -} - -/* - * Local Variables: - * mode: C - * c-file-style: "python" - * End: - */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/src/_lineshapemodule.c python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/src/_lineshapemodule.c --- python-scipy-0.7.2+dfsg1/scipy/stsci/convolve/src/_lineshapemodule.c 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/convolve/src/_lineshapemodule.c 1970-01-01 
01:00:00.000000000 +0100 @@ -1,381 +0,0 @@ -/* C implementations of various lineshape functions - * - * Copyright (C) 2002,2003 Jochen Küpper - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * - * 1. Redistributions of source code must retain the above copyright notice, - * this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright notice, - * this list of conditions and the following disclaimer in the documentation - * and/or other materials provided with the distribution. - * 3. The name of the author may not be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED - * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF - * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO - * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, - * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; - * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, - * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR - * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF - * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include "Python.h" - -#include - -#include "numpy/libnumarray.h" - - -#define sqr(x) ((x)*(x)) - - -/* These are apparently not defined in MSVC */ -#if !defined(M_PI) -#define M_PI 3.14159265358979323846 -#endif -#if !defined(M_LN2) -#define M_LN2 0.69314718055994530942 -#endif - - -/*** C implementation ***/ - -static void gauss(size_t n, double *x, double *y, double w, double xc) - /* Evaluate normalized Gauss profile around xc with FWHM w at all x_i, - return in y. */ -{ - int i; - for(i=0; i 0.85) || (fabs(x) < (18.1 * y + 1.65))) { - /* Bereich I */ - for(k=0; k<6; k++) { - xp = x + T_v12[k]; - xm = x - T_v12[k]; - sum += ((alpha_v12[k] * xm + beta_v12[k] * yp) / (xm * xm + yp2) - + (beta_v12[k] * yp - alpha_v12[k] * xp) / (xp * xp + yp2)); - } - } else { - /* Bereich II */ - for(k=0; k<6; k++) { - xp = x + T_v12[k]; - xp2 = xp * xp; - xm = x - T_v12[k]; - xm2 = xm * xm; - sum += (((beta_v12[k] * (xm2 - y0_v12 * yp) - alpha_v12[k] * xm * (yp + y0_v12)) - / ((xm2 + yp2) * (xm2 + y0_v12 * y0_v12))) - + ((beta_v12[k] * (xp2 - y0_v12 * yp) + alpha_v12[k] * xp * (yp + y0_v12)) - / ((xp2 + yp2) * (xp2 + y0_v12 * y0_v12)))); - } - if(fabs(x) < 100.) - sum = y * sum + exp(-pow(x, 2)); - else - sum *= y; - } - return sum; -} - - -static void voigt(size_t n, double *x, double *y, double w[2], double xc) - /* Evaluate normalized Voigt profile at x around xc with Gaussian - * linewidth contribution w[0] and Lorentzian linewidth - * contribution w[1]. 
- */ -{ - /* Transform into reduced coordinates and call Humlicek's 12 point - * formula: - * x = 2 \sqrt{\ln2} \frac{\nu-\nu_0}{\Delta\nu_G} - * y = \sqrt{\ln2} \frac{\Delta\nu_L}{\Delta\nu_G} - */ - int i; - double yh = sqrt(M_LN2) * w[1] / w[0]; - for(i=0; ind != 1) - return PyErr_Format(_Error, "gauss: x must be scalar or 1d array."); - if (!NA_ShapeEqual(x, y)) - return PyErr_Format(_Error, "gauss: x and y numarray must have same length."); - - /* calculate profile */ - { - double *xa = NA_OFFSETDATA(x); - double *ya = NA_OFFSETDATA(y); - Py_BEGIN_ALLOW_THREADS; - gauss(x->dimensions[0], xa, ya, w, xc); - Py_END_ALLOW_THREADS; - } - - /* cleanup and return */ - Py_XDECREF(x); - return NA_ReturnOutput(oy, y); - } -} - - - -static PyObject * -_lineshape_lorentz(PyObject *self, PyObject *args, PyObject *keywds) -{ - int f; - double w, xc = 0.0; - static char *kwlist[] = {"x", "w", "xc", "y", NULL}; - PyObject *ox, *oy=Py_None; - PyArrayObject *x, *y; - - if(! PyArg_ParseTupleAndKeywords(args, keywds, "Od|dO", kwlist, - &ox, &w, &xc, &oy)) - return PyErr_Format(PyExc_RuntimeError, "lorentz: invalid parameters"); - - if((f = PyFloat_Check(ox)) || PyInt_Check(ox)) { - /* scalar arguments -- always *return* Float result */ - double xa[1], ya[1]; - if(f) - xa[0] = PyFloat_AS_DOUBLE(ox); - else - xa[0] = (double)PyInt_AS_LONG(ox); - Py_BEGIN_ALLOW_THREADS; - lorentz(1, xa, ya, w, xc); - Py_END_ALLOW_THREADS; - Py_DECREF(ox); - return PyFloat_FromDouble(ya[0]); - } else { - /* array conversion */ - if(! ((x = NA_InputArray(ox, tFloat64, C_ARRAY)) - && (y = NA_OptionalOutputArray(oy, tFloat64, C_ARRAY, x)))) - return 0; - if(x->nd != 1) - return PyErr_Format(_Error, "lorentz: x must be scalar or 1d array."); - if (!NA_ShapeEqual(x, y)) - return PyErr_Format(_Error, "lorentz: x and y numarray must have same length."); - - /* calculate profile */ - { - double *xa = NA_OFFSETDATA(x); - double *ya = NA_OFFSETDATA(y); - - Py_BEGIN_ALLOW_THREADS; - lorentz(x->dimensions[0], xa, ya, w, xc); - Py_END_ALLOW_THREADS; - } - - /* cleanup and return */ - Py_XDECREF(x); - return NA_ReturnOutput(oy, y); - } -} - - - -static PyObject * -_lineshape_voigt(PyObject *self, PyObject *args, PyObject *keywds) -{ - int f; - double w[2], xc = 0.0; - static char *kwlist[] = {"x", "w", "xc", "y", NULL}; - PyObject *wt, *ox, *oy=Py_None; - PyArrayObject *x, *y; - - if(! PyArg_ParseTupleAndKeywords(args, keywds, "OO|dO", kwlist, - &ox, &wt, &xc, &oy)) - return PyErr_Format(PyExc_RuntimeError, "voigt: invalid parameters"); - - /* parse linewidths tuple */ - if(! PyArg_ParseTuple(wt, "dd", &(w[0]), &(w[1]))) - return(0); - - if((f = PyFloat_Check(ox)) || PyInt_Check(ox)) { - /* scalar arguments -- always *return* Float result */ - double xa[1], ya[1]; - if(f) - xa[0] = PyFloat_AS_DOUBLE(ox); - else - xa[0] = (double)PyInt_AS_LONG(ox); - Py_BEGIN_ALLOW_THREADS; - voigt(1, xa, ya, w, xc); - Py_END_ALLOW_THREADS; - Py_DECREF(ox); - return PyFloat_FromDouble(ya[0]); - } else { - /* array conversion */ - if(! 
((x = NA_InputArray(ox, tFloat64, C_ARRAY)) - && (y = NA_OptionalOutputArray(oy, tFloat64, C_ARRAY, x)))) - return 0; - if(x->nd != 1) - return PyErr_Format(_Error, "voigt: x must be scalar or 1d array."); - if (!NA_ShapeEqual(x, y)) - return PyErr_Format(_Error, "voigt: x and y numarray must have same length."); - - /* calculate profile */ - { - double *xa = NA_OFFSETDATA(x); - double *ya = NA_OFFSETDATA(y); - Py_BEGIN_ALLOW_THREADS; - voigt(x->dimensions[0], xa, ya, w, xc); - Py_END_ALLOW_THREADS; - } - - /* cleanup and return */ - Py_XDECREF(x); - return NA_ReturnOutput(oy, y); - } -} - - - - -/*** table of methods ***/ - -static PyMethodDef _lineshape_Methods[] = { - {"gauss", (PyCFunction)_lineshape_gauss, METH_VARARGS|METH_KEYWORDS, - "gauss(x, w, xc=0.0, y=None)\n\n" - "Gaussian lineshape function\n\n" \ - "Calculate normalized Gaussian with full-width at half maximum |w| at |x|,\n" \ - "optionally specifying the line-center |xc|.\n" \ - "If, and only if |x| is an array an optional output array |y| can be\n" \ - "specified. In this case |x| and |y| must be one-dimensional numarray\n" \ - "with identical shapes.\n\n" \ - "If |x| is an scalar the routine always gives the result as scalar\n" \ - "return value." - }, - {"lorentz", (PyCFunction)_lineshape_lorentz, METH_VARARGS|METH_KEYWORDS, - "lorentz(x, w, xc=0.0, y=None)\n\n" - "Lorentzian lineshape function\n\n" \ - "Calculate normalized Lorentzian with full-width at half maximum |w| at |x|,\n" \ - "optionally specifying the line-center |xc|.\n" \ - "If, and only if |x| is an array an optional output array |y| can be\n" \ - "specified. In this case |x| and |y| must be one-dimensional numarray\n" \ - "with identical shapes.\n\n" \ - "If |x| is an scalar the routine always gives the result as scalar\n" \ - "return value." - }, - {"voigt", (PyCFunction)_lineshape_voigt, METH_VARARGS|METH_KEYWORDS, - "voigt(x, w, xc=0.0, y=None)\n\n" - "Voigt-lineshape function\n\n" \ - "Calculate normalized Voigt-profile with Gaussian full-width at half maximum |w[0]| and\n" \ - "Lorentzian full-width at half maximum |w[1]| at |x|, optionally specifying the line-center\n" \ - "|xc|.\n" \ - "If, and only if |x| is an array an optional output array |y| can be\n" \ - "specified. In this case |x| and |y| must be one-dimensional numarray\n" \ - "with identical shapes.\n\n" \ - "If |x| is an scalar the routine always gives the result as scalar\n" \ - "return value.\n\n" \ - "This function uses Humlicek's 12-point formula to approximate the Voigt\n" \ - "profile (J. Humlicek, J. Quant. Spectrosc. Radiat. Transfer, 21, 309 (1978))." 
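The reduced coordinates quoted in the deleted voigt() comment, x = 2*sqrt(ln 2)*(nu - nu0)/FWHM_G and y = sqrt(ln 2)*FWHM_L/FWHM_G, are the standard ones, and the real part of the Faddeeva function at x + iy gives the Voigt shape. A hedged NumPy/SciPy sketch of the three normalized profiles follows; it uses scipy.special.wofz in place of Humlicek's 12-point approximation, so it is an equivalent formulation rather than a transcription of the deleted C code:

    import numpy as np
    from scipy.special import wofz   # Faddeeva function w(z)

    LN2 = np.log(2.0)

    def gauss(x, w, xc=0.0):
        """Normalized Gaussian with FWHM w centred at xc."""
        return (2.0 / w) * np.sqrt(LN2 / np.pi) * np.exp(-4.0 * LN2 * ((x - xc) / w) ** 2)

    def lorentz(x, w, xc=0.0):
        """Normalized Lorentzian with FWHM w centred at xc."""
        return (w / (2.0 * np.pi)) / ((x - xc) ** 2 + (w / 2.0) ** 2)

    def voigt(x, w, xc=0.0):
        """Normalized Voigt profile; w = (Gaussian FWHM, Lorentzian FWHM)."""
        xr = 2.0 * np.sqrt(LN2) * (x - xc) / w[0]   # reduced x from the comment above
        y = np.sqrt(LN2) * w[1] / w[0]              # reduced y from the comment above
        return 2.0 * np.sqrt(LN2 / np.pi) / w[0] * wofz(xr + 1j * y).real

All three return area-normalized profiles parameterized by full width at half maximum, matching the conventions stated in the docstrings above.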
- }, - {NULL, NULL, 0, ""} -}; - - - - -/*** module initialization ***/ - -PyMODINIT_FUNC init_lineshape(void) -{ - PyObject *m, *d; - m = Py_InitModule("_lineshape", _lineshape_Methods); - d = PyModule_GetDict(m); - _Error = PyErr_NewException("_lineshape.error", NULL, NULL); - PyDict_SetItemString(d, "error", _Error); - import_libnumarray(); -} - - - -/* - * Local Variables: - * mode: c - * c-file-style: "Stroustrup" - * fill-column: 80 - * End: - */ diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/image/lib/combine.py python-scipy-0.8.0+dfsg1/scipy/stsci/image/lib/combine.py --- python-scipy-0.7.2+dfsg1/scipy/stsci/image/lib/combine.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/image/lib/combine.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,267 +0,0 @@ -import numpy as np -from _combine import combine as _comb - -def _combine_f(funcstr, arrays, output=None, outtype=None, nlow=0, nhigh=0, badmasks=None): - arrays = [ np.asarray(a) for a in arrays ] - shape = arrays[0].shape - if output is None: - if outtype is not None: - out = arrays[0].astype(outtype) - else: - out = arrays[0].copy() - else: - out = output - for a in tuple(arrays[1:])+(out,): - if a.shape != shape: - raise ValueError("all arrays must have identical shapes") - _comb(arrays, out, nlow, nhigh, badmasks, funcstr) - if output is None: - return out - -def median( arrays, output=None, outtype=None, nlow=0, nhigh=0, badmasks=None): - """median() nominally computes the median pixels for a stack of - identically shaped images. - - arrays specifies a sequence of inputs arrays, which are nominally a - stack of identically shaped images. - - output may be used to specify the output array. If none is specified, - either arrays[0] is copied or a new array of type 'outtype' - is created. - - outtype specifies the type of the output array when no 'output' is - specified. - - nlow specifies the number of pixels to be excluded from median - on the low end of the pixel stack. - - nhigh specifies the number of pixels to be excluded from median - on the high end of the pixel stack. - - badmasks specifies boolean arrays corresponding to 'arrays', where true - indicates that a particular pixel is not to be included in the - median calculation. - - >>> a = np.arange(4) - >>> a = a.reshape((2,2)) - >>> arrays = [a*16, a*4, a*2, a*8] - >>> median(arrays) - array([[ 0, 6], - [12, 18]]) - >>> median(arrays, nhigh=1) - array([[ 0, 4], - [ 8, 12]]) - >>> median(arrays, nlow=1) - array([[ 0, 8], - [16, 24]]) - >>> median(arrays, outtype=np.float32) - array([[ 0., 6.], - [ 12., 18.]], dtype=float32) - >>> bm = np.zeros((4,2,2), dtype=np.bool8) - >>> bm[2,...] = 1 - >>> median(arrays, badmasks=bm) - array([[ 0, 8], - [16, 24]]) - >>> median(arrays, badmasks=threshhold(arrays, high=25)) - array([[ 0, 6], - [ 8, 12]]) - """ - return _combine_f("median", arrays, output, outtype, nlow, nhigh, badmasks) - -def average( arrays, output=None, outtype=None, nlow=0, nhigh=0, badmasks=None): - """average() nominally computes the average pixel value for a stack of - identically shaped images. - - arrays specifies a sequence of inputs arrays, which are nominally a - stack of identically shaped images. - - output may be used to specify the output array. If none is specified, - either arrays[0] is copied or a new array of type 'outtype' - is created. - - outtype specifies the type of the output array when no 'output' is - specified. - - nlow specifies the number of pixels to be excluded from average - on the low end of the pixel stack. 
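The nlow/nhigh rejection described in these combine docstrings (sort the pixel stack, drop the lowest nlow and highest nhigh values, then combine what remains) can be written directly in NumPy. A small stand-in for the documented behaviour, not the deleted _combine extension, and without badmasks support, might look like:

    import numpy as np

    def reject_combine(arrays, func=np.median, nlow=0, nhigh=0):
        """Combine identically shaped images, rejecting extremes per pixel."""
        stack = np.sort(np.asarray(arrays, dtype=float), axis=0)
        kept = stack[nlow:stack.shape[0] - nhigh]   # drop nlow lowest, nhigh highest
        return func(kept, axis=0)

Unlike the doctests above, whose results inherit the integer dtype of arrays[0] when no output or outtype is given, this sketch works in floating point throughout.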
- - nhigh specifies the number of pixels to be excluded from average - on the high end of the pixel stack. - - badmasks specifies boolean arrays corresponding to 'arrays', where true - indicates that a particular pixel is not to be included in the - average calculation. - - >>> a = np.arange(4) - >>> a = a.reshape((2,2)) - >>> arrays = [a*16, a*4, a*2, a*8] - >>> average(arrays) - array([[ 0, 7], - [15, 22]]) - >>> average(arrays, nhigh=1) - array([[ 0, 4], - [ 9, 14]]) - >>> average(arrays, nlow=1) - array([[ 0, 9], - [18, 28]]) - >>> average(arrays, outtype=np.float32) - array([[ 0. , 7.5], - [ 15. , 22.5]], dtype=float32) - >>> bm = np.zeros((4,2,2), dtype=np.bool8) - >>> bm[2,...] = 1 - >>> average(arrays, badmasks=bm) - array([[ 0, 9], - [18, 28]]) - >>> average(arrays, badmasks=threshhold(arrays, high=25)) - array([[ 0, 7], - [ 9, 14]]) - - """ - return _combine_f("average", arrays, output, outtype, nlow, nhigh, badmasks) - -def minimum( arrays, output=None, outtype=None, nlow=0, nhigh=0, badmasks=None): - """minimum() nominally computes the minimum pixel value for a stack of - identically shaped images. - - arrays specifies a sequence of inputs arrays, which are nominally a - stack of identically shaped images. - - output may be used to specify the output array. If none is specified, - either arrays[0] is copied or a new array of type 'outtype' - is created. - - outtype specifies the type of the output array when no 'output' is - specified. - - nlow specifies the number of pixels to be excluded from minimum - on the low end of the pixel stack. - - nhigh specifies the number of pixels to be excluded from minimum - on the high end of the pixel stack. - - badmasks specifies boolean arrays corresponding to 'arrays', where true - indicates that a particular pixel is not to be included in the - minimum calculation. - - >>> a = np.arange(4) - >>> a = a.reshape((2,2)) - >>> arrays = [a*16, a*4, a*2, a*8] - >>> minimum(arrays) - array([[0, 2], - [4, 6]]) - >>> minimum(arrays, nhigh=1) - array([[0, 2], - [4, 6]]) - >>> minimum(arrays, nlow=1) - array([[ 0, 4], - [ 8, 12]]) - >>> minimum(arrays, outtype=np.float32) - array([[ 0., 2.], - [ 4., 6.]], dtype=float32) - >>> bm = np.zeros((4,2,2), dtype=np.bool8) - >>> bm[2,...] = 1 - >>> minimum(arrays, badmasks=bm) - array([[ 0, 4], - [ 8, 12]]) - >>> minimum(arrays, badmasks=threshhold(arrays, low=10)) - array([[ 0, 16], - [16, 12]]) - - """ - return _combine_f("minimum", arrays, output, outtype, nlow, nhigh, badmasks) - -def threshhold(arrays, low=None, high=None, outputs=None): - """threshhold() computes a boolean array 'outputs' with - corresponding elements for each element of arrays. The - boolean value is true where each of the arrays values - is < the low or >= the high threshholds. 
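In other words, threshhold() marks every pixel lying outside the half-open interval [low, high). A compact NumPy equivalent of that rule for a single input array (ignoring the preallocated outputs argument) is:

    import numpy as np

    def threshold_mask(a, low=None, high=None):
        """True where a < low or a >= high, as described above."""
        a = np.asarray(a)
        mask = np.zeros(a.shape, dtype=bool)
        if high is not None:
            mask |= (a >= high)
        if low is not None:
            mask |= (a < low)
        return mask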
- - >>> a=np.arange(100) - >>> a=a.reshape((10,10)) - >>> (threshhold(a, 1, 50)).astype(np.int8) - array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int8) - >>> (threshhold([ range(10)]*10, 3, 7)).astype(np.int8) - array([[1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1], - [1, 1, 1, 0, 0, 0, 0, 1, 1, 1]], dtype=int8) - >>> (threshhold(a, high=50)).astype(np.int8) - array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int8) - >>> (threshhold(a, low=50)).astype(np.int8) - array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int8) - - """ - - if not isinstance(arrays[0], np.ndarray): - return threshhold( np.asarray(arrays), low, high, outputs) - - if outputs is None: - outs = np.zeros(shape=(len(arrays),)+arrays[0].shape, - dtype=np.bool8) - else: - outs = outputs - - for i in range(len(arrays)): - a, out = arrays[i], outs[i] - out[:] = 0 - - if high is not None: - np.greater_equal(a, high, out) - if low is not None: - np.logical_or(out, a < low, out) - else: - if low is not None: - np.less(a, low, out) - - if outputs is None: - return outs - -def _bench(): - """time a 10**6 element median""" - import time - a = np.arange(10**6) - a = a.reshape((1000, 1000)) - arrays = [a*2, a*64, a*16, a*8] - t0 = time.clock() - median(arrays) - print "maskless:", time.clock()-t0 - - a = np.arange(10**6) - a = a.reshape((1000, 1000)) - arrays = [a*2, a*64, a*16, a*8] - t0 = time.clock() - median(arrays, badmasks=np.zeros((1000,1000), dtype=np.bool8)) - print "masked:", time.clock()-t0 diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/image/lib/_image.py python-scipy-0.8.0+dfsg1/scipy/stsci/image/lib/_image.py --- python-scipy-0.7.2+dfsg1/scipy/stsci/image/lib/_image.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/image/lib/_image.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,58 +0,0 @@ -import numpy as np -import scipy.stsci.convolve -import scipy.stsci.convolve._correlate as _correlate - -def _translate(a, dx, dy, output=None, mode="nearest", cval=0.0): - """_translate does positive sub-pixel shifts using bilinear interpolation.""" - - assert 0 <= dx < 1.0 - assert 0 <= dy < 1.0 - - w = (1-dy) * (1-dx) - x = (1-dy) * dx - y = (1-dx) * dy - z = dx * dy - - kernel = np.array([[ z, y ], - [ x, w ]]) - - return convolve.correlate2d(a, kernel, output, mode, cval) - -def translate(a, sdx, sdy, output=None, mode="nearest", cval=0.0): - """translate 
performs a translation of 'a' by (sdx, sdy) - storing the result in 'output'. - - sdx, sdy are float values. - - supported 'mode's include: - 'nearest' elements beyond boundary come from nearest edge pixel. - 'wrap' elements beyond boundary come from the opposite array edge. - 'reflect' elements beyond boundary come from reflection on same array edge. - 'constant' elements beyond boundary are set to 'cval' - """ - a = np.asarray(a) - - sdx, sdy = -sdx, -sdy # Flip sign to match IRAF sign convention - - # _translate works "backwords" due to implementation of 2x2 correlation. - if sdx >= 0 and sdy >= 0: - rotation = 2 - dx, dy = abs(sdx), abs(sdy) - elif sdy < 0 and sdx >= 0: - rotation = 1 - dx, dy = abs(sdy), abs(sdx) - elif sdx < 0 and sdy >= 0: - rotation = 3 - dx, dy = abs(sdy), abs(sdx) - elif sdx < 0 and sdy < 0: - rotation = 0 - dx, dy = abs(sdx), abs(sdy) - - b = np.rot90(a, rotation) - c = _correlate.Shift2d(b, int(dx), int(dy), - mode=convolve.pix_modes[mode]) - d = _translate(c, dx % 1, dy % 1, output, mode, cval) - if output is not None: - output._copyFrom(np.rot90(output, -rotation%4)) - else: - return np.rot90(d, -rotation % 4).astype(a.type()) diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/image/lib/__init__.py python-scipy-0.8.0+dfsg1/scipy/stsci/image/lib/__init__.py --- python-scipy-0.7.2+dfsg1/scipy/stsci/image/lib/__init__.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/image/lib/__init__.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,29 +0,0 @@ -import sys -from _image import * -from combine import * - -__version__ = '2.0' -if sys.version_info < (2,4): - def test(): - import doctest, _image, combine - - t = doctest.Tester(globs = globals()) - - t.rundict(_image.__dict__, "_image") - t.rundict(combine.__dict__, "combine") - - return t.summarize() - -else: - def test(): - import doctest, _image, combine - - finder=doctest.DocTestFinder() - tests=finder.find(_image) - tests.extend(finder.find(combine)) - - runner=doctest.DocTestRunner(verbose=False) - - for test in tests: - runner.run(test) - return runner.summarize() diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/image/SConscript python-scipy-0.8.0+dfsg1/scipy/stsci/image/SConscript --- python-scipy-0.7.2+dfsg1/scipy/stsci/image/SConscript 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/image/SConscript 1970-01-01 01:00:00.000000000 +0100 @@ -1,11 +0,0 @@ -# Last Change: Wed Mar 05 09:00 PM 2008 J -from numpy.distutils.misc_util import get_numpy_include_dirs -from numpy import get_numarray_include -from numscons import GetNumpyEnvironment - -env = GetNumpyEnvironment(ARGUMENTS) - -env.AppendUnique(CPPPATH = [get_numpy_include_dirs(), get_numarray_include()]) -env.AppendUnique(CPPDEFINES = {'NUMPY': '1'}) - -env.DistutilsPythonExtension('_combine', source = 'src/_combinemodule.c') diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/image/SConstruct python-scipy-0.8.0+dfsg1/scipy/stsci/image/SConstruct --- python-scipy-0.7.2+dfsg1/scipy/stsci/image/SConstruct 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/stsci/image/SConstruct 1970-01-01 01:00:00.000000000 +0100 @@ -1,2 +0,0 @@ -from numscons import GetInitEnvironment -GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') diff -Nru python-scipy-0.7.2+dfsg1/scipy/stsci/image/src/_combinemodule.c python-scipy-0.8.0+dfsg1/scipy/stsci/image/src/_combinemodule.c --- python-scipy-0.7.2+dfsg1/scipy/stsci/image/src/_combinemodule.c 2010-04-18 11:02:46.000000000 +0100 +++ 
python-scipy-0.8.0+dfsg1/scipy/stsci/image/src/_combinemodule.c 1970-01-01 01:00:00.000000000 +0100 @@ -1,242 +0,0 @@ -#include "Python.h" - -#include -#include -#include -#include - -#include "numpy/libnumarray.h" - -#define MAX_ARRAYS 1024 - -static PyObject *_Error; - -typedef Float64 (*combiner)(int, int, int, Float64 temp[MAX_ARRAYS]); - - -static int -_mask_and_sort(int ninputs, int index, Float64 **inputs, UInt8 **masks, - Float64 temp[MAX_ARRAYS]) -{ - int i, j, goodpix; - if (masks) { - for (i=j=0; idimensions[dim]; - - /* Allocate and convert 1 temporary row at a time */ - for(i=0; idata; - if (masks) { - for(i=0; idata; - } - toutput = (Float64 *) output->data; - - for(j=0; jdimensions[dim]; i++) { - for(j=0; jdata += inputs[j]->strides[dim]*i; - if (masks) { - masks[j]->data += masks[j]->strides[dim]*i; - } - } - output->data += output->strides[dim]*i; - _combine(f, dim+1, maxdim, ninputs, nlow, nhigh, - inputs, masks, output); - for(j=0; jdata -= inputs[j]->strides[dim]*i; - if (masks) { - masks[j]->data -= masks[j]->strides[dim]*i; - } - } - output->data -= output->strides[dim]*i; - } - } - return 0; -} - -typedef struct -{ - char *name; - combiner fptr; -} fmapping; - -static fmapping functions[] = { - {"median", _inner_median}, - {"average", _inner_average}, - {"minimum", _inner_minimum}, -}; - - -static PyObject * -_Py_combine(PyObject *obj, PyObject *args, PyObject *kw) -{ - PyObject *arrays, *output; - int nlow=0, nhigh=0, narrays; - PyObject *badmasks=Py_None; - char *keywds[] = { "arrays", "output", "nlow", "nhigh", - "badmasks", "kind", NULL }; - char *kind; - combiner f; - PyArrayObject *arr[MAX_ARRAYS], *bmk[MAX_ARRAYS], *toutput; - int i; - - if (!PyArg_ParseTupleAndKeywords(args, kw, "OO|iiOs:combine", keywds, - &arrays, &output, &nlow, &nhigh, &badmasks, &kind)) - return NULL; - - narrays = PySequence_Length(arrays); - if (narrays < 0) - return PyErr_Format( - PyExc_TypeError, "combine: arrays is not a sequence"); - if (narrays > MAX_ARRAYS) - return PyErr_Format( - PyExc_TypeError, "combine: too many arrays."); - - for(i=0; ind, narrays, nlow, nhigh, - arr, (badmasks != Py_None ? 
bmk : NULL), - toutput) < 0) - return NULL; - - for(i=0; i #include +#include + BZ_NAMESPACE(blitz) - + /* Helper functions */ - + template inline T blitz_sqr(T x) { return x*x; } @@ -59,7 +61,7 @@ /* Unary functions that return same type as argument */ - + #define BZ_DEFINE_UNARY_FUNC(name,fun) \ template \ struct name { \ @@ -68,7 +70,7 @@ static inline T_numtype \ apply(T_numtype1 a) \ { return fun(a); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1) \ @@ -114,13 +116,13 @@ BZ_DEFINE_UNARY_FUNC(Fn_y0,BZ_IEEEMATHFN_SCOPE(y0)) BZ_DEFINE_UNARY_FUNC(Fn_y1,BZ_IEEEMATHFN_SCOPE(y1)) #endif - + #ifdef BZ_HAVE_SYSTEM_V_MATH BZ_DEFINE_UNARY_FUNC(Fn__class,BZ_IEEEMATHFN_SCOPE(_class)) BZ_DEFINE_UNARY_FUNC(Fn_nearest,BZ_IEEEMATHFN_SCOPE(nearest)) BZ_DEFINE_UNARY_FUNC(Fn_rsqrt,BZ_IEEEMATHFN_SCOPE(rsqrt)) #endif - + BZ_DEFINE_UNARY_FUNC(Fn_sqr,BZ_BLITZ_SCOPE(blitz_sqr)) BZ_DEFINE_UNARY_FUNC(Fn_cube,BZ_BLITZ_SCOPE(blitz_cube)) BZ_DEFINE_UNARY_FUNC(Fn_pow4,BZ_BLITZ_SCOPE(blitz_pow4)) @@ -130,7 +132,7 @@ BZ_DEFINE_UNARY_FUNC(Fn_pow8,BZ_BLITZ_SCOPE(blitz_pow8)) /* Unary functions that return a specified type */ - + #define BZ_DEFINE_UNARY_FUNC_RET(name,fun,ret) \ template \ struct name { \ @@ -139,7 +141,7 @@ static inline T_numtype \ apply(T_numtype1 a) \ { return fun(a); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1) \ @@ -154,16 +156,16 @@ #ifdef BZ_HAVE_IEEE_MATH BZ_DEFINE_UNARY_FUNC_RET(Fn_ilogb,BZ_IEEEMATHFN_SCOPE(ilogb),int) #endif - + #ifdef BZ_HAVE_SYSTEM_V_MATH BZ_DEFINE_UNARY_FUNC_RET(Fn_itrunc,BZ_IEEEMATHFN_SCOPE(itrunc),int) BZ_DEFINE_UNARY_FUNC_RET(Fn_uitrunc,BZ_IEEEMATHFN_SCOPE(uitrunc),unsigned int) #endif - - + + #ifdef BZ_HAVE_COMPLEX /* Specialization of unary functor for complex type */ - + #define BZ_DEFINE_UNARY_CFUNC(name,fun) \ template \ struct name< complex > { \ @@ -173,7 +175,7 @@ static inline T_numtype \ apply(T_numtype1 a) \ { return fun(a); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1) \ @@ -211,7 +213,7 @@ BZ_DEFINE_UNARY_CFUNC(Fn_pow8,BZ_BLITZ_SCOPE(blitz_pow8)) /* Unary functions that apply only to complex and return T */ - + #define BZ_DEFINE_UNARY_CFUNC2(name,fun) \ template \ struct name; \ @@ -224,7 +226,7 @@ static inline T_numtype \ apply(T_numtype1 a) \ { return fun(a); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1) \ @@ -242,11 +244,11 @@ BZ_DEFINE_UNARY_CFUNC2(Fn_norm,BZ_CMATHFN_SCOPE(norm)) BZ_DEFINE_UNARY_CFUNC2(Fn_real,BZ_CMATHFN_SCOPE(real)) #endif // BZ_HAVE_COMPLEX_FCNS - + #endif // BZ_HAVE_COMPLEX - + /* Binary functions that return type based on type promotion */ - + #define BZ_DEFINE_BINARY_FUNC(name,fun) \ template \ struct name { \ @@ -255,7 +257,7 @@ static inline T_numtype \ apply(T_numtype1 a, T_numtype2 b) \ { return fun(a,b); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1, \ @@ -273,7 +275,7 @@ BZ_DEFINE_BINARY_FUNC(Fn_atan2,BZ_MATHFN_SCOPE(atan2)) BZ_DEFINE_BINARY_FUNC(Fn_fmod,BZ_MATHFN_SCOPE(fmod)) BZ_DEFINE_BINARY_FUNC(Fn_pow,BZ_MATHFN_SCOPE(pow)) - + #ifdef BZ_HAVE_SYSTEM_V_MATH BZ_DEFINE_BINARY_FUNC(Fn_copysign,BZ_IEEEMATHFN_SCOPE(copysign)) BZ_DEFINE_BINARY_FUNC(Fn_drem,BZ_IEEEMATHFN_SCOPE(drem)) @@ -282,9 +284,9 @@ 
BZ_DEFINE_BINARY_FUNC(Fn_remainder,BZ_IEEEMATHFN_SCOPE(remainder)) BZ_DEFINE_BINARY_FUNC(Fn_scalb,BZ_IEEEMATHFN_SCOPE(scalb)) #endif - + /* Binary functions that return a specified type */ - + #define BZ_DEFINE_BINARY_FUNC_RET(name,fun,ret) \ template \ struct name { \ @@ -293,7 +295,7 @@ static inline T_numtype \ apply(T_numtype1 a, T_numtype2 b) \ { return fun(a,b); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1, \ @@ -311,10 +313,10 @@ #ifdef BZ_HAVE_SYSTEM_V_MATH BZ_DEFINE_BINARY_FUNC_RET(Fn_unordered,BZ_IEEEMATHFN_SCOPE(unordered),int) #endif - + #ifdef BZ_HAVE_COMPLEX /* Specialization of binary functor for complex type */ - + #define BZ_DEFINE_BINARY_CFUNC(name,fun) \ template \ struct name< complex, complex > { \ @@ -325,7 +327,7 @@ static inline T_numtype \ apply(T_numtype1 a, T_numtype2 b) \ { return fun(a,b); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1, \ @@ -349,7 +351,7 @@ static inline T_numtype \ apply(T_numtype1 a, T_numtype2 b) \ { return fun(a,b); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1, \ @@ -373,7 +375,7 @@ static inline T_numtype \ apply(T_numtype1 a, T_numtype2 b) \ { return fun(a,b); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1, \ @@ -393,7 +395,7 @@ #endif /* Binary functions that apply only to T and return complex */ - + #define BZ_DEFINE_BINARY_FUNC_CRET(name,fun) \ template \ struct name; \ @@ -407,7 +409,7 @@ static inline T_numtype \ apply(T_numtype1 a, T_numtype2 b) \ { return fun(a,b); } \ - \ + \ template \ static inline void prettyPrint(BZ_STD_SCOPE(string) &str, \ prettyPrintFormat& format, const T1& t1, \ @@ -425,11 +427,11 @@ #ifdef BZ_HAVE_COMPLEX_FCNS BZ_DEFINE_BINARY_FUNC_CRET(Fn_polar,BZ_CMATHFN_SCOPE(polar)) #endif - + #endif // BZ_HAVE_COMPLEX - + /* Ternary functions that return type based on type promotion */ - + #define BZ_DEFINE_TERNARY_FUNC(name,fun) \ template \ @@ -458,7 +460,7 @@ }; /* Ternary functions that return a specified type */ - + #define BZ_DEFINE_TERNARY_FUNC_RET(name,fun,ret) \ template \ @@ -485,9 +487,9 @@ } \ }; - + /* These functions don't quite fit the usual patterns */ - + // abs() Absolute value template struct Fn_abs; @@ -497,11 +499,11 @@ struct Fn_abs< int > { typedef int T_numtype1; typedef int T_numtype; - + static inline T_numtype apply(T_numtype1 a) { return BZ_MATHFN_SCOPE(abs)(a); } - + template static inline void prettyPrint(BZ_STD_SCOPE(string) &str, prettyPrintFormat& format, const T1& t1) @@ -518,11 +520,11 @@ struct Fn_abs< long int > { typedef long int T_numtype1; typedef long int T_numtype; - + static inline T_numtype apply(T_numtype1 a) { return BZ_MATHFN_SCOPE(labs)(a); } - + template static inline void prettyPrint(BZ_STD_SCOPE(string) &str, prettyPrintFormat& format, const T1& t1) @@ -539,11 +541,11 @@ struct Fn_abs< float > { typedef float T_numtype1; typedef float T_numtype; - + static inline T_numtype apply(T_numtype1 a) { return BZ_MATHFN_SCOPE(fabs)(a); } - + template static inline void prettyPrint(BZ_STD_SCOPE(string) &str, prettyPrintFormat& format, const T1& t1) @@ -560,11 +562,11 @@ struct Fn_abs< double > { typedef double T_numtype1; typedef double T_numtype; - + static inline T_numtype apply(T_numtype1 a) { return BZ_MATHFN_SCOPE(fabs)(a); } - + template static inline void 
prettyPrint(BZ_STD_SCOPE(string) &str, prettyPrintFormat& format, const T1& t1) @@ -581,11 +583,11 @@ struct Fn_abs< long double > { typedef long double T_numtype1; typedef long double T_numtype; - + static inline T_numtype apply(T_numtype1 a) { return BZ_MATHFN_SCOPE(fabs)(a); } - + template static inline void prettyPrint(BZ_STD_SCOPE(string) &str, prettyPrintFormat& format, const T1& t1) @@ -603,11 +605,11 @@ struct Fn_abs< complex > { typedef complex T_numtype1; typedef T T_numtype; - + static inline T_numtype apply(T_numtype1 a) { return BZ_CMATHFN_SCOPE(abs)(a); } - + template static inline void prettyPrint(BZ_STD_SCOPE(string) &str, prettyPrintFormat& format, const T1& t1) @@ -626,7 +628,7 @@ template struct Fn_isnan { typedef int T_numtype; - + static inline T_numtype apply(T_numtype1 a) { @@ -636,7 +638,7 @@ return BZ_IEEEMATHFN_SCOPE(isnan)(a); #endif } - + template static inline void prettyPrint(BZ_STD_SCOPE(string) &str, prettyPrintFormat& format, const T1& t1) @@ -654,7 +656,7 @@ template struct Cast { typedef T_cast T_numtype; - + static inline T_numtype apply(T_numtype1 a) { return T_numtype(a); } diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/blitz/blitz/mathfunc.h python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/mathfunc.h --- python-scipy-0.7.2+dfsg1/scipy/weave/blitz/blitz/mathfunc.h 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/blitz/blitz/mathfunc.h 2010-07-26 15:48:37.000000000 +0100 @@ -12,6 +12,8 @@ #include #endif +#include + BZ_NAMESPACE(blitz) // abs(P_numtype1) Absolute value diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/blitz_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/blitz_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/blitz_tools.py 2010-03-03 14:34:13.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/weave/blitz_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -32,7 +32,10 @@ # of time. It also can cause core-dumps if the sizes of the inputs # aren't compatible. if check_size and not size_check.check_expr(expr,local_dict,global_dict): - raise 'inputs failed to pass size check.' + if sys.version_info < (2, 6): + raise "inputs failed to pass size check." + else: + raise ValueError("inputs failed to pass size check.") # 2. try local cache try: diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/build_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/build_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/build_tools.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/build_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -24,6 +24,7 @@ import exceptions import commands import subprocess +import warnings import platform_info @@ -414,8 +415,9 @@ # make sure build_dir exists and is writable if build_dir and (not os.path.exists(build_dir) or not os.access(build_dir,os.W_OK)): - print "warning: specified build_dir '%s' does not exist " \ - "or is not writable. Trying default locations" % build_dir + msg = "specified build_dir '%s' does not exist " \ + "or is not writable. 
Trying default locations" % build_dir + warnings.warn(msg) build_dir = None if build_dir is None: diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/bytecodecompiler.py python-scipy-0.8.0+dfsg1/scipy/weave/bytecodecompiler.py --- python-scipy-0.7.2+dfsg1/scipy/weave/bytecodecompiler.py 2010-03-03 14:34:13.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/weave/bytecodecompiler.py 2010-07-26 15:48:37.000000000 +0100 @@ -6,6 +6,7 @@ #**************************************************************************# #* *# #**************************************************************************# +import sys import inspect import accelerate_tools @@ -237,7 +238,10 @@ elif goto is None: return next # Normal else: - raise 'xx' + if sys.version_info < (2, 6): + raise "Executing code failed." + else: + raise ValueError("Executing code failed.") symbols = { 0: 'less', 1: 'lesseq', 2: 'equal', 3: 'notequal', 4: 'greater', 5: 'greatereq', 6: 'in', 7: 'not in', @@ -977,7 +981,6 @@ var_name = self.codeobject.co_names[var_num] # First, figure out who owns this global - import sys myHash = id(self.function.func_globals) for module_name in sys.modules.keys(): module = sys.modules[module_name] diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/catalog.py python-scipy-0.8.0+dfsg1/scipy/weave/catalog.py --- python-scipy-0.7.2+dfsg1/scipy/weave/catalog.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/catalog.py 2010-07-26 15:48:37.000000000 +0100 @@ -178,28 +178,35 @@ # Use a cached value for fast return if possible if hasattr(default_dir,"cached_path") and \ - os.path.exists(default_dir.cached_path): + os.path.exists(default_dir.cached_path) and \ + os.access(default_dir.cached_path, os.W_OK): return default_dir.cached_path python_name = "python%d%d_compiled" % tuple(sys.version_info[:2]) + path_candidates = [] if sys.platform != 'win32': try: - path = os.path.join(os.environ['HOME'],'.' + python_name) + path_candidates.append(os.path.join(os.environ['HOME'], + '.' + python_name)) except KeyError: - temp_dir = `os.getuid()` + '_' + python_name - path = os.path.join(tempfile.gettempdir(),temp_dir) + pass - # add a subdirectory for the OS. - # It might be better to do this at a different location so that - # it wasn't only the default directory that gets this behavior. - #path = os.path.join(path,sys.platform) + temp_dir = `os.getuid()` + '_' + python_name + path_candidates.append(os.path.join(tempfile.gettempdir(), temp_dir)) else: - path = os.path.join(tempfile.gettempdir(),"%s"%whoami(),python_name) + path_candidates.append(os.path.join(tempfile.gettempdir(), + "%s" % whoami(), python_name)) - if not os.path.exists(path): - create_dir(path) - os.chmod(path,0700) # make it only accessible by this user. - if not is_writable(path): + writable = False + for path in path_candidates: + if not os.path.exists(path): + create_dir(path) + os.chmod(path, 0700) # make it only accessible by this user. + if is_writable(path): + writable = True + break + + if not writable: print 'warning: default directory is not write accessible.' 
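The rewritten default_dir() logic in this hunk reduces to: collect candidate directories, create each one with user-only permissions if necessary, and keep the first that is writable. A simplified standalone sketch of that pattern (hypothetical helper name, without the cached_path fast path or the per-platform candidate list) is:

    import os
    import stat

    def first_writable_dir(candidates):
        """Return the first candidate that exists (or can be created) and is writable."""
        for path in candidates:
            if not os.path.exists(path):
                try:
                    os.makedirs(path)
                    os.chmod(path, stat.S_IRWXU)   # only accessible by this user
                except OSError:
                    continue
            if os.access(path, os.W_OK):
                return path
        return None   # caller falls back to a warning, as catalog.py does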
print 'default:', path @@ -373,11 +380,7 @@ paths = [] if 'PYTHONCOMPILED' in os.environ: path_string = os.environ['PYTHONCOMPILED'] - if sys.platform == 'win32': - #probably should also look in registry - paths = path_string.split(';') - else: - paths = path_string.split(':') + paths = path_string.split(os.path.pathsep) return paths def build_search_order(self): diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/c_spec.py python-scipy-0.8.0+dfsg1/scipy/weave/c_spec.py --- python-scipy-0.7.2+dfsg1/scipy/weave/c_spec.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/c_spec.py 2010-07-26 15:48:37.000000000 +0100 @@ -266,9 +266,12 @@ def c_to_py_code(self): # !! Need to dedent returned code. code = """ - PyObject* file_to_py(FILE* file, char* name, char* mode) + PyObject* file_to_py(FILE* file, const char* name, + const char* mode) { - return (PyObject*) PyFile_FromFile(file, name, mode, fclose); + return (PyObject*) PyFile_FromFile(file, + const_cast(name), + const_cast(mode), fclose); } """ return code diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/ext_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/ext_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/ext_tools.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/ext_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -47,7 +47,8 @@ arg_string_list = self.arg_specs.variable_as_strings() + ['"local_dict"'] arg_strings = ','.join(arg_string_list) if arg_strings: arg_strings += ',' - declare_kwlist = 'static char *kwlist[] = {%s NULL};\n' % arg_strings + declare_kwlist = 'static const char *kwlist[] = {%s NULL};\n' % \ + arg_strings py_objects = ', '.join(self.arg_specs.py_pointers()) init_flags = ', '.join(self.arg_specs.init_flags()) @@ -74,7 +75,8 @@ format = "O"* len(self.arg_specs) + "|O" + ':' + self.name parse_tuple = 'if(!PyArg_ParseTupleAndKeywords(args,' \ - 'kywds,"%s",kwlist,%s))\n' % (format,ref_string) + 'kywds,"%s",const_cast(kwlist),%s))\n' % \ + (format,ref_string) parse_tuple += ' return NULL;\n' return declare_return + declare_kwlist + declare_py_objects \ diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/inline_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/inline_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/inline_tools.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/inline_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -79,21 +79,24 @@ function_code = indent(self.code_block,4) #local_dict_code = indent(self.arg_local_dict_code(),4) - try_code = 'try \n' \ - '{ \n' \ - ' PyObject* raw_locals __attribute__ ((unused));\n' \ - ' raw_locals = py_to_raw_dict(' \ - 'py__locals,"_locals");\n' \ - ' PyObject* raw_globals __attribute__ ((unused));\n' \ - ' raw_globals = py_to_raw_dict(' \ - 'py__globals,"_globals");\n' + \ - ' /* argument conversion code */ \n' + \ - decl_code + \ - ' /* inline code */ \n' + \ - function_code + \ - ' /*I would like to fill in changed ' \ - 'locals and globals here...*/ \n' \ - '\n} \n' + try_code = \ + ' try \n' \ + ' { \n' \ + '#if defined(__GNUC__) || defined(__ICC)\n' \ + ' PyObject* raw_locals __attribute__ ((unused));\n' \ + ' PyObject* raw_globals __attribute__ ((unused));\n' \ + '#else\n' \ + ' PyObject* raw_locals;\n' \ + ' PyObject* raw_globals;\n' \ + '#endif\n' \ + ' raw_locals = py_to_raw_dict(py__locals,"_locals");\n' \ + ' raw_globals = py_to_raw_dict(py__globals,"_globals");\n' \ + ' /* argument conversion code */ \n' \ + + decl_code + \ + ' /* inline code */ \n' \ + + function_code + \ + ' /*I would 
like to fill in changed locals and globals here...*/ \n' \ + ' }\n' catch_code = "catch(...) \n" \ "{ \n" + \ " return_val = py::object(); \n" \ @@ -110,7 +113,7 @@ all_code = self.function_declaration_code() + \ indent(self.parse_tuple_code(),4) + \ - indent(try_code,4) + \ + try_code + \ indent(catch_code,4) + \ return_code @@ -138,139 +141,156 @@ auto_downcast=1, newarr_converter=0, **kw): - """ Inline C/C++ code within Python scripts. + """ + Inline C/C++ code within Python scripts. + + ``inline()`` compiles and executes C/C++ code on the fly. Variables + in the local and global Python scope are also available in the + C/C++ code. Values are passed to the C/C++ code by assignment + much like variables passed are passed into a standard Python + function. Values are returned from the C/C++ code through a + special argument called return_val. Also, the contents of + mutable objects can be changed within the C/C++ code and the + changes remain after the C code exits and returns to Python. + + inline has quite a few options as listed below. Also, the keyword + arguments for distutils extension modules are accepted to + specify extra information needed for compiling. + + Parameters + ---------- + code : string + A string of valid C++ code. It should not specify a return + statement. Instead it should assign results that need to be + returned to Python in the `return_val`. + arg_names : [str], optional + A list of Python variable names that should be transferred from + Python into the C/C++ code. It defaults to an empty string. + local_dict : dict, optional + If specified, it is a dictionary of values that should be used as + the local scope for the C/C++ code. If local_dict is not + specified the local dictionary of the calling function is used. + global_dict : dict, optional + If specified, it is a dictionary of values that should be used as + the global scope for the C/C++ code. If `global_dict` is not + specified, the global dictionary of the calling function is used. + force : {0, 1}, optional + If 1, the C++ code is compiled every time inline is called. This + is really only useful for debugging, and probably only useful if + your editing `support_code` a lot. + compiler : str, optional + The name of compiler to use when compiling. On windows, it + understands 'msvc' and 'gcc' as well as all the compiler names + understood by distutils. On Unix, it'll only understand the + values understood by distutils. (I should add 'gcc' though to + this). + + On windows, the compiler defaults to the Microsoft C++ compiler. + If this isn't available, it looks for mingw32 (the gcc compiler). + + On Unix, it'll probably use the same compiler that was used when + compiling Python. Cygwin's behavior should be similar. + verbose : {0,1,2}, optional + Speficies how much much information is printed during the compile + phase of inlining code. 0 is silent (except on windows with msvc + where it still prints some garbage). 1 informs you when compiling + starts, finishes, and how long it took. 2 prints out the command + lines for the compilation process and can be useful if your having + problems getting code to work. Its handy for finding the name of + the .cpp file if you need to examine it. verbose has no affect if + the compilation isn't necessary. + support_code : str, optional + A string of valid C++ code declaring extra code that might be + needed by your compiled function. This could be declarations of + functions, classes, or structures. 
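As a concrete illustration of the code, arg_names and return_val parameters documented in this docstring (the remaining parameters continue below), a minimal call, assuming a working C++ compiler is on the path, would be:

    from scipy import weave

    a = 3
    code = """
           int b = a * a;      // 'a' arrives converted from the Python scope
           return_val = b;     // values go back to Python through return_val
           """
    result = weave.inline(code, ['a'], verbose=0)
    # result == 9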
+ headers : [str], optional + A list of strings specifying header files to use when compiling + the code. The list might look like ``["","'my_header'"]``. + Note that the header strings need to be in a form than can be + pasted at the end of a ``#include`` statement in the C++ code. + customize : base_info.custom_info, optional + An alternative way to specify `support_code`, `headers`, etc. needed + by the function. See :mod:`scipy.weave.base_info` for more + details. (not sure this'll be used much). + type_converters : [type converters], optional + These guys are what convert Python data types to C/C++ data types. + If you'd like to use a different set of type conversions than the + default, specify them here. Look in the type conversions section + of the main documentation for examples. + auto_downcast : {1,0}, optional + This only affects functions that have numpy arrays as input + variables. Setting this to 1 will cause all floating point values + to be cast as float instead of double if all the Numeric arrays + are of type float. If even one of the arrays has type double or + double complex, all variables maintain there standard + types. + newarr_converter : int, optional + Unused. + + Other Parameters + ---------------- + Relevant :mod:`distutils` keywords. These are duplicated from Greg Ward's + :class:`distutils.extension.Extension` class for convenience: + + sources : [string] + list of source filenames, relative to the distribution root + (where the setup script lives), in Unix form (slash-separated) + for portability. Source files may be C, C++, SWIG (.i), + platform-specific resource files, or whatever else is recognized + by the "build_ext" command as source for a Python extension. + + .. note:: The `module_path` file is always appended to the front of + this list + include_dirs : [string] + list of directories to search for C/C++ header files (in Unix + form for portability) + define_macros : [(name : string, value : string|None)] + list of macros to define; each macro is defined using a 2-tuple, + where 'value' is either the string to define it to or None to + define it without a particular value (equivalent of "#define + FOO" in source or -DFOO on Unix C compiler command line) + undef_macros : [string] + list of macros to undefine explicitly + library_dirs : [string] + list of directories to search for C/C++ libraries at link time + libraries : [string] + list of library names (not filenames or paths) to link against + runtime_library_dirs : [string] + list of directories to search for C/C++ libraries at run time + (for shared extensions, this is when the extension is loaded) + extra_objects : [string] + list of extra files to link with (eg. object files not implied + by 'sources', static library that must be explicitly specified, + binary resource files, etc.) + extra_compile_args : [string] + any extra platform- and compiler-specific information to use + when compiling the source files in 'sources'. For platforms and + compilers where "command line" makes sense, this is typically a + list of command-line arguments, but for other platforms it could + be anything. + extra_link_args : [string] + any extra platform- and compiler-specific information to use + when linking object files together to create the extension (or + to create a new static Python interpreter). Similar + interpretation as for 'extra_compile_args'. + export_symbols : [string] + list of symbols to be exported from a shared extension. 
Not + used on all platforms, and not generally necessary for Python + extensions, which typically export exactly one symbol: "init" + + extension_name. + swig_opts : [string] + any extra options to pass to SWIG if a source file has the .i + extension. + depends : [string] + list of files that the extension depends on + language : string + extension language (i.e. "c", "c++", "objc"). Will be detected + from the source extensions if not provided. + + See Also + -------- + distutils.extension.Extension : Describes additional parameters. - inline() compiles and executes C/C++ code on the fly. Variables - in the local and global Python scope are also available in the - C/C++ code. Values are passed to the C/C++ code by assignment - much like variables passed are passed into a standard Python - function. Values are returned from the C/C++ code through a - special argument called return_val. Also, the contents of - mutable objects can be changed within the C/C++ code and the - changes remain after the C code exits and returns to Python. - - inline has quite a few options as listed below. Also, the keyword - arguments for distutils extension modules are accepted to - specify extra information needed for compiling. - - code -- string. A string of valid C++ code. It should not specify a - return statement. Instead it should assign results that - need to be returned to Python in the return_val. - arg_names -- optional. list of strings. A list of Python variable names - that should be transferred from Python into the C/C++ - code. It defaults to an empty string. - local_dict -- optional. dictionary. If specified, it is a dictionary - of values that should be used as the local scope for the - C/C++ code. If local_dict is not specified the local - dictionary of the calling function is used. - global_dict -- optional. dictionary. If specified, it is a dictionary - of values that should be used as the global scope for - the C/C++ code. If global_dict is not specified the - global dictionary of the calling function is used. - force -- optional. 0 or 1. default 0. If 1, the C++ code is - compiled every time inline is called. This is really - only useful for debugging, and probably only useful if - your editing support_code a lot. - compiler -- optional. string. The name of compiler to use when - compiling. On windows, it understands 'msvc' and 'gcc' - as well as all the compiler names understood by - distutils. On Unix, it'll only understand the values - understood by distutils. ( I should add 'gcc' though - to this). - - On windows, the compiler defaults to the Microsoft C++ - compiler. If this isn't available, it looks for mingw32 - (the gcc compiler). - - On Unix, it'll probably use the same compiler that was - used when compiling Python. Cygwin's behavior should be - similar. - verbose -- optional. 0,1, or 2. default 0. Speficies how much - much information is printed during the compile phase - of inlining code. 0 is silent (except on windows with - msvc where it still prints some garbage). 1 informs - you when compiling starts, finishes, and how long it - took. 2 prints out the command lines for the compilation - process and can be useful if your having problems - getting code to work. Its handy for finding the name - of the .cpp file if you need to examine it. verbose has - no affect if the compilation isn't necessary. - support_code -- optional. string. A string of valid C++ code declaring - extra code that might be needed by your compiled - function. 
This could be declarations of functions, - classes, or structures. - headers -- optional. list of strings. A list of strings specifying - header files to use when compiling the code. The list - might look like ["","'my_header'"]. Note that - the header strings need to be in a form than can be - pasted at the end of a #include statement in the - C++ code. - customize -- optional. base_info.custom_info object. An alternative - way to specify support_code, headers, etc. needed by - the function see the compiler.base_info module for more - details. (not sure this'll be used much). - type_converters -- optional. list of type converters. These - guys are what convert Python data types to C/C++ data - types. If you'd like to use a different set of type - conversions than the default, specify them here. Look - in the type conversions section of the main - documentation for examples. - auto_downcast -- optional. 0 or 1. default 1. This only affects - functions that have NumPy arrays as input variables. - Setting this to 1 will cause all floating point values - to be cast as float instead of double if all the - Numeric arrays are of type float. If even one of the - arrays has type double or double complex, all - variables maintain there standard types. - - Distutils keywords. These are cut and pasted from Greg Ward's - distutils.extension.Extension class for convenience: - - sources : [string] - list of source filenames, relative to the distribution root - (where the setup script lives), in Unix form (slash-separated) - for portability. Source files may be C, C++, SWIG (.i), - platform-specific resource files, or whatever else is recognized - by the "build_ext" command as source for a Python extension. - Note: The module_path file is always appended to the front of this - list - include_dirs : [string] - list of directories to search for C/C++ header files (in Unix - form for portability) - define_macros : [(name : string, value : string|None)] - list of macros to define; each macro is defined using a 2-tuple, - where 'value' is either the string to define it to or None to - define it without a particular value (equivalent of "#define - FOO" in source or -DFOO on Unix C compiler command line) - undef_macros : [string] - list of macros to undefine explicitly - library_dirs : [string] - list of directories to search for C/C++ libraries at link time - libraries : [string] - list of library names (not filenames or paths) to link against - runtime_library_dirs : [string] - list of directories to search for C/C++ libraries at run time - (for shared extensions, this is when the extension is loaded) - extra_objects : [string] - list of extra files to link with (eg. object files not implied - by 'sources', static library that must be explicitly specified, - binary resource files, etc.) - extra_compile_args : [string] - any extra platform- and compiler-specific information to use - when compiling the source files in 'sources'. For platforms and - compilers where "command line" makes sense, this is typically a - list of command-line arguments, but for other platforms it could - be anything. - extra_link_args : [string] - any extra platform- and compiler-specific information to use - when linking object files together to create the extension (or - to create a new static Python interpreter). Similar - interpretation as for 'extra_compile_args'. - export_symbols : [string] - list of symbols to be exported from a shared extension. 
Not - used on all platforms, and not generally necessary for Python - extensions, which typically export exactly one symbol: "init" + - extension_name. """ # this grabs the local variables from the *previous* call # frame -- that is the locals from the function that called diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/platform_info.py python-scipy-0.8.0+dfsg1/scipy/weave/platform_info.py --- python-scipy-0.7.2+dfsg1/scipy/weave/platform_info.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/platform_info.py 2010-07-26 15:48:37.000000000 +0100 @@ -168,7 +168,7 @@ result = 0 cmd = '%s -v' % name try: - p = subprocess.Popen([str(name), '-v'], shell=True, close_fds=True, + p = subprocess.Popen([str(name), '-v'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) str_result = p.stdout.read() if 'Reading specs' in str_result: @@ -186,7 +186,7 @@ """ result = 0 try: - p = subprocess.Popen(['cl'], shell=True, close_fds=True, + p = subprocess.Popen(['cl'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) str_result = p.stdout.read() if 'Microsoft' in str_result: diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/scxx/sequence.h python-scipy-0.8.0+dfsg1/scipy/weave/scxx/sequence.h --- python-scipy-0.7.2+dfsg1/scipy/weave/scxx/sequence.h 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/scxx/sequence.h 2010-07-26 15:48:37.000000000 +0100 @@ -90,11 +90,11 @@ object val = value; return count(val); }; - int count(char* value) const { + int count(const char* value) const { object val = value; return count(val); }; - int count(std::string& value) const { + int count(const std::string& value) const { object val = value.c_str(); return count(val); }; diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_blitz_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_blitz_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_blitz_tools.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_blitz_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -131,7 +131,6 @@ # """ result = a + b""" # expr = "result = a + b" # self.generic_2d(expr) - @dec.knownfailureif(True) @dec.slow def test_5point_avg_2d_float(self): """ result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] @@ -141,7 +140,6 @@ "+ b[1:-1,2:] + b[1:-1,:-2]) / 5." self.generic_2d(expr,float32) - @dec.knownfailureif(True) @dec.slow def test_5point_avg_2d_double(self): """ result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] @@ -170,7 +168,6 @@ "+ b[1:-1,2:] + b[1:-1,:-2]) / 5." self.generic_2d(expr,complex64) - @dec.knownfailureif(True) @dec.slow def test_5point_avg_2d_complex_double(self): """ result[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1] diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_build_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_build_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_build_tools.py 2010-03-03 14:34:13.000000000 +0000 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_build_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -2,12 +2,17 @@ # tests for MingW32Compiler # don't know how to test gcc_exists() and msvc_exists()... 
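The earlier build_tools.py hunk swapped a bare print for warnings.warn, and the test changes just below filter exactly that message. The general pattern, sketched here with a hypothetical function name, is:

    import warnings

    def check_build_dir(build_dir):
        # library side: report the problem as a warning rather than printing
        warnings.warn("specified build_dir '%s' does not exist "
                      "or is not writable. Trying default locations" % build_dir)

    # test side: silence only warnings whose message starts with this text
    warnings.filterwarnings('ignore', message="specified build_dir")
    check_build_dir('/nonexistent')   # emits nothing; the warning is filtered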
-import os, sys, tempfile +import os, sys, tempfile, warnings from numpy.testing import * from scipy.weave import build_tools +# filter warnings generated by checking for bad paths +warnings.filterwarnings('ignore', + message="specified build_dir", + module='scipy.weave') + def is_writable(val): return os.access(val,os.W_OK) diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_c_spec.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_c_spec.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_c_spec.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_c_spec.py 2010-07-26 15:48:37.000000000 +0100 @@ -49,27 +49,22 @@ class IntConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_string(self): s = c_spec.int_converter() assert( not s.type_match('string') ) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_int(self): s = c_spec.int_converter() assert(s.type_match(5)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_float(self): s = c_spec.int_converter() assert(not s.type_match(5.)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_complex(self): s = c_spec.int_converter() assert(not s.type_match(5.+1j)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_var_in(self): mod_name = 'int_var_in' + self.compiler @@ -94,7 +89,6 @@ except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int_return(self): mod_name = sys._getframe().f_code.co_name + self.compiler @@ -116,27 +110,22 @@ class FloatConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_string(self): s = c_spec.float_converter() assert( not s.type_match('string')) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_int(self): s = c_spec.float_converter() assert(not s.type_match(5)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_float(self): s = c_spec.float_converter() assert(s.type_match(5.)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_complex(self): s = c_spec.float_converter() assert(not s.type_match(5.+1j)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_float_var_in(self): mod_name = sys._getframe().f_code.co_name + self.compiler @@ -162,7 +151,6 @@ pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_float_return(self): mod_name = sys._getframe().f_code.co_name + self.compiler @@ -183,27 +171,22 @@ class ComplexConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_string(self): s = c_spec.complex_converter() assert( not s.type_match('string') ) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_int(self): s = c_spec.complex_converter() assert(not s.type_match(5)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_float(self): s = c_spec.complex_converter() assert(not s.type_match(5.)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_complex(self): s = c_spec.complex_converter() assert(s.type_match(5.+1j)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex_var_in(self): mod_name = sys._getframe().f_code.co_name + self.compiler @@ -228,7 +211,6 @@ except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex_return(self): mod_name = sys._getframe().f_code.co_name + 
self.compiler @@ -253,7 +235,6 @@ class FileConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_py_to_file(self): import tempfile @@ -266,7 +247,6 @@ file.close() file = open(file_name,'r') assert(file.read() == "hello bob") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_file_to_py(self): import tempfile @@ -274,9 +254,9 @@ # not sure I like Py::String as default -- might move to std::sting # or just plain char* code = """ - char* _file_name = (char*) file_name.c_str(); - FILE* file = fopen(_file_name,"w"); - return_val = file_to_py(file,_file_name,"w"); + const char* _file_name = file_name.c_str(); + FILE* file = fopen(_file_name, "w"); + return_val = file_to_py(file, _file_name, "w"); """ file = inline_tools.inline(code,['file_name'], compiler=self.compiler, force=1) @@ -298,7 +278,6 @@ class CallableConverter(TestCase): compiler='' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_call_function(self): import string @@ -320,22 +299,18 @@ class SequenceConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_convert_to_dict(self): d = {} inline_tools.inline("",['d'],compiler=self.compiler,force=1) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_convert_to_list(self): l = [] inline_tools.inline("",['l'],compiler=self.compiler,force=1) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_convert_to_string(self): s = 'hello' inline_tools.inline("",['s'],compiler=self.compiler,force=1) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_convert_to_tuple(self): t = () @@ -343,27 +318,22 @@ class StringConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_string(self): s = c_spec.string_converter() assert( s.type_match('string') ) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_int(self): s = c_spec.string_converter() assert(not s.type_match(5)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_float(self): s = c_spec.string_converter() assert(not s.type_match(5.)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_complex(self): s = c_spec.string_converter() assert(not s.type_match(5.+1j)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_var_in(self): mod_name = 'string_var_in'+self.compiler @@ -389,7 +359,6 @@ except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_return(self): mod_name = 'string_return'+self.compiler @@ -410,19 +379,16 @@ class ListConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_bad(self): s = c_spec.list_converter() objs = [{},(),'',1,1.,1+1j] for i in objs: assert( not s.type_match(i) ) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_good(self): s = c_spec.list_converter() assert(s.type_match([])) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_var_in(self): mod_name = 'list_var_in'+self.compiler @@ -447,7 +413,6 @@ except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_return(self): mod_name = 'list_return'+self.compiler @@ -467,7 +432,6 @@ c = test(b) assert( c == ['hello']) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_speed(self): mod_name = 'list_speed'+self.compiler @@ -531,19 +495,16 @@ class TupleConverter(TestCase): compiler = '' - @dec.knownfailureif(sys.platform=='win32') 
@dec.slow def test_type_match_bad(self): s = c_spec.tuple_converter() objs = [{},[],'',1,1.,1+1j] for i in objs: assert( not s.type_match(i) ) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_good(self): s = c_spec.tuple_converter() assert(s.type_match((1,))) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_var_in(self): mod_name = 'tuple_var_in'+self.compiler @@ -568,7 +529,6 @@ except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_return(self): mod_name = 'tuple_return'+self.compiler @@ -600,19 +560,16 @@ # so that it can run on its own. compiler='' - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_bad(self): s = c_spec.dict_converter() objs = [[],(),'',1,1.,1+1j] for i in objs: assert( not s.type_match(i) ) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_good(self): s = c_spec.dict_converter() assert(s.type_match({})) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_var_in(self): mod_name = 'dict_var_in'+self.compiler @@ -637,7 +594,6 @@ except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_return(self): mod_name = 'dict_return'+self.compiler diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_ext_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_ext_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_ext_tools.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_ext_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -1,4 +1,4 @@ -import sys +import nose from numpy.testing import * from scipy.weave import ext_tools, c_spec @@ -11,18 +11,15 @@ from weave_test_utils import * build_dir = empty_temp_dir() -print 'building extensions here:', build_dir class TestExtModule(TestCase): #should really do some testing of where modules end up - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_simple(self): """ Simplest possible module """ mod = ext_tools.ext_module('simple_ext_module') mod.compile(location = build_dir) import simple_ext_module - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_multi_functions(self): mod = ext_tools.ext_module('module_multi_function') @@ -36,9 +33,6 @@ import module_multi_function module_multi_function.test() module_multi_function.test2() - - @dec.knownfailureif(sys.platform == 'win32', - "this crashes python (segfault) on windows with mingw") @dec.slow def test_with_include(self): # decalaring variables @@ -51,8 +45,10 @@ # function 2 --> a little more complex expression var_specs = ext_tools.assign_variable_types(['a'],locals(),globals()) code = """ + std::cout.clear(std::ios_base::badbit); std::cout << std::endl; std::cout << "test printing a value:" << a << std::endl; + std::cout.clear(std::ios_base::goodbit); """ test = ext_tools.ext_function_from_specs('test',code,var_specs) mod.add_function(test) @@ -61,7 +57,6 @@ import ext_module_with_include ext_module_with_include.test(a) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string_and_int(self): # decalaring variables @@ -79,7 +74,6 @@ c = ext_string_and_int.test(a,b) assert(c == len(b)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_return_tuple(self): # decalaring variables @@ -104,7 +98,6 @@ class TestExtFunction(TestCase): #should really do some testing of where modules end up - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_simple(self): """ Simplest possible function """ diff -Nru 
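Note on the long runs of removed "@dec.knownfailureif(...)" lines in these weave test files (a sketch only: the condition and message are copied from one of the removed decorators, the test body is made up): numpy.testing's dec namespace provides both decorators, and deleting them means nose now runs these tests everywhere and reports any Windows breakage as an ordinary failure rather than a known one.

    import sys
    from numpy.testing import dec

    class TestSketch(object):
        # The pattern the 0.8.0 diff removes: mark a slow test as a known
        # failure on win32 instead of letting it fail the suite there.
        @dec.knownfailureif(sys.platform == 'win32',
                            "this crashes python (segfault) on windows with mingw")
        @dec.slow
        def test_something(self):
            assert 1 + 1 == 2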
python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_inline_tools.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_inline_tools.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_inline_tools.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_inline_tools.py 2010-07-26 15:48:37.000000000 +0100 @@ -1,5 +1,3 @@ -import sys - from numpy import * from numpy.testing import * @@ -10,7 +8,6 @@ I'd like to benchmark these things somehow. """ - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_exceptions(self): a = 3 diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_numpy_scalar_spec.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_numpy_scalar_spec.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_numpy_scalar_spec.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_numpy_scalar_spec.py 2010-07-26 15:48:37.000000000 +0100 @@ -38,24 +38,19 @@ def setUp(self): self.converter = numpy_complex_scalar_converter() - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_string(self): assert( not self.converter.type_match('string') ) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_int(self): assert( not self.converter.type_match(5)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_float(self): assert( not self.converter.type_match(5.)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type_match_complex128(self): assert(self.converter.type_match(numpy.complex128(5.+1j))) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex_var_in(self): mod_name = sys._getframe().f_code.co_name + self.compiler @@ -80,7 +75,6 @@ except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex_return(self): mod_name = sys._getframe().f_code.co_name + self.compiler @@ -99,7 +93,6 @@ c = test(b) assert( c == 3.+3j) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_inline(self): a = numpy.complex128(1+1j) diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_scxx_dict.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_scxx_dict.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_scxx_dict.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_scxx_dict.py 2010-07-26 15:48:37.000000000 +0100 @@ -13,7 +13,6 @@ # Check that construction from basic types is allowed and have correct # reference counts #------------------------------------------------------------------------ - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_empty(self): # strange int value used to try and make sure refcount is 2. 
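Note on the scipy/weave/tests/test_numpy_scalar_spec.py hunks above (illustrative of CPython/NumPy behaviour, not taken from the test suite): the converter under test handles NumPy's complex scalar type, which is a distinct type even though it subclasses Python's built-in complex.

    import numpy

    z = numpy.complex128(5. + 1j)
    print type(z), isinstance(z, complex)   # <type 'numpy.complex128'> True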
@@ -27,7 +26,6 @@ class TestDictHasKey(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_obj(self): class Foo: @@ -40,7 +38,6 @@ """ res = inline_tools.inline(code,['a','key']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int(self): a = {} @@ -50,7 +47,6 @@ """ res = inline_tools.inline(code,['a']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_double(self): a = {} @@ -60,7 +56,6 @@ """ res = inline_tools.inline(code,['a']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex(self): a = {} @@ -72,7 +67,6 @@ res = inline_tools.inline(code,['a','key']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string(self): a = {} @@ -82,7 +76,6 @@ """ res = inline_tools.inline(code,['a']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_string(self): a = {} @@ -93,7 +86,6 @@ """ res = inline_tools.inline(code,['a','key_name']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string_fail(self): a = {} @@ -113,7 +105,6 @@ res = inline_tools.inline(code,args) assert res == a['b'] - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_char(self): self.generic_get('return_val = a["b"];') @@ -128,13 +119,11 @@ except KeyError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string(self): self.generic_get('return_val = a[std::string("b")];') - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_obj(self): code = """ @@ -188,27 +177,22 @@ assert before == after assert before_overwritten == after_overwritten - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_int_int(self): key,val = 1234,12345 self.generic_new(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_double_int(self): key,val = 1234.,12345 self.generic_new(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_std_string_int(self): key,val = "hello",12345 self.generic_new(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_complex_int(self): key,val = 1+1j,12345 self.generic_new(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_obj_int(self): class Foo: @@ -216,27 +200,22 @@ key,val = Foo(),12345 self.generic_new(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_overwrite_int_int(self): key,val = 1234,12345 self.generic_overwrite(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_overwrite_double_int(self): key,val = 1234.,12345 self.generic_overwrite(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_overwrite_std_string_int(self): key,val = "hello",12345 self.generic_overwrite(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_overwrite_complex_int(self): key,val = 1+1j,12345 self.generic_overwrite(key,val) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_overwrite_obj_int(self): class Foo: @@ -260,27 +239,22 @@ after = sys.getrefcount(a), sys.getrefcount(key) assert before[0] == after[0] assert before[1] == after[1] + 1 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int(self): key = 1234 self.generic(key) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_double(self): key = 1234. 
self.generic(key) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_string(self): key = "hello" self.generic(key) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex(self): key = 1+1j self.generic(key) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_obj(self): class Foo: @@ -289,35 +263,30 @@ self.generic(key) class TestDictOthers(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_clear(self): a = {} a["hello"] = 1 inline_tools.inline("a.clear();",['a']) assert not a - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_items(self): a = {} a["hello"] = 1 items = inline_tools.inline("return_val = a.items();",['a']) assert items == a.items() - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_values(self): a = {} a["hello"] = 1 values = inline_tools.inline("return_val = a.values();",['a']) assert values == a.values() - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_keys(self): a = {} a["hello"] = 1 keys = inline_tools.inline("return_val = a.keys();",['a']) assert keys == a.keys() - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_update(self): a,b = {},{} diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_scxx_object.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_scxx_object.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_scxx_object.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_scxx_object.py 2010-07-26 15:48:37.000000000 +0100 @@ -3,6 +3,7 @@ import sys +import nose from numpy.testing import * from scipy.weave import inline_tools @@ -13,7 +14,6 @@ # Check that construction from basic types is allowed and have correct # reference counts #------------------------------------------------------------------------ - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int(self): # strange int value used to try and make sure refcount is 2. @@ -24,7 +24,6 @@ res = inline_tools.inline(code) assert_equal(sys.getrefcount(res),2) assert_equal(res,1001) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_float(self): code = """ @@ -34,7 +33,6 @@ res = inline_tools.inline(code) assert_equal(sys.getrefcount(res),2) assert_equal(res,1.0) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_double(self): code = """ @@ -44,7 +42,6 @@ res = inline_tools.inline(code) assert_equal(sys.getrefcount(res),2) assert_equal(res,1.0) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex(self): code = """ @@ -55,7 +52,6 @@ res = inline_tools.inline(code) assert_equal(sys.getrefcount(res),2) assert_equal(res,1.0+1.0j) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string(self): code = """ @@ -66,7 +62,6 @@ assert_equal(sys.getrefcount(res),2) assert_equal(res,"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_string(self): code = """ @@ -82,7 +77,6 @@ #------------------------------------------------------------------------ # Check the object print protocol. 
#------------------------------------------------------------------------ - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_stringio(self): import cStringIO @@ -95,7 +89,6 @@ print file_imposter.getvalue() assert_equal(file_imposter.getvalue(),"'how now brown cow'") -## @dec.knownfailureif(sys.platform=='win32') ## @dec.slow ## def test_failure(self): ## code = """ @@ -111,45 +104,40 @@ class TestObjectCast(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int_cast(self): code = """ py::object val = 1; - int raw_val = val; + int raw_val __attribute__ ((unused)) = val; """ inline_tools.inline(code) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_double_cast(self): code = """ py::object val = 1.0; - double raw_val = val; + double raw_val __attribute__ ((unused)) = val; """ inline_tools.inline(code) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_float_cast(self): code = """ py::object val = 1.0; - float raw_val = val; + float raw_val __attribute__ ((unused)) = val; """ inline_tools.inline(code) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_complex_cast(self): code = """ - std::complex num = std::complex(1.0,1.0); + std::complex num = std::complex(1.0, 1.0); py::object val = num; - std::complex raw_val = val; + std::complex raw_val __attribute__ ((unused)) = val; """ inline_tools.inline(code) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string_cast(self): code = """ py::object val = "hello"; - std::string raw_val = val; + std::string raw_val __attribute__ ((unused)) = val; """ inline_tools.inline(code) @@ -167,7 +155,6 @@ # return "b" class TestObjectHasattr(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string(self): a = Foo() @@ -177,7 +164,6 @@ """ res = inline_tools.inline(code,['a']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_string(self): a = Foo() @@ -188,7 +174,6 @@ """ res = inline_tools.inline(code,['a','attr_name']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string_fail(self): a = Foo() @@ -198,7 +183,6 @@ """ res = inline_tools.inline(code,['a']) assert not res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_inline(self): """ THIS NEEDS TO MOVE TO THE INLINE TEST SUITE @@ -221,7 +205,6 @@ print 'before, after, after2:', before, after, after2 pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_func(self): a = Foo() @@ -245,12 +228,10 @@ after = sys.getrefcount(a.b) assert_equal(after,before) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_char(self): self.generic_attr('return_val = a.attr("b");') - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_char_fail(self): try: @@ -258,12 +239,10 @@ except AttributeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string(self): self.generic_attr('return_val = a.attr(std::string("b"));') - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string_fail(self): try: @@ -271,7 +250,6 @@ except AttributeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_obj(self): code = """ @@ -280,7 +258,6 @@ """ self.generic_attr(code,['a']) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_obj_fail(self): try: @@ -292,7 +269,6 @@ except AttributeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_attr_call(self): a = Foo() @@ -319,23 +295,18 @@ res = inline_tools.inline(code,args) 
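Note for readers unfamiliar with the API all of these test_scxx_* files exercise (a minimal sketch that assumes a working C++ toolchain; it is not taken from the test suite): scipy.weave.inline compiles the C++ snippet the first time it is called, exposes the named Python variables through the converters tested above, and whatever the snippet assigns to return_val comes back as the Python return value, which is why nearly every hunk here is a short inline_tools.inline(code, ['a']) call.

    from scipy import weave

    a = 3
    code = """
        // `a` arrives via the int converter; `return_val` becomes the
        // Python return value of the call.
        return_val = a * 2;
    """
    print weave.inline(code, ['a'])   # prints 6 (first call triggers compilation)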
assert_equal(a.b,desired) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_char(self): self.generic_existing('a.set_attr("b","hello");',"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_char(self): self.generic_new('a.set_attr("b","hello");',"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_string(self): self.generic_existing('a.set_attr("b",std::string("hello"));',"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_string(self): self.generic_new('a.set_attr("b",std::string("hello"));',"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_object(self): code = """ @@ -343,7 +314,6 @@ a.set_attr("b",obj); """ self.generic_existing(code,"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_object(self): code = """ @@ -351,7 +321,6 @@ a.set_attr("b",obj); """ self.generic_new(code,"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_new_fail(self): try: @@ -363,15 +332,12 @@ except: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_int(self): self.generic_existing('a.set_attr("b",1);',1) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_double(self): self.generic_existing('a.set_attr("b",1.0);',1.0) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_complex(self): code = """ @@ -379,11 +345,9 @@ a.set_attr("b",obj); """ self.generic_existing(code,1+1j) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_char1(self): self.generic_existing('a.set_attr("b","hello");',"hello") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_existing_string1(self): code = """ @@ -400,11 +364,9 @@ res = inline_tools.inline(code,args) assert not hasattr(a,"b") - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_char(self): self.generic('a.del("b");') - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string(self): code = """ @@ -412,7 +374,6 @@ a.del(name); """ self.generic(code) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_object(self): code = """ @@ -422,13 +383,11 @@ self.generic(code) class TestObjectCmp(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_equal(self): a,b = 1,1 res = inline_tools.inline('return_val = (a == b);',['a','b']) assert_equal(res,(a == b)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_equal_objects(self): class Foo: @@ -439,67 +398,56 @@ a,b = Foo(1),Foo(2) res = inline_tools.inline('return_val = (a == b);',['a','b']) assert_equal(res,(a == b)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_lt(self): a,b = 1,2 res = inline_tools.inline('return_val = (a < b);',['a','b']) assert_equal(res,(a < b)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_gt(self): a,b = 1,2 res = inline_tools.inline('return_val = (a > b);',['a','b']) assert_equal(res,(a > b)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_gte(self): a,b = 1,2 res = inline_tools.inline('return_val = (a >= b);',['a','b']) assert_equal(res,(a >= b)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_lte(self): a,b = 1,2 res = inline_tools.inline('return_val = (a <= b);',['a','b']) assert_equal(res,(a <= b)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_not_equal(self): a,b = 1,2 res = inline_tools.inline('return_val = (a != b);',['a','b']) assert_equal(res,(a != b)) - 
@dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int(self): a = 1 res = inline_tools.inline('return_val = (a == 1);',['a']) assert_equal(res,(a == 1)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int2(self): a = 1 res = inline_tools.inline('return_val = (1 == a);',['a']) assert_equal(res,(a == 1)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_unsigned_long(self): a = 1 res = inline_tools.inline('return_val = (a == (unsigned long)1);',['a']) assert_equal(res,(a == 1)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_double(self): a = 1 res = inline_tools.inline('return_val = (a == 1.0);',['a']) assert_equal(res,(a == 1.0)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_char(self): a = "hello" res = inline_tools.inline('return_val = (a == "hello");',['a']) assert_equal(res,(a == "hello")) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_string(self): a = "hello" @@ -511,7 +459,6 @@ assert_equal(res,(a == "hello")) class TestObjectRepr(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_repr(self): class Foo: @@ -529,7 +476,6 @@ assert_equal(res,"repr return") class TestObjectStr(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_str(self): class Foo: @@ -549,7 +495,6 @@ class TestObjectUnicode(TestCase): # This ain't going to win awards for test of the year... - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_unicode(self): class Foo: @@ -567,7 +512,6 @@ assert_equal(res,"unicode") class TestObjectIsCallable(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_true(self): class Foo: @@ -576,7 +520,6 @@ a= Foo() res = inline_tools.inline('return_val = a.is_callable();',['a']) assert res - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_false(self): class Foo: @@ -586,7 +529,6 @@ assert not res class TestObjectCall(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_noargs(self): def Foo(): @@ -594,7 +536,6 @@ res = inline_tools.inline('return_val = Foo.call();',['Foo']) assert_equal(res,(1,2,3)) assert_equal(sys.getrefcount(res),3) # should be 2? 
- @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_args(self): def Foo(val1,val2): @@ -608,7 +549,6 @@ res = inline_tools.inline(code,['Foo']) assert_equal(res,(1,"hello")) assert_equal(sys.getrefcount(res),2) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_args_kw(self): def Foo(val1,val2,val3=1): @@ -624,7 +564,6 @@ res = inline_tools.inline(code,['Foo']) assert_equal(res,(1,"hello",3)) assert_equal(sys.getrefcount(res),2) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_noargs_with_args(self): # calling a function that does take args with args @@ -650,7 +589,6 @@ assert_equal(second,third) class TestObjectMcall(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_noargs(self): a = Foo() @@ -662,7 +600,6 @@ assert_equal(res,"bar results") second = sys.getrefcount(res) assert_equal(first,second) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_args(self): a = Foo() @@ -675,7 +612,6 @@ res = inline_tools.inline(code,['a']) assert_equal(res,(1,"hello")) assert_equal(sys.getrefcount(res),2) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_args_kw(self): a = Foo() @@ -690,7 +626,6 @@ res = inline_tools.inline(code,['a']) assert_equal(res,(1,"hello",3)) assert_equal(sys.getrefcount(res),2) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_noargs(self): a = Foo() @@ -703,7 +638,6 @@ assert_equal(res,"bar results") second = sys.getrefcount(res) assert_equal(first,second) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_args(self): a = Foo() @@ -717,7 +651,6 @@ res = inline_tools.inline(code,['a','method']) assert_equal(res,(1,"hello")) assert_equal(sys.getrefcount(res),2) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_std_args_kw(self): a = Foo() @@ -733,7 +666,6 @@ res = inline_tools.inline(code,['a','method']) assert_equal(res,(1,"hello",3)) assert_equal(sys.getrefcount(res),2) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_noargs_with_args(self): # calling a function that does take args with args @@ -758,7 +690,6 @@ assert_equal(second,third) class TestObjectHash(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_hash(self): class Foo: @@ -770,7 +701,6 @@ assert_equal(res,123) class TestObjectIsTrue(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_true(self): class Foo: @@ -778,7 +708,6 @@ a= Foo() res = inline_tools.inline('return_val = a.is_true();',['a']) assert_equal(res,1) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_false(self): a= None @@ -786,7 +715,6 @@ assert_equal(res,0) class TestObjectType(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_type(self): class Foo: @@ -796,7 +724,6 @@ assert_equal(res,type(a)) class TestObjectSize(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_size(self): class Foo: @@ -805,7 +732,6 @@ a= Foo() res = inline_tools.inline('return_val = a.size();',['a']) assert_equal(res,len(a)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_len(self): class Foo: @@ -814,7 +740,6 @@ a= Foo() res = inline_tools.inline('return_val = a.len();',['a']) assert_equal(res,len(a)) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_length(self): class Foo: @@ -826,7 +751,6 @@ from UserList import UserList class TestObjectSetItemOpIndex(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_list_refcount(self): a = UserList([1,2,3]) @@ -835,35 
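Note on the sys.getrefcount assertions scattered through these hunks (CPython-specific; the numbers below assume a plain interpreter session): getrefcount always reports one more reference than the names you can see, because the argument passed to the call itself holds a temporary reference, hence the recurring "should be 2" / "should be 3" comments in the tests.

    import sys

    x = object()                 # one lasting reference: the local name `x`
    print sys.getrefcount(x)     # 2: the call's own argument adds a temporary one

    d = {'k': x}                 # a second lasting reference
    print sys.getrefcount(x)     # 3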
+759,30 @@ before1 = sys.getrefcount(a) after1 = sys.getrefcount(a) assert_equal(after1,before1) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_int(self): a = UserList([1,2,3]) inline_tools.inline("a[1] = 1234;",['a']) assert_equal(sys.getrefcount(a[1]),2) assert_equal(a[1],1234) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_double(self): a = UserList([1,2,3]) inline_tools.inline("a[1] = 123.0;",['a']) assert_equal(sys.getrefcount(a[1]),2) assert_equal(a[1],123.0) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_char(self): a = UserList([1,2,3]) inline_tools.inline('a[1] = "bubba";',['a']) assert_equal(sys.getrefcount(a[1]),2) assert_equal(a[1],'bubba') - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_string(self): a = UserList([1,2,3]) inline_tools.inline('a[1] = std::string("sissy");',['a']) assert_equal(sys.getrefcount(a[1]),2) assert_equal(a[1],'sissy') - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_string(self): a = UserList([1,2,3]) @@ -873,7 +792,6 @@ from UserDict import UserDict class TestObjectSetItemOpKey(TestCase): - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_key_refcount(self): a = UserDict() @@ -909,7 +827,6 @@ assert_equal(val[0] + 1, val[1]) assert_equal(val[1], val[2]) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_double_exists(self): a = UserDict() @@ -924,7 +841,6 @@ assert_equal(sys.getrefcount(key),5) assert_equal(sys.getrefcount(a[key]),2) assert_equal(a[key],123.0) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_double_new(self): a = UserDict() @@ -933,7 +849,6 @@ assert_equal(sys.getrefcount(key),4) # should be 3 assert_equal(sys.getrefcount(a[key]),2) assert_equal(a[key],123.0) - @dec.knownfailureif(True) @dec.slow def test_set_complex(self): a = UserDict() @@ -942,7 +857,6 @@ assert_equal(sys.getrefcount(key),4) # should be 3 assert_equal(sys.getrefcount(a[key]),2) assert_equal(a[key],1234) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_char(self): a = UserDict() @@ -950,7 +864,6 @@ assert_equal(sys.getrefcount(a["hello"]),2) assert_equal(a["hello"],123.0) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_class(self): a = UserDict() @@ -970,7 +883,6 @@ assert_equal(sys.getrefcount(key),4) assert_equal(sys.getrefcount(a[key]),2) assert_equal(a[key],'bubba') - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_from_member(self): a = UserDict() diff -Nru python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_scxx_sequence.py python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_scxx_sequence.py --- python-scipy-0.7.2+dfsg1/scipy/weave/tests/test_scxx_sequence.py 2010-04-18 11:02:46.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/scipy/weave/tests/test_scxx_sequence.py 2010-07-26 15:48:37.000000000 +0100 @@ -21,7 +21,6 @@ class _TestSequenceBase(TestCase): seq_type = None - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_conversion(self): a = self.seq_type([1]) @@ -35,7 +34,6 @@ #print '2nd,3rd:', before, after assert(after == before) - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_in(self): """ Test the "in" method for lists. We'll assume @@ -89,7 +87,6 @@ res = inline_tools.inline(code,['a']) assert res == 0 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_count(self): """ Test the "count" method for lists. 
We'll assume @@ -125,7 +122,6 @@ res = inline_tools.inline(code,['a']) assert res == 1 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_access_speed(self): N = 1000000 @@ -157,7 +153,6 @@ print 'weave:', t2 - t1 # Fails - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_access_set_speed(self): N = 1000000 @@ -189,7 +184,6 @@ class TestTuple(_TestSequenceBase): seq_type = tuple - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_item_operator_equal_fail(self): # Tuples should only allow setting of variables @@ -199,7 +193,6 @@ inline_tools.inline("a[1] = 1234;",['a']) except TypeError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_item_operator_equal(self): code = """ @@ -214,7 +207,6 @@ # returned value should only have a single refcount assert sys.getrefcount(a) == 2 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_item_index_error(self): code = """ @@ -227,7 +219,6 @@ assert 0 except IndexError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_get_item_operator_index_error(self): code = """ @@ -242,7 +233,6 @@ class TestList(_TestSequenceBase): seq_type = list - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_append_passed_item(self): a = [] @@ -261,7 +251,6 @@ after2 = sys.getrefcount(item) assert after1 == before1 assert after2 == before2 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_append(self): a = [] @@ -297,7 +286,6 @@ after1 = sys.getrefcount(a) assert after1 == before1 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_insert(self): a = [1,2,3] @@ -338,7 +326,6 @@ after1 = sys.getrefcount(a) assert after1 == before1 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_item_operator_equal(self): a = self.seq_type([1,2,3]) @@ -372,7 +359,6 @@ after1 = sys.getrefcount(a) assert after1 == before1 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_item_operator_equal_created(self): code = """ @@ -386,7 +372,6 @@ assert a == [1,2,3] # returned value should only have a single refcount assert sys.getrefcount(a) == 2 - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_set_item_index_error(self): code = """ @@ -398,7 +383,6 @@ assert 0 except IndexError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_get_item_index_error(self): code = """ @@ -411,7 +395,6 @@ except IndexError: pass - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_string_add_speed(self): N = 1000000 @@ -439,7 +422,6 @@ t2 = time.time() print 'weave:', t2 - t1 assert b == desired - @dec.knownfailureif(sys.platform=='win32') @dec.slow def test_int_add_speed(self): N = 1000000 diff -Nru python-scipy-0.7.2+dfsg1/setup.py python-scipy-0.8.0+dfsg1/setup.py --- python-scipy-0.7.2+dfsg1/setup.py 2010-04-22 11:58:37.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/setup.py 2010-07-26 16:59:05.000000000 +0100 @@ -41,8 +41,8 @@ """ MAJOR = 0 -MINOR = 7 -MICRO = 2 +MINOR = 8 +MICRO = 0 ISRELEASED = True VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO) @@ -147,7 +147,7 @@ url = "http://www.scipy.org", download_url = "http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531", license = 'BSD', - classifiers=filter(None, CLASSIFIERS.split('\n')), + classifiers=[f for f in CLASSIFIERS.split('\n') if f], platforms = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"], configuration=configuration ) finally: diff -Nru python-scipy-0.7.2+dfsg1/THANKS.txt python-scipy-0.8.0+dfsg1/THANKS.txt --- 
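Note on the setup.py hunk above (a self-contained illustration; the sample strings are invented): filter(None, ...) returns a list on Python 2 but a lazy iterator on Python 3, so the list comprehension is the spelling that keeps classifiers a real list under both interpreters (and under 2to3) while still dropping empty lines.

    lines = ['Development Status :: 4 - Beta', '', 'Programming Language :: Python']

    # Same result as the old filter(None, lines) on Python 2, but also a
    # real list on Python 3, where filter() would return an iterator.
    classifiers = [f for f in lines if f]
    print classifiers   # ['Development Status :: 4 - Beta', 'Programming Language :: Python']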
python-scipy-0.7.2+dfsg1/THANKS.txt 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/THANKS.txt 2010-07-26 15:48:29.000000000 +0100 @@ -34,7 +34,8 @@ Anne Archibald for kd-trees and nearest neighbor in scipy.spatial. Pauli Virtanen for Sphinx documentation generation, online documentation framework and interpolation bugfixes. -Josef Perktold for major improvements to scipy.stats and its test suite. +Josef Perktold for major improvements to scipy.stats and its test suite and + fixes and tests to optimize.curve_fit and leastsq. David Morrill for getting the scoreboard test system up and running. Louis Luangkesorn for providing multiple tests for the stats module. Jochen Kupper for the zoom feature in the now-deprecated plt plotting module. @@ -63,7 +64,7 @@ Ondrej Certik for Debian packaging. Paul Ivanov for porting Numeric-style C code to the new NumPy API. Ariel Rokem for contributions on percentileofscore fixes and tests. - +Yosef Meller for tests in the optimization module. Institutions ------------ @@ -73,4 +74,4 @@ Agilent which gave a genereous donation for support of SciPy. UC Berkeley for providing travel money and hosting numerous sprints. The University of Stellenbosch for funding the development of - the SciKits portal. + the SciKits portal. diff -Nru python-scipy-0.7.2+dfsg1/TOCHANGE.txt python-scipy-0.8.0+dfsg1/TOCHANGE.txt --- python-scipy-0.7.2+dfsg1/TOCHANGE.txt 2010-04-18 11:02:45.000000000 +0100 +++ python-scipy-0.8.0+dfsg1/TOCHANGE.txt 2010-07-26 15:48:29.000000000 +0100 @@ -30,7 +30,7 @@ Documentation ------------- -See http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines +See http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines * use new docstring format