Fixed database typo and removed unnecessary class identifier.

This commit is contained in:
Batuhan Berk Başoğlu 2020-10-14 10:10:37 -04:00
parent 00ad49a143
commit 45fb349a7d
5098 changed files with 952558 additions and 85 deletions


@@ -0,0 +1,678 @@
"""
========================================
Special functions (:mod:`scipy.special`)
========================================
.. currentmodule:: scipy.special
Nearly all of the functions below are universal functions and follow
broadcasting and automatic array-looping rules.
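For example (a minimal sketch), the order and argument of `jv` broadcast
against each other like ordinary NumPy arrays:

>>> import numpy as np
>>> from scipy.special import jv
>>> jv(np.arange(3)[:, np.newaxis], np.linspace(0.0, 1.0, 4)).shape
(3, 4)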
.. seealso::
`scipy.special.cython_special` -- Typed Cython versions of special functions
Error handling
==============
Errors are handled by returning NaNs or other appropriate values.
Some of the special function routines can emit warnings or raise
exceptions when an error occurs; by default this is disabled. The
following functions query and control the current error-handling
state (a short usage sketch follows the list).
.. autosummary::
:toctree: generated/
geterr -- Get the current way of handling special-function errors.
seterr -- Set how special-function errors are handled.
errstate -- Context manager for special-function error handling.
SpecialFunctionWarning -- Warning that can be emitted by special functions.
SpecialFunctionError -- Exception that can be raised by special functions.
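For example, a minimal sketch that turns the (normally silent) pole of
`gammaln` at 0 into an exception:

>>> import scipy.special as sc
>>> sc.gammaln(0)                    # the pole is reported as inf by default
inf
>>> with sc.errstate(singular='raise'):
...     try:
...         sc.gammaln(0)
...     except sc.SpecialFunctionError:
...         print('singular error raised')
...
singular error raised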
Available functions
===================
Airy functions
--------------
.. autosummary::
:toctree: generated/
airy -- Airy functions and their derivatives.
airye -- Exponentially scaled Airy functions and their derivatives.
ai_zeros -- Compute `nt` zeros and values of the Airy function Ai and its derivative.
bi_zeros -- Compute `nt` zeros and values of the Airy function Bi and its derivative.
itairy -- Integrals of Airy functions
Elliptic functions and integrals
--------------------------------
.. autosummary::
:toctree: generated/
ellipj -- Jacobian elliptic functions.
ellipk -- Complete elliptic integral of the first kind.
ellipkm1 -- Complete elliptic integral of the first kind around `m` = 1.
ellipkinc -- Incomplete elliptic integral of the first kind.
ellipe -- Complete elliptic integral of the second kind.
ellipeinc -- Incomplete elliptic integral of the second kind.
Bessel functions
----------------
.. autosummary::
:toctree: generated/
jv -- Bessel function of the first kind of real order and complex argument.
jve -- Exponentially scaled Bessel function of order `v`.
yn -- Bessel function of the second kind of integer order and real argument.
yv -- Bessel function of the second kind of real order and complex argument.
yve -- Exponentially scaled Bessel function of the second kind of real order.
kn -- Modified Bessel function of the second kind of integer order `n`.
kv -- Modified Bessel function of the second kind of real order `v`.
kve -- Exponentially scaled modified Bessel function of the second kind.
iv -- Modified Bessel function of the first kind of real order.
ive -- Exponentially scaled modified Bessel function of the first kind.
hankel1 -- Hankel function of the first kind.
hankel1e -- Exponentially scaled Hankel function of the first kind.
hankel2 -- Hankel function of the second kind.
hankel2e -- Exponentially scaled Hankel function of the second kind.
The following is not a universal function:
.. autosummary::
:toctree: generated/
lmbda -- Jahnke-Emden Lambda function, Lambdav(x).
Zeros of Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^
These are not universal functions:
.. autosummary::
:toctree: generated/
jnjnp_zeros -- Compute zeros of integer-order Bessel functions Jn and Jn'.
jnyn_zeros -- Compute nt zeros of Bessel functions Jn(x), Jn'(x), Yn(x), and Yn'(x).
jn_zeros -- Compute zeros of integer-order Bessel function Jn(x).
jnp_zeros -- Compute zeros of integer-order Bessel function derivative Jn'(x).
yn_zeros -- Compute zeros of integer-order Bessel function Yn(x).
ynp_zeros -- Compute zeros of integer-order Bessel function derivative Yn'(x).
y0_zeros -- Compute nt zeros of Bessel function Y0(z), and derivative at each zero.
y1_zeros -- Compute nt zeros of Bessel function Y1(z), and derivative at each zero.
y1p_zeros -- Compute nt zeros of Bessel derivative Y1'(z), and value at each zero.
Faster versions of common Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
j0 -- Bessel function of the first kind of order 0.
j1 -- Bessel function of the first kind of order 1.
y0 -- Bessel function of the second kind of order 0.
y1 -- Bessel function of the second kind of order 1.
i0 -- Modified Bessel function of order 0.
i0e -- Exponentially scaled modified Bessel function of order 0.
i1 -- Modified Bessel function of order 1.
i1e -- Exponentially scaled modified Bessel function of order 1.
k0 -- Modified Bessel function of the second kind of order 0, :math:`K_0`.
k0e -- Exponentially scaled modified Bessel function K of order 0.
k1 -- Modified Bessel function of the second kind of order 1, :math:`K_1(x)`.
k1e -- Exponentially scaled modified Bessel function K of order 1.
Integrals of Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
itj0y0 -- Integrals of Bessel functions of order 0.
it2j0y0 -- Integrals related to Bessel functions of order 0.
iti0k0 -- Integrals of modified Bessel functions of order 0.
it2i0k0 -- Integrals related to modified Bessel functions of order 0.
besselpoly -- Weighted integral of a Bessel function.
Derivatives of Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
jvp -- Compute nth derivative of Bessel function Jv(z) with respect to `z`.
yvp -- Compute nth derivative of Bessel function Yv(z) with respect to `z`.
kvp -- Compute nth derivative of real-order modified Bessel function Kv(z) with respect to `z`.
ivp -- Compute nth derivative of modified Bessel function Iv(z) with respect to `z`.
h1vp -- Compute nth derivative of Hankel function H1v(z) with respect to `z`.
h2vp -- Compute nth derivative of Hankel function H2v(z) with respect to `z`.
Spherical Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
:toctree: generated/
spherical_jn -- Spherical Bessel function of the first kind or its derivative.
spherical_yn -- Spherical Bessel function of the second kind or its derivative.
spherical_in -- Modified spherical Bessel function of the first kind or its derivative.
spherical_kn -- Modified spherical Bessel function of the second kind or its derivative.
Riccati-Bessel functions
^^^^^^^^^^^^^^^^^^^^^^^^
These are not universal functions:
.. autosummary::
:toctree: generated/
riccati_jn -- Compute Riccati-Bessel function of the first kind and its derivative.
riccati_yn -- Compute Riccati-Bessel function of the second kind and its derivative.
Struve functions
----------------
.. autosummary::
:toctree: generated/
struve -- Struve function.
modstruve -- Modified Struve function.
itstruve0 -- Integral of the Struve function of order 0.
it2struve0 -- Integral related to the Struve function of order 0.
itmodstruve0 -- Integral of the modified Struve function of order 0.
Raw statistical functions
-------------------------
.. seealso:: :mod:`scipy.stats`: Friendly versions of these functions.
.. autosummary::
:toctree: generated/
bdtr -- Binomial distribution cumulative distribution function.
bdtrc -- Binomial distribution survival function.
bdtri -- Inverse function to `bdtr` with respect to `p`.
bdtrik -- Inverse function to `bdtr` with respect to `k`.
bdtrin -- Inverse function to `bdtr` with respect to `n`.
btdtr -- Cumulative distribution function of the beta distribution.
btdtri -- The `p`-th quantile of the beta distribution.
btdtria -- Inverse of `btdtr` with respect to `a`.
btdtrib -- Inverse of `btdtr` with respect to `b`.
fdtr -- F cumulative distribution function.
fdtrc -- F survival function.
fdtri -- The `p`-th quantile of the F-distribution.
fdtridfd -- Inverse to `fdtr` vs dfd.
gdtr -- Gamma distribution cumulative distribution function.
gdtrc -- Gamma distribution survival function.
gdtria -- Inverse of `gdtr` vs a.
gdtrib -- Inverse of `gdtr` vs b.
gdtrix -- Inverse of `gdtr` vs x.
nbdtr -- Negative binomial cumulative distribution function.
nbdtrc -- Negative binomial survival function.
nbdtri -- Inverse of `nbdtr` vs `p`.
nbdtrik -- Inverse of `nbdtr` vs `k`.
nbdtrin -- Inverse of `nbdtr` vs `n`.
ncfdtr -- Cumulative distribution function of the non-central F distribution.
ncfdtridfd -- Calculate degrees of freedom (denominator) for the noncentral F-distribution.
ncfdtridfn -- Calculate degrees of freedom (numerator) for the noncentral F-distribution.
ncfdtri -- Inverse cumulative distribution function of the non-central F distribution.
ncfdtrinc -- Calculate non-centrality parameter for non-central F distribution.
nctdtr -- Cumulative distribution function of the non-central `t` distribution.
nctdtridf -- Calculate degrees of freedom for non-central t distribution.
nctdtrit -- Inverse cumulative distribution function of the non-central t distribution.
nctdtrinc -- Calculate non-centrality parameter for non-central t distribution.
nrdtrimn -- Calculate mean of normal distribution given other parameters.
nrdtrisd -- Calculate standard deviation of normal distribution given other parameters.
pdtr -- Poisson cumulative distribution function.
pdtrc -- Poisson survival function.
pdtri -- Inverse to `pdtr` vs m.
pdtrik -- Inverse to `pdtr` vs k.
stdtr -- Student t distribution cumulative distribution function.
stdtridf -- Inverse of `stdtr` vs df.
stdtrit -- Inverse of `stdtr` vs `t`.
chdtr -- Chi square cumulative distribution function.
chdtrc -- Chi square survival function.
chdtri -- Inverse to `chdtrc`.
chdtriv -- Inverse to `chdtr` vs `v`.
ndtr -- Gaussian cumulative distribution function.
log_ndtr -- Logarithm of Gaussian cumulative distribution function.
ndtri -- Inverse of `ndtr` vs x.
chndtr -- Non-central chi square cumulative distribution function.
chndtridf -- Inverse to `chndtr` vs `df`.
chndtrinc -- Inverse to `chndtr` vs `nc`.
chndtrix -- Inverse to `chndtr` vs `x`.
smirnov -- Kolmogorov-Smirnov complementary cumulative distribution function.
smirnovi -- Inverse to `smirnov`.
kolmogorov -- Complementary cumulative distribution function of Kolmogorov distribution.
kolmogi -- Inverse function to `kolmogorov`.
tklmbda -- Tukey-Lambda cumulative distribution function.
logit -- Logit ufunc for ndarrays.
expit -- Expit ufunc for ndarrays.
boxcox -- Compute the Box-Cox transformation.
boxcox1p -- Compute the Box-Cox transformation of 1 + `x`.
inv_boxcox -- Compute the inverse of the Box-Cox transformation.
inv_boxcox1p -- Compute the inverse of the Box-Cox transformation.
owens_t -- Owen's T Function.
Information Theory functions
----------------------------
.. autosummary::
:toctree: generated/
entr -- Elementwise function for computing entropy.
rel_entr -- Elementwise function for computing relative entropy.
kl_div -- Elementwise function for computing Kullback-Leibler divergence.
huber -- Huber loss function.
pseudo_huber -- Pseudo-Huber loss function.
Gamma and related functions
---------------------------
.. autosummary::
:toctree: generated/
gamma -- Gamma function.
gammaln -- Logarithm of the absolute value of the Gamma function for real inputs.
loggamma -- Principal branch of the logarithm of the Gamma function.
gammasgn -- Sign of the gamma function.
gammainc -- Regularized lower incomplete gamma function.
gammaincinv -- Inverse to `gammainc`.
gammaincc -- Regularized upper incomplete gamma function.
gammainccinv -- Inverse to `gammaincc`.
beta -- Beta function.
betaln -- Natural logarithm of absolute value of beta function.
betainc -- Incomplete beta integral.
betaincinv -- Inverse function to beta integral.
psi -- The digamma function.
rgamma -- Reciprocal of the gamma function.
polygamma -- Polygamma function of order `n`.
multigammaln -- Returns the log of multivariate gamma, also sometimes called the generalized gamma.
digamma -- The digamma function; an alias for `psi`.
poch -- Rising factorial (z)_m.
Error function and Fresnel integrals
------------------------------------
.. autosummary::
:toctree: generated/
erf -- Returns the error function of complex argument.
erfc -- Complementary error function, ``1 - erf(x)``.
erfcx -- Scaled complementary error function, ``exp(x**2) * erfc(x)``.
erfi -- Imaginary error function, ``-i erf(i z)``.
erfinv -- Inverse function for erf.
erfcinv -- Inverse function for erfc.
wofz -- Faddeeva function.
dawsn -- Dawson's integral.
fresnel -- Fresnel sin and cos integrals.
fresnel_zeros -- Compute nt complex zeros of sine and cosine Fresnel integrals S(z) and C(z).
modfresnelp -- Modified Fresnel positive integrals.
modfresnelm -- Modified Fresnel negative integrals.
voigt_profile -- Voigt profile.
These are not universal functions:
.. autosummary::
:toctree: generated/
erf_zeros -- Compute nt complex zeros of error function erf(z).
fresnelc_zeros -- Compute nt complex zeros of cosine Fresnel integral C(z).
fresnels_zeros -- Compute nt complex zeros of sine Fresnel integral S(z).
Legendre functions
------------------
.. autosummary::
:toctree: generated/
lpmv -- Associated Legendre function of integer order and real degree.
sph_harm -- Compute spherical harmonics.
These are not universal functions:
.. autosummary::
:toctree: generated/
clpmn -- Associated Legendre function of the first kind for complex arguments.
lpn -- Legendre function of the first kind.
lqn -- Legendre function of the second kind.
lpmn -- Sequence of associated Legendre functions of the first kind.
lqmn -- Sequence of associated Legendre functions of the second kind.
Ellipsoidal harmonics
---------------------
.. autosummary::
:toctree: generated/
ellip_harm -- Ellipsoidal harmonic functions E^p_n(l).
ellip_harm_2 -- Ellipsoidal harmonic functions F^p_n(l).
ellip_normal -- Ellipsoidal harmonic normalization constants gamma^p_n.
Orthogonal polynomials
----------------------
The following functions evaluate values of orthogonal polynomials:
.. autosummary::
:toctree: generated/
assoc_laguerre -- Compute the generalized (associated) Laguerre polynomial of degree n and order k.
eval_legendre -- Evaluate Legendre polynomial at a point.
eval_chebyt -- Evaluate Chebyshev polynomial of the first kind at a point.
eval_chebyu -- Evaluate Chebyshev polynomial of the second kind at a point.
eval_chebyc -- Evaluate Chebyshev polynomial of the first kind on [-2, 2] at a point.
eval_chebys -- Evaluate Chebyshev polynomial of the second kind on [-2, 2] at a point.
eval_jacobi -- Evaluate Jacobi polynomial at a point.
eval_laguerre -- Evaluate Laguerre polynomial at a point.
eval_genlaguerre -- Evaluate generalized Laguerre polynomial at a point.
eval_hermite -- Evaluate physicist's Hermite polynomial at a point.
eval_hermitenorm -- Evaluate probabilist's (normalized) Hermite polynomial at a point.
eval_gegenbauer -- Evaluate Gegenbauer polynomial at a point.
eval_sh_legendre -- Evaluate shifted Legendre polynomial at a point.
eval_sh_chebyt -- Evaluate shifted Chebyshev polynomial of the first kind at a point.
eval_sh_chebyu -- Evaluate shifted Chebyshev polynomial of the second kind at a point.
eval_sh_jacobi -- Evaluate shifted Jacobi polynomial at a point.
The following functions compute roots and quadrature weights for
orthogonal polynomials:
.. autosummary::
:toctree: generated/
roots_legendre -- Gauss-Legendre quadrature.
roots_chebyt -- Gauss-Chebyshev (first kind) quadrature.
roots_chebyu -- Gauss-Chebyshev (second kind) quadrature.
roots_chebyc -- Gauss-Chebyshev (first kind) quadrature.
roots_chebys -- Gauss-Chebyshev (second kind) quadrature.
roots_jacobi -- Gauss-Jacobi quadrature.
roots_laguerre -- Gauss-Laguerre quadrature.
roots_genlaguerre -- Gauss-generalized Laguerre quadrature.
roots_hermite -- Gauss-Hermite (physicist's) quadrature.
roots_hermitenorm -- Gauss-Hermite (statistician's) quadrature.
roots_gegenbauer -- Gauss-Gegenbauer quadrature.
roots_sh_legendre -- Gauss-Legendre (shifted) quadrature.
roots_sh_chebyt -- Gauss-Chebyshev (first kind, shifted) quadrature.
roots_sh_chebyu -- Gauss-Chebyshev (second kind, shifted) quadrature.
roots_sh_jacobi -- Gauss-Jacobi (shifted) quadrature.
The functions below, in turn, return the polynomial coefficients in
``orthopoly1d`` objects, which function similarly to `numpy.poly1d`.
The ``orthopoly1d`` class also has an attribute ``weights``, which returns
the roots, weights, and total weights for the appropriate form of Gaussian
quadrature. These are returned in an ``n x 3`` array with roots in the first
column, weights in the second column, and total weights in the final column.
Note that ``orthopoly1d`` objects are converted to `~numpy.poly1d` when doing
arithmetic, and lose information about the original orthogonal polynomial.
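A minimal sketch of reading a Gauss-Legendre quadrature rule off an
``orthopoly1d`` object:

>>> from scipy.special import legendre
>>> P3 = legendre(3)              # orthopoly1d for the degree-3 Legendre polynomial
>>> P3.weights.shape              # one row (root, weight, total weight) per root
(3, 3)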
.. autosummary::
:toctree: generated/
legendre -- Legendre polynomial.
chebyt -- Chebyshev polynomial of the first kind.
chebyu -- Chebyshev polynomial of the second kind.
chebyc -- Chebyshev polynomial of the first kind on :math:`[-2, 2]`.
chebys -- Chebyshev polynomial of the second kind on :math:`[-2, 2]`.
jacobi -- Jacobi polynomial.
laguerre -- Laguerre polynomial.
genlaguerre -- Generalized (associated) Laguerre polynomial.
hermite -- Physicist's Hermite polynomial.
hermitenorm -- Normalized (probabilist's) Hermite polynomial.
gegenbauer -- Gegenbauer (ultraspherical) polynomial.
sh_legendre -- Shifted Legendre polynomial.
sh_chebyt -- Shifted Chebyshev polynomial of the first kind.
sh_chebyu -- Shifted Chebyshev polynomial of the second kind.
sh_jacobi -- Shifted Jacobi polynomial.
.. warning::
Computing values of high-order polynomials (around ``order > 20``) using
polynomial coefficients is numerically unstable. To evaluate polynomial
values, the ``eval_*`` functions should be used instead.
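As a rough illustration (the exact size of the error depends on the platform),
compare the coefficient-based and the stable evaluation of a degree-60 Chebyshev
polynomial against the closed form ``T_n(x) = cos(n*arccos(x))``:

>>> import numpy as np
>>> from scipy.special import chebyt, eval_chebyt
>>> n, x = 60, 0.9
>>> exact = np.cos(n*np.arccos(x))
>>> abs(eval_chebyt(n, x) - exact) < abs(chebyt(n)(x) - exact)
True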
Hypergeometric functions
------------------------
.. autosummary::
:toctree: generated/
hyp2f1 -- Gauss hypergeometric function 2F1(a, b; c; z).
hyp1f1 -- Confluent hypergeometric function 1F1(a, b; x).
hyperu -- Confluent hypergeometric function U(a, b, x) of the second kind.
hyp0f1 -- Confluent hypergeometric limit function 0F1.
Parabolic cylinder functions
----------------------------
.. autosummary::
:toctree: generated/
pbdv -- Parabolic cylinder function D.
pbvv -- Parabolic cylinder function V.
pbwa -- Parabolic cylinder function W.
These are not universal functions:
.. autosummary::
:toctree: generated/
pbdv_seq -- Parabolic cylinder functions Dv(x) and derivatives.
pbvv_seq -- Parabolic cylinder functions Vv(x) and derivatives.
pbdn_seq -- Parabolic cylinder functions Dn(z) and derivatives.
Mathieu and related functions
-----------------------------
.. autosummary::
:toctree: generated/
mathieu_a -- Characteristic value of even Mathieu functions.
mathieu_b -- Characteristic value of odd Mathieu functions.
These are not universal functions:
.. autosummary::
:toctree: generated/
mathieu_even_coef -- Fourier coefficients for even Mathieu and modified Mathieu functions.
mathieu_odd_coef -- Fourier coefficients for odd Mathieu and modified Mathieu functions.
The following return both function and first derivative:
.. autosummary::
:toctree: generated/
mathieu_cem -- Even Mathieu function and its derivative.
mathieu_sem -- Odd Mathieu function and its derivative.
mathieu_modcem1 -- Even modified Mathieu function of the first kind and its derivative.
mathieu_modcem2 -- Even modified Mathieu function of the second kind and its derivative.
mathieu_modsem1 -- Odd modified Mathieu function of the first kind and its derivative.
mathieu_modsem2 -- Odd modified Mathieu function of the second kind and its derivative.
Spheroidal wave functions
-------------------------
.. autosummary::
:toctree: generated/
pro_ang1 -- Prolate spheroidal angular function of the first kind and its derivative.
pro_rad1 -- Prolate spheroidal radial function of the first kind and its derivative.
pro_rad2 -- Prolate spheroidal radial function of the second kind and its derivative.
obl_ang1 -- Oblate spheroidal angular function of the first kind and its derivative.
obl_rad1 -- Oblate spheroidal radial function of the first kind and its derivative.
obl_rad2 -- Oblate spheroidal radial function of the second kind and its derivative.
pro_cv -- Characteristic value of prolate spheroidal function.
obl_cv -- Characteristic value of oblate spheroidal function.
pro_cv_seq -- Characteristic values for prolate spheroidal wave functions.
obl_cv_seq -- Characteristic values for oblate spheroidal wave functions.
The following functions require a pre-computed characteristic value:
.. autosummary::
:toctree: generated/
pro_ang1_cv -- Prolate spheroidal angular function pro_ang1 for precomputed characteristic value.
pro_rad1_cv -- Prolate spheroidal radial function pro_rad1 for precomputed characteristic value.
pro_rad2_cv -- Prolate spheroidal radial function pro_rad2 for precomputed characteristic value.
obl_ang1_cv -- Oblate spheroidal angular function obl_ang1 for precomputed characteristic value.
obl_rad1_cv -- Oblate spheroidal radial function obl_rad1 for precomputed characteristic value.
obl_rad2_cv -- Oblate spheroidal radial function obl_rad2 for precomputed characteristic value.
Kelvin functions
----------------
.. autosummary::
:toctree: generated/
kelvin -- Kelvin functions as complex numbers.
kelvin_zeros -- Compute nt zeros of all Kelvin functions.
ber -- Kelvin function ber.
bei -- Kelvin function bei.
berp -- Derivative of the Kelvin function `ber`.
beip -- Derivative of the Kelvin function `bei`.
ker -- Kelvin function ker.
kei -- Kelvin function kei.
kerp -- Derivative of the Kelvin function ker.
keip -- Derivative of the Kelvin function kei.
These are not universal functions:
.. autosummary::
:toctree: generated/
ber_zeros -- Compute nt zeros of the Kelvin function ber(x).
bei_zeros -- Compute nt zeros of the Kelvin function bei(x).
berp_zeros -- Compute nt zeros of the Kelvin function ber'(x).
beip_zeros -- Compute nt zeros of the Kelvin function bei'(x).
ker_zeros -- Compute nt zeros of the Kelvin function ker(x).
kei_zeros -- Compute nt zeros of the Kelvin function kei(x).
kerp_zeros -- Compute nt zeros of the Kelvin function ker'(x).
keip_zeros -- Compute nt zeros of the Kelvin function kei'(x).
Combinatorics
-------------
.. autosummary::
:toctree: generated/
comb -- The number of combinations of N things taken k at a time.
perm -- Permutations of N things taken k at a time, i.e., k-permutations of N.
Lambert W and related functions
-------------------------------
.. autosummary::
:toctree: generated/
lambertw -- Lambert W function.
wrightomega -- Wright Omega function.
Other special functions
-----------------------
.. autosummary::
:toctree: generated/
agm -- Arithmetic-geometric mean.
bernoulli -- Bernoulli numbers B0..Bn (inclusive).
binom -- Binomial coefficient.
diric -- Periodic sinc function, also called the Dirichlet function.
euler -- Euler numbers E0..En (inclusive).
expn -- Exponential integral E_n.
exp1 -- Exponential integral E_1 of complex argument z.
expi -- Exponential integral Ei.
factorial -- The factorial of a number or array of numbers.
factorial2 -- Double factorial.
factorialk -- Multifactorial of n of order k, n(!!...!).
shichi -- Hyperbolic sine and cosine integrals.
sici -- Sine and cosine integrals.
softmax -- Softmax function.
log_softmax -- Logarithm of softmax function.
spence -- Spence's function, also known as the dilogarithm.
zeta -- Riemann zeta function.
zetac -- Riemann zeta function minus 1.
Convenience functions
---------------------
.. autosummary::
:toctree: generated/
cbrt -- Cube root of `x`.
exp10 -- 10**x.
exp2 -- 2**x.
radian -- Convert from degrees to radians.
cosdg -- Cosine of the angle `x` given in degrees.
sindg -- Sine of angle given in degrees.
tandg -- Tangent of angle x given in degrees.
cotdg -- Cotangent of the angle `x` given in degrees.
log1p -- Calculates log(1+x) for use when `x` is near zero.
expm1 -- exp(x) - 1 for use when `x` is near zero.
cosm1 -- cos(x) - 1 for use when `x` is near zero.
round -- Round to nearest integer.
xlogy -- Compute ``x*log(y)`` so that the result is 0 if ``x = 0``.
xlog1py -- Compute ``x*log1p(y)`` so that the result is 0 if ``x = 0``.
logsumexp -- Compute the log of the sum of exponentials of input elements.
exprel -- Relative error exponential, (exp(x)-1)/x, for use when `x` is near zero.
sinc -- Return the sinc function.
"""
from .sf_error import SpecialFunctionWarning, SpecialFunctionError
from . import _ufuncs
from ._ufuncs import *
from . import _basic
from ._basic import *
from ._logsumexp import logsumexp, softmax, log_softmax
from . import orthogonal
from .orthogonal import *
from .spfun_stats import multigammaln
from ._ellip_harm import (
ellip_harm,
ellip_harm_2,
ellip_normal
)
from ._lambertw import lambertw
from ._spherical_bessel import (
spherical_jn,
spherical_yn,
spherical_in,
spherical_kn
)
__all__ = _ufuncs.__all__ + _basic.__all__ + orthogonal.__all__ + [
'SpecialFunctionWarning',
'SpecialFunctionError',
'orthogonal', # Not public, but kept in __all__ for back-compat
'logsumexp',
'softmax',
'log_softmax',
'multigammaln',
'ellip_harm',
'ellip_harm_2',
'ellip_normal',
'lambertw',
'spherical_jn',
'spherical_yn',
'spherical_in',
'spherical_kn',
]
from scipy._lib._testutils import PytestTester
test = PytestTester(__name__)
del PytestTester

File diff suppressed because it is too large.


@@ -0,0 +1,207 @@
import numpy as np
from ._ufuncs import _ellip_harm
from ._ellip_harm_2 import _ellipsoid, _ellipsoid_norm
def ellip_harm(h2, k2, n, p, s, signm=1, signn=1):
r"""
Ellipsoidal harmonic functions E^p_n(l)
These are also known as Lame functions of the first kind, and are
solutions to the Lame equation:
.. math:: (s^2 - h^2)(s^2 - k^2)E''(s) + s(2s^2 - h^2 - k^2)E'(s) + (a - q s^2)E(s) = 0
where :math:`q = (n+1)n` and :math:`a` is the eigenvalue (not
returned) corresponding to the solutions.
Parameters
----------
h2 : float
``h**2``
k2 : float
``k**2``; should be larger than ``h**2``
    n : int
        Degree.
    p : int
        Order, which can range between 1 and 2n+1.
    s : float
        Coordinate.
signm : {1, -1}, optional
Sign of prefactor of functions. Can be +/-1. See Notes.
signn : {1, -1}, optional
Sign of prefactor of functions. Can be +/-1. See Notes.
Returns
-------
E : float
the harmonic :math:`E^p_n(s)`
See Also
--------
ellip_harm_2, ellip_normal
Notes
-----
The geometric interpretation of the ellipsoidal functions is
explained in [2]_, [3]_, [4]_. The `signm` and `signn` arguments control the
sign of prefactors for functions according to their type::
K : +1
L : signm
M : signn
N : signm*signn
.. versionadded:: 0.15.0
References
----------
.. [1] Digital Library of Mathematical Functions 29.12
https://dlmf.nist.gov/29.12
.. [2] Bardhan and Knepley, "Computational science and
re-discovery: open-source implementations of
ellipsoidal harmonics for problems in potential theory",
Comput. Sci. Disc. 5, 014006 (2012)
:doi:`10.1088/1749-4699/5/1/014006`.
    .. [3] David J. and Dechambre P., "Computation of Ellipsoidal
Gravity Field Harmonics for small solar system bodies"
pp. 30-36, 2000
.. [4] George Dassios, "Ellipsoidal Harmonics: Theory and Applications"
pp. 418, 2012
Examples
--------
    >>> import numpy as np
    >>> from scipy.special import ellip_harm
>>> w = ellip_harm(5,8,1,1,2.5)
>>> w
2.5
Check that the functions indeed are solutions to the Lame equation:
>>> from scipy.interpolate import UnivariateSpline
>>> def eigenvalue(f, df, ddf):
... r = ((s**2 - h**2)*(s**2 - k**2)*ddf + s*(2*s**2 - h**2 - k**2)*df - n*(n+1)*s**2*f)/f
... return -r.mean(), r.std()
>>> s = np.linspace(0.1, 10, 200)
>>> k, h, n, p = 8.0, 2.2, 3, 2
>>> E = ellip_harm(h**2, k**2, n, p, s)
>>> E_spl = UnivariateSpline(s, E)
>>> a, a_err = eigenvalue(E_spl(s), E_spl(s,1), E_spl(s,2))
>>> a, a_err
(583.44366156701483, 6.4580890640310646e-11)
"""
return _ellip_harm(h2, k2, n, p, s, signm, signn)
_ellip_harm_2_vec = np.vectorize(_ellipsoid, otypes='d')
def ellip_harm_2(h2, k2, n, p, s):
r"""
Ellipsoidal harmonic functions F^p_n(l)
These are also known as Lame functions of the second kind, and are
solutions to the Lame equation:
.. math:: (s^2 - h^2)(s^2 - k^2)F''(s) + s(2s^2 - h^2 - k^2)F'(s) + (a - q s^2)F(s) = 0
where :math:`q = (n+1)n` and :math:`a` is the eigenvalue (not
returned) corresponding to the solutions.
Parameters
----------
h2 : float
``h**2``
k2 : float
``k**2``; should be larger than ``h**2``
n : int
Degree.
p : int
        Order, which can range between 1 and 2n+1.
s : float
Coordinate
Returns
-------
F : float
The harmonic :math:`F^p_n(s)`
Notes
-----
Lame functions of the second kind are related to the functions of the first kind:
.. math::
F^p_n(s)=(2n + 1)E^p_n(s)\int_{0}^{1/s}\frac{du}{(E^p_n(1/u))^2\sqrt{(1-u^2k^2)(1-u^2h^2)}}
.. versionadded:: 0.15.0
See Also
--------
ellip_harm, ellip_normal
Examples
--------
>>> from scipy.special import ellip_harm_2
>>> w = ellip_harm_2(5,8,2,1,10)
>>> w
0.00108056853382
"""
with np.errstate(all='ignore'):
return _ellip_harm_2_vec(h2, k2, n, p, s)
def _ellip_normal_vec(h2, k2, n, p):
return _ellipsoid_norm(h2, k2, n, p)
_ellip_normal_vec = np.vectorize(_ellip_normal_vec, otypes='d')
def ellip_normal(h2, k2, n, p):
r"""
Ellipsoidal harmonic normalization constants gamma^p_n
The normalization constant is defined as
.. math::
        \gamma^p_n = 8\int_{0}^{h}dx\int_{h}^{k}dy\frac{(y^2-x^2)(E^p_n(y)E^p_n(x))^2}{\sqrt{(k^2-y^2)(y^2-h^2)(h^2-x^2)(k^2-x^2)}}
Parameters
----------
h2 : float
``h**2``
k2 : float
``k**2``; should be larger than ``h**2``
n : int
Degree.
p : int
        Order, which can range between 1 and 2n+1.
Returns
-------
gamma : float
The normalization constant :math:`\gamma^p_n`
See Also
--------
ellip_harm, ellip_harm_2
Notes
-----
.. versionadded:: 0.15.0
Examples
--------
>>> from scipy.special import ellip_normal
>>> w = ellip_normal(5,8,3,7)
>>> w
1723.38796997
"""
with np.errstate(all='ignore'):
return _ellip_normal_vec(h2, k2, n, p)

File diff suppressed because it is too large.


@@ -0,0 +1,105 @@
from ._ufuncs import _lambertw
def lambertw(z, k=0, tol=1e-8):
r"""
lambertw(z, k=0, tol=1e-8)
Lambert W function.
The Lambert W function `W(z)` is defined as the inverse function
of ``w * exp(w)``. In other words, the value of ``W(z)`` is
such that ``z = W(z) * exp(W(z))`` for any complex number
``z``.
The Lambert W function is a multivalued function with infinitely
many branches. Each branch gives a separate solution of the
equation ``z = w exp(w)``. Here, the branches are indexed by the
integer `k`.
Parameters
----------
z : array_like
Input argument.
k : int, optional
Branch index.
tol : float, optional
Evaluation tolerance.
Returns
-------
w : array
`w` will have the same shape as `z`.
Notes
-----
All branches are supported by `lambertw`:
* ``lambertw(z)`` gives the principal solution (branch 0)
* ``lambertw(z, k)`` gives the solution on branch `k`
The Lambert W function has two partially real branches: the
principal branch (`k = 0`) is real for real ``z > -1/e``, and the
``k = -1`` branch is real for ``-1/e < z < 0``. All branches except
``k = 0`` have a logarithmic singularity at ``z = 0``.
**Possible issues**
The evaluation can become inaccurate very close to the branch point
at ``-1/e``. In some corner cases, `lambertw` might currently
fail to converge, or can end up on the wrong branch.
**Algorithm**
Halley's iteration is used to invert ``w * exp(w)``, using a first-order
asymptotic approximation (O(log(w)) or `O(w)`) as the initial estimate.
The definition, implementation and choice of branches is based on [2]_.
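    A rough sketch of a single Halley update for ``f(w) = w*exp(w) - z``
    (for illustration only; this is not the actual implementation):

    >>> import numpy as np
    >>> def halley_step(w, z):
    ...     ew = np.exp(w)
    ...     f = w*ew - z
    ...     return w - f/(ew*(w + 1) - (w + 2)*f/(2*w + 2))
    ...
    >>> w = 1.0                        # crude initial estimate for z = 1
    >>> for _ in range(4):
    ...     w = halley_step(w, 1.0)
    ...
    >>> abs(w*np.exp(w) - 1.0) < 1e-12
    True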
See Also
--------
wrightomega : the Wright Omega function
References
----------
.. [1] https://en.wikipedia.org/wiki/Lambert_W_function
.. [2] Corless et al, "On the Lambert W function", Adv. Comp. Math. 5
(1996) 329-359.
https://cs.uwaterloo.ca/research/tr/1993/03/W.pdf
Examples
--------
The Lambert W function is the inverse of ``w exp(w)``:
    >>> import numpy as np
    >>> from scipy.special import lambertw
>>> w = lambertw(1)
>>> w
(0.56714329040978384+0j)
>>> w * np.exp(w)
(1.0+0j)
Any branch gives a valid inverse:
>>> w = lambertw(1, k=3)
>>> w
(-2.8535817554090377+17.113535539412148j)
>>> w*np.exp(w)
(1.0000000000000002+1.609823385706477e-15j)
**Applications to equation-solving**
The Lambert W function may be used to solve various kinds of
equations, such as finding the value of the infinite power
tower :math:`z^{z^{z^{\ldots}}}`:
>>> def tower(z, n):
... if n == 0:
... return z
... return z ** tower(z, n-1)
...
>>> tower(0.5, 100)
0.641185744504986
>>> -lambertw(-np.log(0.5)) / np.log(0.5)
(0.64118574450498589+0j)
"""
return _lambertw(z, k, tol)


@@ -0,0 +1,283 @@
import numpy as np
from scipy._lib._util import _asarray_validated
__all__ = ["logsumexp", "softmax", "log_softmax"]
def logsumexp(a, axis=None, b=None, keepdims=False, return_sign=False):
"""Compute the log of the sum of exponentials of input elements.
Parameters
----------
a : array_like
Input array.
axis : None or int or tuple of ints, optional
Axis or axes over which the sum is taken. By default `axis` is None,
and all elements are summed.
.. versionadded:: 0.11.0
keepdims : bool, optional
If this is set to True, the axes which are reduced are left in the
result as dimensions with size one. With this option, the result
will broadcast correctly against the original array.
.. versionadded:: 0.15.0
b : array-like, optional
        Scaling factor for ``exp(a)``; must be of the same shape as `a` or
        broadcastable to `a`. These values may be negative in order to
        implement subtraction.
.. versionadded:: 0.12.0
return_sign : bool, optional
If this is set to True, the result will be a pair containing sign
information; if False, results that are negative will be returned
as NaN. Default is False (no sign information).
.. versionadded:: 0.16.0
Returns
-------
res : ndarray
The result, ``np.log(np.sum(np.exp(a)))`` calculated in a numerically
more stable way. If `b` is given then ``np.log(np.sum(b*np.exp(a)))``
is returned.
sgn : ndarray
        If `return_sign` is True, this will be an array of floating-point
        numbers matching `res`, with values +1, 0, or -1 depending on the
        sign of the result. If False, only one result is returned.
See Also
--------
numpy.logaddexp, numpy.logaddexp2
Notes
-----
NumPy has a logaddexp function which is very similar to `logsumexp`, but
only handles two arguments. `logaddexp.reduce` is similar to this
function, but may be less stable.
Examples
--------
    >>> import numpy as np
    >>> from scipy.special import logsumexp
>>> a = np.arange(10)
>>> np.log(np.sum(np.exp(a)))
9.4586297444267107
>>> logsumexp(a)
9.4586297444267107
With weights
>>> a = np.arange(10)
>>> b = np.arange(10, 0, -1)
>>> logsumexp(a, b=b)
9.9170178533034665
>>> np.log(np.sum(b*np.exp(a)))
9.9170178533034647
Returning a sign flag
>>> logsumexp([1,2],b=[1,-1],return_sign=True)
(1.5413248546129181, -1.0)
Notice that `logsumexp` does not directly support masked arrays. To use it
on a masked array, convert the mask into zero weights:
>>> a = np.ma.array([np.log(2), 2, np.log(3)],
... mask=[False, True, False])
>>> b = (~a.mask).astype(int)
>>> logsumexp(a.data, b=b), np.log(5)
1.6094379124341005, 1.6094379124341005
"""
a = _asarray_validated(a, check_finite=False)
if b is not None:
a, b = np.broadcast_arrays(a, b)
if np.any(b == 0):
a = a + 0. # promote to at least float
a[b == 0] = -np.inf
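    # Shift by the (per-axis) maximum so that exp cannot overflow; the shift is
    # added back onto the result after the log is taken.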
a_max = np.amax(a, axis=axis, keepdims=True)
if a_max.ndim > 0:
a_max[~np.isfinite(a_max)] = 0
elif not np.isfinite(a_max):
a_max = 0
if b is not None:
b = np.asarray(b)
tmp = b * np.exp(a - a_max)
else:
tmp = np.exp(a - a_max)
# suppress warnings about log of zero
with np.errstate(divide='ignore'):
s = np.sum(tmp, axis=axis, keepdims=keepdims)
if return_sign:
sgn = np.sign(s)
s *= sgn # /= makes more sense but we need zero -> zero
out = np.log(s)
if not keepdims:
a_max = np.squeeze(a_max, axis=axis)
out += a_max
if return_sign:
return out, sgn
else:
return out
def softmax(x, axis=None):
r"""
Softmax function
The softmax function transforms each element of a collection by
computing the exponential of each element divided by the sum of the
exponentials of all the elements. That is, if `x` is a one-dimensional
numpy array::
softmax(x) = np.exp(x)/sum(np.exp(x))
Parameters
----------
x : array_like
Input array.
axis : int or tuple of ints, optional
Axis to compute values along. Default is None and softmax will be
computed over the entire array `x`.
Returns
-------
s : ndarray
An array the same shape as `x`. The result will sum to 1 along the
specified axis.
Notes
-----
The formula for the softmax function :math:`\sigma(x)` for a vector
:math:`x = \{x_0, x_1, ..., x_{n-1}\}` is
.. math:: \sigma(x)_j = \frac{e^{x_j}}{\sum_k e^{x_k}}
The `softmax` function is the gradient of `logsumexp`.
.. versionadded:: 1.2.0
Examples
--------
    >>> import numpy as np
    >>> from scipy.special import softmax
>>> np.set_printoptions(precision=5)
>>> x = np.array([[1, 0.5, 0.2, 3],
... [1, -1, 7, 3],
... [2, 12, 13, 3]])
...
Compute the softmax transformation over the entire array.
>>> m = softmax(x)
>>> m
array([[ 4.48309e-06, 2.71913e-06, 2.01438e-06, 3.31258e-05],
[ 4.48309e-06, 6.06720e-07, 1.80861e-03, 3.31258e-05],
[ 1.21863e-05, 2.68421e-01, 7.29644e-01, 3.31258e-05]])
>>> m.sum()
1.0000000000000002
Compute the softmax transformation along the first axis (i.e., the
columns).
>>> m = softmax(x, axis=0)
>>> m
array([[ 2.11942e-01, 1.01300e-05, 2.75394e-06, 3.33333e-01],
[ 2.11942e-01, 2.26030e-06, 2.47262e-03, 3.33333e-01],
[ 5.76117e-01, 9.99988e-01, 9.97525e-01, 3.33333e-01]])
>>> m.sum(axis=0)
array([ 1., 1., 1., 1.])
Compute the softmax transformation along the second axis (i.e., the rows).
>>> m = softmax(x, axis=1)
>>> m
array([[ 1.05877e-01, 6.42177e-02, 4.75736e-02, 7.82332e-01],
[ 2.42746e-03, 3.28521e-04, 9.79307e-01, 1.79366e-02],
[ 1.22094e-05, 2.68929e-01, 7.31025e-01, 3.31885e-05]])
>>> m.sum(axis=1)
array([ 1., 1., 1.])
"""
# compute in log space for numerical stability
return np.exp(x - logsumexp(x, axis=axis, keepdims=True))
def log_softmax(x, axis=None):
r"""
Logarithm of softmax function::
log_softmax(x) = log(softmax(x))
Parameters
----------
x : array_like
Input array.
axis : int or tuple of ints, optional
Axis to compute values along. Default is None and softmax will be
computed over the entire array `x`.
Returns
-------
s : ndarray or scalar
An array with the same shape as `x`. Exponential of the result will
sum to 1 along the specified axis. If `x` is a scalar, a scalar is
returned.
Notes
-----
`log_softmax` is more accurate than ``np.log(softmax(x))`` with inputs that
make `softmax` saturate (see examples below).
.. versionadded:: 1.5.0
Examples
--------
    >>> import numpy as np
    >>> from scipy.special import log_softmax
>>> from scipy.special import softmax
>>> np.set_printoptions(precision=5)
>>> x = np.array([1000.0, 1.0])
>>> y = log_softmax(x)
>>> y
array([ 0., -999.])
>>> with np.errstate(divide='ignore'):
... y = np.log(softmax(x))
...
>>> y
array([ 0., -inf])
"""
x = _asarray_validated(x, check_finite=False)
x_max = np.amax(x, axis=axis, keepdims=True)
if x_max.ndim > 0:
x_max[~np.isfinite(x_max)] = 0
elif not np.isfinite(x_max):
x_max = 0
tmp = x - x_max
exp_tmp = np.exp(tmp)
# suppress warnings about log of zero
with np.errstate(divide='ignore'):
s = np.sum(exp_tmp, axis=axis, keepdims=True)
out = np.log(s)
out = tmp - out
return out


@@ -0,0 +1,454 @@
import os
import sys
import time
import numpy as np
from numpy.testing import assert_
import pytest
from scipy.special._testutils import assert_func_equal
try:
import mpmath # type: ignore[import]
except ImportError:
pass
# ------------------------------------------------------------------------------
# Machinery for systematic tests with mpmath
# ------------------------------------------------------------------------------
class Arg(object):
"""Generate a set of numbers on the real axis, concentrating on
'interesting' regions and covering all orders of magnitude.
"""
def __init__(self, a=-np.inf, b=np.inf, inclusive_a=True, inclusive_b=True):
if a > b:
raise ValueError("a should be less than or equal to b")
if a == -np.inf:
a = -0.5*np.finfo(float).max
if b == np.inf:
b = 0.5*np.finfo(float).max
self.a, self.b = a, b
self.inclusive_a, self.inclusive_b = inclusive_a, inclusive_b
def _positive_values(self, a, b, n):
if a < 0:
raise ValueError("a should be positive")
        # Try to put half of the points into a linspace between a and
        # 10, and the other half into a logspace.
if n % 2 == 0:
nlogpts = n//2
nlinpts = nlogpts
else:
nlogpts = n//2
nlinpts = nlogpts + 1
if a >= 10:
# Outside of linspace range; just return a logspace.
pts = np.logspace(np.log10(a), np.log10(b), n)
elif a > 0 and b < 10:
# Outside of logspace range; just return a linspace
pts = np.linspace(a, b, n)
elif a > 0:
# Linspace between a and 10 and a logspace between 10 and
# b.
linpts = np.linspace(a, 10, nlinpts, endpoint=False)
logpts = np.logspace(1, np.log10(b), nlogpts)
pts = np.hstack((linpts, logpts))
elif a == 0 and b <= 10:
# Linspace between 0 and b and a logspace between 0 and
# the smallest positive point of the linspace
linpts = np.linspace(0, b, nlinpts)
if linpts.size > 1:
right = np.log10(linpts[1])
else:
right = -30
logpts = np.logspace(-30, right, nlogpts, endpoint=False)
pts = np.hstack((logpts, linpts))
else:
# Linspace between 0 and 10, logspace between 0 and the
# smallest positive point of the linspace, and a logspace
# between 10 and b.
if nlogpts % 2 == 0:
nlogpts1 = nlogpts//2
nlogpts2 = nlogpts1
else:
nlogpts1 = nlogpts//2
nlogpts2 = nlogpts1 + 1
linpts = np.linspace(0, 10, nlinpts, endpoint=False)
if linpts.size > 1:
right = np.log10(linpts[1])
else:
right = -30
logpts1 = np.logspace(-30, right, nlogpts1, endpoint=False)
logpts2 = np.logspace(1, np.log10(b), nlogpts2)
pts = np.hstack((logpts1, linpts, logpts2))
return np.sort(pts)
def values(self, n):
"""Return an array containing n numbers."""
a, b = self.a, self.b
if a == b:
return np.zeros(n)
if not self.inclusive_a:
n += 1
if not self.inclusive_b:
n += 1
if n % 2 == 0:
n1 = n//2
n2 = n1
else:
n1 = n//2
n2 = n1 + 1
if a >= 0:
pospts = self._positive_values(a, b, n)
negpts = []
elif b <= 0:
pospts = []
negpts = -self._positive_values(-b, -a, n)
else:
pospts = self._positive_values(0, b, n1)
negpts = -self._positive_values(0, -a, n2 + 1)
# Don't want to get zero twice
negpts = negpts[1:]
pts = np.hstack((negpts[::-1], pospts))
if not self.inclusive_a:
pts = pts[1:]
if not self.inclusive_b:
pts = pts[:-1]
return pts
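# Illustrative example: Arg(0, 1e10).values(10) mixes a linear grid on [0, 10) with
# logarithmic grids that reach down to ~1e-30 and up to the right endpoint, so that
# all orders of magnitude get exercised.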
class FixedArg(object):
def __init__(self, values):
self._values = np.asarray(values)
def values(self, n):
return self._values
class ComplexArg(object):
def __init__(self, a=complex(-np.inf, -np.inf), b=complex(np.inf, np.inf)):
self.real = Arg(a.real, b.real)
self.imag = Arg(a.imag, b.imag)
def values(self, n):
m = int(np.floor(np.sqrt(n)))
x = self.real.values(m)
y = self.imag.values(m + 1)
return (x[:,None] + 1j*y[None,:]).ravel()
class IntArg(object):
def __init__(self, a=-1000, b=1000):
self.a = a
self.b = b
def values(self, n):
v1 = Arg(self.a, self.b).values(max(1 + n//2, n-5)).astype(int)
v2 = np.arange(-5, 5)
v = np.unique(np.r_[v1, v2])
v = v[(v >= self.a) & (v < self.b)]
return v
def get_args(argspec, n):
if isinstance(argspec, np.ndarray):
args = argspec.copy()
else:
nargs = len(argspec)
ms = np.asarray([1.5 if isinstance(spec, ComplexArg) else 1.0 for spec in argspec])
ms = (n**(ms/sum(ms))).astype(int) + 1
args = [spec.values(m) for spec, m in zip(argspec, ms)]
args = np.array(np.broadcast_arrays(*np.ix_(*args))).reshape(nargs, -1).T
return args
class MpmathData(object):
def __init__(self, scipy_func, mpmath_func, arg_spec, name=None,
dps=None, prec=None, n=None, rtol=1e-7, atol=1e-300,
ignore_inf_sign=False, distinguish_nan_and_inf=True,
nan_ok=True, param_filter=None):
# mpmath tests are really slow (see gh-6989). Use a small number of
# points by default, increase back to 5000 (old default) if XSLOW is
# set
if n is None:
try:
is_xslow = int(os.environ.get('SCIPY_XSLOW', '0'))
except ValueError:
is_xslow = False
n = 5000 if is_xslow else 500
self.scipy_func = scipy_func
self.mpmath_func = mpmath_func
self.arg_spec = arg_spec
self.dps = dps
self.prec = prec
self.n = n
self.rtol = rtol
self.atol = atol
self.ignore_inf_sign = ignore_inf_sign
self.nan_ok = nan_ok
if isinstance(self.arg_spec, np.ndarray):
self.is_complex = np.issubdtype(self.arg_spec.dtype, np.complexfloating)
else:
self.is_complex = any([isinstance(arg, ComplexArg) for arg in self.arg_spec])
self.ignore_inf_sign = ignore_inf_sign
self.distinguish_nan_and_inf = distinguish_nan_and_inf
if not name or name == '<lambda>':
name = getattr(scipy_func, '__name__', None)
if not name or name == '<lambda>':
name = getattr(mpmath_func, '__name__', None)
self.name = name
self.param_filter = param_filter
def check(self):
np.random.seed(1234)
# Generate values for the arguments
argarr = get_args(self.arg_spec, self.n)
# Check
old_dps, old_prec = mpmath.mp.dps, mpmath.mp.prec
try:
if self.dps is not None:
dps_list = [self.dps]
else:
dps_list = [20]
if self.prec is not None:
mpmath.mp.prec = self.prec
# Proper casting of mpmath input and output types. Using
# native mpmath types as inputs gives improved precision
# in some cases.
if np.issubdtype(argarr.dtype, np.complexfloating):
pytype = mpc2complex
def mptype(x):
return mpmath.mpc(complex(x))
else:
def mptype(x):
return mpmath.mpf(float(x))
def pytype(x):
if abs(x.imag) > 1e-16*(1 + abs(x.real)):
return np.nan
else:
return mpf2float(x.real)
# Try out different dps until one (or none) works
for j, dps in enumerate(dps_list):
mpmath.mp.dps = dps
try:
assert_func_equal(self.scipy_func,
lambda *a: pytype(self.mpmath_func(*map(mptype, a))),
argarr,
vectorized=False,
rtol=self.rtol, atol=self.atol,
ignore_inf_sign=self.ignore_inf_sign,
distinguish_nan_and_inf=self.distinguish_nan_and_inf,
nan_ok=self.nan_ok,
param_filter=self.param_filter)
break
except AssertionError:
if j >= len(dps_list)-1:
# reraise the Exception
tp, value, tb = sys.exc_info()
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
finally:
mpmath.mp.dps, mpmath.mp.prec = old_dps, old_prec
def __repr__(self):
if self.is_complex:
return "<MpmathData: %s (complex)>" % (self.name,)
else:
return "<MpmathData: %s>" % (self.name,)
def assert_mpmath_equal(*a, **kw):
d = MpmathData(*a, **kw)
d.check()
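# Illustrative usage (hypothetical call): compare scipy's erf with mpmath's over the
# real line using 200 sample points:
#     assert_mpmath_equal(scipy.special.erf, mpmath.erf, [Arg()], n=200)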
def nonfunctional_tooslow(func):
return pytest.mark.skip(reason=" Test not yet functional (too slow), needs more work.")(func)
# ------------------------------------------------------------------------------
# Tools for dealing with mpmath quirks
# ------------------------------------------------------------------------------
def mpf2float(x):
"""
Convert an mpf to the nearest floating point number. Just using
float directly doesn't work because of results like this:
with mp.workdps(50):
float(mpf("0.99999999999999999")) = 0.9999999999999999
"""
return float(mpmath.nstr(x, 17, min_fixed=0, max_fixed=0))
def mpc2complex(x):
return complex(mpf2float(x.real), mpf2float(x.imag))
def trace_args(func):
def tofloat(x):
if isinstance(x, mpmath.mpc):
return complex(x)
else:
return float(x)
def wrap(*a, **kw):
sys.stderr.write("%r: " % (tuple(map(tofloat, a)),))
sys.stderr.flush()
try:
r = func(*a, **kw)
sys.stderr.write("-> %r" % r)
finally:
sys.stderr.write("\n")
sys.stderr.flush()
return r
return wrap
try:
import posix
import signal
POSIX = ('setitimer' in dir(signal))
except ImportError:
POSIX = False
class TimeoutError(Exception):
pass
def time_limited(timeout=0.5, return_val=np.nan, use_sigalrm=True):
"""
Decorator for setting a timeout for pure-Python functions.
If the function does not return within `timeout` seconds, the
value `return_val` is returned instead.
On POSIX this uses SIGALRM by default. On non-POSIX, settrace is
used. Do not use this with threads: the SIGALRM implementation
does probably not work well. The settrace implementation only
traces the current thread.
The settrace implementation slows down execution speed. Slowdown
by a factor around 10 is probably typical.
"""
if POSIX and use_sigalrm:
def sigalrm_handler(signum, frame):
raise TimeoutError()
def deco(func):
def wrap(*a, **kw):
old_handler = signal.signal(signal.SIGALRM, sigalrm_handler)
signal.setitimer(signal.ITIMER_REAL, timeout)
try:
return func(*a, **kw)
except TimeoutError:
return return_val
finally:
signal.setitimer(signal.ITIMER_REAL, 0)
signal.signal(signal.SIGALRM, old_handler)
return wrap
else:
def deco(func):
def wrap(*a, **kw):
start_time = time.time()
def trace(frame, event, arg):
if time.time() - start_time > timeout:
raise TimeoutError()
return trace
sys.settrace(trace)
try:
return func(*a, **kw)
except TimeoutError:
sys.settrace(None)
return return_val
finally:
sys.settrace(None)
return wrap
return deco
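# Illustrative usage (hypothetical): wrap a slow pure-Python function so that it gives
# up after 0.5 seconds and reports nan instead:
#     slow_f = time_limited(timeout=0.5, return_val=np.nan)(some_slow_function)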
def exception_to_nan(func):
"""Decorate function to return nan if it raises an exception"""
def wrap(*a, **kw):
try:
return func(*a, **kw)
except Exception:
return np.nan
return wrap
def inf_to_nan(func):
"""Decorate function to return nan if it returns inf"""
def wrap(*a, **kw):
v = func(*a, **kw)
if not np.isfinite(v):
return np.nan
return v
return wrap
def mp_assert_allclose(res, std, atol=0, rtol=1e-17):
"""
Compare lists of mpmath.mpf's or mpmath.mpc's directly so that it
can be done to higher precision than double.
"""
try:
len(res)
except TypeError:
res = list(res)
n = len(std)
if len(res) != n:
raise AssertionError("Lengths of inputs not equal.")
failures = []
for k in range(n):
try:
assert_(mpmath.fabs(res[k] - std[k]) <= atol + rtol*mpmath.fabs(std[k]))
except AssertionError:
failures.append(k)
ndigits = int(abs(np.log10(rtol)))
msg = [""]
msg.append("Bad results ({} out of {}) for the following points:"
.format(len(failures), n))
for k in failures:
resrep = mpmath.nstr(res[k], ndigits, min_fixed=0, max_fixed=0)
stdrep = mpmath.nstr(std[k], ndigits, min_fixed=0, max_fixed=0)
if std[k] == 0:
rdiff = "inf"
else:
rdiff = mpmath.fabs((res[k] - std[k])/std[k])
rdiff = mpmath.nstr(rdiff, 3)
msg.append("{}: {} != {} (rdiff {})".format(k, resrep, stdrep, rdiff))
if failures:
assert_(False, "\n".join(msg))


@@ -0,0 +1,59 @@
"""Precompute the polynomials for the asymptotic expansion of the
generalized exponential integral.
Sources
-------
[1] NIST, Digital Library of Mathematical Functions,
https://dlmf.nist.gov/8.20#ii
"""
import os
from numpy.testing import suppress_warnings
try:
# Can remove when sympy #11255 is resolved; see
# https://github.com/sympy/sympy/issues/11255
with suppress_warnings() as sup:
sup.filter(DeprecationWarning, "inspect.getargspec.. is deprecated")
import sympy # type: ignore[import]
from sympy import Poly
x = sympy.symbols('x')
except ImportError:
pass
def generate_A(K):
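    # A_0(x) = 1 and A_{k+1}(x) = (1 - 2*k*x)*A_k(x) + x*(x + 1)*A_k'(x); see DLMF 8.20(ii).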
A = [Poly(1, x)]
for k in range(K):
A.append(Poly(1 - 2*k*x, x)*A[k] + Poly(x*(x + 1))*A[k].diff())
return A
WARNING = """\
/* This file was automatically generated by _precompute/expn_asy.py.
* Do not edit it manually!
*/
"""
def main():
print(__doc__)
fn = os.path.join('..', 'cephes', 'expn.h')
K = 12
A = generate_A(K)
with open(fn + '.new', 'w') as f:
f.write(WARNING)
f.write("#define nA {}\n".format(len(A)))
for k, Ak in enumerate(A):
tmp = ', '.join([str(x.evalf(18)) for x in Ak.coeffs()])
f.write("static const double A{}[] = {{{}}};\n".format(k, tmp))
tmp = ", ".join(["A{}".format(k) for k in range(K + 1)])
f.write("static const double *A[] = {{{}}};\n".format(tmp))
tmp = ", ".join([str(Ak.degree()) for Ak in A])
f.write("static const int Adegs[] = {{{}}};\n".format(tmp))
os.rename(fn + '.new', fn)
if __name__ == "__main__":
main()


@@ -0,0 +1,115 @@
"""
Precompute coefficients of Temme's asymptotic expansion for gammainc.
This takes about 8 hours to run on a 2.3 GHz Macbook Pro with 4GB ram.
Sources:
[1] NIST, "Digital Library of Mathematical Functions",
https://dlmf.nist.gov/
"""
import os
from scipy.special._precompute.utils import lagrange_inversion
try:
import mpmath as mp # type: ignore[import]
except ImportError:
pass
def compute_a(n):
"""a_k from DLMF 5.11.6"""
a = [mp.sqrt(2)/2]
for k in range(1, n):
ak = a[-1]/k
for j in range(1, len(a)):
ak -= a[j]*a[-j]/(j + 1)
ak /= a[0]*(1 + mp.mpf(1)/(k + 1))
a.append(ak)
return a
def compute_g(n):
"""g_k from DLMF 5.11.3/5.11.5"""
a = compute_a(2*n)
g = [mp.sqrt(2)*mp.rf(0.5, k)*a[2*k] for k in range(n)]
return g
def eta(lam):
"""Function from DLMF 8.12.1 shifted to be centered at 0."""
if lam > 0:
return mp.sqrt(2*(lam - mp.log(lam + 1)))
elif lam < 0:
return -mp.sqrt(2*(lam - mp.log(lam + 1)))
else:
return 0
def compute_alpha(n):
"""alpha_n from DLMF 8.12.13"""
coeffs = mp.taylor(eta, 0, n - 1)
return lagrange_inversion(coeffs)
def compute_d(K, N):
"""d_{k, n} from DLMF 8.12.12"""
M = N + 2*K
d0 = [-mp.mpf(1)/3]
alpha = compute_alpha(M + 2)
for n in range(1, M):
d0.append((n + 2)*alpha[n+2])
d = [d0]
g = compute_g(K)
for k in range(1, K):
dk = []
for n in range(M - 2*k):
dk.append((-1)**k*g[k]*d[0][n] + (n + 2)*d[k-1][n+2])
d.append(dk)
for k in range(K):
d[k] = d[k][:N]
return d
header = \
r"""/* This file was automatically generated by _precomp/gammainc.py.
* Do not edit it manually!
*/
#ifndef IGAM_H
#define IGAM_H
#define K {}
#define N {}
static const double d[K][N] =
{{"""
footer = \
r"""
#endif
"""
def main():
print(__doc__)
K = 25
N = 25
with mp.workdps(50):
d = compute_d(K, N)
fn = os.path.join(os.path.dirname(__file__), '..', 'cephes', 'igam.h')
with open(fn + '.new', 'w') as f:
f.write(header.format(K, N))
for k, row in enumerate(d):
row = map(lambda x: mp.nstr(x, 17, min_fixed=0, max_fixed=0), row)
f.write('{')
f.write(", ".join(row))
if k < K - 1:
f.write('},\n')
else:
f.write('}};\n')
f.write(footer)
os.rename(fn + '.new', fn)
if __name__ == "__main__":
main()


@@ -0,0 +1,124 @@
"""Compute gammainc and gammaincc for large arguments and parameters
and save the values to data files for use in tests. We can't just
compare to mpmath's gammainc in test_mpmath.TestSystematic because it
would take too long.
Note that mpmath's gammainc is computed using hypercomb, but since it
doesn't allow the user to increase the maximum number of terms used in
the series it doesn't converge for many arguments. To get around this
we copy the mpmath implementation but use more terms.
This takes about 17 minutes to run on a 2.3 GHz Macbook Pro with 4GB
ram.
Sources:
[1] Fredrik Johansson and others. mpmath: a Python library for
arbitrary-precision floating-point arithmetic (version 0.19),
December 2013. http://mpmath.org/.
"""
import os
from time import time
import numpy as np
from numpy import pi
from scipy.special._mptestutils import mpf2float
try:
import mpmath as mp # type: ignore[import]
except ImportError:
pass
def gammainc(a, x, dps=50, maxterms=10**8):
"""Compute gammainc exactly like mpmath does but allow for more
summands in hypercomb. See
mpmath/functions/expintegrals.py#L134
in the mpmath github repository.
"""
with mp.workdps(dps):
z, a, b = mp.mpf(a), mp.mpf(x), mp.mpf(x)
G = [z]
negb = mp.fneg(b, exact=True)
def h(z):
T1 = [mp.exp(negb), b, z], [1, z, -1], [], G, [1], [1+z], b
return (T1,)
res = mp.hypercomb(h, [z], maxterms=maxterms)
return mpf2float(res)
def gammaincc(a, x, dps=50, maxterms=10**8):
"""Compute gammaincc exactly like mpmath does but allow for more
terms in hypercomb. See
mpmath/functions/expintegrals.py#L187
in the mpmath github repository.
"""
with mp.workdps(dps):
z, a = a, x
if mp.isint(z):
try:
# mpmath has a fast integer path
return mpf2float(mp.gammainc(z, a=a, regularized=True))
except mp.libmp.NoConvergence:
pass
nega = mp.fneg(a, exact=True)
G = [z]
# Use 2F0 series when possible; fall back to lower gamma representation
try:
def h(z):
r = z-1
return [([mp.exp(nega), a], [1, r], [], G, [1, -r], [], 1/nega)]
return mpf2float(mp.hypercomb(h, [z], force_series=True))
except mp.libmp.NoConvergence:
def h(z):
T1 = [], [1, z-1], [z], G, [], [], 0
T2 = [-mp.exp(nega), a, z], [1, z, -1], [], G, [1], [1+z], a
return T1, T2
return mpf2float(mp.hypercomb(h, [z], maxterms=maxterms))
def main():
t0 = time()
# It would be nice to have data for larger values, but either this
# requires prohibitively large precision (dps > 800) or mpmath has
# a bug. For example, gammainc(1e20, 1e20, dps=800) returns a
# value around 0.03, while the true value should be close to 0.5
# (DLMF 8.12.15).
print(__doc__)
pwd = os.path.dirname(__file__)
r = np.logspace(4, 14, 30)
ltheta = np.logspace(np.log10(pi/4), np.log10(np.arctan(0.6)), 30)
utheta = np.logspace(np.log10(pi/4), np.log10(np.arctan(1.4)), 30)
regimes = [(gammainc, ltheta), (gammaincc, utheta)]
for func, theta in regimes:
rg, thetag = np.meshgrid(r, theta)
a, x = rg*np.cos(thetag), rg*np.sin(thetag)
a, x = a.flatten(), x.flatten()
dataset = []
for i, (a0, x0) in enumerate(zip(a, x)):
if func == gammaincc:
# Exploit the fast integer path in gammaincc whenever
# possible so that the computation doesn't take too
# long
a0, x0 = np.floor(a0), np.floor(x0)
dataset.append((a0, x0, func(a0, x0)))
dataset = np.array(dataset)
filename = os.path.join(pwd, '..', 'tests', 'data', 'local',
'{}.txt'.format(func.__name__))
np.savetxt(filename, dataset)
print("{} minutes elapsed".format((time() - t0)/60))
if __name__ == "__main__":
main()
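
For context, a minimal sketch of how the generated table might be consumed in a test. The file name and column layout are assumptions based on the np.savetxt call above, not a documented interface:

    import numpy as np
    import scipy.special as sc

    # Hypothetical path: the script above writes tests/data/local/gammainc.txt
    data = np.loadtxt('gammainc.txt')
    a, x, ref = data.T  # assumed columns: a, x, reference value
    rel_err = np.abs(sc.gammainc(a, x) - ref) / np.abs(ref)
    print("max relative error:", rel_err.max())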


@ -0,0 +1,68 @@
"""Compute a Pade approximation for the principle branch of the
Lambert W function around 0 and compare it to various other
approximations.
"""
import numpy as np
try:
import mpmath # type: ignore[import]
import matplotlib.pyplot as plt # type: ignore[import]
except ImportError:
pass
def lambertw_pade():
derivs = [mpmath.diff(mpmath.lambertw, 0, n=n) for n in range(6)]
p, q = mpmath.pade(derivs, 3, 2)
return p, q
def main():
print(__doc__)
with mpmath.workdps(50):
p, q = lambertw_pade()
p, q = p[::-1], q[::-1]
print("p = {}".format(p))
print("q = {}".format(q))
x, y = np.linspace(-1.5, 1.5, 75), np.linspace(-1.5, 1.5, 75)
x, y = np.meshgrid(x, y)
z = x + 1j*y
lambertw_std = []
for z0 in z.flatten():
lambertw_std.append(complex(mpmath.lambertw(z0)))
lambertw_std = np.array(lambertw_std).reshape(x.shape)
fig, axes = plt.subplots(nrows=3, ncols=1)
# Compare Pade approximation to true result
p = np.array([float(p0) for p0 in p])
q = np.array([float(q0) for q0 in q])
pade_approx = np.polyval(p, z)/np.polyval(q, z)
pade_err = abs(pade_approx - lambertw_std)
axes[0].pcolormesh(x, y, pade_err)
# Compare two terms of asymptotic series to true result
asy_approx = np.log(z) - np.log(np.log(z))
asy_err = abs(asy_approx - lambertw_std)
axes[1].pcolormesh(x, y, asy_err)
# Compare two terms of the series around the branch point to the
# true result
p = np.sqrt(2*(np.exp(1)*z + 1))
series_approx = -1 + p - p**2/3
series_err = abs(series_approx - lambertw_std)
im = axes[2].pcolormesh(x, y, series_err)
fig.colorbar(im, ax=axes.ravel().tolist())
plt.show()
fig, ax = plt.subplots(nrows=1, ncols=1)
pade_better = pade_err < asy_err
im = ax.pcolormesh(x, y, pade_better)
t = np.linspace(-0.3, 0.3)
ax.plot(-2.5*abs(t) - 0.2, t, 'r')
fig.colorbar(im, ax=ax)
plt.show()
if __name__ == '__main__':
main()
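
As a rough illustration of how the computed coefficients would be used, here is a sketch (assuming mpmath and SciPy are available) that evaluates the [3/2] Pade approximant at one point and compares it with scipy.special.lambertw:

    import numpy as np
    import mpmath
    from scipy.special import lambertw

    with mpmath.workdps(50):
        derivs = [mpmath.diff(mpmath.lambertw, 0, n=n) for n in range(6)]
        p, q = mpmath.pade(derivs, 3, 2)
    # np.polyval expects the highest-degree coefficient first, hence the reversal
    p = np.array([float(c) for c in p[::-1]])
    q = np.array([float(c) for c in q[::-1]])
    z = 0.25
    print(np.polyval(p, z)/np.polyval(q, z), lambertw(z).real)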


@ -0,0 +1,43 @@
"""Precompute series coefficients for log-Gamma."""
try:
import mpmath # type: ignore[import]
except ImportError:
pass
def stirling_series(N):
with mpmath.workdps(100):
coeffs = [mpmath.bernoulli(2*n)/(2*n*(2*n - 1))
for n in range(1, N + 1)]
return coeffs
def taylor_series_at_1(N):
coeffs = []
with mpmath.workdps(100):
coeffs.append(-mpmath.euler)
for n in range(2, N + 1):
coeffs.append((-1)**n*mpmath.zeta(n)/n)
return coeffs
def main():
print(__doc__)
print()
stirling_coeffs = [mpmath.nstr(x, 20, min_fixed=0, max_fixed=0)
for x in stirling_series(8)[::-1]]
taylor_coeffs = [mpmath.nstr(x, 20, min_fixed=0, max_fixed=0)
for x in taylor_series_at_1(23)[::-1]]
print("Stirling series coefficients")
print("----------------------------")
print("\n".join(stirling_coeffs))
print()
print("Taylor series coefficients")
print("--------------------------")
print("\n".join(taylor_coeffs))
print()
if __name__ == '__main__':
main()
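
A quick sanity check one could run, as a sketch only; it assumes the coefficients are the Taylor coefficients of log Gamma(1 + x) about x = 0, which the use of -euler and (-1)**n*zeta(n)/n suggests:

    import mpmath

    coeffs = taylor_series_at_1(23)
    x = mpmath.mpf('0.1')
    series = sum(c*x**(n + 1) for n, c in enumerate(coeffs))
    # Should agree with mpmath.loggamma(1 + x) to many digits
    print(series, mpmath.loggamma(1 + x))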


@ -0,0 +1,11 @@
def configuration(parent_name='special', top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration('_precompute', parent_name, top_path)
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration().todict())


@ -0,0 +1,120 @@
"""
Convergence regions of the expansions used in ``struve.c``
Note that for v >> z both functions tend rapidly to 0,
and for v << -z, they tend to infinity.
The floating-point functions over/underflow in the lower left and right
corners of the figure.
Figure legend
=============
Red region
Power series is close (1e-12) to the mpmath result
Blue region
Asymptotic series is close to the mpmath result
Green region
Bessel series is close to the mpmath result
Dotted colored lines
Boundaries of the regions
Solid colored lines
Boundaries estimated by the routine itself. These will be used
for determining which of the results to use.
Black dashed line
The line z = 0.7*|v| + 12
"""
import numpy as np
import matplotlib.pyplot as plt # type: ignore[import]
import mpmath # type: ignore[import]
def err_metric(a, b, atol=1e-290):
m = abs(a - b) / (atol + abs(b))
m[np.isinf(b) & (a == b)] = 0
return m
def do_plot(is_h=True):
from scipy.special._ufuncs import (_struve_power_series,
_struve_asymp_large_z,
_struve_bessel_series)
vs = np.linspace(-1000, 1000, 91)
zs = np.sort(np.r_[1e-5, 1.0, np.linspace(0, 700, 91)[1:]])
rp = _struve_power_series(vs[:,None], zs[None,:], is_h)
ra = _struve_asymp_large_z(vs[:,None], zs[None,:], is_h)
rb = _struve_bessel_series(vs[:,None], zs[None,:], is_h)
mpmath.mp.dps = 50
if is_h:
sh = lambda v, z: float(mpmath.struveh(mpmath.mpf(v), mpmath.mpf(z)))
else:
sh = lambda v, z: float(mpmath.struvel(mpmath.mpf(v), mpmath.mpf(z)))
ex = np.vectorize(sh, otypes='d')(vs[:,None], zs[None,:])
err_a = err_metric(ra[0], ex) + 1e-300
err_p = err_metric(rp[0], ex) + 1e-300
err_b = err_metric(rb[0], ex) + 1e-300
err_est_a = abs(ra[1]/ra[0])
err_est_p = abs(rp[1]/rp[0])
err_est_b = abs(rb[1]/rb[0])
z_cutoff = 0.7*abs(vs) + 12
levels = [-1000, -12]
plt.cla()
# plt.hold was removed from Matplotlib; overplotting is now the default behaviour
plt.contourf(vs, zs, np.log10(err_p).T, levels=levels, colors=['r', 'r'], alpha=0.1)
plt.contourf(vs, zs, np.log10(err_a).T, levels=levels, colors=['b', 'b'], alpha=0.1)
plt.contourf(vs, zs, np.log10(err_b).T, levels=levels, colors=['g', 'g'], alpha=0.1)
plt.contour(vs, zs, np.log10(err_p).T, levels=levels, colors=['r', 'r'], linestyles=[':', ':'])
plt.contour(vs, zs, np.log10(err_a).T, levels=levels, colors=['b', 'b'], linestyles=[':', ':'])
plt.contour(vs, zs, np.log10(err_b).T, levels=levels, colors=['g', 'g'], linestyles=[':', ':'])
lp = plt.contour(vs, zs, np.log10(err_est_p).T, levels=levels, colors=['r', 'r'], linestyles=['-', '-'])
la = plt.contour(vs, zs, np.log10(err_est_a).T, levels=levels, colors=['b', 'b'], linestyles=['-', '-'])
lb = plt.contour(vs, zs, np.log10(err_est_b).T, levels=levels, colors=['g', 'g'], linestyles=['-', '-'])
plt.clabel(lp, fmt={-1000: 'P', -12: 'P'})
plt.clabel(la, fmt={-1000: 'A', -12: 'A'})
plt.clabel(lb, fmt={-1000: 'B', -12: 'B'})
plt.plot(vs, z_cutoff, 'k--')
plt.xlim(vs.min(), vs.max())
plt.ylim(zs.min(), zs.max())
plt.xlabel('v')
plt.ylabel('z')
def main():
plt.clf()
plt.subplot(121)
do_plot(True)
plt.title('Struve H')
plt.subplot(122)
do_plot(False)
plt.title('Struve L')
plt.savefig('struve_convergence.png')
plt.show()
if __name__ == "__main__":
main()


@ -0,0 +1,44 @@
from numpy.testing import suppress_warnings
try:
import mpmath as mp # type: ignore[import]
except ImportError:
pass
try:
# Can remove when sympy #11255 is resolved; see
# https://github.com/sympy/sympy/issues/11255
with suppress_warnings() as sup:
sup.filter(DeprecationWarning, "inspect.getargspec.. is deprecated")
from sympy.abc import x # type: ignore[import]
except ImportError:
pass
def lagrange_inversion(a):
"""Given a series
f(x) = a[1]*x + a[2]*x**2 + ... + a[n-1]*x**(n - 1),
use the Lagrange inversion formula to compute a series
g(x) = b[1]*x + b[2]*x**2 + ... + b[n-1]*x**(n - 1)
so that f(g(x)) = g(f(x)) = x mod x**n. We must have a[0] = 0, so
necessarily b[0] = 0 too.
The algorithm is naive and could be improved, but speed isn't an
issue here and it's easy to read.
"""
n = len(a)
f = sum(a[i]*x**i for i in range(len(a)))
h = (x/f).series(x, 0, n).removeO()
hpower = [h**0]
for k in range(n):
hpower.append((hpower[-1]*h).expand())
b = [mp.mpf(0)]
for k in range(1, n):
b.append(hpower[k].coeff(x, k - 1)/k)
b = [mp.mpf(coeff) for coeff in b]  # return concrete values, not a lazy map object
return b
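
A small, hypothetical check of lagrange_inversion (it assumes mpmath and sympy are installed, as required by the imports above): inverting the series of exp(x) - 1 should give the series of log(1 + x), whose coefficients are 0, 1, -1/2, 1/3, -1/4, ...:

    # Coefficients of exp(x) - 1 = x + x**2/2 + x**3/6 + x**4/24 + ...
    a = [0.0, 1.0, 1.0/2, 1.0/6, 1.0/24]
    print(lagrange_inversion(a))  # approximately [0, 1, -0.5, 0.333..., -0.25]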


@ -0,0 +1,41 @@
import numpy as np
try:
import mpmath # type: ignore[import]
except ImportError:
pass
def mpmath_wrightomega(x):
return mpmath.lambertw(mpmath.exp(x), mpmath.mpf('-0.5'))
def wrightomega_series_error(x):
series = x
desired = mpmath_wrightomega(x)
return abs(series - desired) / desired
def wrightomega_exp_error(x):
exponential_approx = mpmath.exp(x)
desired = mpmath_wrightomega(x)
return abs(exponential_approx - desired) / desired
def main():
desired_error = 2 * np.finfo(float).eps
print('Series Error')
for x in [1e5, 1e10, 1e15, 1e20]:
with mpmath.workdps(100):
error = wrightomega_series_error(x)
print(x, error, error < desired_error)
print('Exp error')
for x in [-10, -25, -50, -100, -200, -400, -700, -740]:
with mpmath.workdps(100):
error = wrightomega_exp_error(x)
print(x, error, error < desired_error)
if __name__ == '__main__':
main()


@ -0,0 +1,27 @@
"""Compute the Taylor series for zeta(x) - 1 around x = 0."""
try:
import mpmath # type: ignore[import]
except ImportError:
pass
def zetac_series(N):
coeffs = []
with mpmath.workdps(100):
coeffs.append(-1.5)
for n in range(1, N):
coeff = mpmath.diff(mpmath.zeta, 0, n)/mpmath.factorial(n)
coeffs.append(coeff)
return coeffs
def main():
print(__doc__)
coeffs = zetac_series(10)
coeffs = [mpmath.nstr(x, 20, min_fixed=0, max_fixed=0)
for x in coeffs]
print("\n".join(coeffs[::-1]))
if __name__ == '__main__':
main()
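
To illustrate what the coefficients represent, a short check (a sketch assuming mpmath is installed): partial sums of the series should reproduce zeta(x) - 1 for small x.

    import mpmath

    coeffs = zetac_series(10)
    x = mpmath.mpf('0.05')
    approx = sum(c*x**n for n, c in enumerate(coeffs))
    print(approx, mpmath.zeta(x) - 1)  # the two values should agree closely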


@ -0,0 +1,203 @@
from ._ufuncs import (_spherical_jn, _spherical_yn, _spherical_in,
_spherical_kn, _spherical_jn_d, _spherical_yn_d,
_spherical_in_d, _spherical_kn_d)
def spherical_jn(n, z, derivative=False):
r"""Spherical Bessel function of the first kind or its derivative.
Defined as [1]_,
.. math:: j_n(z) = \sqrt{\frac{\pi}{2z}} J_{n + 1/2}(z),
where :math:`J_n` is the Bessel function of the first kind.
Parameters
----------
n : int, array_like
Order of the Bessel function (n >= 0).
z : complex or float, array_like
Argument of the Bessel function.
derivative : bool, optional
If True, the value of the derivative (rather than the function
itself) is returned.
Returns
-------
jn : ndarray
Notes
-----
For real arguments greater than the order, the function is computed
using the ascending recurrence [2]_. For small real or complex
arguments, the definitional relation to the cylindrical Bessel function
of the first kind is used.
The derivative is computed using the relations [3]_,
.. math::
j_n'(z) = j_{n-1}(z) - \frac{n + 1}{z} j_n(z).
j_0'(z) = -j_1(z)
.. versionadded:: 0.18.0
References
----------
.. [1] https://dlmf.nist.gov/10.47.E3
.. [2] https://dlmf.nist.gov/10.51.E1
.. [3] https://dlmf.nist.gov/10.51.E2
"""
if derivative:
return _spherical_jn_d(n, z)
else:
return _spherical_jn(n, z)
def spherical_yn(n, z, derivative=False):
r"""Spherical Bessel function of the second kind or its derivative.
Defined as [1]_,
.. math:: y_n(z) = \sqrt{\frac{\pi}{2z}} Y_{n + 1/2}(z),
where :math:`Y_n` is the Bessel function of the second kind.
Parameters
----------
n : int, array_like
Order of the Bessel function (n >= 0).
z : complex or float, array_like
Argument of the Bessel function.
derivative : bool, optional
If True, the value of the derivative (rather than the function
itself) is returned.
Returns
-------
yn : ndarray
Notes
-----
For real arguments, the function is computed using the ascending
recurrence [2]_. For complex arguments, the definitional relation to
the cylindrical Bessel function of the second kind is used.
The derivative is computed using the relations [3]_,
.. math::
y_n' = y_{n-1} - \frac{n + 1}{z} y_n.
y_0' = -y_1
.. versionadded:: 0.18.0
References
----------
.. [1] https://dlmf.nist.gov/10.47.E4
.. [2] https://dlmf.nist.gov/10.51.E1
.. [3] https://dlmf.nist.gov/10.51.E2
"""
if derivative:
return _spherical_yn_d(n, z)
else:
return _spherical_yn(n, z)
def spherical_in(n, z, derivative=False):
r"""Modified spherical Bessel function of the first kind or its derivative.
Defined as [1]_,
.. math:: i_n(z) = \sqrt{\frac{\pi}{2z}} I_{n + 1/2}(z),
where :math:`I_n` is the modified Bessel function of the first kind.
Parameters
----------
n : int, array_like
Order of the Bessel function (n >= 0).
z : complex or float, array_like
Argument of the Bessel function.
derivative : bool, optional
If True, the value of the derivative (rather than the function
itself) is returned.
Returns
-------
in : ndarray
Notes
-----
The function is computed using its definitional relation to the
modified cylindrical Bessel function of the first kind.
The derivative is computed using the relations [2]_,
.. math::
i_n' = i_{n-1} - \frac{n + 1}{z} i_n.
i_0' = i_1
.. versionadded:: 0.18.0
References
----------
.. [1] https://dlmf.nist.gov/10.47.E7
.. [2] https://dlmf.nist.gov/10.51.E5
"""
if derivative:
return _spherical_in_d(n, z)
else:
return _spherical_in(n, z)
def spherical_kn(n, z, derivative=False):
r"""Modified spherical Bessel function of the second kind or its derivative.
Defined as [1]_,
.. math:: k_n(z) = \sqrt{\frac{\pi}{2z}} K_{n + 1/2}(z),
where :math:`K_n` is the modified Bessel function of the second kind.
Parameters
----------
n : int, array_like
Order of the Bessel function (n >= 0).
z : complex or float, array_like
Argument of the Bessel function.
derivative : bool, optional
If True, the value of the derivative (rather than the function
itself) is returned.
Returns
-------
kn : ndarray
Notes
-----
The function is computed using its definitional relation to the
modified cylindrical Bessel function of the second kind.
The derivative is computed using the relations [2]_,
.. math::
k_n' = -k_{n-1} - \frac{n + 1}{z} k_n.
k_0' = -k_1
.. versionadded:: 0.18.0
References
----------
.. [1] https://dlmf.nist.gov/10.47.E9
.. [2] https://dlmf.nist.gov/10.51.E5
"""
if derivative:
return _spherical_kn_d(n, z)
else:
return _spherical_kn(n, z)
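
The relations quoted in the docstrings above can be checked numerically; a minimal sketch using only public SciPy functions:

    import numpy as np
    from scipy.special import spherical_jn, jv

    n, z = 3, 2.5
    # Definitional relation j_n(z) = sqrt(pi/(2z)) * J_{n+1/2}(z)
    print(spherical_jn(n, z), np.sqrt(np.pi/(2*z))*jv(n + 0.5, z))
    # Derivative relation j_n'(z) = j_{n-1}(z) - (n+1)/z * j_n(z)
    print(spherical_jn(n, z, derivative=True),
          spherical_jn(n - 1, z) - (n + 1)/z*spherical_jn(n, z))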


@ -0,0 +1,5 @@
import numpy as np
def have_fenv() -> bool: ...
def random_double(size: int) -> np.float64: ...
def test_add_round(size: int, mode: str): ...


@ -0,0 +1,316 @@
import os
import functools
import operator
from distutils.version import LooseVersion
import numpy as np
from numpy.testing import assert_
import pytest
import scipy.special as sc
__all__ = ['with_special_errors', 'assert_func_equal', 'FuncData']
#------------------------------------------------------------------------------
# Check if a module is present to be used in tests
#------------------------------------------------------------------------------
class MissingModule(object):
def __init__(self, name):
self.name = name
def check_version(module, min_ver):
if type(module) == MissingModule:
return pytest.mark.skip(reason="{} is not installed".format(module.name))
return pytest.mark.skipif(LooseVersion(module.__version__) < LooseVersion(min_ver),
reason="{} version >= {} required".format(module.__name__, min_ver))
#------------------------------------------------------------------------------
# Enable convergence and loss of precision warnings -- turn off one by one
#------------------------------------------------------------------------------
def with_special_errors(func):
"""
Enable special function errors (such as underflow, overflow,
loss of precision, etc.)
"""
@functools.wraps(func)
def wrapper(*a, **kw):
with sc.errstate(all='raise'):
res = func(*a, **kw)
return res
return wrapper
#------------------------------------------------------------------------------
# Comparing function values at many data points at once, with helpful
# error reports
#------------------------------------------------------------------------------
def assert_func_equal(func, results, points, rtol=None, atol=None,
param_filter=None, knownfailure=None,
vectorized=True, dtype=None, nan_ok=False,
ignore_inf_sign=False, distinguish_nan_and_inf=True):
if hasattr(points, '__next__'):
# it's a generator
points = list(points)
points = np.asarray(points)
if points.ndim == 1:
points = points[:,None]
nparams = points.shape[1]
if hasattr(results, '__name__'):
# function
data = points
result_columns = None
result_func = results
else:
# dataset
data = np.c_[points, results]
result_columns = list(range(nparams, data.shape[1]))
result_func = None
fdata = FuncData(func, data, list(range(nparams)),
result_columns=result_columns, result_func=result_func,
rtol=rtol, atol=atol, param_filter=param_filter,
knownfailure=knownfailure, nan_ok=nan_ok, vectorized=vectorized,
ignore_inf_sign=ignore_inf_sign,
distinguish_nan_and_inf=distinguish_nan_and_inf)
fdata.check()
class FuncData(object):
"""
Data set for checking a special function.
Parameters
----------
func : function
Function to test
data : numpy array
columnar data to use for testing
param_columns : int or tuple of ints
Column indices in which the parameters to `func` lie.
Can be imaginary integers to indicate that the parameter
should be cast to complex.
result_columns : int or tuple of ints, optional
Column indices for expected results from `func`.
result_func : callable, optional
Function to call to obtain results.
rtol : float, optional
Required relative tolerance. Default is 5*eps.
atol : float, optional
Required absolute tolerance. Default is 5*tiny.
param_filter : function, or tuple of functions/Nones, optional
Filter functions to exclude some parameter ranges.
If omitted, no filtering is done.
knownfailure : str, optional
Known failure error message to raise when the test is run.
If omitted, no exception is raised.
nan_ok : bool, optional
If nan is always an accepted result.
vectorized : bool, optional
Whether all functions passed in are vectorized.
ignore_inf_sign : bool, optional
Whether to ignore signs of infinities.
(Doesn't matter for complex-valued functions.)
distinguish_nan_and_inf : bool, optional
If False, treat a nan in one result and an inf in the other as
equal. Also sets `ignore_inf_sign` to True.
"""
def __init__(self, func, data, param_columns, result_columns=None,
result_func=None, rtol=None, atol=None, param_filter=None,
knownfailure=None, dataname=None, nan_ok=False, vectorized=True,
ignore_inf_sign=False, distinguish_nan_and_inf=True):
self.func = func
self.data = data
self.dataname = dataname
if not hasattr(param_columns, '__len__'):
param_columns = (param_columns,)
self.param_columns = tuple(param_columns)
if result_columns is not None:
if not hasattr(result_columns, '__len__'):
result_columns = (result_columns,)
self.result_columns = tuple(result_columns)
if result_func is not None:
raise ValueError("Only result_func or result_columns should be provided")
elif result_func is not None:
self.result_columns = None
else:
raise ValueError("Either result_func or result_columns should be provided")
self.result_func = result_func
self.rtol = rtol
self.atol = atol
if not hasattr(param_filter, '__len__'):
param_filter = (param_filter,)
self.param_filter = param_filter
self.knownfailure = knownfailure
self.nan_ok = nan_ok
self.vectorized = vectorized
self.ignore_inf_sign = ignore_inf_sign
self.distinguish_nan_and_inf = distinguish_nan_and_inf
if not self.distinguish_nan_and_inf:
self.ignore_inf_sign = True
def get_tolerances(self, dtype):
if not np.issubdtype(dtype, np.inexact):
dtype = np.dtype(float)
info = np.finfo(dtype)
rtol, atol = self.rtol, self.atol
if rtol is None:
rtol = 5*info.eps
if atol is None:
atol = 5*info.tiny
return rtol, atol
def check(self, data=None, dtype=None, dtypes=None):
"""Check the special function against the data."""
__tracebackhide__ = operator.methodcaller(
'errisinstance', AssertionError
)
if self.knownfailure:
pytest.xfail(reason=self.knownfailure)
if data is None:
data = self.data
if dtype is None:
dtype = data.dtype
else:
data = data.astype(dtype)
rtol, atol = self.get_tolerances(dtype)
# Apply given filter functions
if self.param_filter:
param_mask = np.ones((data.shape[0],), np.bool_)
for j, filter in zip(self.param_columns, self.param_filter):
if filter:
param_mask &= list(filter(data[:,j]))
data = data[param_mask]
# Pick parameters from the correct columns
params = []
for idx, j in enumerate(self.param_columns):
if np.iscomplexobj(j):
j = int(j.imag)
params.append(data[:,j].astype(complex))
elif dtypes and idx < len(dtypes):
params.append(data[:, j].astype(dtypes[idx]))
else:
params.append(data[:,j])
# Helper for evaluating results
def eval_func_at_params(func, skip_mask=None):
if self.vectorized:
got = func(*params)
else:
got = []
for j in range(len(params[0])):
if skip_mask is not None and skip_mask[j]:
got.append(np.nan)
continue
got.append(func(*tuple([params[i][j] for i in range(len(params))])))
got = np.asarray(got)
if not isinstance(got, tuple):
got = (got,)
return got
# Evaluate function to be tested
got = eval_func_at_params(self.func)
# Grab the correct results
if self.result_columns is not None:
# Correct results passed in with the data
wanted = tuple([data[:,icol] for icol in self.result_columns])
else:
# Function producing correct results passed in
skip_mask = None
if self.nan_ok and len(got) == 1:
# Don't spend time evaluating what doesn't need to be evaluated
skip_mask = np.isnan(got[0])
wanted = eval_func_at_params(self.result_func, skip_mask=skip_mask)
# Check the validity of each output returned
assert_(len(got) == len(wanted))
for output_num, (x, y) in enumerate(zip(got, wanted)):
if np.issubdtype(x.dtype, np.complexfloating) or self.ignore_inf_sign:
pinf_x = np.isinf(x)
pinf_y = np.isinf(y)
minf_x = np.isinf(x)
minf_y = np.isinf(y)
else:
pinf_x = np.isposinf(x)
pinf_y = np.isposinf(y)
minf_x = np.isneginf(x)
minf_y = np.isneginf(y)
nan_x = np.isnan(x)
nan_y = np.isnan(y)
with np.errstate(all='ignore'):
abs_y = np.absolute(y)
abs_y[~np.isfinite(abs_y)] = 0
diff = np.absolute(x - y)
diff[~np.isfinite(diff)] = 0
rdiff = diff / np.absolute(y)
rdiff[~np.isfinite(rdiff)] = 0
tol_mask = (diff <= atol + rtol*abs_y)
pinf_mask = (pinf_x == pinf_y)
minf_mask = (minf_x == minf_y)
nan_mask = (nan_x == nan_y)
bad_j = ~(tol_mask & pinf_mask & minf_mask & nan_mask)
point_count = bad_j.size
if self.nan_ok:
bad_j &= ~nan_x
bad_j &= ~nan_y
point_count -= (nan_x | nan_y).sum()
if not self.distinguish_nan_and_inf and not self.nan_ok:
# If nan's are okay we've already covered all these cases
inf_x = np.isinf(x)
inf_y = np.isinf(y)
both_nonfinite = (inf_x & nan_y) | (nan_x & inf_y)
bad_j &= ~both_nonfinite
point_count -= both_nonfinite.sum()
if np.any(bad_j):
# Some bad results: inform what, where, and how bad
msg = [""]
msg.append("Max |adiff|: %g" % diff[bad_j].max())
msg.append("Max |rdiff|: %g" % rdiff[bad_j].max())
msg.append("Bad results (%d out of %d) for the following points (in output %d):"
% (np.sum(bad_j), point_count, output_num,))
for j in np.nonzero(bad_j)[0]:
j = int(j)
fmt = lambda x: "%30s" % np.array2string(x[j], precision=18)
a = " ".join(map(fmt, params))
b = " ".join(map(fmt, got))
c = " ".join(map(fmt, wanted))
d = fmt(rdiff)
msg.append("%s => %s != %s (rdiff %s)" % (a, b, c, d))
assert_(False, "\n".join(msg))
def __repr__(self):
"""Pretty-printing, esp. for Nose output"""
if np.any(list(map(np.iscomplexobj, self.param_columns))):
is_complex = " (complex)"
else:
is_complex = ""
if self.dataname:
return "<Data for %s%s: %s>" % (self.func.__name__, is_complex,
os.path.basename(self.dataname))
else:
return "<Data for %s%s>" % (self.func.__name__, is_complex)
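
For orientation, a minimal usage sketch of the assert_func_equal helper defined above; the test points and tolerance are arbitrary illustrations, not values used by the actual test suite:

    import scipy.special as sc
    from math import gamma as py_gamma

    points = [0.5, 1.5, 3.0, 7.5]
    # Compare sc.gamma against math.gamma point by point
    assert_func_equal(sc.gamma, py_gamma, points, rtol=1e-13, vectorized=False)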


@ -0,0 +1,497 @@
from typing import Any, Dict
import numpy as np
__all__ = [
'geterr',
'seterr',
'errstate',
'agm',
'airy',
'airye',
'bdtr',
'bdtrc',
'bdtri',
'bdtrik',
'bdtrin',
'bei',
'beip',
'ber',
'berp',
'besselpoly',
'beta',
'betainc',
'betaincinv',
'betaln',
'binom',
'boxcox',
'boxcox1p',
'btdtr',
'btdtri',
'btdtria',
'btdtrib',
'cbrt',
'chdtr',
'chdtrc',
'chdtri',
'chdtriv',
'chndtr',
'chndtridf',
'chndtrinc',
'chndtrix',
'cosdg',
'cosm1',
'cotdg',
'dawsn',
'ellipe',
'ellipeinc',
'ellipj',
'ellipk',
'ellipkinc',
'ellipkm1',
'entr',
'erf',
'erfc',
'erfcinv',
'erfcx',
'erfi',
'erfinv',
'eval_chebyc',
'eval_chebys',
'eval_chebyt',
'eval_chebyu',
'eval_gegenbauer',
'eval_genlaguerre',
'eval_hermite',
'eval_hermitenorm',
'eval_jacobi',
'eval_laguerre',
'eval_legendre',
'eval_sh_chebyt',
'eval_sh_chebyu',
'eval_sh_jacobi',
'eval_sh_legendre',
'exp1',
'exp10',
'exp2',
'expi',
'expit',
'expm1',
'expn',
'exprel',
'fdtr',
'fdtrc',
'fdtri',
'fdtridfd',
'fresnel',
'gamma',
'gammainc',
'gammaincc',
'gammainccinv',
'gammaincinv',
'gammaln',
'gammasgn',
'gdtr',
'gdtrc',
'gdtria',
'gdtrib',
'gdtrix',
'hankel1',
'hankel1e',
'hankel2',
'hankel2e',
'huber',
'hyp0f1',
'hyp1f1',
'hyp2f1',
'hyperu',
'i0',
'i0e',
'i1',
'i1e',
'inv_boxcox',
'inv_boxcox1p',
'it2i0k0',
'it2j0y0',
'it2struve0',
'itairy',
'iti0k0',
'itj0y0',
'itmodstruve0',
'itstruve0',
'iv',
'ive',
'j0',
'j1',
'jn',
'jv',
'jve',
'k0',
'k0e',
'k1',
'k1e',
'kei',
'keip',
'kelvin',
'ker',
'kerp',
'kl_div',
'kn',
'kolmogi',
'kolmogorov',
'kv',
'kve',
'log1p',
'log_ndtr',
'loggamma',
'logit',
'lpmv',
'mathieu_a',
'mathieu_b',
'mathieu_cem',
'mathieu_modcem1',
'mathieu_modcem2',
'mathieu_modsem1',
'mathieu_modsem2',
'mathieu_sem',
'modfresnelm',
'modfresnelp',
'modstruve',
'nbdtr',
'nbdtrc',
'nbdtri',
'nbdtrik',
'nbdtrin',
'ncfdtr',
'ncfdtri',
'ncfdtridfd',
'ncfdtridfn',
'ncfdtrinc',
'nctdtr',
'nctdtridf',
'nctdtrinc',
'nctdtrit',
'ndtr',
'ndtri',
'nrdtrimn',
'nrdtrisd',
'obl_ang1',
'obl_ang1_cv',
'obl_cv',
'obl_rad1',
'obl_rad1_cv',
'obl_rad2',
'obl_rad2_cv',
'owens_t',
'pbdv',
'pbvv',
'pbwa',
'pdtr',
'pdtrc',
'pdtri',
'pdtrik',
'poch',
'pro_ang1',
'pro_ang1_cv',
'pro_cv',
'pro_rad1',
'pro_rad1_cv',
'pro_rad2',
'pro_rad2_cv',
'pseudo_huber',
'psi',
'radian',
'rel_entr',
'rgamma',
'round',
'shichi',
'sici',
'sindg',
'smirnov',
'smirnovi',
'spence',
'sph_harm',
'stdtr',
'stdtridf',
'stdtrit',
'struve',
'tandg',
'tklmbda',
'voigt_profile',
'wofz',
'wrightomega',
'xlog1py',
'xlogy',
'y0',
'y1',
'yn',
'yv',
'yve',
'zetac'
]
def geterr() -> Dict[str, str]: ...
def seterr(**kwargs: str) -> Dict[str, str]: ...
class errstate:
def __init__(self, **kargs: str) -> None: ...
def __enter__(self) -> None: ...
def __exit__(
self,
exc_type: Any, # Unused
exc_value: Any, # Unused
traceback: Any, # Unused
) -> None: ...
_cospi: np.ufunc
_ellip_harm: np.ufunc
_factorial: np.ufunc
_igam_fac: np.ufunc
_kolmogc: np.ufunc
_kolmogci: np.ufunc
_kolmogp: np.ufunc
_lambertw: np.ufunc
_lanczos_sum_expg_scaled: np.ufunc
_lgam1p: np.ufunc
_log1pmx: np.ufunc
_riemann_zeta: np.ufunc
_sf_error_test_function: np.ufunc
_sinpi: np.ufunc
_smirnovc: np.ufunc
_smirnovci: np.ufunc
_smirnovp: np.ufunc
_spherical_in: np.ufunc
_spherical_in_d: np.ufunc
_spherical_jn: np.ufunc
_spherical_jn_d: np.ufunc
_spherical_kn: np.ufunc
_spherical_kn_d: np.ufunc
_spherical_yn: np.ufunc
_spherical_yn_d: np.ufunc
_struve_asymp_large_z: np.ufunc
_struve_bessel_series: np.ufunc
_struve_power_series: np.ufunc
_zeta: np.ufunc
agm: np.ufunc
airy: np.ufunc
airye: np.ufunc
bdtr: np.ufunc
bdtrc: np.ufunc
bdtri: np.ufunc
bdtrik: np.ufunc
bdtrin: np.ufunc
bei: np.ufunc
beip: np.ufunc
ber: np.ufunc
berp: np.ufunc
besselpoly: np.ufunc
beta: np.ufunc
betainc: np.ufunc
betaincinv: np.ufunc
betaln: np.ufunc
binom: np.ufunc
boxcox1p: np.ufunc
boxcox: np.ufunc
btdtr: np.ufunc
btdtri: np.ufunc
btdtria: np.ufunc
btdtrib: np.ufunc
cbrt: np.ufunc
chdtr: np.ufunc
chdtrc: np.ufunc
chdtri: np.ufunc
chdtriv: np.ufunc
chndtr: np.ufunc
chndtridf: np.ufunc
chndtrinc: np.ufunc
chndtrix: np.ufunc
cosdg: np.ufunc
cosm1: np.ufunc
cotdg: np.ufunc
dawsn: np.ufunc
ellipe: np.ufunc
ellipeinc: np.ufunc
ellipj: np.ufunc
ellipk: np.ufunc
ellipkinc: np.ufunc
ellipkm1: np.ufunc
entr: np.ufunc
erf: np.ufunc
erfc: np.ufunc
erfcinv: np.ufunc
erfcx: np.ufunc
erfi: np.ufunc
erfinv: np.ufunc
eval_chebyc: np.ufunc
eval_chebys: np.ufunc
eval_chebyt: np.ufunc
eval_chebyu: np.ufunc
eval_gegenbauer: np.ufunc
eval_genlaguerre: np.ufunc
eval_hermite: np.ufunc
eval_hermitenorm: np.ufunc
eval_jacobi: np.ufunc
eval_laguerre: np.ufunc
eval_legendre: np.ufunc
eval_sh_chebyt: np.ufunc
eval_sh_chebyu: np.ufunc
eval_sh_jacobi: np.ufunc
eval_sh_legendre: np.ufunc
exp10: np.ufunc
exp1: np.ufunc
exp2: np.ufunc
expi: np.ufunc
expit: np.ufunc
expm1: np.ufunc
expn: np.ufunc
exprel: np.ufunc
fdtr: np.ufunc
fdtrc: np.ufunc
fdtri: np.ufunc
fdtridfd: np.ufunc
fresnel: np.ufunc
gamma: np.ufunc
gammainc: np.ufunc
gammaincc: np.ufunc
gammainccinv: np.ufunc
gammaincinv: np.ufunc
gammaln: np.ufunc
gammasgn: np.ufunc
gdtr: np.ufunc
gdtrc: np.ufunc
gdtria: np.ufunc
gdtrib: np.ufunc
gdtrix: np.ufunc
hankel1: np.ufunc
hankel1e: np.ufunc
hankel2: np.ufunc
hankel2e: np.ufunc
huber: np.ufunc
hyp0f1: np.ufunc
hyp1f1: np.ufunc
hyp2f1: np.ufunc
hyperu: np.ufunc
i0: np.ufunc
i0e: np.ufunc
i1: np.ufunc
i1e: np.ufunc
inv_boxcox1p: np.ufunc
inv_boxcox: np.ufunc
it2i0k0: np.ufunc
it2j0y0: np.ufunc
it2struve0: np.ufunc
itairy: np.ufunc
iti0k0: np.ufunc
itj0y0: np.ufunc
itmodstruve0: np.ufunc
itstruve0: np.ufunc
iv: np.ufunc
ive: np.ufunc
j0: np.ufunc
j1: np.ufunc
jn: np.ufunc
jv: np.ufunc
jve: np.ufunc
k0: np.ufunc
k0e: np.ufunc
k1: np.ufunc
k1e: np.ufunc
kei: np.ufunc
keip: np.ufunc
kelvin: np.ufunc
ker: np.ufunc
kerp: np.ufunc
kl_div: np.ufunc
kn: np.ufunc
kolmogi: np.ufunc
kolmogorov: np.ufunc
kv: np.ufunc
kve: np.ufunc
log1p: np.ufunc
log_ndtr: np.ufunc
loggamma: np.ufunc
logit: np.ufunc
lpmv: np.ufunc
mathieu_a: np.ufunc
mathieu_b: np.ufunc
mathieu_cem: np.ufunc
mathieu_modcem1: np.ufunc
mathieu_modcem2: np.ufunc
mathieu_modsem1: np.ufunc
mathieu_modsem2: np.ufunc
mathieu_sem: np.ufunc
modfresnelm: np.ufunc
modfresnelp: np.ufunc
modstruve: np.ufunc
nbdtr: np.ufunc
nbdtrc: np.ufunc
nbdtri: np.ufunc
nbdtrik: np.ufunc
nbdtrin: np.ufunc
ncfdtr: np.ufunc
ncfdtri: np.ufunc
ncfdtridfd: np.ufunc
ncfdtridfn: np.ufunc
ncfdtrinc: np.ufunc
nctdtr: np.ufunc
nctdtridf: np.ufunc
nctdtrinc: np.ufunc
nctdtrit: np.ufunc
ndtr: np.ufunc
ndtri: np.ufunc
nrdtrimn: np.ufunc
nrdtrisd: np.ufunc
obl_ang1: np.ufunc
obl_ang1_cv: np.ufunc
obl_cv: np.ufunc
obl_rad1: np.ufunc
obl_rad1_cv: np.ufunc
obl_rad2: np.ufunc
obl_rad2_cv: np.ufunc
owens_t: np.ufunc
pbdv: np.ufunc
pbvv: np.ufunc
pbwa: np.ufunc
pdtr: np.ufunc
pdtrc: np.ufunc
pdtri: np.ufunc
pdtrik: np.ufunc
poch: np.ufunc
pro_ang1: np.ufunc
pro_ang1_cv: np.ufunc
pro_cv: np.ufunc
pro_rad1: np.ufunc
pro_rad1_cv: np.ufunc
pro_rad2: np.ufunc
pro_rad2_cv: np.ufunc
pseudo_huber: np.ufunc
psi: np.ufunc
radian: np.ufunc
rel_entr: np.ufunc
rgamma: np.ufunc
round: np.ufunc
shichi: np.ufunc
sici: np.ufunc
sindg: np.ufunc
smirnov: np.ufunc
smirnovi: np.ufunc
spence: np.ufunc
sph_harm: np.ufunc
stdtr: np.ufunc
stdtridf: np.ufunc
stdtrit: np.ufunc
struve: np.ufunc
tandg: np.ufunc
tklmbda: np.ufunc
voigt_profile: np.ufunc
wofz: np.ufunc
wrightomega: np.ufunc
xlog1py: np.ufunc
xlogy: np.ufunc
y0: np.ufunc
y1: np.ufunc
yn: np.ufunc
yv: np.ufunc
yve: np.ufunc
zetac: np.ufunc

File diff suppressed because it is too large.


@ -0,0 +1,10 @@
# Deprecated in SciPy 1.4
from ._basic import *
from warnings import warn as _warn
_warn(
'scipy.special.basic is deprecated, '
'import directly from scipy.special instead',
category=DeprecationWarning,
stacklevel=2
)


@ -0,0 +1,246 @@
# This file is automatically generated by _generate_pyx.py.
# Do not edit manually!
ctypedef fused number_t:
double complex
double
cpdef number_t spherical_jn(long n, number_t z, bint derivative=*) nogil
cpdef number_t spherical_yn(long n, number_t z, bint derivative=*) nogil
cpdef number_t spherical_in(long n, number_t z, bint derivative=*) nogil
cpdef number_t spherical_kn(long n, number_t z, bint derivative=*) nogil
ctypedef fused Dd_number_t:
double complex
double
ctypedef fused dfg_number_t:
double
float
long double
ctypedef fused dl_number_t:
double
long
cpdef double voigt_profile(double x0, double x1, double x2) nogil
cpdef double agm(double x0, double x1) nogil
cdef void airy(Dd_number_t x0, Dd_number_t *y0, Dd_number_t *y1, Dd_number_t *y2, Dd_number_t *y3) nogil
cdef void airye(Dd_number_t x0, Dd_number_t *y0, Dd_number_t *y1, Dd_number_t *y2, Dd_number_t *y3) nogil
cpdef double bdtr(double x0, dl_number_t x1, double x2) nogil
cpdef double bdtrc(double x0, dl_number_t x1, double x2) nogil
cpdef double bdtri(double x0, dl_number_t x1, double x2) nogil
cpdef double bdtrik(double x0, double x1, double x2) nogil
cpdef double bdtrin(double x0, double x1, double x2) nogil
cpdef double bei(double x0) nogil
cpdef double beip(double x0) nogil
cpdef double ber(double x0) nogil
cpdef double berp(double x0) nogil
cpdef double besselpoly(double x0, double x1, double x2) nogil
cpdef double beta(double x0, double x1) nogil
cpdef double betainc(double x0, double x1, double x2) nogil
cpdef double betaincinv(double x0, double x1, double x2) nogil
cpdef double betaln(double x0, double x1) nogil
cpdef double binom(double x0, double x1) nogil
cpdef double boxcox(double x0, double x1) nogil
cpdef double boxcox1p(double x0, double x1) nogil
cpdef double btdtr(double x0, double x1, double x2) nogil
cpdef double btdtri(double x0, double x1, double x2) nogil
cpdef double btdtria(double x0, double x1, double x2) nogil
cpdef double btdtrib(double x0, double x1, double x2) nogil
cpdef double cbrt(double x0) nogil
cpdef double chdtr(double x0, double x1) nogil
cpdef double chdtrc(double x0, double x1) nogil
cpdef double chdtri(double x0, double x1) nogil
cpdef double chdtriv(double x0, double x1) nogil
cpdef double chndtr(double x0, double x1, double x2) nogil
cpdef double chndtridf(double x0, double x1, double x2) nogil
cpdef double chndtrinc(double x0, double x1, double x2) nogil
cpdef double chndtrix(double x0, double x1, double x2) nogil
cpdef double cosdg(double x0) nogil
cpdef double cosm1(double x0) nogil
cpdef double cotdg(double x0) nogil
cpdef Dd_number_t dawsn(Dd_number_t x0) nogil
cpdef double ellipe(double x0) nogil
cpdef double ellipeinc(double x0, double x1) nogil
cdef void ellipj(double x0, double x1, double *y0, double *y1, double *y2, double *y3) nogil
cpdef double ellipkinc(double x0, double x1) nogil
cpdef double ellipkm1(double x0) nogil
cpdef double ellipk(double x0) nogil
cpdef double entr(double x0) nogil
cpdef Dd_number_t erf(Dd_number_t x0) nogil
cpdef Dd_number_t erfc(Dd_number_t x0) nogil
cpdef Dd_number_t erfcx(Dd_number_t x0) nogil
cpdef Dd_number_t erfi(Dd_number_t x0) nogil
cpdef double erfinv(double x0) nogil
cpdef double erfcinv(double x0) nogil
cpdef Dd_number_t eval_chebyc(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_chebys(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_chebyt(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_chebyu(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_gegenbauer(dl_number_t x0, double x1, Dd_number_t x2) nogil
cpdef Dd_number_t eval_genlaguerre(dl_number_t x0, double x1, Dd_number_t x2) nogil
cpdef double eval_hermite(long x0, double x1) nogil
cpdef double eval_hermitenorm(long x0, double x1) nogil
cpdef Dd_number_t eval_jacobi(dl_number_t x0, double x1, double x2, Dd_number_t x3) nogil
cpdef Dd_number_t eval_laguerre(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_legendre(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_sh_chebyt(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_sh_chebyu(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t eval_sh_jacobi(dl_number_t x0, double x1, double x2, Dd_number_t x3) nogil
cpdef Dd_number_t eval_sh_legendre(dl_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t exp1(Dd_number_t x0) nogil
cpdef double exp10(double x0) nogil
cpdef double exp2(double x0) nogil
cpdef Dd_number_t expi(Dd_number_t x0) nogil
cpdef dfg_number_t expit(dfg_number_t x0) nogil
cpdef Dd_number_t expm1(Dd_number_t x0) nogil
cpdef double expn(dl_number_t x0, double x1) nogil
cpdef double exprel(double x0) nogil
cpdef double fdtr(double x0, double x1, double x2) nogil
cpdef double fdtrc(double x0, double x1, double x2) nogil
cpdef double fdtri(double x0, double x1, double x2) nogil
cpdef double fdtridfd(double x0, double x1, double x2) nogil
cdef void fresnel(Dd_number_t x0, Dd_number_t *y0, Dd_number_t *y1) nogil
cpdef Dd_number_t gamma(Dd_number_t x0) nogil
cpdef double gammainc(double x0, double x1) nogil
cpdef double gammaincc(double x0, double x1) nogil
cpdef double gammainccinv(double x0, double x1) nogil
cpdef double gammaincinv(double x0, double x1) nogil
cpdef double gammaln(double x0) nogil
cpdef double gammasgn(double x0) nogil
cpdef double gdtr(double x0, double x1, double x2) nogil
cpdef double gdtrc(double x0, double x1, double x2) nogil
cpdef double gdtria(double x0, double x1, double x2) nogil
cpdef double gdtrib(double x0, double x1, double x2) nogil
cpdef double gdtrix(double x0, double x1, double x2) nogil
cpdef double complex hankel1(double x0, double complex x1) nogil
cpdef double complex hankel1e(double x0, double complex x1) nogil
cpdef double complex hankel2(double x0, double complex x1) nogil
cpdef double complex hankel2e(double x0, double complex x1) nogil
cpdef double huber(double x0, double x1) nogil
cpdef Dd_number_t hyp0f1(double x0, Dd_number_t x1) nogil
cpdef Dd_number_t hyp1f1(double x0, double x1, Dd_number_t x2) nogil
cpdef Dd_number_t hyp2f1(double x0, double x1, double x2, Dd_number_t x3) nogil
cpdef double hyperu(double x0, double x1, double x2) nogil
cpdef double i0(double x0) nogil
cpdef double i0e(double x0) nogil
cpdef double i1(double x0) nogil
cpdef double i1e(double x0) nogil
cpdef double inv_boxcox(double x0, double x1) nogil
cpdef double inv_boxcox1p(double x0, double x1) nogil
cdef void it2i0k0(double x0, double *y0, double *y1) nogil
cdef void it2j0y0(double x0, double *y0, double *y1) nogil
cpdef double it2struve0(double x0) nogil
cdef void itairy(double x0, double *y0, double *y1, double *y2, double *y3) nogil
cdef void iti0k0(double x0, double *y0, double *y1) nogil
cdef void itj0y0(double x0, double *y0, double *y1) nogil
cpdef double itmodstruve0(double x0) nogil
cpdef double itstruve0(double x0) nogil
cpdef Dd_number_t iv(double x0, Dd_number_t x1) nogil
cpdef Dd_number_t ive(double x0, Dd_number_t x1) nogil
cpdef double j0(double x0) nogil
cpdef double j1(double x0) nogil
cpdef Dd_number_t jv(double x0, Dd_number_t x1) nogil
cpdef Dd_number_t jve(double x0, Dd_number_t x1) nogil
cpdef double k0(double x0) nogil
cpdef double k0e(double x0) nogil
cpdef double k1(double x0) nogil
cpdef double k1e(double x0) nogil
cpdef double kei(double x0) nogil
cpdef double keip(double x0) nogil
cdef void kelvin(double x0, double complex *y0, double complex *y1, double complex *y2, double complex *y3) nogil
cpdef double ker(double x0) nogil
cpdef double kerp(double x0) nogil
cpdef double kl_div(double x0, double x1) nogil
cpdef double kn(dl_number_t x0, double x1) nogil
cpdef double kolmogi(double x0) nogil
cpdef double kolmogorov(double x0) nogil
cpdef Dd_number_t kv(double x0, Dd_number_t x1) nogil
cpdef Dd_number_t kve(double x0, Dd_number_t x1) nogil
cpdef Dd_number_t log1p(Dd_number_t x0) nogil
cpdef Dd_number_t log_ndtr(Dd_number_t x0) nogil
cpdef Dd_number_t loggamma(Dd_number_t x0) nogil
cpdef dfg_number_t logit(dfg_number_t x0) nogil
cpdef double lpmv(double x0, double x1, double x2) nogil
cpdef double mathieu_a(double x0, double x1) nogil
cpdef double mathieu_b(double x0, double x1) nogil
cdef void mathieu_cem(double x0, double x1, double x2, double *y0, double *y1) nogil
cdef void mathieu_modcem1(double x0, double x1, double x2, double *y0, double *y1) nogil
cdef void mathieu_modcem2(double x0, double x1, double x2, double *y0, double *y1) nogil
cdef void mathieu_modsem1(double x0, double x1, double x2, double *y0, double *y1) nogil
cdef void mathieu_modsem2(double x0, double x1, double x2, double *y0, double *y1) nogil
cdef void mathieu_sem(double x0, double x1, double x2, double *y0, double *y1) nogil
cdef void modfresnelm(double x0, double complex *y0, double complex *y1) nogil
cdef void modfresnelp(double x0, double complex *y0, double complex *y1) nogil
cpdef double modstruve(double x0, double x1) nogil
cpdef double nbdtr(dl_number_t x0, dl_number_t x1, double x2) nogil
cpdef double nbdtrc(dl_number_t x0, dl_number_t x1, double x2) nogil
cpdef double nbdtri(dl_number_t x0, dl_number_t x1, double x2) nogil
cpdef double nbdtrik(double x0, double x1, double x2) nogil
cpdef double nbdtrin(double x0, double x1, double x2) nogil
cpdef double ncfdtr(double x0, double x1, double x2, double x3) nogil
cpdef double ncfdtri(double x0, double x1, double x2, double x3) nogil
cpdef double ncfdtridfd(double x0, double x1, double x2, double x3) nogil
cpdef double ncfdtridfn(double x0, double x1, double x2, double x3) nogil
cpdef double ncfdtrinc(double x0, double x1, double x2, double x3) nogil
cpdef double nctdtr(double x0, double x1, double x2) nogil
cpdef double nctdtridf(double x0, double x1, double x2) nogil
cpdef double nctdtrinc(double x0, double x1, double x2) nogil
cpdef double nctdtrit(double x0, double x1, double x2) nogil
cpdef Dd_number_t ndtr(Dd_number_t x0) nogil
cpdef double ndtri(double x0) nogil
cpdef double nrdtrimn(double x0, double x1, double x2) nogil
cpdef double nrdtrisd(double x0, double x1, double x2) nogil
cdef void obl_ang1(double x0, double x1, double x2, double x3, double *y0, double *y1) nogil
cdef void obl_ang1_cv(double x0, double x1, double x2, double x3, double x4, double *y0, double *y1) nogil
cpdef double obl_cv(double x0, double x1, double x2) nogil
cdef void obl_rad1(double x0, double x1, double x2, double x3, double *y0, double *y1) nogil
cdef void obl_rad1_cv(double x0, double x1, double x2, double x3, double x4, double *y0, double *y1) nogil
cdef void obl_rad2(double x0, double x1, double x2, double x3, double *y0, double *y1) nogil
cdef void obl_rad2_cv(double x0, double x1, double x2, double x3, double x4, double *y0, double *y1) nogil
cpdef double owens_t(double x0, double x1) nogil
cdef void pbdv(double x0, double x1, double *y0, double *y1) nogil
cdef void pbvv(double x0, double x1, double *y0, double *y1) nogil
cdef void pbwa(double x0, double x1, double *y0, double *y1) nogil
cpdef double pdtr(double x0, double x1) nogil
cpdef double pdtrc(double x0, double x1) nogil
cpdef double pdtri(dl_number_t x0, double x1) nogil
cpdef double pdtrik(double x0, double x1) nogil
cpdef double poch(double x0, double x1) nogil
cdef void pro_ang1(double x0, double x1, double x2, double x3, double *y0, double *y1) nogil
cdef void pro_ang1_cv(double x0, double x1, double x2, double x3, double x4, double *y0, double *y1) nogil
cpdef double pro_cv(double x0, double x1, double x2) nogil
cdef void pro_rad1(double x0, double x1, double x2, double x3, double *y0, double *y1) nogil
cdef void pro_rad1_cv(double x0, double x1, double x2, double x3, double x4, double *y0, double *y1) nogil
cdef void pro_rad2(double x0, double x1, double x2, double x3, double *y0, double *y1) nogil
cdef void pro_rad2_cv(double x0, double x1, double x2, double x3, double x4, double *y0, double *y1) nogil
cpdef double pseudo_huber(double x0, double x1) nogil
cpdef Dd_number_t psi(Dd_number_t x0) nogil
cpdef double radian(double x0, double x1, double x2) nogil
cpdef double rel_entr(double x0, double x1) nogil
cpdef Dd_number_t rgamma(Dd_number_t x0) nogil
cpdef double round(double x0) nogil
cdef void shichi(Dd_number_t x0, Dd_number_t *y0, Dd_number_t *y1) nogil
cdef void sici(Dd_number_t x0, Dd_number_t *y0, Dd_number_t *y1) nogil
cpdef double sindg(double x0) nogil
cpdef double smirnov(dl_number_t x0, double x1) nogil
cpdef double smirnovi(dl_number_t x0, double x1) nogil
cpdef Dd_number_t spence(Dd_number_t x0) nogil
cpdef double complex sph_harm(dl_number_t x0, dl_number_t x1, double x2, double x3) nogil
cpdef double stdtr(double x0, double x1) nogil
cpdef double stdtridf(double x0, double x1) nogil
cpdef double stdtrit(double x0, double x1) nogil
cpdef double struve(double x0, double x1) nogil
cpdef double tandg(double x0) nogil
cpdef double tklmbda(double x0, double x1) nogil
cpdef double complex wofz(double complex x0) nogil
cpdef Dd_number_t wrightomega(Dd_number_t x0) nogil
cpdef Dd_number_t xlog1py(Dd_number_t x0, Dd_number_t x1) nogil
cpdef Dd_number_t xlogy(Dd_number_t x0, Dd_number_t x1) nogil
cpdef double y0(double x0) nogil
cpdef double y1(double x0) nogil
cpdef double yn(dl_number_t x0, double x1) nogil
cpdef Dd_number_t yv(double x0, Dd_number_t x1) nogil
cpdef Dd_number_t yve(double x0, Dd_number_t x1) nogil
cpdef double zetac(double x0) nogil


@ -0,0 +1,3 @@
from typing import Any
def __getattr__(name) -> Any: ...

File diff suppressed because it is too large.


@ -0,0 +1,327 @@
from typing import (
Any,
Callable,
List,
Optional,
overload,
Tuple,
Union,
)
from typing_extensions import Literal
import numpy
_IntegerType = Union[int, numpy.integer]
_FloatingType = Union[float, numpy.floating]
_PointsAndWeights = Tuple[numpy.ndarray, numpy.ndarray]
_PointsAndWeightsAndMu = Tuple[numpy.ndarray, numpy.ndarray, float]
__all__ = [
'legendre',
'chebyt',
'chebyu',
'chebyc',
'chebys',
'jacobi',
'laguerre',
'genlaguerre',
'hermite',
'hermitenorm',
'gegenbauer',
'sh_legendre',
'sh_chebyt',
'sh_chebyu',
'sh_jacobi',
'roots_legendre',
'roots_chebyt',
'roots_chebyu',
'roots_chebyc',
'roots_chebys',
'roots_jacobi',
'roots_laguerre',
'roots_genlaguerre',
'roots_hermite',
'roots_hermitenorm',
'roots_gegenbauer',
'roots_sh_legendre',
'roots_sh_chebyt',
'roots_sh_chebyu',
'roots_sh_jacobi',
]
@overload
def roots_jacobi(
n: _IntegerType,
alpha: _FloatingType,
beta: _FloatingType,
) -> _PointsAndWeights: ...
@overload
def roots_jacobi(
n: _IntegerType,
alpha: _FloatingType,
beta: _FloatingType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_jacobi(
n: _IntegerType,
alpha: _FloatingType,
beta: _FloatingType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_sh_jacobi(
n: _IntegerType,
p1: _FloatingType,
q1: _FloatingType,
) -> _PointsAndWeights: ...
@overload
def roots_sh_jacobi(
n: _IntegerType,
p1: _FloatingType,
q1: _FloatingType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_sh_jacobi(
n: _IntegerType,
p1: _FloatingType,
q1: _FloatingType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_genlaguerre(
n: _IntegerType,
alpha: _FloatingType,
) -> _PointsAndWeights: ...
@overload
def roots_genlaguerre(
n: _IntegerType,
alpha: _FloatingType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_genlaguerre(
n: _IntegerType,
alpha: _FloatingType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_laguerre(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_laguerre(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_laguerre(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_hermite(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_hermite(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_hermite(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_hermitenorm(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_hermitenorm(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_hermitenorm(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_gegenbauer(
n: _IntegerType,
alpha: _FloatingType,
) -> _PointsAndWeights: ...
@overload
def roots_gegenbauer(
n: _IntegerType,
alpha: _FloatingType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_gegenbauer(
n: _IntegerType,
alpha: _FloatingType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_chebyt(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_chebyt(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_chebyt(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_chebyu(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_chebyu(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_chebyu(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_chebyc(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_chebyc(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_chebyc(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_chebys(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_chebys(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_chebys(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_sh_chebyt(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_sh_chebyt(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_sh_chebyt(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_sh_chebyu(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_sh_chebyu(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_sh_chebyu(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_legendre(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_legendre(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_legendre(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
@overload
def roots_sh_legendre(n: _IntegerType) -> _PointsAndWeights: ...
@overload
def roots_sh_legendre(
n: _IntegerType,
mu: Literal[False],
) -> _PointsAndWeights: ...
@overload
def roots_sh_legendre(
n: _IntegerType,
mu: Literal[True],
) -> _PointsAndWeightsAndMu: ...
class orthopoly1d(numpy.poly1d):
def __init__(
self,
roots: Any, # TODO: ArrayLike
weights: Optional[Any], # TODO: ArrayLike
hn: float = ...,
kn: float = ...,
wfunc: Optional[Callable[[float], float]] = ...,
limits: Optional[Tuple[float, float]] = ...,
monic: bool = ...,
eval_func: numpy.ufunc = ...,
) -> None: ...
@property
def limits(self) -> Tuple[float, float]: ...
def wfunc(self, x: float) -> float: ...
# TODO: ArrayLike
def __call__(self, x: Any) -> numpy.ndarray: ...
def legendre(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def chebyt(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def chebyu(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def chebyc(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def chebys(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def jacobi(
n: _IntegerType,
alpha: _FloatingType,
beta: _FloatingType,
monic: bool = ...,
) -> orthopoly1d: ...
def laguerre(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def genlaguerre(
n: _IntegerType,
alpha: _FloatingType,
monic: bool = ...,
) -> orthopoly1d: ...
def hermite(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def hermitenorm(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def gegenbauer(
n: _IntegerType,
alpha: _FloatingType,
monic: bool = ...,
) -> orthopoly1d: ...
def sh_legendre(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def sh_chebyt(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def sh_chebyu(n: _IntegerType, monic: bool = ...) -> orthopoly1d: ...
def sh_jacobi(
n: _IntegerType,
p: _FloatingType,
q: _FloatingType,
monic: bool = ...,
) -> orthopoly1d: ...
# These functions are not public, but still need stubs because they
# get checked in the tests.
def _roots_hermite_asy(n: _IntegerType) -> _PointsAndWeights: ...
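
The Literal overloads above encode the runtime behaviour that passing mu=True returns a third value, the integral of the weight function; a short illustration:

    from scipy.special import roots_legendre

    x, w = roots_legendre(5)
    x, w, mu = roots_legendre(5, mu=True)
    print(mu, w.sum())  # both equal 2.0, the integral of the weight 1 over [-1, 1]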


@ -0,0 +1,170 @@
import os
import sys
from os.path import join, dirname
from distutils.sysconfig import get_python_inc
import subprocess
import numpy
from numpy.distutils.misc_util import get_numpy_include_dirs
try:
from numpy.distutils.misc_util import get_info
except ImportError:
raise ValueError("numpy >= 1.4 is required (detected %s from %s)" %
(numpy.__version__, numpy.__file__))
def configuration(parent_package='',top_path=None):
from numpy.distutils.misc_util import Configuration
from scipy._build_utils.system_info import get_info as get_system_info
config = Configuration('special', parent_package, top_path)
define_macros = []
if sys.platform == 'win32':
# define_macros.append(('NOINFINITIES',None))
# define_macros.append(('NONANS',None))
define_macros.append(('_USE_MATH_DEFINES',None))
curdir = os.path.abspath(os.path.dirname(__file__))
python_inc_dirs = get_python_inc()
plat_specific_python_inc_dirs = get_python_inc(plat_specific=1)
inc_dirs = [get_numpy_include_dirs(), python_inc_dirs]
if python_inc_dirs != plat_specific_python_inc_dirs:
inc_dirs.append(plat_specific_python_inc_dirs)
inc_dirs.append(join(dirname(dirname(__file__)), '_lib'))
# C libraries
cephes_src = [join('cephes','*.c')]
cephes_hdr = [join('cephes', '*.h')]
config.add_library('sc_cephes',sources=cephes_src,
include_dirs=[curdir] + inc_dirs,
depends=(cephes_hdr + ['*.h']),
macros=define_macros)
# Fortran/C++ libraries
mach_src = [join('mach','*.f')]
amos_src = [join('amos','*.f')]
cdf_src = [join('cdflib','*.f')]
specfun_src = [join('specfun','*.f')]
config.add_library('sc_mach',sources=mach_src,
config_fc={'noopt':(__file__,1)})
config.add_library('sc_amos',sources=amos_src)
config.add_library('sc_cdf',sources=cdf_src)
config.add_library('sc_specfun',sources=specfun_src)
# Extension specfun
config.add_extension('specfun',
sources=['specfun.pyf'],
f2py_options=['--no-wrap-functions'],
depends=specfun_src,
define_macros=[],
libraries=['sc_specfun'])
# Extension _ufuncs
headers = ['*.h', join('cephes', '*.h')]
ufuncs_src = ['_ufuncs.c', 'sf_error.c', '_logit.c.src',
"amos_wrappers.c", "cdf_wrappers.c", "specfun_wrappers.c"]
ufuncs_dep = (
headers
+ ufuncs_src
+ amos_src
+ cephes_src
+ mach_src
+ cdf_src
+ specfun_src
)
cfg = dict(get_system_info('lapack_opt'))
cfg.setdefault('include_dirs', []).extend([curdir] + inc_dirs + [numpy.get_include()])
cfg.setdefault('libraries', []).extend(
['sc_amos', 'sc_cephes', 'sc_mach', 'sc_cdf', 'sc_specfun']
)
cfg.setdefault('define_macros', []).extend(define_macros)
config.add_extension('_ufuncs',
depends=ufuncs_dep,
sources=ufuncs_src,
extra_info=get_info("npymath"),
**cfg)
# Extension _ufuncs_cxx
ufuncs_cxx_src = ['_ufuncs_cxx.cxx', 'sf_error.c',
'_faddeeva.cxx', 'Faddeeva.cc',
'_wright.cxx', 'wright.cc']
ufuncs_cxx_dep = (headers + ufuncs_cxx_src + cephes_src
+ ['*.hh'])
config.add_extension('_ufuncs_cxx',
sources=ufuncs_cxx_src,
depends=ufuncs_cxx_dep,
include_dirs=[curdir] + inc_dirs,
define_macros=define_macros,
extra_info=get_info("npymath"))
cfg = dict(get_system_info('lapack_opt'))
config.add_extension('_ellip_harm_2',
sources=['_ellip_harm_2.c', 'sf_error.c',],
**cfg
)
# Cython API
config.add_data_files('cython_special.pxd')
cython_special_src = ['cython_special.c', 'sf_error.c', '_logit.c.src',
"amos_wrappers.c", "cdf_wrappers.c", "specfun_wrappers.c"]
cython_special_dep = (
headers
+ ufuncs_src
+ ufuncs_cxx_src
+ amos_src
+ cephes_src
+ mach_src
+ cdf_src
+ specfun_src
)
cfg = dict(get_system_info('lapack_opt'))
cfg.setdefault('include_dirs', []).extend([curdir] + inc_dirs + [numpy.get_include()])
cfg.setdefault('libraries', []).extend(
['sc_amos', 'sc_cephes', 'sc_mach', 'sc_cdf', 'sc_specfun']
)
cfg.setdefault('define_macros', []).extend(define_macros)
config.add_extension('cython_special',
depends=cython_special_dep,
sources=cython_special_src,
extra_info=get_info("npymath"),
**cfg)
# combinatorics
config.add_extension('_comb',
sources=['_comb.c'])
# testing for _round.h
config.add_extension('_test_round',
sources=['_test_round.c'],
depends=['_round.h', 'cephes/dd_idefs.h'],
include_dirs=[numpy.get_include()] + inc_dirs,
extra_info=get_info('npymath'))
config.add_data_files('tests/*.py')
config.add_data_files('tests/data/README')
# regenerate npz data files
makenpz = os.path.join(os.path.dirname(__file__),
'utils', 'makenpz.py')
data_dir = os.path.join(os.path.dirname(__file__),
'tests', 'data')
for name in ['boost', 'gsl', 'local']:
subprocess.check_call([sys.executable, makenpz,
'--use-timestamp',
os.path.join(data_dir, name)])
config.add_data_files('tests/data/*.npz')
config.add_subpackage('_precompute')
# Type stubs
config.add_data_files('*.pyi')
return config
if __name__ == '__main__':
from numpy.distutils.core import setup
setup(**configuration(top_path='').todict())


@ -0,0 +1,15 @@
"""Warnings and Exceptions that can be raised by special functions."""
import warnings
class SpecialFunctionWarning(Warning):
"""Warning that can be emitted by special functions."""
pass
warnings.simplefilter("always", category=SpecialFunctionWarning)
class SpecialFunctionError(Exception):
"""Exception that can be raised by special functions."""
pass
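
How these classes surface in practice (a sketch based on the errstate/seterr API documented elsewhere in the package; behaviour assumed, not verified here):

    import scipy.special as sc

    print(sc.gammaln(0))  # inf, silent by default
    with sc.errstate(singular='raise'):
        try:
            sc.gammaln(0)
        except sc.SpecialFunctionError as exc:
            print("raised:", exc)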


@ -0,0 +1,93 @@
# Last Change: Sat Mar 21 02:00 PM 2009 J
# Copyright (c) 2001, 2002 Enthought, Inc.
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# a. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# b. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# c. Neither the name of the Enthought nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
# DAMAGE.
"""Some more special functions which may be useful for multivariate statistical
analysis."""
import numpy as np
from scipy.special import gammaln as loggam
__all__ = ['multigammaln']
def multigammaln(a, d):
r"""Returns the log of multivariate gamma, also sometimes called the
generalized gamma.
Parameters
----------
a : ndarray
The multivariate gamma is computed for each item of `a`.
d : int
The dimension of the space of integration.
Returns
-------
res : ndarray
The values of the log multivariate gamma at the given points `a`.
Notes
-----
The formal definition of the multivariate gamma of dimension d for a real
`a` is
.. math::
\Gamma_d(a) = \int_{A>0} e^{-tr(A)} |A|^{a - (d+1)/2} dA
with the condition :math:`a > (d-1)/2`, and :math:`A > 0` being the set of
all the positive definite matrices of dimension `d`. Note that `a` is a
scalar: only the integrand is multivariate, the argument is not (the
function is defined over a subset of the real line).
This can be proven to be equal to the much friendlier equation
.. math::
\Gamma_d(a) = \pi^{d(d-1)/4} \prod_{i=1}^{d} \Gamma(a - (i-1)/2).
References
----------
R. J. Muirhead, Aspects of multivariate statistical theory (Wiley Series in
probability and mathematical statistics).
"""
a = np.asarray(a)
if not np.isscalar(d) or (np.floor(d) != d):
raise ValueError("d should be a positive integer (dimension)")
if np.any(a <= 0.5 * (d - 1)):
raise ValueError("condition a (%f) > 0.5 * (d-1) (%f) not met"
% (a, 0.5 * (d-1)))
res = (d * (d-1) * 0.25) * np.log(np.pi)
res += np.sum(loggam([(a - (j - 1.)/2) for j in range(1, d+1)]), axis=0)
return res
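
A direct check of the product formula from the Notes section (a sketch using public SciPy functions):

    import numpy as np
    from scipy.special import gammaln, multigammaln

    a, d = 5.0, 3
    direct = d*(d - 1)/4*np.log(np.pi) + sum(gammaln(a - (i - 1)/2)
                                             for i in range(1, d + 1))
    print(multigammaln(a, d), direct)  # the two values should agree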

Some files were not shown because too many files have changed in this diff.