module Slap_D : sig .. end

type prec = Bigarray.float64_elt
type num_type = float
type ('indim, 'outdim, [< `N | `T ]) trans3 = ('indim, 'outdim, [< `N | `T ] as 'a) Slap_common.trans2
  Transpose flags (Slap_common.normal and Slap_common.trans).
  For complex matrices, Slap_common.conjtr is also offered, hence the name.
type ('n, 'cnt_or_dsc) vec = ('n, num_type, prec, 'cnt_or_dsc) Slap_vec.t
type ('m, 'n, 'cnt_or_dsc) mat = ('m, 'n, num_type, prec, 'cnt_or_dsc) Slap_mat.t
type rprec = Bigarray.float64_elt
type ('n, 'cnt_or_dsc) rvec = ('n, float, rprec, 'cnt_or_dsc) Slap_vec.t
val prec : (num_type, prec) Bigarray.kind
val rprec : (float, rprec) Bigarray.kind
module Vec : sig .. end
module Mat : sig .. end
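Values of the vec and mat types above are ordinarily built with the Vec and Mat submodules. A minimal sketch (the Slap.D and Slap.Size module paths and the Vec.init / Mat.init constructors are assumptions here, not taken from this page):

  open Slap.D

  let () =
    (* a 3-dimensional vector x with x_i = i (assumed Vec.init constructor) *)
    let x = Vec.init Slap.Size.three (fun i -> float_of_int i) in
    (* a 3-by-3 matrix a with a_ij = i + j (assumed Mat.init constructor) *)
    let a = Mat.init Slap.Size.three Slap.Size.three
              (fun i j -> float_of_int (i + j)) in
    Format.printf "x = %a@." pp_vec x;
    Format.printf "a = %a@." pp_mat a

Because the sizes are part of the types, dimension mismatches in the routines below are rejected at compile time.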
val pp_num : Format.formatter -> num_type -> unit
val pp_vec : Format.formatter -> ('n, 'cnt_or_dsc) vec -> unit
val pp_mat : Format.formatter -> ('m, 'n, 'cnt_or_dsc) mat -> unit
val swap : ('n, 'x_cd) vec -> ('n, 'y_cd) vec -> unit
  swap x y swaps the elements of x and y.
val scal : num_type -> ('n, 'cd) vec -> unit
  scal c x multiplies all elements of x by the scalar value c and destructively assigns the result to x.
val copy : ?y:('n, 'y_cd) vec -> ('n, 'x_cd) vec -> ('n, 'y_cd) vec
  copy ?y x copies x into y. Returns y, which is overwritten.
val nrm2 : ('n, 'cd) vec -> float
  nrm2 x returns the L2 norm of vector x: ||x||.
val axpy : ?alpha:num_type -> ('n, 'x_cd) vec -> ('n, 'y_cd) vec -> unit
  axpy ?alpha x y executes y := alpha * x + y with scalar value alpha and vectors x and y.
  alpha : default = 1.0
val iamax : ('n, 'cd) vec -> int
  iamax x returns the index of the element of x with the largest absolute value.
val amax : ('n, 'cd) vec -> num_type
  amax x returns the element of x with the largest absolute value.
val gemv : ?beta:num_type ->
?y:('m, 'y_cd) vec ->
trans:('a_m * 'a_n, 'm * 'n, [< `N | `T ]) trans3 ->
?alpha:num_type ->
('a_m, 'a_n, 'a_cd) mat ->
('n, 'x_cd) vec -> ('m, 'y_cd) vecgemv ?beta ?y ~trans ?alpha a x executes
y := alpha * OP(a) * x + beta * y.beta : default = 0.0trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).alpha : default = 1.0val gbmv : m:'a_m Slap_size.t ->
?beta:num_type ->
?y:('m, 'y_cd) vec ->
trans:('a_m * 'a_n, 'm * 'n, [< `N | `T ]) trans3 ->
?alpha:num_type ->
(('a_m, 'a_n, 'kl, 'ku) Slap_size.geband, 'a_n, 'a_cd) mat ->
'kl Slap_size.t ->
'ku Slap_size.t -> ('n, 'x_cd) vec -> ('m, 'y_cd) vecgbmv ~m ?beta ?y ~trans ?alpha a kl ku x computes
y := alpha * OP(a) * x + beta * y where a is a band matrix stored in
band storage.y, which is overwritten.beta : default = 0.0trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).alpha : default = 1.0val symv : ?beta:num_type ->
?y:('n, 'y_cd) vec ->
?up:[< `L | `U ] Slap_common.uplo ->
?alpha:num_type ->
('n, 'n, 'a_cd) mat ->
('n, 'x_cd) vec -> ('n, 'y_cd) vecsymv ?beta ?y ?up ?alpha a x executes
y := alpha * a * x + beta * y.beta : default = 0.0up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.alpha : default = 1.0val trmv : trans:('n * 'n, 'n * 'n, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
?up:[< `L | `U ] Slap_common.uplo ->
('n, 'n, 'a_cd) mat -> ('n, 'x_cd) vec -> unittrmv ~trans ?diag ?up a x executes x := OP(a) * x.trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).diag : default = Slap_common.non_unitdiag = Slap_common.unit, then a is unit triangular;diag = Slap_common.non_unit, then a is not unit triangular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.val trsv : trans:('n * 'n, 'n * 'n, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
?up:[< `L | `U ] Slap_common.uplo ->
('n, 'n, 'a_cd) mat -> ('n, 'b_cd) vec -> unittrsv ~trans ?diag ?up a b solves the linear system OP(a) * x = b
and destructively assigns x to b.trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).diag : default = Slap_common.non_unitdiag = Slap_common.unit, then a is unit triangular;diag = Slap_common.non_unit, then a is not unit triangular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.val tpmv : trans:('n * 'n, 'n * 'n, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
?up:[< `L | `U ] Slap_common.uplo ->
('n Slap_size.packed, Slap_misc.cnt) vec ->
('n, 'x_cd) vec -> unittpmv ~trans ?diag ?up a x executes x := OP(a) * x
where a is a packed triangular matrix.trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).diag : default = Slap_common.non_unitdiag = Slap_common.unit, then a is unit triangular;diag = Slap_common.non_unit, then a is not unit triangular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.val tpsv : trans:('n * 'n, 'n * 'n, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
?up:[< `L | `U ] Slap_common.uplo ->
('n Slap_size.packed, Slap_misc.cnt) vec ->
('n, 'x_cd) vec -> unittpsv ~trans ?diag ?up a b solves linear system OP(a) * x = b
and destructively assigns x to b where a is a packed triangular
matrix.trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).diag : default = Slap_common.non_unitdiag = Slap_common.unit, then a is unit triangular;diag = Slap_common.non_unit, then a is not unit triangular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.val gemm : ?beta:num_type ->
?c:('m, 'n, 'c_cd) mat ->
transa:('a_m * 'a_k, 'm * 'k, [< `N | `T ]) trans3 ->
?alpha:num_type ->
('a_m, 'a_k, 'a_cd) mat ->
transb:('b_k * 'b_n, 'k * 'n, [< `N | `T ]) trans3 ->
('b_k, 'b_n, 'b_cd) mat -> ('m, 'n, 'c_cd) matgemm ?beta ?c ~transa ?alpha a ~transb b executes
c := alpha * OP(a) * OP(b) + beta * c.beta : default = 0.0transa : the transpose flag for a:transa = Slap_common.normal, then OP(a) = a;transa = Slap_common.trans, then OP(a) = a^T;transa = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).alpha : default = 1.0transb : the transpose flag for b:transb = Slap_common.normal, then OP(b) = b;transb = Slap_common.trans, then OP(b) = b^T;transb = Slap_common.conjtr, then OP(b) = b^H
(the conjugate transpose of b).val symm : side:('k, 'm, 'n) Slap_common.side ->
?up:[< `L | `U ] Slap_common.uplo ->
?beta:num_type ->
?c:('m, 'n, 'c_cd) mat ->
?alpha:num_type ->
('k, 'k, 'a_cd) mat ->
('m, 'n, 'b_cd) mat -> ('m, 'n, 'c_cd) matsymm ~side ?up ?beta ?c ?alpha a b executes
c := alpha * a * b + beta * c (if side = Slap_common.left) orc := alpha * b * a + beta * c (if side = Slap_common.right)a is a symmetric matrix, and b and c are general matrices.side : the side flag to specify direction of multiplication of a and
b.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.beta : default = 0.0alpha : default = 1.0val trmm : side:('k, 'm, 'n) Slap_common.side ->
?up:[< `L | `U ] Slap_common.uplo ->
transa:('k * 'k, 'k * 'k, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
?alpha:num_type ->
a:('k, 'k, 'a_cd) mat -> ('m, 'n, 'b_cd) mat -> unittrmm ~side ?up ~transa ?diag ?alpha ~a b executes
b := alpha * OP(a) * b (if side = Slap_common.left) orb := alpha * b * OP(a) (if side = Slap_common.right)a is a triangular matrix, and b is a general matrix.side : the side flag to specify direction of multiplication of a and
b.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.transa : the transpose flag for a:transa = Slap_common.normal, then OP(a) = a;transa = Slap_common.trans, then OP(a) = a^T;transa = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).diag : default = Slap_common.non_unitdiag = Slap_common.unit, then a is unit triangular;diag = Slap_common.non_unit, then a is not unit triangular.alpha : default = 1.0val trsm : side:('k, 'm, 'n) Slap_common.side ->
?up:[< `L | `U ] Slap_common.uplo ->
transa:('k * 'k, 'k * 'k, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
?alpha:num_type ->
a:('k, 'k, 'a_cd) mat -> ('m, 'n, 'b_cd) mat -> unittrsm ~side ?up ~transa ?diag ?alpha ~a b solves a system of linear
equations
OP(a) * x = alpha * b (if side = Slap_common.left) orx * OP(a) = alpha * b (if side = Slap_common.right)a is a triangular matrix, and b is a general matrix.
The solution x is returned in b.side : the side flag to specify direction of multiplication of a and
b.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.transa : the transpose flag for a:transa = Slap_common.normal, then OP(a) = a;transa = Slap_common.trans, then OP(a) = a^T;transa = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).diag : default = Slap_common.non_unitdiag = Slap_common.unit, then a is unit triangular;diag = Slap_common.non_unit, then a is not unit triangular.alpha : default = 1.0val syrk : ?up:[< `L | `U ] Slap_common.uplo ->
?beta:num_type ->
?c:('n, 'n, 'c_cd) mat ->
trans:('a_n * 'a_k, 'n * 'k, [< `N | `T ]) Slap_common.trans2 ->
?alpha:num_type ->
('a_n, 'a_k, 'a_cd) mat -> ('n, 'n, 'c_cd) matsyrk ?up ?beta ?c ~trans ?alpha a executes
c := alpha * a * a^T + beta * c (if trans = Slap_common.normal) orc := alpha * a^T * a + beta * c (if trans = Slap_common.trans)a is a general matrix and c is a symmetric matrix.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.beta : default = 0.0trans : the transpose flag for aalpha : default = 1.0val syr2k : ?up:[< `L | `U ] Slap_common.uplo ->
?beta:num_type ->
?c:('n, 'n, 'c_cd) mat ->
trans:('p * 'q, 'n * 'k, [< `N | `T ]) Slap_common.trans2 ->
?alpha:num_type ->
('p, 'q, 'a_cd) mat ->
('p, 'q, 'b_cd) mat -> ('n, 'n, 'c_cd) matsyr2k ?up ?beta ?c ~trans ?alpha a b computes
c := alpha * a * b^T + alpha * b * a^T + beta * c
(if trans = Slap_common.normal) orc := alpha * a^T * b + alpha * b^T * a + beta * c
(if trans = Slap_common.trans) with symmetric matrix c and general matrices a and b.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.beta : default = 0.0trans : the transpose flag for aalpha : default = 1.0val lacpy : ?uplo:[< `A | `L | `U ] Slap_common.uplo ->
?b:('m, 'n, 'b_cd) mat ->
('m, 'n, 'a_cd) mat -> ('m, 'n, 'b_cd) matlacpy ?uplo ?b a copies the matrix a into the matrix b.b, which is overwritten.uplo : default = Slap_common.upper_loweruplo = Slap_common.upper,
then the upper triangular part of a is copied;uplo = Slap_common.lower,
then the lower triangular part of a is copied.b : default = a fresh matrix.val lassq : ?scale:float -> ?sumsq:float -> ('n, 'cd) vec -> float * floatlassq ?scale ?sumsq x returns (scl, smsq) where scl and smsq satisfy
scl^2 * smsq = x1^2 + x2^2 + ... + xn^2 + scale^2 * sumsq.scale : default = 0.0sumsq : default = 1.0type larnv_liseed = Slap_size.four
val larnv : ?idist:[ `Normal | `Uniform0 | `Uniform1 ] ->
?iseed:(larnv_liseed, Slap_misc.cnt) Slap_common.int32_vec ->
x:('n, Slap_misc.cnt) vec -> unit -> ('n, 'cnt) veclarnv ?idist ?iseed ~x () generates a random vector with the random
distribution specified by idist and random seed iseed.x, which is overwritten.idist : default = `Normaliseed : a four-dimensional integer vector with all ones.type ('m, 'a) lange_min_lwork
val lange_min_lwork : 'm Slap_size.t ->
([< `F | `I | `M | `O ] as 'a) Slap_common.norm4 ->
('m, 'a) lange_min_lwork Slap_size.tlange_min_lwork m norm computes the minimum length of workspace for
lange routine. m is the number of rows in a matrix, and norm is
the sort of matrix norms.val lange : ?norm:[< `F | `I | `M | `O ] Slap_common.norm4 ->
?work:('lwork, Slap_misc.cnt) rvec ->
('m, 'n, 'cd) mat -> floatlange ?norm ?work a computes the norm of matrix a.norm : default = Slap_common.norm_1.norm = Slap_common.norm_1, the one norm is returned;norm = Slap_common.norm_inf, the infinity norm is returned;norm = Slap_common.norm_amax, the largest absolute value of
elements in matrix a (not a matrix norm) is returned;norm = Slap_common.norm_frob, the Frobenius norm is returned.work : default = an optimum-length vector.val lauum : ?up:[< `L | `U ] Slap_common.uplo -> ('n, 'n, 'cd) mat -> unitlauum ?up a computes
U * U^T where U is the upper triangular part of matrix a
if up is Slap_common.upper.L^T * L where L is the lower triangular part of matrix a
if up is Slap_common.lower.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.val getrf : ?ipiv:(('m, 'n) Slap_size.min, Slap_misc.cnt) Slap_common.int32_vec ->
('m, 'n, 'cd) mat ->
(('m, 'n) Slap_size.min, 'cnt) Slap_common.int32_vecgetrf ?ipiv a computes LU factorization of matrix a using partial
pivoting with row interchanges: a = P * L * U where P is a permutation
matrix, and L and U are lower and upper triangular matrices,
respectively. The permutation matrix is returned in ipiv.Raises Failure if the matrix is singular.Returns ipiv, which is overwritten.val getrs : ?ipiv:(('n, 'n) Slap_size.min, Slap_misc.cnt) Slap_common.int32_vec ->
trans:('n * 'n, 'n * 'n, [< `N | `T ]) trans3 ->
('n, 'n, 'a_cd) mat -> ('n, 'n, 'b_cd) mat -> unitgetrs ?ipiv trans a b solves systems of linear equations OP(a) * x = b
where a is an 'n-by-'n general matrix, each column of matrix b is the
r.h.s. vector, and each column of matrix x is the corresponding solution.
The solution x is returned in b.Failure if the matrix is singular.ipiv : a result of gesv or getrf. It is internally computed by
getrf if omitted.trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).type 'n getri_min_lwork
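A sketch of using getrf and getrs together, factorizing once and then solving (the Slap.D / Slap.Size / Slap.Common module paths and the Mat.init constructor are assumptions, as in the earlier sketch):

  open Slap.D

  let () =
    let n = Slap.Size.three in
    (* a diagonally dominant (hence nonsingular) matrix *)
    let a = Mat.init n n (fun i j -> if i = j then 4.0 else 1.0) in
    (* one right-hand side stored as an n-by-1 matrix *)
    let b = Mat.init n Slap.Size.one (fun i _ -> float_of_int i) in
    let ipiv = getrf a in                        (* a now holds L and U *)
    getrs ~ipiv ~trans:Slap.Common.normal a b;   (* b now holds the solution *)
    Format.printf "x = %a@." pp_mat b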
val getri_min_lwork : 'n Slap_size.t -> 'n getri_min_lwork Slap_size.tgetri_min_lwork n computes the minimum length of workspace for getri
routine. n is the number of columns or rows in a matrix.val getri_opt_lwork : ('n, 'n, 'cd) mat -> (module Slap_size.SIZE)getri_opt_lwork a computes the optimal length of workspace for getri
routine.val getri : ?ipiv:(('n, 'n) Slap_size.min, Slap_misc.cnt) Slap_common.int32_vec ->
?work:('lwork, Slap_misc.cnt) vec -> ('n, 'n, 'cd) mat -> unitgetri ?ipiv ?work a computes the inverse of general matrix a by
LU-factorization. The inverse matrix is returned in a.Failure if the matrix is singular.ipiv : a result of gesv or getrf. It is internally computed by
getrf if omitted.work : default = an optimum-length vector.type sytrf_min_lwork
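A sketch of inverting a square matrix in place with getri (same assumptions as above):

  open Slap.D

  let () =
    let n = Slap.Size.two in
    let a = Mat.init n n (fun i j -> if i = j then 2.0 else 1.0) in
    getri a;   (* a is overwritten with its inverse; Failure if a is singular *)
    Format.printf "inv(a) = %a@." pp_mat a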
val sytrf_min_lwork : unit -> sytrf_min_lwork Slap_size.tsytrf_min_lwork () computes the minimum length of workspace for sytrf
routine.val sytrf_opt_lwork : ?up:[< `L | `U ] Slap_common.uplo ->
('n, 'n, 'cd) mat -> (module Slap_size.SIZE)sytrf_opt_lwork ?up a computes the optimal length of workspace for sytrf
routine.up : default = Slap_common.upperup = Slap_common.upper, then the upper triangular part of a is used;up = Slap_common.lower, then the lower triangular part of a is used.val sytrf : ?up:[< `L | `U ] Slap_common.uplo ->
?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
?work:('lwork, Slap_misc.cnt) vec ->
('n, 'n, 'cd) mat -> ('n, 'cnt) Slap_common.int32_vecsytrf ?up ?ipiv ?work a factorizes symmetric matrix a using the
Bunch-Kaufman diagonal pivoting method:
a = P * U * D * U^T * P^T if up = true;a = P * L * D * L^T * P^T if up = falseP is a permutation matrix, U and L are upper and lower
triangular matrices with unit diagonal, and D is a symmetric
block-diagonal matrix. The permutation matrix is returned in ipiv.Failure if a is singular.ipiv, which is overwritten.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.work : default = an optimum-length vector.val sytrs : ?up:[< `L | `U ] Slap_common.uplo ->
?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
('n, 'n, 'a_cd) mat -> ('n, 'nrhs, 'b_cd) mat -> unitsytrs ?up ?ipiv a b solves systems of linear equations a * x = b where
a is a symmetric matrix, each column of matrix b is the r.h.s. vector,
and each column of matrix x is the corresponding solution.
The solution x is returned in b.
This routine uses the Bunch-Kaufman diagonal pivoting method:
a = P * U * D * U^T * P^T if up = true;a = P * L * D * L^T * P^T if up = falseP is a permutation matrix, U and L are upper and lower
triangular matrices with unit diagonal, and D is a symmetric
block-diagonal matrix.Failure if a is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.ipiv : a result of sytrf. It is internally computed by sytrf if
omitted.type 'n sytri_min_lwork
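A sketch of solving a symmetric system via the Bunch-Kaufman factorization, using sytrf followed by sytrs (same assumptions as above):

  open Slap.D

  let () =
    let n = Slap.Size.three in
    (* a symmetric matrix; by default only its upper triangle is read *)
    let a = Mat.init n n
              (fun i j -> float_of_int (i + j) +. (if i = j then 5.0 else 0.0)) in
    let b = Mat.init n Slap.Size.one (fun i _ -> float_of_int i) in
    let ipiv = sytrf a in
    sytrs ~ipiv a b;   (* b now holds the solution *)
    Format.printf "x = %a@." pp_mat b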
val sytri_min_lwork : 'n Slap_size.t -> 'n sytri_min_lwork Slap_size.tsytri_min_lwork n computes the minimum length of workspace for sytri
routine.val sytri : ?up:[< `L | `U ] Slap_common.uplo ->
?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
?work:('lwork, Slap_misc.cnt) vec -> ('n, 'n, 'cd) mat -> unitsytri ?up ?ipiv ?work a computes the inverse of symmetric matrix a using
the Bunch-Kaufman diagonal pivoting method:
a = P * U * D * U^T * P^T if up = true;a = P * L * D * L^T * P^T if up = falseP is a permutation matrix, U and L are upper and lower
triangular matrices with unit diagonal, and D is a symmetric
block-diagonal matrix.Failure if a is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.ipiv : a result of sytrf. It is internally computed by sytrf if
omitted.work : default = an optimum-length vector.val potrf : ?up:[< `L | `U ] Slap_common.uplo ->
?jitter:num_type -> ('n, 'n, 'cd) mat -> unitpotrf ?up ?jitter a computes the Cholesky factorization of symmetric
(Hermitian) positive-definite matrix a:
a = U^T * U (real) or a = U^H * U (complex) if up = Slap_common.upper;a = L * L^T (real) or a = L * L^H (complex) if up = Slap_common.lowerU and L are upper and lower triangular matrices, respectively.
Either of them is returned in the upper or lower triangular part of a,
as specified by up.Failure if a is not positive-definite symmetric.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.jitter : default = nothingval potrs : ?up:[< `L | `U ] Slap_common.uplo ->
('n, 'n, 'a_cd) mat ->
?factorize:bool ->
?jitter:num_type -> ('n, 'nrhs, 'b_cd) mat -> unitpotrs ?up a ?factorize ?jitter b solves systems of linear equations a * x = b using
the Cholesky factorization of the symmetric (Hermitian) positive-definite matrix
a:
a = U^T * U (real) or a = U^H * U (complex)
if up = Slap_common.upper;a = L * L^T (real) or a = L * L^H (complex)
if up = Slap_common.lowerU and L are upper and lower triangular matrices, respectively.Failure if a is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.factorize : default = true (potrf is called implicitly)jitter : default = nothingval potri : ?up:[< `L | `U ] Slap_common.uplo ->
?factorize:bool ->
?jitter:num_type -> ('n, 'n, 'cd) mat -> unitpotri ?up ?factorize ?jitter a computes the inverse of the symmetric (Hermitian)
positive-definite matrix a using the Cholesky factorization:
a = U^T * U (real) or a = U^H * U (complex)
if up = Slap_common.upper;a = L * L^T (real) or a = L * L^H (complex)
if up = Slap_common.lowerU and L are upper and lower triangular matrices, respectively.Failure if a is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.factorize : default = true (potrf is called implicitly)jitter : default = nothingval trtrs : ?up:[< `L | `U ] Slap_common.uplo ->
trans:('n * 'n, 'n * 'n, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
('n, 'n, 'a_cd) mat -> ('n, 'nrhs, 'b_cd) mat -> unittrtrs ?up trans ?diag a b solves systems of linear equations
OP(a) * x = b where a is a triangular matrix of order 'n, each column
of matrix b is the r.h.s vector, and each column of matrix x is the
corresponding solution. The solution x is returned in b.Failure if a is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.trans : the transpose flag for a:trans = Slap_common.normal, then OP(a) = a;trans = Slap_common.trans, then OP(a) = a^T;trans = Slap_common.conjtr, then OP(a) = a^H
(the conjugate transpose of a).diag : default = `Ndiag = `U, then a is unit triangular;diag = `N, then a is not unit triangular.val tbtrs : kd:'kd Slap_size.t ->
?up:[< `L | `U ] Slap_common.uplo ->
trans:('n * 'n, 'n * 'n, [< `N | `T ]) trans3 ->
?diag:Slap_common.diag ->
(('n, 'kd) Slap_size.syband, 'n, 'a_cd) mat ->
('n, 'nrhs, 'b_cd) mat -> unittbtrs ~kd ?up ~trans ?diag ab b solves systems of linear equations
OP(A) * x = b where A is a triangular band matrix with kd subdiagonals,
each column of matrix b is the r.h.s vector, and each column of matrix x
is the corresponding solution. Matrix A is stored into ab in band
storage format. The solution x is returned in b.Failure if the matrix A is singular.kd : the number of subdiagonals or superdiagonals in A.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of A is used;up = Slap_common.lower,
then the lower triangular part of A is used.trans : the transpose flag for A:trans = Slap_common.normal, then OP(A) = A;trans = Slap_common.trans, then OP(A) = A^T;trans = Slap_common.conjtr, then OP(A) = A^H
(the conjugate transpose of A).diag : default = `Ndiag = `U, then A is unit triangular;diag = `N, then A is not unit triangular.val trtri : ?up:[< `L | `U ] Slap_common.uplo ->
?diag:Slap_common.diag -> ('n, 'n, 'cd) mat -> unittrtri ?up ?diag a computes the inverse of triangular matrix a. The
inverse matrix is returned in a.Failure if the matrix a is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of A is used;up = Slap_common.lower,
then the lower triangular part of A is used.diag : default = `Ndiag = `U, then a is unit triangular;diag = `N, then a is not unit triangular.type 'n geqrf_min_lwork
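To illustrate the Cholesky-based routines potrf and potrs described above, a sketch that factorizes once and then solves (same assumptions as the earlier sketches):

  open Slap.D

  let () =
    let n = Slap.Size.two in
    (* a symmetric positive-definite matrix *)
    let a = Mat.init n n (fun i j -> if i = j then 4.0 else 1.0) in
    let b = Mat.init n Slap.Size.one (fun i _ -> float_of_int i) in
    potrf a;                      (* the upper triangle of a now holds U *)
    potrs a ~factorize:false b;   (* reuse the factorization; b holds x *)
    Format.printf "x = %a@." pp_mat b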
val geqrf_min_lwork : n:'n Slap_size.t -> 'n geqrf_min_lwork Slap_size.tgeqrf_min_lwork ~n computes the minimum length of workspace for geqrf
routine. n is the number of columns in a matrix.val geqrf_opt_lwork : ('m, 'n, 'cd) mat -> (module Slap_size.SIZE)geqrf_opt_lwork a computes the optimum length of workspace for geqrf
routine.val geqrf : ?work:('lwork, Slap_misc.cnt) vec ->
?tau:(('m, 'n) Slap_size.min, Slap_misc.cnt) vec ->
('m, 'n, 'cd) mat -> (('m, 'n) Slap_size.min, 'cnt) vecgeqrf ?work ?tau a computes the QR factorization of general matrix a:
a = Q * R where Q is an orthogonal (unitary) matrix and R is an
upper triangular matrix. R is returned in a. This routine does not
generate Q explicitly. It is generated by orgqr.tau, which is overwritten.work : default = an optimum-length vector.val gesv : ?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
('n, 'n, 'a_cd) mat -> ('n, 'nrhs, 'b_cd) mat -> unitgesv ?ipiv a b solves a system of linear equations a * x = b where a
is a 'n-by-'n general matrix, each column of matrix b is the r.h.s.
vector, and each column of matrix x is the corresponding solution.
This routine uses LU factorization: a = P * L * U with permutation matrix
P, a lower triangular matrix L and an upper triangular matrix U.
By this function, the upper triangular part of a is replaced by U, the
lower triangular part by L, and the solution x is returned in b.
Since 0.2.0
Raises Failure if the matrix is singular.
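A sketch of the one-shot LU solver gesv (same assumptions as above):

  open Slap.D

  let () =
    let n = Slap.Size.three in
    let a = Mat.init n n (fun i j -> if i = j then 3.0 else 1.0) in
    let b = Mat.init n Slap.Size.one (fun i _ -> float_of_int (i * i)) in
    gesv a b;   (* a is replaced by its LU factors, b by the solution *)
    Format.printf "x = %a@." pp_mat b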
val gbsv : ?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
(('n, 'n, 'kl, 'ku) Slap_size.luband, 'n, 'a_cd) mat ->
'kl Slap_size.t -> 'ku Slap_size.t -> ('n, 'nrhs, 'b_cd) mat -> unitgbsv ?ipiv ab kl ku b solves a system of linear equations A * X = B
where A is a 'n-by-'n band matrix, each column of matrix B is the
r.h.s. vector, and each column of matrix X is the corresponding solution.
The matrix A with kl subdiagonals and ku superdiagonals is stored into
ab in band storage format for LU factorization.
This routine uses LU factorization: A = P * L * U with permutation matrix
P, a lower triangular matrix L and an upper triangular matrix U.
By this function, the upper triangular part of A is replaced by U, the
lower triangular part by L, and the solution X is returned in B.
Since 0.2.0
Raises Failure if the matrix is singular.
val posv : ?up:[< `L | `U ] Slap_common.uplo ->
('n, 'n, 'a_cd) mat -> ('n, 'nrhs, 'b_cd) mat -> unitposv ?up a b solves systems of linear equations a * x = b where a is
a 'n-by-'n symmetric positive-definite matrix, each column of matrix b
is the r.h.s vector, and each column of matrix x is the corresponding
solution. The solution x is returned in b.
The Cholesky decomposition is used:
up = Slap_common.upper,
then a = U^T * U (real) or a = U^H * U (complex)up = Slap_common.lower,
then a = L * L^T (real) or a = L * L^H (complex)U and L are the upper and lower triangular matrices, respectively.Failure if the matrix is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.val ppsv : ?up:[< `L | `U ] Slap_common.uplo ->
('n Slap_size.packed, Slap_misc.cnt) vec ->
('n, 'nrhs, 'b_cd) mat -> unitppsv ?up a b solves systems of linear equations a * x = b where a is
a 'n-by-'n symmetric positive-definite matrix stored in packed format,
each column of matrix b is the r.h.s vector, and each column of matrix x
is the corresponding solution. The solution x is returned in b.
up = Slap_common.upper,
then a = U^T * U (real) or a = U^H * U (complex)up = Slap_common.lower,
then a = L * L^T (real) or a = L * L^H (complex)U and L are the upper and lower triangular matrices, respectively.Failure if the matrix is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.val pbsv : ?up:[< `L | `U ] Slap_common.uplo ->
kd:'kd Slap_size.t ->
(('n, 'kd) Slap_size.syband, 'n, 'ab_cd) mat ->
('n, 'nrhs, 'b_cd) mat -> unitpbsv ?up ~kd ab b solves systems of linear equations ab * x = b where
ab is a 'n-by-'n symmetric positive-definite band matrix with kd
subdiagonals, stored in band storage format, each column of matrix b is
the r.h.s vector, and each column of matrix x is the corresponding
solution. The solution x is returned in b.
This routine uses the Cholesky decomposition:
up = Slap_common.upper,
then a = U^T * U (real) or a = U^H * U (complex)up = Slap_common.lower,
then a = L * L^T (real) or a = L * L^H (complex)U and L are the upper and lower triangular matrices, respectively.Failure if the matrix is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.kd : the number of subdiagonals or superdiagonals in ab.val ptsv : ('n, Slap_misc.cnt) vec ->
('n Slap_size.p, Slap_misc.cnt) vec ->
('n, 'nrhs, 'b_cd) mat -> unitptsv d e b solves systems of linear equations A * x = b where A is a
'n-by-'n symmetric positive-definite tridiagonal matrix with diagonal
elements d and subdiagonal elements e, each column of matrix b is the
r.h.s vector, and each column of matrix x is the corresponding solution.
The solution x is returned in b.
This routine uses the Cholesky decomposition: A = L^T * L (real) or
A = L^H * L (complex) where L is a lower triangular matrix.
Since 0.2.0
Raises Failure if the matrix is singular.
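As an illustration of posv described above, a sketch that solves a symmetric positive-definite system with two right-hand sides (same assumptions as above):

  open Slap.D

  let () =
    let n = Slap.Size.three in
    let a = Mat.init n n (fun i j -> if i = j then 4.0 else 1.0) in
    let b = Mat.init n Slap.Size.two (fun i j -> float_of_int (i + j)) in
    posv a b;   (* both columns of b are overwritten with solutions *)
    Format.printf "x = %a@." pp_mat b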
type sysv_min_lwork
val sysv_min_lwork : unit -> sysv_min_lwork Slap_size.tsysv_min_lwork () computes the minimum length of workspace for sysv
routine.val sysv_opt_lwork : ?up:[< `L | `U ] Slap_common.uplo ->
('n, 'n, 'a_cd) mat ->
('n, 'nrhs, 'b_cd) mat -> (module Slap_size.SIZE)sysv_opt_lwork ?up a b computes the optimal length of workspace for sysv
routine.val sysv : ?up:[< `L | `U ] Slap_common.uplo ->
?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
?work:('lwork, Slap_misc.cnt) vec ->
('n, 'n, 'a_cd) mat -> ('n, 'nrhs, 'b_cd) mat -> unitsysv ?up ?ipiv ?work a b solves systems of linear equations a * x = b
where a is a 'n-by-'n symmetric matrix, each column of matrix b is
the r.h.s. vector, and each column of matrix x is the corresponding
solution. The solution x is returned in b.
The diagonal pivoting method is used:
up = Slap_common.upper, then a = U * D * U^Tup = Slap_common.lower, then a = L * D * L^T
where U and L are the upper and lower triangular matrices, respectively.Failure if the matrix is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.ipiv : a result of sytrf. It is internally computed by sytrf if
omitted.work : default = an optimum-length vector.val spsv : ?up:[< `L | `U ] Slap_common.uplo ->
?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
('n Slap_size.packed, Slap_misc.cnt) vec ->
('n, 'nrhs, 'b_cd) mat -> unitspsv ?up a b solves systems of linear equations a * x = b where a is
a 'n-by-'n symmetric matrix stored in packed format, each column of
matrix b is the r.h.s. vector, and each column of matrix x is the
corresponding solution. The solution x is returned in b.
The diagonal pivoting method is used:
up = Slap_common.upper, then a = U * D * U^Tup = Slap_common.lower, then a = L * D * L^T
where U and L are the upper and lower triangular matrices, respectively.Failure if the matrix is singular.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.ipiv : a result of sytrf. It is internally computed by sytrf if
omitted.type ('m, 'n, 'nrhs) gels_min_lwork
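A sketch of sysv described above, solving a symmetric (not necessarily positive-definite) system (same assumptions as above; the Hilbert-like test matrix is just an arbitrary nonsingular symmetric example):

  open Slap.D

  let () =
    let n = Slap.Size.three in
    (* a symmetric, nonsingular (Hilbert-like) matrix *)
    let a = Mat.init n n (fun i j -> 1.0 /. float_of_int (i + j + 1)) in
    let b = Mat.init n Slap.Size.one (fun i _ -> float_of_int i) in
    sysv a b;   (* b is overwritten with the solution *)
    Format.printf "x = %a@." pp_mat b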
val gels_min_lwork : m:'m Slap_size.t ->
n:'n Slap_size.t ->
nrhs:'nrhs Slap_size.t -> ('m, 'n, 'nrhs) gels_min_lwork Slap_size.tgels_min_lwork ~n computes the minimum length of workspace for gels
routine.m : the number of rows in a matrix.n : the number of columns in a matrix.nrhs : the number of right hand sides.val gels_opt_lwork : trans:('am * 'an, 'm * 'n, [< `N | `T ]) Slap_common.trans2 ->
('am, 'an, 'a_cd) mat ->
('m, 'nrhs, 'b_cd) mat -> (module Slap_size.SIZE)gels_opt_lwork ~trans a b computes the optimum length of workspace for
gels routine.val gels : ?work:('work, Slap_misc.cnt) vec ->
trans:('am * 'an, 'm * 'n, [< `N | `T ]) Slap_common.trans2 ->
('am, 'an, 'a_cd) mat -> ('m, 'nrhs, 'b_cd) mat -> unitgels ?work ~trans a b solves an overdetermined or underdetermined system
of linear equations using QR or LU factorization.
trans = Slap_common.normal and 'm >= 'n: find the least square
solution to an overdetermined system by minimizing ||b - A * x||^2.trans = Slap_common.normal and 'm < 'n: find the minimum norm
solution to an underdetermined system a * x = b.trans = Slap_common.trans, and 'm >= 'n: find the minimum norm
solution to an underdetermined system a^H * x = b.trans = Slap_common.trans and 'm < 'n: find the least square
solution to an overdetermined system by minimizing ||b - A^H * x||^2.work : default = an optimum-length vector.trans : the transpose flag for a.val dot : ('n, 'x_cd) vec -> ('n, 'y_cd) vec -> floatdot x y returns the inner product of vectors x and y.val asum : ('n, 'x_cd) vec -> floatasum x returns the sum of the absolute values of the elements of x.val sbmv : k:'k Slap_size.t ->
?y:('n, 'y_cd) vec ->
(('n, 'k) Slap_size.syband, 'n, 'a_cd) mat ->
?up:[< `L | `U ] Slap_common.uplo ->
?alpha:float ->
?beta:float -> ('n, 'x_cd) vec -> ('n, 'y_cd) vecsbmv ~k ?y a ?up ?alpha ?beta x computes y := alpha * a * x + beta * y
where a is a 'n-by-'n symmetric band matrix with k
super-(or sub-)diagonals, and x and y are 'n-dimensional vectors.y, which is overwritten.k : the number of superdiagonals or subdiagonalsup : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.alpha : default = 1.0beta : default = 0.0val ger : ?alpha:float ->
('m, 'x_cd) vec ->
('n, 'y_cd) vec ->
('m, 'n, 'a_cd) mat -> ('m, 'n, 'a_cd) matger ?alpha x y a computes a := alpha * x * y^T + a with
the general matrix a, the vector x and
the transposed vector y^T of y.a, which is overwritten.alpha : default = 1.0val syr : ?alpha:float ->
?up:[< `L | `U ] Slap_common.uplo ->
('n, 'x_cd) vec ->
('n, 'n, 'a_cd) mat -> ('n, 'n, 'a_cd) matsyr ?alpha x a computes a := alpha * x * x^T + a with
the symmetric matrix a, the vector x and
the transposed vector x^T of x.a, which is overwritten.alpha : default = 1.0up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.type ('m, 'a) lansy_min_lwork
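A sketch of the rank-1 update ger described above (same assumptions as above, plus the assumed Vec.init constructor):

  open Slap.D

  let () =
    let m = Slap.Size.two and n = Slap.Size.three in
    let x = Vec.init m (fun i -> float_of_int i) in
    let y = Vec.init n (fun j -> float_of_int (2 * j)) in
    let a = Mat.init m n (fun _ _ -> 1.0) in
    (* a := 0.5 * x * y^T + a; a is updated in place and also returned *)
    ignore (ger ~alpha:0.5 x y a);
    Format.printf "a = %a@." pp_mat a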
val lansy_min_lwork : 'n Slap_size.t ->
([< `F | `I | `M | `O ] as 'a) Slap_common.norm4 ->
('n, 'a) lansy_min_lwork Slap_size.tlansy_min_lwork n norm computes the minimum length of workspace for
lansy routine. n is the number of rows or columns in a matrix.
norm is a matrix norm.val lansy : ?up:[< `L | `U ] Slap_common.uplo ->
?norm:[< `F | `I | `M | `O ] Slap_common.norm4 ->
?work:('lwork, Slap_misc.cnt) vec -> ('n, 'n, 'cd) mat -> floatlansy ?up ?norm ?work a computes the norm of symmetric matrix a.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.norm : default = Slap_common.norm_1norm = Slap_common.norm_1, the one norm is returned;norm = Slap_common.norm_inf, the infinity norm is returned;norm = Slap_common.norm_amax, the largest absolute value of
elements in matrix a (not a matrix norm) is returned;norm = Slap_common.norm_frob, the Frobenius norm is returned.work : default = an optimum-length vector.val lamch : [ `B | `E | `L | `M | `N | `O | `P | `R | `S | `U ] -> floatlamch cmach see LAPACK documentation.type 'n orgqr_min_lwork
val orgqr_min_lwork : n:'n Slap_size.t -> 'n orgqr_min_lwork Slap_size.torgqr_min_lwork ~n computes the minimum length of workspace for
orgqr routine. n is the number of columns in a matrix.val orgqr_opt_lwork : tau:('k, Slap_misc.cnt) vec ->
('m, 'n, 'cd) mat -> (module Slap_size.SIZE)orgqr_opt_lwork ~tau a computes the optimum length of workspace for
orgqr routine.val orgqr_dyn : ?work:('lwork, Slap_misc.cnt) vec ->
tau:('k, Slap_misc.cnt) vec -> ('m, 'n, 'cd) mat -> unitorgqr_dyn ?work ~tau a generates the orthogonal matrix Q of the QR
factorization formed by geqrf/geqpf.Invalid_argument if the following inequality is not satisfied:
(Mat.dim1 a) >= (Mat.dim2 a) >= (Vec.dim tau),
i.e., 'm >= 'n >= 'k.work : default = an optimum-length vectortau : a result of geqrftype ('r, 'm, 'n) ormqr_min_lwork
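A sketch combining geqrf and orgqr_dyn: factorize, then form the orthogonal factor explicitly (same assumptions as above):

  open Slap.D

  let () =
    let m = Slap.Size.three and n = Slap.Size.two in
    let a = Mat.init m n (fun i j -> float_of_int (i * j + 1)) in
    let tau = geqrf a in   (* a now holds R in its upper triangle plus Householder data *)
    orgqr_dyn ~tau a;      (* a is overwritten with the orthonormal columns of Q *)
    Format.printf "q = %a@." pp_mat a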
val ormqr_min_lwork : side:('r, 'm, 'n) Slap_common.side ->
m:'m Slap_size.t ->
n:'n Slap_size.t -> ('r, 'm, 'n) ormqr_min_lwork Slap_size.tormqr_min_lwork ~side ~m ~n computes the minimum length of workspace for
ormqr routine.side : the side flag to specify direction of matrix multiplication.m : the number of rows in a matrix.n : the number of columns in a matrix.val ormqr_opt_lwork : side:('r, 'm, 'n) Slap_common.side ->
trans:('r * 'r, 'r * 'r, [< `N | `T ]) Slap_common.trans2 ->
tau:('k, Slap_misc.cnt) vec ->
('r, 'k, 'a_cd) mat ->
('m, 'n, 'c_cd) mat -> (module Slap_size.SIZE)ormqr_opt_lwork ~side ~trans ~tau a c computes the optimum length of
workspace for ormqr routine.side : the side flag to specify direction of matrix multiplication.trans : the transpose flag for orthogonal matrix Q.tau : a result of geqrf.val ormqr_dyn : side:('r, 'm, 'n) Slap_common.side ->
trans:('r * 'r, 'r * 'r, [< `N | `T ]) Slap_common.trans2 ->
?work:('lwork, Slap_misc.cnt) vec ->
tau:('k, Slap_misc.cnt) vec ->
('r, 'k, 'a_cd) mat -> ('m, 'n, 'c_cd) mat -> unitormqr_dyn ~side ~trans ?work ~tau a c multiplies a matrix c by the
orthogonal matrix Q of the QR factorization formed by geqrf/geqpf:
Q * c if side = Slap_common.left and
trans = Slap_common.normal;Q^T * c if side = Slap_common.left and
trans = Slap_common.trans;c * Q if side = Slap_common.right and
trans = Slap_common.normal;c * Q^T if side = Slap_common.right and
trans = Slap_common.trans.Invalid_argument if the following inequality is not satisfied:'m >= 'k if side = Slap_common.left;'n >= 'k if side = Slap_common.right.side : the side flag to specify direction of matrix multiplication of
Q and c.trans : the transpose flag for orthogonal matrix Q.work : default = an optimum-length vector.tau : a result of geqrf.type 'n gecon_min_lwork
val gecon_min_lwork : 'n Slap_size.t -> 'n gecon_min_lwork Slap_size.tgecon_min_lwork n computes the minimum length of workspace work for
gecon routine. n is the number of rows or columns in a matrix.type 'n gecon_min_liwork
val gecon_min_liwork : 'n Slap_size.t -> 'n gecon_min_liwork Slap_size.tgecon_min_liwork n computes the minimum length of workspace iwork for
gecon routine. n is the number of rows or columns in a matrix.val gecon : ?norm:[< `I | `O ] Slap_common.norm2 ->
?anorm:float ->
?work:('lwork, Slap_misc.cnt) vec ->
?iwork:('liwork, Slap_misc.cnt) Slap_common.int32_vec ->
('n, 'n, 'cd) mat -> floatgecon ?norm ?anorm ?work ?iwork a estimates the reciprocal of the
condition number of general matrix a.norm : default = Slap_common.norm_1.norm = Slap_common.norm_1, the one norm is returned;norm = Slap_common.norm_inf, the infinity norm is returned.anorm : default = the norm of matrix a as returned by lange.work : default = an optimum-length vector.iwork : default = an optimum-length vector.type 'n sycon_min_lwork
val sycon_min_lwork : 'n Slap_size.t -> 'n sycon_min_lwork Slap_size.tsycon_min_lwork n computes the minimum length of workspace work for
sycon routine. n is the number of rows or columns in a matrix.type 'n sycon_min_liwork
val sycon_min_liwork : 'n Slap_size.t -> 'n sycon_min_liwork Slap_size.tsycon_min_liwork n computes the minimum length of workspace iwork for
sycon routine. n is the number of rows or columns in a matrix.val sycon : ?up:[< `L | `U ] Slap_common.uplo ->
?ipiv:('n, Slap_misc.cnt) Slap_common.int32_vec ->
?anorm:float ->
?work:('lwork, Slap_misc.cnt) vec ->
?iwork:('liwork, Slap_misc.cnt) Slap_common.int32_vec ->
('n, 'n, 'cd) mat -> floatsycon ?up ?ipiv ?anorm ?work ?iwork a estimates the reciprocal of the
condition number of symmetric matrix a. Since a is symmetric, the
1-norm is equal to the infinity norm.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.ipiv : a result of sytrf. It is internally computed by sytrf if
omitted.anorm : default = the norm of matrix a as returned by lange.work : default = an optimum-length vector.iwork : default = an optimum-length vector.type 'n pocon_min_lwork
val pocon_min_lwork : 'n Slap_size.t -> 'n pocon_min_lwork Slap_size.tpocon_min_lwork n computes the minimum length of workspace work for
pocon routine. n is the number of rows or columns in a matrix.type 'n pocon_min_liwork
val pocon_min_liwork : 'n Slap_size.t -> 'n pocon_min_liwork Slap_size.tpocon_min_liwork n computes the minimum length of workspace iwork for
pocon routine. n is the number of rows or columns in a matrix.val pocon : ?up:[< `L | `U ] Slap_common.uplo ->
?anorm:float ->
?work:('lwork, Slap_misc.cnt) vec ->
?iwork:('liwork, Slap_misc.cnt) Slap_common.int32_vec ->
('n, 'n, 'cd) mat -> floatpocon ?up ?anorm ?work ?iwork a estimates the reciprocal of the
condition number of symmetric positive-definite matrix a.
Since a is symmetric, the 1-norm is equal to the infinity norm.up : default = Slap_common.upperup = Slap_common.upper,
then the upper triangular part of a is used;up = Slap_common.lower,
then the lower triangular part of a is used.anorm : default = the norm of matrix a as returned by lange.work : default = an optimum-length vector.iwork : default = an optimum-length vector.type ('m, 'n, 'nrhs) gelsy_min_lwork
val gelsy_min_lwork : m:'m Slap_size.t ->
n:'n Slap_size.t ->
nrhs:'nrhs Slap_size.t -> ('m, 'n, 'nrhs) gelsy_min_lwork Slap_size.tgelsy_min_lwork ~m ~n ~nrhs computes the minimum length of workspace for
gelsy routine.m : the number of rows in a matrix.n : the number of columns in a matrix.nrhs : the number of right hand sides.val gelsy_opt_lwork : ('m, 'n, 'a_cd) mat ->
('n, 'nrhs, 'b_cd) mat -> (module Slap_size.SIZE)gelsy_opt_lwork a b computes the optimum length of workspace for
gelsy routine.val gelsy : ('m, 'n, 'a_cd) mat ->
?rcond:float ->
?jpvt:('n, Slap_misc.cnt) Slap_common.int32_vec ->
?work:('lwork, Slap_misc.cnt) vec ->
('n, 'nrhs, 'b_cd) mat -> intgelsy a ?rcond ?jpvt ?work b computes the minimum-norm solution to a
linear least square problem (minimize ||b - a * x||) using a complete
orthogonal factorization of a.Returns the effective rank of a.rcond : default = -1.0 (machine precision)jpvt : default = a 'n-dimensional vector.work : default = an optimum-length vector.type ('m, 'n, 'nrhs) gelsd_min_lwork
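A sketch of gelsy described above. Note that, per the signature shown, the right-hand-side matrix has 'n rows; the rank-deficient test matrix makes the returned effective rank visible (same assumptions as above):

  open Slap.D

  let () =
    let n = Slap.Size.three in
    let a = Mat.init n n (fun i j -> float_of_int (i + j)) in   (* rank 2 *)
    let b = Mat.init n Slap.Size.one (fun i _ -> float_of_int i) in
    let rank = gelsy a b in   (* b is overwritten with a minimum-norm solution *)
    Format.printf "effective rank = %d, x = %a@." rank pp_mat b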
val gelsd_min_lwork : m:'m Slap_size.t ->
n:'n Slap_size.t ->
nrhs:'nrhs Slap_size.t -> ('m, 'n, 'nrhs) gelsd_min_lwork Slap_size.tgelsd_min_lwork ~m ~n ~nrhs computes the minimum length of workspace for
gelsd routine.m : the number of rows in a matrix.n : the number of columns in a matrix.nrhs : the number of right hand sides.val gelsd_opt_lwork : ('m, 'n, 'a_cd) mat ->
('n, 'nrhs, 'b_cd) mat -> (module Slap_size.SIZE)gelsd_opt_lwork a b computes the optimum length of workspace for
gelsd routine.type ('m, 'n, 'nrhs) gelsd_min_iwork
val gelsd_min_iwork : 'm Slap_size.t ->
'n Slap_size.t -> ('m, 'n, 'nrhs) gelsd_min_iwork Slap_size.tgelsd_min_iwork ~m ~n ~nrhs computes the minimum length of workspace
iwork for gelsd routine.val gelsd : ('m, 'n, 'a_cd) mat ->
?rcond:float ->
?s:(('m, 'n) Slap_size.min, Slap_misc.cnt) vec ->
?work:('lwork, Slap_misc.cnt) vec ->
?iwork:('liwork, Slap_misc.cnt) vec ->
('n, 'nrhs, 'b_cd) mat -> intgelsd a ?rcond ?s ?work ?iwork b computes the minimum-norm solution to a
linear least square problem (minimize ||b - a * x||) using the singular
value decomposition (SVD) of a and a divide and conquer method.Failure if the function fails to converge.Returns the effective rank of a.rcond : default = -1.0 (machine precision)s : the singular values of a in decreasing order.
They are implicitly computed if omitted.work : default = an optimum-length vector.iwork : default = an optimum-length vector.type ('m, 'n, 'nrhs) gelss_min_lwork
val gelss_min_lwork : m:'m Slap_size.t ->
n:'n Slap_size.t ->
nrhs:'nrhs Slap_size.t -> ('m, 'n, 'nrhs) gelss_min_lwork Slap_size.tgelss_min_lwork ~m ~n ~nrhs computes the minimum length of workspace for
gelss routine.m : the number of rows in a matrix.n : the number of columns in a matrix.nrhs : the number of right hand sides.val gelss_opt_lwork : ('m, 'n, 'a_cd) mat ->
('n, 'nrhs, 'b_cd) mat -> (module Slap_size.SIZE)gelss_opt_lwork a b computes the optimum length of workspace for
gelss routine.val gelss : ('m, 'n, 'a_cd) mat ->
?rcond:float ->
?s:(('m, 'n) Slap_size.min, Slap_misc.cnt) vec ->
?work:('lwork, Slap_misc.cnt) vec ->
('n, 'nrhs, 'b_cd) mat -> intgelss a ?rcond ?work b computes the minimum-norm solution to a
linear least square problem (minimize ||b - a * x||) using the singular
value decomposition (SVD) of a.Failure if the function fails to converge.Returns the effective rank of a.rcond : default = -1.0 (machine precision)s : the singular values of a in decreasing order.
They are implicitly computed if omitted.work : default = an optimum-length vector.type ('m, 'n) gesvd_min_lwork
val gesvd_min_lwork : m:'m Slap_size.t ->
n:'n Slap_size.t -> ('m, 'n) gesvd_min_lwork Slap_size.tgesvd_min_lwork ~m ~n computes the minimum length of workspace for
gesvd routine.m : the number of rows in a matrix.n : the number of columns in a matrix.val gesvd_opt_lwork : jobu:('u_cols, 'm, ('m, 'n) Slap_size.min, Slap_size.z, Slap_size.z)
Slap_common.svd_job ->
jobvt:('vt_rows, 'n, ('m, 'n) Slap_size.min, Slap_size.z, Slap_size.z)
Slap_common.svd_job ->
?s:(('m, 'n) Slap_size.min, Slap_misc.cnt) vec ->
?u:('m, 'u_cols, 'u_cd) mat ->
?vt:('vt_rows, 'n, 'vt_cd) mat ->
('m, 'n, 'a_cd) mat -> (module Slap_size.SIZE)gesvd_opt_lwork ~jobu ~jobvt ?s ?u ?vt computes the optimum length of
workspace for gesvd routine.jobu : the SVD job flag for u.jobvt : the SVD job flag for vt.s : a return location for singular values.u : a return location for left singular vectors.vt : a return location for (transposed) right singular vectors.val gesvd : jobu:('u_cols, 'm, ('m, 'n) Slap_size.min, Slap_size.z, Slap_size.z)
Slap_common.svd_job ->
jobvt:('vt_rows, 'n, ('m, 'n) Slap_size.min, Slap_size.z, Slap_size.z)
Slap_common.svd_job ->
?s:(('m, 'n) Slap_size.min, Slap_misc.cnt) vec ->
?u:('m, 'u_cols, 'u_cd) mat ->
?vt:('vt_rows, 'n, 'vt_cd) mat ->
?work:('lwork, Slap_misc.cnt) vec ->
('m, 'n, 'a_cd) mat ->
(('m, 'n) Slap_size.min, 'cnt) vec * ('m, 'u_cols, 'cnt) mat *
('vt_rows, 'n, 'cnt) matgesvd ~jobu ~jobvt ?s ?u ?vt ?work a computes the singular value
decomposition (SVD) of 'm-by-'n general rectangular matrix a:
a = U * D * V^T where
D is an 'm-by-'n matrix (the diagonal elements in D are singular
values in decreasing order, and other elements are zeros),U is an 'm-by-'m orthogonal matrix (the columns in U are left
singular vectors), andV is an 'n-by-'n orthogonal matrix (the columns in V are right
singular vectors).Returns (s, u, vt) with singular values s in decreasing order,
left singular vectors u and right singular vectors vt.jobu : the SVD job flag for u:jobu = Slap_common.svd_all, then all 'm columns of U are
returned in u. ('u_cols = 'm.)jobu = Slap_common.svd_top, then the first ('m, 'n) min
columns of U are returned in u. ('u_cols = ('m, 'n) min.)jobu = Slap_common.svd_overwrite, then the first ('m, 'n) min
columns of U are overwritten on a. ('u_cols = z since u
is unused.)jobu = Slap_common.svd_no, then no columns of U are computed.
('u_cols = z.)jobvt : the SVD job flag for vt:jobvt = Slap_common.svd_all, then all 'n rows of V^T are
returned in vt. ('vt_rows = 'n.)jobvt = Slap_common.svd_top, then the first ('m, 'n) min
rows of V^T are returned in vt. ('vt_rows = ('m, 'n) min.)jobvt = Slap_common.svd_overwrite, then the first ('m, 'n) min
rows of V^T are overwritten on a. ('vt_rows = z since vt
is unused.)jobvt = Slap_common.svd_no, then no rows of V^T are
computed. ('vt_rows = z.)s : a return location for singular values.
(default = an implicitly allocated vector.)u : a return location for left singular vectors.
(default = an implicitly allocated matrix.)vt : a return location for (transposed) right singular vectors.
(default = an implicitly allocated matrix.)work : default = an optimum-length vector.
(Note: jobu and jobvt cannot both be Slap_common.svd_overwrite.)
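A sketch of gesvd computing a thin SVD with the svd_top job flags (the Slap.Common.svd_top value and the other module paths and constructors are assumptions, as before):

  open Slap.D

  let () =
    let m = Slap.Size.four and n = Slap.Size.three in
    let a = Mat.init m n (fun i j -> float_of_int (i * j + 1)) in
    let s, u, vt =
      gesvd ~jobu:Slap.Common.svd_top ~jobvt:Slap.Common.svd_top a in
    Format.printf "singular values = %a@." pp_vec s;
    Format.printf "u = %a@." pp_mat u;
    Format.printf "vt = %a@." pp_mat vt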
type ('m, 'n) gesdd_liwork
val gesdd_liwork : m:'m Slap_size.t ->
n:'n Slap_size.t -> ('m, 'n) gesdd_liwork Slap_size.tgesdd_liwork ~m ~n computes the length of workspace iwork for
gesdd routine.m : the number of rows in a matrix.n : the number of columns in a matrix.type ('m, 'n, 'jobz) gesdd_min_lwork
val gesdd_min_lwork : jobz:('u_cols * 'vt_rows, 'm * 'n,
('m, 'n) Slap_size.min * ('m, 'n) Slap_size.min, 'm * 'n,
Slap_size.z * Slap_size.z)
Slap_common.svd_job ->
m:'m Slap_size.t ->
n:'n Slap_size.t ->
unit -> ('m, 'n, 'u_cols * 'vt_rows) gesdd_min_lwork Slap_size.tgesdd_min_lwork ~jobz ~m ~n () computes the minimum length of workspace work for
gesdd routine.jobz : the SVD job flag.m : the number of rows in a matrix.n : the number of columns in a matrix.val gesdd_opt_lwork : jobz:('u_cols * 'vt_rows, 'm * 'n,
('m, 'n) Slap_size.min * ('m, 'n) Slap_size.min, 'm * 'n,
Slap_size.z * Slap_size.z)
Slap_common.svd_job ->
?s:(('m, 'n) Slap_size.min, Slap_misc.cnt) vec ->
?u:('m, 'u_cols, 'u_cd) mat ->
?vt:('vt_rows, 'n, 'vt_cd) mat ->
?iwork:('liwork, Slap_misc.cnt) Slap_common.int32_vec ->
('m, 'n, 'a_cd) mat -> (module Slap_size.SIZE)gesdd_opt_lwork ~jobz ?s ?u ?vt ?iwork a computes the optimum length of
workspace work for gesdd routine.jobz : the SVD job flag.s : a return location for singular values.
(default = an implicitly allocated vector.)u : a return location for left singular vectors.
(default = an implicitly allocated matrix.)vt : a return location for (transposed) right singular vectors.
(default = an implicitly allocated matrix.)iwork : default = an optimum-length vector.val gesdd : jobz:('u_cols * 'vt_rows, 'm * 'n,
('m, 'n) Slap_size.min * ('m, 'n) Slap_size.min, 'm * 'n,
Slap_size.z * Slap_size.z)
Slap_common.svd_job ->
?s:(('m, 'n) Slap_size.min, Slap_misc.cnt) vec ->
?u:('m, 'u_cols, 'u_cd) mat ->
?vt:('vt_rows, 'n, 'vt_cd) mat ->
?work:('lwork, Slap_misc.cnt) vec ->
?iwork:('liwork, Slap_misc.cnt) Slap_common.int32_vec ->
('m, 'n, 'a_cd) mat ->
(('m, 'n) Slap_size.min, 'cnt) vec *
('m, 'u_cols, 'u_cd) mat option *
('vt_rows, 'n, 'vt_cd) mat optiongesdd ~jobz ?s ?u ?vt ?work ?iwork a computes the singular value
decomposition (SVD) of general rectangular matrix a using a divide and
conquer method: a = U * D * V^T where
D is an 'm-by-'n matrix (the diagonal elements in D are singular
values in decreasing order, and other elements are zeros),U is an 'm-by-'m orthogonal matrix (the columns in U are left
singular vectors), andV is an 'n-by-'n orthogonal matrix (the columns in V are right
singular vectors).Returns (s, u, vt) with singular values s in decreasing order,
left singular vectors u and right singular vectors vt.
If u (vt) is not needed, None is returned.jobz : the SVD job flag:jobz is Slap_common.svd_all, all 'm columns of U and all 'n
rows of V^T are returned in u and vt.
('u_cols = 'm and 'vt_rows = 'n.)jobz is Slap_common.svd_top, the first ('m, 'n) min columns of
U and the first ('m, 'n) min rows of V^T are returned in u and
vt. ('u_cols = ('m, 'n) min and 'vt_rows = ('m, 'n) min.)jobz is Slap_common.svd_overwrite, then
'm >= 'n, a is overwritten with the first ('m, 'n) min columns
of U, and all 'n rows of V^T is returned in vt, thus vt is
'n-by-'n and u is not used;'m < 'n, a is overwritten with the first ('m, 'n) min rows of
V^T, and all 'm columns of U is returned in u; thereby u is
'm-by-'m and vt is not used.'u_cols = 'm and 'vt_rows = 'n, but either u or
vt should be omitted.)jobz is Slap_common.svd_no, no singular vectors are computed.
('u_cols = z and 'vt_rows = z.)s : a return location for singular values.
(default = an implicitly allocated vector.)u : a return location for left singular vectors.
(default = an implicitly allocated matrix.)vt : a return location for (transposed) right singular vectors.
(default = an implicitly allocated matrix.)work : default = an optimum-length vector.iwork : default = an optimum-length vector.val geev_min_lwork : ?vectors:bool -> 'n Slap_size.t -> (module Slap_size.SIZE)geev_min_lwork ?vectors n computes the minimum length of workspace for
geev routine. n is the number of rows or columns of a matrix.vectors : whether eigenvectors are computed, or not.
(default = true, i.e., they are computed.)val geev_opt_lwork : ?vl:('n, 'n, 'vl_cd) mat option ->
?vr:('n, 'n, 'vr_cd) mat option ->
?wr:('n, Slap_misc.cnt) vec ->
?wi:('n, Slap_misc.cnt) vec ->
('n, 'n, 'a_cd) mat -> (module Slap_size.SIZE)geev_opt_lwork ?vl ?vr ?wr ?wi a computes the optimum length of workspace
for geev routine.vl : a return location for left eigenvectors. See LAPACK GEEV
documentation for details about storage of complex eigenvectors.
(default = an implicitly allocated matrix.)vl = None, left eigenvectors are not computed;vl = Some vl, left eigenvectors are computed.vr : a return location for right eigenvectors. See LAPACK GEEV
documentation for details about storage of complex eigenvectors.
(default = an implicitly allocated matrix.)vr = None, right eigenvectors are not computed;vr = Some vr, right eigenvectors are computed.wr : a return location for real parts of eigenvalues.
(default = an implicitly allocated vector.)wi : a return location for imaginary parts of eigenvalues.
(default = an implicitly allocated vector.)val geev : ?work:('lwork, Slap_misc.cnt) vec ->
?vl:('n, 'n, 'vl_cd) mat option ->
?vr:('n, 'n, 'vr_cd) mat option ->
?wr:('n, Slap_misc.cnt) vec ->
?wi:('n, Slap_misc.cnt) vec ->
('n, 'n, 'a_cd) mat ->
('n, 'n, 'vl_cd) mat option * ('n, Slap_misc.cnt) vec *
('n, Slap_misc.cnt) vec * ('n, 'n, 'vr_cd) mat optiongeev ?work ?vl ?vr ?wr ?wi a computes the eigenvalues and the left and
right eigenvectors of 'n-by-'n nonsymmetric matrix a:
Let w(j) be the j-th eigenvalue of a. The j-th right eigenvector
vr(j) satisfies a * vr(j) = w(j) * vr(j), and the j-th left
eigenvector vl(j) satisfies vl(j)^H * a = vl(j)^H * w(j) where vl(j)^H
denotes the conjugate transpose of vl(j). The computed eigenvectors are
normalized to have Euclidean norm equal to 1.
Raises Failure if the function fails to converge.
Returns (vl, wr, wi, vr) where vl and vr are left and right
eigenvectors, and wr and wi are the real and imaginary parts
of eigenvalues. If vl (vr) is an empty matrix, None is set.
work : default = an optimum-length vector.vl : a return location for left eigenvectors. See LAPACK GEEV
documentation for details about storage of complex eigenvectors.
(default = an implicitly allocated matrix.)vl = None, left eigenvectors are not computed;vl = Some vl, left eigenvectors are computed.vr : a return location for right eigenvectors. See LAPACK GEEV
documentation for details about storage of complex eigenvectors.
(default = an implicitly allocated matrix.)vr = None, right eigenvectors are not computed;vr = Some vr, right eigenvectors are computed.wr : a return location for real parts of eigenvalues.
(default = an implicitly allocated vector.)wi : a return location for imaginary parts of eigenvalues.
(default = an implicitly allocated vector.)type 'n syev_min_lwork
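A sketch of geev described above, computing eigenvalues only by passing ~vl:None and ~vr:None (same assumptions as above):

  open Slap.D

  let () =
    let n = Slap.Size.three in
    let a = Mat.init n n (fun i j -> float_of_int (i - j)) in
    let _, wr, wi, _ = geev ~vl:None ~vr:None a in
    (* wr and wi hold the real and imaginary parts of the eigenvalues *)
    Format.printf "Re = %a@." pp_vec wr;
    Format.printf "Im = %a@." pp_vec wi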
val syev_min_lwork : 'n Slap_size.t -> 'n syev_min_lwork Slap_size.tsyev_min_lwork n computes the minimum length of workspace for
syev routine. n is the number of rows or columns of a matrix.val syev_opt_lwork : ?vectors:bool ->
?up:bool -> ('n, 'n, 'cd) mat -> (module Slap_size.SIZE)syev_opt_lwork ?vectors ?up a computes the optimum length of workspace for
syev routine.vectors : whether eigenvectors are computed, or not.
(default = false.)vectors = true, eigenvectors are computed and returned in a.vectors = false, eigenvectors are not computed.up : default = trueup = true, then the upper triangular part of a is used;up = false, then the lower triangular part of a is used.val syev : ?vectors:bool ->
?up:bool ->
?work:('lwork, Slap_misc.cnt) vec ->
?w:('n, Slap_misc.cnt) vec ->
('n, 'n, 'cd) mat -> ('n, 'cnt) vecsyev ?vectors ?up ?work ?w a computes the eigenvalues and the eigenvectors
of 'n-by-'n symmetric matrix a.Failure if the function fails to converge.w of eigenvalues in ascending order.vectors : whether eigenvectors are computed, or not.
(default = false.)vectors = true, eigenvectors are computed and returned in a.vectors = false, eigenvectors are not computed.up : default = trueup = true, then the upper triangular part of a is used;up = false, then the lower triangular part of a is used.work : default = an optimum-length vector.w : a return location for eigenvalues.
(default = an implicitly allocated vector.)val syevd_min_lwork : vectors:bool -> 'n Slap_size.t -> (module Slap_size.SIZE)syevd_min_lwork ?vectors n computes the minimum length of workspace work
for syevd routine. n is the number of rows or columns of a matrix.vectors : whether eigenvectors are computed, or not.
(default = false, i.e., they are not computed.)val syevd_min_liwork : vectors:bool -> 'n Slap_size.t -> (module Slap_size.SIZE)syevd_min_liwork ~vectors n computes the minimum length of workspace
iwork for syevd routine. n is the number of rows or columns of a
matrix.vectors : whether eigenvectors are computed, or not.
(default = false, i.e., they are not computed.)val syevd_opt_lwork : ?vectors:bool ->
?up:bool -> ('n, 'n, 'cd) mat -> (module Slap_size.SIZE)syevd_opt_lwork ?vectors ?up a computes the optimum length of workspace
work for syevd routine.vectors : whether eigenvectors are computed, or not.
(default = false, i.e., they are not computed.)
up : default = true
up = true, then the upper triangular part of a is used;
up = false, then the lower triangular part of a is used.
val syevd_opt_liwork : ?vectors:bool ->
?up:bool -> ('n, 'n, 'cd) mat -> (module Slap_size.SIZE)syevd_opt_liwork ?vectors ?up a computes the optimum length of workspace
iwork for syevd routine.vectors : whether eigenvectors are computed, or not.
(default = false, i.e., they are not computed.)
up : default = true
up = true, then the upper triangular part of a is used;
up = false, then the lower triangular part of a is used.
val syevd : ?vectors:bool ->
?up:bool ->
?work:('lwork, Slap_misc.cnt) vec ->
?iwork:('liwork, Slap_misc.cnt) Slap_common.int32_vec ->
?w:('n, Slap_misc.cnt) vec ->
('n, 'n, 'a_cd) mat -> ('n, 'w_cd) vecsyevd ?vectors ?up ?work ?iwork ?w a computes the eigenvalues and the eigenvectors of
'n-by-'n symmetric matrix a using the divide-and-conquer method.
Raises Failure if the function fails to converge.
Returns the vector w of eigenvalues in ascending order.
vectors : whether eigenvectors are computed, or not.
(default = false.)
vectors = true, eigenvectors are computed and returned in a.
vectors = false, eigenvectors are not computed.
up : default = true
up = true, then the upper triangular part of a is used;
up = false, then the lower triangular part of a is used.
work : default = an optimum-length vector.
iwork : default = an optimum-length vector.
w : a return location for eigenvalues.
(default = an implicitly allocated vector.)
type 'n sbev_min_lwork
val sbev_min_lwork : 'n Slap_size.t -> 'n sbev_min_lwork Slap_size.tsbev_min_lwork n computes the minimum length of workspace work
for sbev routine. n is the number of rows or columns of a matrix.val sbev : kd:'kd Slap_size.t ->
?z:('n, 'n, 'z_cd) mat ->
?up:bool ->
?work:('lwork, Slap_misc.cnt) vec ->
?w:('n, Slap_misc.cnt) vec ->
(('n, 'kd) Slap_size.syband, 'n, 'a_cd) mat -> ('n, 'cnt) vecsbev ~kd ?z ?up ?work ?w ab computes all eigenvalues and, optionally,
eigenvectors of a real symmetric band matrix ab stored in band storage
format.
Raises Failure if the function fails to converge.
Returns w, which is overwritten.
kd : the number of subdiagonals or superdiagonals.
z : The eigenvectors are returned in z if it is given.
They are not computed if omitted.
up : default = true
up = true, then the upper triangular part of ab is used;
up = false, then the lower triangular part of ab is used.
work : workspace for sbev
w : w is replaced by eigenvalues if it is given, or newly
allocated if omitted.
type 'n syevr_min_lwork
val syevr_min_lwork : 'n Slap_size.t -> 'n syevr_min_lwork Slap_size.tsyevr_min_lwork n computes the minimum length of workspace work
for syevr routine. n is the number of rows or columns of a matrix.type 'n syevr_min_liwork
val syevr_min_liwork : 'n Slap_size.t -> 'n syevr_min_liwork Slap_size.tsyevr_min_liwork n computes the minimum length of workspace iwork
for syevr routine. n is the number of rows or columns of a matrix.val syevr_opt_lwork : ?vectors:bool ->
?range:[ `A | `I of int * int | `V of float * float ] ->
?up:bool ->
?abstol:float -> ('n, 'n, 'a_cd) mat -> (module Slap_size.SIZE)syevr_opt_lwork ?vectors ?range ?up ?abstol a computes the optimum length
of workspace work for syevr routine.vectors : whether eigenvectors are computed, or not.
(default = false, i.e., they are not computed.)range : default = `A.up : default = true.abstol : The absolute error tolerance to which each eigenvalue or
eigenvector is required. (default = lamch `S.)val syevr_opt_liwork : ?vectors:bool ->
?range:[ `A | `I of int * int | `V of float * float ] ->
?up:bool ->
?abstol:float -> ('n, 'n, 'a_cd) mat -> (module Slap_size.SIZE)syevr_opt_liwork ?vectors ?range ?up ?abstol a computes the optimum length
of workspace iwork for syevr routine.vectors : whether eigenvectors are computed, or not.
(default = false, i.e., they are not computed.)range : default = `A.up : default = true.abstol : The absolute error tolerance to which each eigenvalue or
eigenvector is required. (default = lamch `S.)module type SYEVR_RESULT =sig..end
The signature of modules returned by syevr_dyn.
val syevr_dyn : ?vectors:bool ->
?range:[ `A | `I of int * int | `V of float * float ] ->
?up:bool ->
?abstol:float ->
?work:('lwork, Slap_misc.cnt) vec ->
?iwork:('liwork, Slap_misc.cnt) Slap_common.int32_vec ->
?w:('n, Slap_misc.cnt) vec ->
?z:('n, 'k, 'z_cd) mat ->
?isuppz:(('k, 'k) Slap_size.add, Slap_misc.cnt) Slap_common.int32_vec ->
('n, 'n, 'a_cd) mat -> (module Slap_D.SYEVR_RESULT with type n = 'n)syevr_dyn ?vectors ?range ?up ?abstol ?work ?iwork ?w ?z ?isuppz a
computes selected eigenvalues w and, optionally, eigenvectors z of a
real symmetric matrix a using the Relatively Robust Representations.
Usage:
let f (type nn) ... =
...
let (a : (nn, nn, _) mat) = ... in
let module X = (val syevr_dyn ?vectors ?range ?up ?abstol
?work ?iwork ?w ?z ?isuppz a
: SYEVR_RESULT with type n = nn) in
let (m, w, z, isuppz) = X.value in
...
where type nn is the size of symmetric matrix a.
The returned module X contains tuple X.value = (m, w, z, isuppz) and
type X.m for representation of the number of computed eigenvalues:
m : X.m Slap_size.t is the number of eigenvalues.
w : (X.n, _) vec contains the m eigenvalues in ascending order.
z : (X.n, X.m, _) mat contains the m eigenvectors of dimension
n in the same order.
isuppz : ((m, m) Slap_size.add, _) Slap_common.int32_vec is a
2*m-dimensional vector indicating the nonzero elements in z.
Raises Invalid_argument if not (X.m <= 'k).
vectors : whether eigenvectors are computed, or not.
(default = false.)
vectors = true, eigenvectors are computed and returned in a.
vectors = false, eigenvectors are not computed.
range : default = `A
range = `A, all eigenvalues are computed.
range = `I (il, iu), eigenvalues with indices il to iu are
computed.
range = `V (vl, vu), the routine computes eigenvalues w(i) in
the half-open interval: vl < w(i) <= vu where vl <= vu.
up : default = true
up = true, then the upper triangular part of a is used;
up = false, then the lower triangular part of a is used.
abstol : The absolute error tolerance to which each eigenvalue or
eigenvector is required. (default = lamch `S.)
work : default = an optimum-length vector.
iwork : default = an optimum-length vector.
w : a return location for eigenvalues.
(default = an implicitly allocated vector of the minimum
required dimension.)
z : a return location for eigenvectors.
(default = an implicitly allocated matrix of the minimum
required dimension.)
isuppz : a return location for a vector to indicate the nonzero
elements in z.
(default = an implicitly allocated vector of the minimum
required dimension.)
val sygv_opt_lwork : ?vectors:bool ->
?up:bool ->
?itype:[ `AB | `A_B | `BA ] ->
('n, 'n, 'a_cd) mat ->
('n, 'n, 'b_cd) mat -> (module Slap_size.SIZE)sygv_opt_lwork ?vectors ?up ?itype a b computes the optimum length of
workspace work for sygv routine.vectors : whether eigenvectors are computed, or not.
(default = false, i.e., they are not computed.)up : default = true.itype : the behavior of this routine.val sygv : ?vectors:bool ->
?up:bool ->
?work:('lwork, Slap_misc.cnt) vec ->
?w:('n, Slap_misc.cnt) vec ->
?itype:[ `AB | `A_B | `BA ] ->
('n, 'n, 'a_cd) mat ->
('n, 'n, 'b_cd) mat -> ('n, 'cnt) vecsygv ?vectors ?up ?work ?w ?itype a b solves a real generalized symmetric
definite eigenproblem:
a * x = lambda * b * x if itype is `A_B;
a * b * x = lambda * x if itype is `AB;
b * a * x = lambda * x if itype is `BA
where a is a 'n-by-'n symmetric matrix, and b is a 'n-by-'n
symmetric positive definite matrix.
Raises Failure if the function fails to converge.
Returns the vector w of eigenvalues in ascending order.
vectors : whether eigenvectors are computed, or not.
(default = false.)
vectors = true, eigenvectors are computed and returned in a.
vectors = false, eigenvectors are not computed.
up : default = true.
up = true, then the upper triangular part of a is used;
up = false, then the lower triangular part of a is used.
work : default = an optimum-length workspace.
w : a return location for eigenvalues.
(default = an implicitly allocated vector.)
itype : the behavior of this routine.
val sbgv : ka:'ka Slap_size.t ->
kb:'kb Slap_size.t ->
?z:('n, 'n, 'z_cd) mat ->
?up:bool ->
?work:('lwork, Slap_misc.cnt) vec ->
?w:('n, Slap_misc.cnt) vec ->
(('n, 'ka) Slap_size.syband, 'n, 'ab_cd) mat ->
(('n, 'kb) Slap_size.syband, 'n, 'bb_cd) mat -> ('n, 'cnt) vecsbgv ~ka ~kb ?z ?up ?work ?w ab bb solves a general eigenvalue problem
ab * z = (lambda) * bb * z where ab is a 'n-by-'n symmetric band
matrix with ka subdiagonals, and bb is a 'n-by-'n symmetric
positive-definite band matrix with kb subdiagonals. Both ab and bb
are stored in band storage format.
Raises Failure if the function fails to converge.
Returns w, which is overwritten.
ka : the number of subdiagonals or superdiagonals of ab.
kb : the number of subdiagonals or superdiagonals of bb.
z : The eigenvectors are returned in z if it is given.
They are not computed if omitted.
up : default = true
up = true, then the upper triangular part of ab is used;
up = false, then the lower triangular part of ab is used.
work : workspace for sbgv
w : w is replaced by eigenvalues if it is given, or newly
allocated if omitted.
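To make the calling convention of the generalized symmetric drivers
documented above concrete, here is a minimal usage sketch of sygv in the
same placeholder style as the syevr_dyn example; the matrices a and b are
assumed to be built elsewhere, and the `A_B form a * x = lambda * b * x is
solved:
  let f (type nn) ... =
    ...
    let (a : (nn, nn, _) mat) = ... in   (* symmetric *)
    let (b : (nn, nn, _) mat) = ... in   (* symmetric positive definite *)
    (* w receives the eigenvalues in ascending order; because
       vectors = true, the eigenvectors overwrite a *)
    let w = sygv ~vectors:true ~itype:`A_B a b in
    ...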