2017-07-17-17-49 http://www.ma.hw.ac.uk/~loisel/ , Sébastien Loisel
Address: Department of Mathematics, room CMT18
Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom

Clicking [sin] in the browser brings you to the Real number calculator. Clicking an
[example] button makes sample code show up in Box81; clicking [answer] makes the result
show up in Box82. Ten of the 69 click buttons are displayed at a time; other buttons
change which ten are shown, show all 69, or open the help text.

What is eigenvector ; update 2017-08-29
Numeric Javascript Index
Reference Card, Numerical analysis in JS, Math Object functions,
Utility functions, Arithmetic operations, Linear algebra,
Data manipulation, Complex linear algebra, Eigenvalues,
Singular value decomposition, Sparse linear algebra, Coordinate matrices,
Solving PDEs, Cubic splines, Fast Fourier Transforms,
Quadratic Programming, Unconstrained optimization, Linear programming,
Solving ODEs, Seedrandom (David Bau), test matrix, run example

Reference card for the numeric module
Function Description

abs Absolute value
acos Arc-cosine
add Pointwise sum x+y
addeq Pointwise sum x+=y
all All the components of x are true
and Pointwise x && y
andeq Pointwise x &= y
any One or more of the components of x are true
asin Arc-sine
atan Arc-tangent
atan2 Arc-tangent (two parameters)
band Pointwise x & y
bench Benchmarking routine
bnot Binary negation ~x
bor Binary or x|y
bxor Binary xor x^y
ccsDim Dimensions of sparse matrix
ccsDot Sparse matrix-matrix product
ccsFull Convert sparse to full
ccsGather Gather entries of sparse matrix
ccsGetBlock Get rows/columns of sparse matrix
ccsLUP Compute LUP decomposition of sparse matrix
ccsLUPSolve Solve Ax=b using LUP decomp
ccsScatter Scatter entries of sparse matrix
ccsSparse Convert from full to sparse
ccsTSolve Solve upper/lower triangular system
ccs<op> Supported ops include: add/div/mul/geq/etc...
cLU Coordinate matrix LU decomposition
cLUsolve Coordinate matrix LU solve
cdelsq Coordinate matrix Laplacian
cdotMV Coordinate matrix-vector product
ceil Pointwise Math.ceil(x)
cgrid Coordinate grid for cdelsq
clone Deep copy of Array
cos Pointwise Math.cos(x)
det Determinant
diag Create diagonal matrix
dim Get Array dimensions
div Pointwise x/y
diveq Pointwise x/=y
dopri Numerical integration of ODE using Dormand-Prince RK method. Returns an object Dopri.
Dopri.at Evaluate the ODE solution at a point
dot Matrix-Matrix, Matrix-Vector and Vector-Matrix product
eig Eigenvalues and eigenvectors
epsilon 2.220446049250313e-16
eq Pointwise comparison x === y
exp Pointwise Math.exp(x)
floor Pointwise Math.floor(x)
geq Pointwise x>=y
getBlock Extract a block from a matrix
getDiag Get the diagonal of a matrix
gt Pointwise x>y
identity Identity matrix
imageURL Encode a matrix as an image URL
inv Matrix inverse
isFinite Pointwise isFinite(x)
isNaN Pointwise isNaN(x)
largeArray Don't prettyPrint Arrays larger than this
leq Pointwise x<=y
linspace Generate evenly spaced values
log Pointwise Math.log(x)
lshift Pointwise x<<y
lshifteq Pointwise x<<=y
lt Pointwise x<y
LU Dense LU decomposition
LUsolve Dense LU solve
mapreduce Make a pointwise map-reduce function
mod Pointwise x%y
modeq Pointwise x%=y
mul Pointwise x*y
neg Pointwise -x
neq Pointwise x!==y
norm2 Square root of the sum of the squares of the entries of x
norm2Squared Sum of squares of entries of x
norminf Largest modulus entry of x
not Pointwise logical negation !x
or Pointwise logical or x||y
oreq Pointwise x|=y
parseCSV Parse a CSV file into an Array
parseDate Pointwise parseDate(x)
parseFloat Pointwise parseFloat(x)
pointwise Create a pointwise function
pow Pointwise Math.pow(x,y)
precision Number of digits to prettyPrint
prettyPrint Pretty-prints x
random Create an Array of random numbers
rep Create an Array by duplicating values
round Pointwise Math.round(x)
rrshift Pointwise x>>>y
rrshifteq Pointwise x>>>=y
rshift Pointwise x>>y
rshifteq Pointwise x>>=y
same x and y are entrywise identical
seedrandom The seedrandom module
setBlock Set a block of a matrix
sin Pointwise Math.sin(x)
solve Solve Ax=b
solveLP Solve a linear programming problem
solveQP Solve a quadratic programming problem
spline Create a Spline object
Spline.at Evaluate the Spline at a point
Spline.diff Differentiate the Spline
Spline.roots Find all the roots of the Spline
sqrt Pointwise Math.sqrt(x)
sub Pointwise x-y
subeq Pointwise x-=y
sum Sum all the entries of x
svd Singular value decomposition
t Create a tensor type T (may be complex-valued)
T.<numericfun> Supported <numericfun> are: abs, add, cos, diag, div, dot, exp, getBlock, getDiag, inv, log, mul, neg, norm2, setBlock, sin, sub, transpose
T.conj Pointwise complex conjugate
T.fft Fast Fourier transform
T.get Read an entry
T.getRow Get a row
T.getRows Get a range of rows
T.ifft Inverse FFT
T.reciprocal Pointwise 1/z
T.set Set an entry
T.setRow Set a row
T.setRows Set a range of rows
T.transjugate The conjugate-transpose of a matrix
tan Pointwise Math.tan(x)
tensor Tensor product ret[i][j] = x[i]*y[j]
toCSV Make a CSV file
transpose Matrix transpose
uncmin Unconstrained optimization
version Version string for the numeric library
xor Pointwise x^y
xoreq Pointwise x^=y

Matrix eigenvalue and eigenvector
Box11: input matrix (example matrices; decimal to fraction; ▲ in, ▼ out)
Box12: eigenvectors and eigenvalues; RUN 01 ☞
check 02 ☞ : if Box15 is all zero, then OK
Box13: matrix × eigenvector (eigenvalue, vector)
Box14: eigenvalue × eigenvector
[matcheck]: if Box15 is all zero then Matrix × eigenvector = eigenvector × eigenvalue, OK
Box15: Matrix × eigenvector ▬ eigenvector × eigenvalue = zero

Matrix multiplications
Box16: define matrix A
Box17: define matrix B
Box18: define matrix C; or RUN 13 ☞ outputs Box16 × Box17 to Box18
Rotation: Bank to Box16, Elevation to Box17, Heading to Box18; then to Box19
Bank = x (φ), Elevation = y (θ), Heading = z (ψ), in degrees
Box19: RUN 11 ☞ ▲ three-box input, ▼ Box20 output
Box16 = matrixA, Box17 = matrixB, Box18 = matrixC; RUN 12 ☞ output ▲
Box20: rotation matrix (ISBN 0-691-10298-8, p. 86, eqn. 4.4) to Box20;
    rotation from the aircraft coordinate frame to the Earth coordinate frame
Box21: Mat1 × Mat2 of Box19 and Box20




Perturbation of λ × V = M × V

Let M be a square matrix, λ an eigenvalue, V an eigenvector and U a non-eigenvector;
then λ × V = M × V , but λ × U ≠ M × U . The check 02 ☞ button above
examines λ × V = M × V only; if the reader changes V to U in Box12, [matcheck] does not
read the new data from Box12, so that click button can never confirm λ × U ≠ M × U .
Box22 to Box26 below parallel Box11 to Box15; the only difference is that the code
reads the user's data from Box22, so it can confirm λ × U ≠ M × U . 2017-08-20-07-50
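A minimal numeric.js sketch of this perturbation check (the matrix, eigenvector and
perturbed vector below are illustrative values chosen here, not the page's Box22 data;
the Box/RUN helpers are not used):
[[
var M = [[2,1],[1,2]];                               // sample square matrix
var V = [0.7071067811865475, 0.7071067811865475];    // eigenvector of M, eigenvalue 3
var U = [0.7071067811865475, 0.8];                   // small perturbation: no longer an eigenvector
function residual(M, lambda, v) {                    // lambda × v ▬ M × v
  return numeric.sub(numeric.mul(lambda, v), numeric.dot(M, v));
}
console.log(residual(M, 3, V));   // ≈ [0, 0]       : λ × V = M × V
console.log(residual(M, 3, U));   // clearly nonzero: λ × U ≠ M × U
]]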
Box22: input M, λ, V (user data is read from here; example buttons; check 21 ☞)
Box23: one vector, one value; check 22 ☞
Box24: matrix × vector
Box25: vector × eigenvalue
Box26: Matrix × eigenvector ▬ eigenvector × eigenvalue = zero, but
    Matrix × non-eigenvector ▬ non-eigenvector × eigenvalue ≠ zero



<a name=eigenDoc01>
2017-08-14-15-16
What is an eigenvector? In one sentence:
a vector whose direction a square matrix cannot change.
The length of the output vector can differ from the length of the input vector, but
the direction of the output vector MUST be the same as the direction of the input vector.
Matrix times vector is computed as follows.
[ a ]   [ 1/2  1/3  1/6 ]   [ x ]
[ b ] = [ 1/3  2/3   0  ] × [ y ]
[ c ]   [ 1/6   0   5/6 ]   [ z ]   ---eqn.matRot001 (the matrix above is matrixA)

<a name=eigenDoc02>
If x=7.2, y=8.5, z=12.3, then a, b, c take the following values:
a = (1/2)*x + (1/3)*y + (1/6)*z = 8.483333333333334=509/60
b = (1/3)*x + (2/3)*y + ( 0 )*z = 8.066666666666666=121/15
c = (1/6)*x + ( 0 )*y + (5/6)*z = 11.45 =229/20
Input vector is [x,y,z]=[7.2, 8.5, 12.3] , length=16.59458
Output vector is [a,b,c]=[509/60, 121/15, 229/20] , length=16.375
Component ratios, output ÷ input:
(509/60) / 7.2 = 1.1782407407407407
(121/15) / 8.5 = 0.9490196078431372
(229/20) / 12.3 = 0.9308943089430893
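The same arithmetic can be reproduced with numeric.js directly (a sketch; matrixA and
the input vector are the ones in eqn.matRot001):
[[
var matrixA = [[1/2, 1/3, 1/6],
               [1/3, 2/3, 0  ],
               [1/6, 0  , 5/6]];
var input  = [7.2, 8.5, 12.3];
var output = numeric.dot(matrixA, input);   // [8.4833..., 8.0666..., 11.45]
var ratios = numeric.div(output, input);    // [1.1782..., 0.9490..., 0.9308...]
console.log(output, ratios);
]]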
<a name=eigenDoc03>
The above shows how a matrix multiplies a vector.
Next is a vector whose direction the matrix cannot change.
See the graph eigenv01.png below: CB(blue) = matrix × CA(red).
eigenv01.png shows two vectors, CA(red) and CB(blue).
Vector CA has x-axis component Ax and y-axis component Ay.
Vector CB has x-axis component Bx and y-axis component By.
The ratio of the x-axis components is Ax/Bx = 2.
The ratio of the y-axis components is Ay/By = 2.
Because both ratios are 2, we can write Ax = 2*Bx and Ay = 2*By,
and further [Ax, Ay] = 2*[Bx, By]; the two vectors are parallel.
Here the eigenvector is [Ax, Ay] and the eigenvalue is 0.5 = 1/2.
Only if vectors CA and CB are parallel can the component
ratios take the same value.
The square matrix maps vector CA(red) to vector CB(blue).
If the component ratios of the input vector and the output vector
are one constant, we say the input vector is an eigenvector of
the square matrix, and the constant ratio is the eigenvalue.
<a name=eigenDoc04>
The input vector and output vector are parallel; see eigenv01.png below.
eigenv01.png , a608141348 ; CB(blue)=matrix * CA(red)

2017-08-14-18-13 [above graph is base64 text graph]
<a name=eigenDoc05>
2017-08-14-19-13
Above are vectors whose direction the matrix cannot change.
Below are vectors whose direction the matrix is able to change.
The general equation for a matrix multiplying an input vector is
outputVector = matrix × inputVector   ---eqn.matRot002
If the outputVector is not parallel to the inputVector, then the
inputVector is a vector the matrix can rotate.
eqn.matRot001 has inputVector = [x,y,z] = [7.2, 8.5, 12.3]
eqn.matRot001 has outputVector = [a,b,c] = [509/60, 121/15, 229/20]
matrixA = [1/2, 1/3, 1/6 ; 1/3, 2/3, 0 ; 1/6, 0, 5/6]
Did matrixA rotate the inputVector [x,y,z] = [7.2, 8.5, 12.3]?
To find the answer, check the input/output component
ratios and see whether they are all the same constant.
(509/60) / 7.2 = 1.1782407407407407
(121/15) / 8.5 = 0.9490196078431372
(229/20) / 12.3 = 0.9308943089430893
Because 1.178 ≠ 0.9490 ≠ 0.9308, we conclude that the
eqn.matRot001 inputVector is not an eigenvector of matrixA.
<a name=eigenDoc06>
The inputVector and outputVector are not in the same direction; see eigenv02.png below.
eigenv02.png , a608141423 ; CB(blue)=matrix×CA(red)

2017-08-14-19-40 stop [above graph is base64 text graph]
<a name=eigenDoc07>
2017-08-14-20-43 start
Next, use the simple square matrix matrixB = [1, 2; 3, 4] to explain eigenvectors.
Go to Box11 and enter the following four numbers:
1   2
3   4
Click RUN 01 ☞
Box12 outputs two eigenvalues, 5.372281323269014 and -0.3722813232690143,
and two eigenvectors. matrixB is a two-by-two matrix, so the
inputVector and outputVector each have two components.
eigenvector1=[0.41597355791928425 , 0.9093767091321241]
eigenvalue1 = 5.372281323269014 .
<a name=eigenDoc08>
To verify that the outputVector is truly parallel to the inputVector, use
outputVector = matrix × inputVector   ---eqn.matRot002
to get the outputVector. Go to Box16 and enter matrixB:
1   2
3   4
In Box17 enter the following two numbers, the two components of eigenvector1:
0.41597355791928425
0.9093767091321241
click RUN 13 ☞
Box18 outputs two numbers, the two components of the outputVector:
2.2347269761835324
4.885427510286349
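The same Box12 and Box18 numbers can be obtained with numeric.js calls directly
(a sketch; it assumes the eigenvectors are the columns of the E matrix returned by
numeric.eig, and the sign or ordering may differ from the values quoted above):
[[
var matrixB = [[1,2],[3,4]];
var B = numeric.eig(matrixB);
console.log(B.lambda.x);                 // eigenvalues ≈ 5.3722... and -0.3722... (order may differ)
var E = B.E.x;                           // real parts of the eigenvector matrix
var eigenvector1 = [E[0][0], E[1][0]];   // a column of E; up to sign ≈ [0.41597..., 0.90937...]
console.log(numeric.dot(matrixB, eigenvector1));  // ≈ [2.23472..., 4.88542...] (the Box18 numbers)
]]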
<a name=eigenDoc09>
Compare these two equations:
outputVector = matrix × inputVector   ---eqn.matRot002
eigenvalue × eigenvector = matrix × eigenvector   ---eqn.matRot003
eqn.matRot002 is valid for any input vector, including eigenvectors.
eqn.matRot003 is valid for eigenvectors only. With eigenvector1 entered
in Box17, the output in Box18 must satisfy eqn.matRot003.
The left-hand side of eqn.matRot003 is computed as follows.
eigenvector1 component 1 times eigenvalue1:
0.41597355791928425 * 5.372281323269014 = 2.2347269761835324
eigenvector1 component 2 times eigenvalue1:
0.9093767091321241 * 5.372281323269014 = 4.885427510286349
<a name=eigenDoc10>
The left-hand side of eqn.matRot003 gives
2.2347269761835324 , 4.885427510286349
The right-hand side of eqn.matRot003, output to Box18, gives
2.2347269761835324 , 4.885427510286349
eigenvalue × eigenvector = matrix × eigenvector   ---eqn.matRot003
The left-hand side and the right-hand side are the same, so
eigenvector1 is verified. The reader can try to verify
eigenvector2 using the following information:
matrixB = [1, 2; 3, 4]
eigenvector2 = [-0.8245648401323938 , 0.5657674649689923]
eigenvalue2 = -0.3722813232690143
For eigenvalue × eigenvector, the Real number calculator below can be used.
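The eigenvector2 verification can also be done with numeric.js instead of the
Real number calculator (a sketch using the values quoted above):
[[
var matrixB      = [[1,2],[3,4]];
var eigenvector2 = [-0.8245648401323938, 0.5657674649689923];
var eigenvalue2  = -0.3722813232690143;
var left  = numeric.mul(eigenvalue2, eigenvector2);   // eigenvalue × eigenvector
var right = numeric.dot(matrixB, eigenvector2);       // matrix × eigenvector
console.log(left, right, numeric.sub(left, right));   // difference ≈ [0, 0]
]]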
2017-08-14-21-55

<a name=eigenDoc11>
2017-08-15-10-56
Above is an eigenvector, whose direction the matrix cannot change.
What happens with a vector whose direction the matrix CAN change?
Next is an example. MatrixB is defined as
[1 2]
[3 4]
Assume the input vector is
[5]
[6]
The matrix-times-vector equation is the following.
[ 17 ]   [ 1  2 ]   [ 5 ]
[ 39 ] = [ 3  4 ] × [ 6 ]   ---eqn.matRot004

<a name=eigenDoc12>
The input vector is [5, 6]; the output vector is [17, 39]. Are the two vectors parallel?
Check the vector component ratios:
component 1 ratio: 17/5 = 3.4
component 2 ratio: 39/6 = 6.5
3.4 ≠ 6.5, so the eqn.matRot004 input vector [5, 6] is not an
eigenvector of matrixB = [1, 2; 3, 4], because the output vector
is not parallel to the input vector.

<a name=eigenDoc13>
Given a square matrix, use eqn.matRot002:
outputVector = matrix × inputVector   ---eqn.matRot002
There are infinitely many input vectors whose output vector is not parallel
to them, but only a few directions whose output vector is parallel to them.
Relative to the infinitely many rotatable vectors, those few
vectors which the given square matrix cannot rotate
are called eigenvectors.

eqn.matRot003
eigenvalue × eigenvector = matrix × eigenvector   ---eqn.matRot003
is true only for eigenvectors.
<a name=eigenDoc14>
In Box11 enter a square matrix definition and click RUN 01 ☞ .
Box12 outputs the square matrix's eigenvectors and eigenvalues.
Click check 02 ☞ :
Box12 outputs the square matrix's eigenvectors (complex numbers)
    and eigenvalues (complex numbers).
Box13 outputs the eqn.matRot003 right-hand side,
    matrix × eigenvector (complex numbers).
Box14 outputs the eqn.matRot003 left-hand side,
    eigenvalue × eigenvector (complex numbers).
Box15 outputs the difference between the eqn.matRot003 right-hand
    side and left-hand side.
If the program is correct, the Box15 output MUST be 0+0i
[i = sqrt(-1)], because moving the right-hand side of eqn.matRot003
to the left gives
eigenvalue × eigenvector ▬ matrix × eigenvector ≡ zero
2017-08-15-13-10
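A sketch of the same Box13/Box14/Box15 comparison in plain real arithmetic (the page's
check 02 does this in complex arithmetic; this sketch assumes a matrix whose eigenvalues
happen to be real, such as the 3×3 example from the numeric.js documentation below):
[[
var M = [[1,2,5],[3,5,-1],[7,-3,5]];
var B = numeric.eig(M);
var E = B.E.x, lambda = B.lambda.x;          // real parts; imaginary parts are empty here
for (var i = 0; i < lambda.length; i++) {
  var v     = [E[0][i], E[1][i], E[2][i]];   // i-th eigenvector = i-th column of E
  var right = numeric.dot(M, v);             // Box13: matrix × eigenvector
  var left  = numeric.mul(lambda[i], v);     // Box14: eigenvalue × eigenvector
  console.log(numeric.sub(left, right));     // Box15: difference, ≈ [0, 0, 0]
}
]]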
<a name=eigenDoc15>
2017-08-15-17-06
A matrix's eigenvector answer is not unique. There are two
degrees of freedom. The eigenvector direction must be unique, but
the eigenvector length can be chosen arbitrarily, and the
eigenvector's positive or negative sense can be chosen arbitrarily.
An example of a non-unique eigenvector answer is next.
Define matrixC = [2, 1 ; 1, 2] with eigenvector1 = [1, 1].
[ 3 ]   [ 2  1 ]   [ 1 ]
[ 3 ] = [ 1  2 ] × [ 1 ]   ---eqn.matRot005
matrixC = [2, 1 ; 1, 2], input vector = [1, 1], output vector = [3, 3].
[1, 1] and [3, 3] are in the same direction; eigenvalue = 3 (3/1 = 3).
Double the length of [1, 1] to get input vector [2, 2]; matrixC
times [2, 2] gives output vector [6, 6]; eigenvalue = 3 (6/2 = 3).
Input [2, 2] and output [6, 6] are in the same direction. This
illustrates that [1, 1] and [2, 2] are the same eigenvector.
<a name=eigenDoc16>
Reverse eigenvector [1, 1] to [-1, -1]; matrixC transforms
[-1, -1] to [-3, -3]; eigenvalue = 3 ((-3)/(-1) = 3).
Other web pages say matrixC = [2,1;1,2] has eigenvector [1, 1];
numeric-1.2.6.min.js says matrixC = [2,1;1,2] has eigenvector
[-0.7071067811865475 , -0.7071067811865475].
The reader should not be surprised: [1, 1], [-1, -1] and
[-0.7071067811865475 , -0.7071067811865475] are all correct
answers. Other web pages normalise the eigenvector so that its
x-axis component is one. On the other hand,
numeric-1.2.6.min.js normalises the eigenvector length to one,
so the x/y components are [1/√2, 1/√2],
where 1/√2 = 0.7071067811865475. The second degree of freedom,
the vector's positive or negative sense, is the
numeric-1.2.6.min.js author's choice.
2017-08-15-17-46
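A sketch showing this scaling freedom numerically: [1, 1], [2, 2] and [-1/√2, -1/√2]
all give the component ratio 3 under matrixC, and numeric.eig returns the unit-length
version (sign and ordering are the library's choice):
[[
var matrixC = [[2,1],[1,2]];
[[1,1], [2,2], [-0.7071067811865475,-0.7071067811865475]].forEach(function (v) {
  var out = numeric.dot(matrixC, v);
  console.log(v, out, numeric.div(out, v));   // component ratio is 3 every time
});
var C = numeric.eig(matrixC);
console.log(C.lambda.x);    // eigenvalues 1 and 3 (ordering may vary)
console.log(C.E.x);         // columns are unit-length eigenvectors
]]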

<a name=eigenDoc17>
2017-08-15-13-18
Liu, HsinHan studied mechanical engineering and did not major
in math; LiuHH is a mathematics admirer. In other words,
if LiuHH's discussion of "What is eigenvector" involves a wrong
viewpoint, the reader should tolerate LiuHH's entry-level
error. If the reader does not understand what LiuHH said,
or suspects LiuHH's viewpoint, please ask a
math expert nearby, or search online for
"What is eigenvector" for a better explanation.
Thank you for visiting http://freeman2.com/
Liu, HsinHan 2017-08-15-13-27 stop





Real number calculator
To test numeric-1.2.6.js code, click [x5], then click [real number calculator].
No prime code, no complex code. integer1/integer2 OK ; d2f(1/3+1/4+1/5+1/6+1/7) OK
Box81: input box. Click [a1] to [s1] for an example, then click [real number calculator].
Box82: output box; example buttons fill Box81.
Box83: eval(ans) output.












2017-07-17-12-43 LiuHH received this documentation page from
http://www.numericjs.com/documentation.html
The text below comes from http://www.numericjs.com/documentation.html

Numerical analysis in Javascript

Numeric Javascript is a library that provides many useful functions for numerical calculations, particularly for linear algebra (vectors and matrices). You can create vectors and matrices and multiply them:
IN> A = [[1,2,3],[4,5,6]];
OUT> [[1,2,3],
      [4,5,6]]
IN> x = [7,8,9]
OUT> [7,8,9]
IN> numeric.dot(A,x);
OUT> [50,122]
The example shown above can be executed in the Javascript Workshop or at any Javascript prompt. The Workshop provides plotting capabilities:

The function workshop.plot() is essentially the flot plotting command.

The numeric library provides functions that implement most of the usual Javascript operators for vectors and matrices:
IN> x = [7,8,9];
    y = [10,1,2];
    numeric['+'](x,y)
OUT> [17,9,11]
IN> numeric['>'](x,y)
OUT> [false,true,true]
These operators can also be called with plain Javascript function names:
IN> numeric.add([7,8,9],[10,1,2])
OUT> [17,9,11]
You can also use these operators with three or more parameters:
IN> numeric.add([1,2],[3,4],[5,6],[7,8])
OUT> [16,20]
The function numeric.inv() can be used to compute the inverse of an invertible matrix:
IN> A = [[1,2,3],[4,5,6],[7,1,9]]
OUT> [[1,2,3],
      [4,5,6],
      [7,1,9]]
IN> Ainv = numeric.inv(A);
OUT> [[-0.9286,0.3571,0.07143],
      [-0.1429,0.2857,-0.1429],
      [0.7381,-0.3095,0.07143]]
The function numeric.prettyPrint() is used to print most of the examples in this documentation. It formats objects, arrays and numbers so that they can be understood easily. All output is automatically formatted using numeric.prettyPrint() when in the Workshop. In order to present the information clearly and succinctly, the function numeric.prettyPrint() lays out matrices so that all the numbers align. Furthermore, numbers are given approximately using the numeric.precision variable:
IN> numeric.precision = 10; x = 3.141592653589793
OUT> 3.141592654
IN> numeric.precision = 4; x
OUT> 3.142
The default precision is 4 digits. In addition to printing approximate numbers, the function numeric.prettyPrint() will replace large arrays with the string ...Large Array...:
IN> numeric.identity(100)
OUT> ...Large Array...
By default, this happens when the Array's length is more than 50. This can be controlled by setting the variable numeric.largeArray to an appropriate value:
IN> numeric.largeArray = 2; A = numeric.identity(4)
OUT> ...Large Array...
IN> numeric.largeArray = 50; A
OUT> [[1,0,0,0],
      [0,1,0,0],
      [0,0,1,0],
      [0,0,0,1]]
In particular, if you want to print all Arrays regardless of size, set numeric.largeArray = Infinity.

Math Object functions

The Math object functions have also been adapted to work on Arrays as follows:
IN> numeric.exp([1,2]);
OUT> [2.718,7.389]
IN> numeric.exp([[1,2],[3,4]])
OUT> [[2.718, 7.389],
      [20.09, 54.6]]
IN> numeric.abs([-2,3])
OUT> [2,3]
IN> numeric.acos([0.1,0.2])
OUT> [1.471,1.369]
IN> numeric.asin([0.1,0.2])
OUT> [0.1002,0.2014]
IN> numeric.atan([1,2])
OUT> [0.7854,1.107]
IN> numeric.atan2([1,2],[3,4])
OUT> [0.3218,0.4636]
IN> numeric.ceil([-2.2,3.3])
OUT> [-2,4]
IN> numeric.floor([-2.2,3.3])
OUT> [-3,3]
IN> numeric.log([1,2])
OUT> [0,0.6931]
IN> numeric.pow([2,3],[0.25,7.1])
OUT> [1.189,2441]
IN> numeric.round([-2.2,3.3])
OUT> [-2,3]
IN> numeric.sin([1,2])
OUT> [0.8415,0.9093]
IN> numeric.sqrt([1,2])
OUT> [1,1.414]
IN> numeric.tan([1,2])
OUT> [1.557,-2.185]

Utility functions

The function numeric.dim() allows you to compute the dimensions of an Array.
IN> numeric.dim([1,2])
OUT> [2]
IN> numeric.dim([[1,2,3],[4,5,6]])
OUT> [2,3]
You can perform a deep comparison of Arrays using numeric.same():
IN> numeric.same([1,2],[1,2])
OUT> true
IN> numeric.same([1,2],[1,2,3])
OUT> false
IN> numeric.same([1,2],[[1],[2]])
OUT> false
IN> numeric.same([[1,2],[3,4]],[[1,2],[3,4]])
OUT> true
IN> numeric.same([[1,2],[3,4]],[[1,2],[3,5]])
OUT> false
IN> numeric.same([[1,2],[2,4]],[[1,2],[3,4]])
OUT> false
You can create a multidimensional Array from a given value using numeric.rep()
IN> numeric.rep([3],5)
OUT> [5,5,5]
IN> numeric.rep([2,3],0)
OUT> [[0,0,0],
      [0,0,0]]
You can loop over Arrays as you normally would. However, in order to quickly generate optimized loops, the numeric library provides a few efficient loop-generation mechanisms. For example, the numeric.mapreduce() function can be used to make a function that computes the sum of all the entries of an Array.
IN> sum = numeric.mapreduce('accum += xi','0'); sum([1,2,3])
OUT> 6
IN> sum([[1,2,3],[4,5,6]])
OUT> 21
The functions numeric.any() and numeric.all() allow you to check whether any or all entries of an Array are boolean true values.
IN> numeric.any([false,true])
OUT> true
IN> numeric.any([[0,0,3.14],[0,false,0]])
OUT> true
IN> numeric.any([0,0,false])
OUT> false
IN> numeric.all([false,true])
OUT> false
IN> numeric.all([[1,4,3.14],["no",true,-1]])
OUT> true
IN> numeric.all([0,0,false])
OUT> false
You can create a diagonal matrix using numeric.diag()
IN> numeric.diag([1,2,3])
OUT> [[1,0,0],
      [0,2,0],
      [0,0,3]]
The function numeric.identity() returns the identity matrix.
IN> numeric.identity(3)
OUT> [[1,0,0],
      [0,1,0],
      [0,0,1]]
Random Arrays can also be created:
IN> numeric.random([2,3])
OUT> [[0.05303,0.1537,0.7280],
      [0.3839,0.08818,0.6316]]
You can generate a vector of evenly spaced values:
IN> numeric.linspace(1,5);
OUT> [1,2,3,4,5]
IN> numeric.linspace(1,3,5);
OUT> [1,1.5,2,2.5,3]

Arithmetic operations

The standard arithmetic operations have been vectorized:
IN> numeric.addVV([1,2],[3,4])
OUT> [4,6]
IN> numeric.addVS([1,2],3)
OUT> [4,5]
There are also polymorphic functions:
IN> numeric.add(1,[2,3])
OUT> [3,4]
IN> numeric.add([1,2,3],[4,5,6])
OUT> [5,7,9]
The other arithmetic operations are available:
IN> numeric.sub([1,2],[3,4])
OUT> [-2,-2]
IN> numeric.mul([1,2],[3,4])
OUT> [3,8]
IN> numeric.div([1,2],[3,4])
OUT> [0.3333,0.5]
The in-place operators (such as +=) are also available:
IN> v = [1,2,3,4]; numeric.addeq(v,3); v
OUT> [4,5,6,7]
IN> numeric.subeq([1,2,3],[5,3,1])
OUT> [-4,-1,2]
Unary operators:
IN> numeric.neg([1,-2,3])
OUT> [-1,2,-3]
IN> numeric.isFinite([10,NaN,Infinity])
OUT> [true,false,false]
IN> numeric.isNaN([10,NaN,Infinity])
OUT> [false,true,false]

Linear algebra

Matrix products are implemented in the functions numeric.dotVV() numeric.dotVM() numeric.dotMV() numeric.dotMM():
IN> numeric.dotVV([1,2],[3,4])
OUT> 11
IN> numeric.dotVM([1,2],[[3,4],[5,6]])
OUT> [13,16]
IN> numeric.dotMV([[1,2],[3,4]],[5,6])
OUT> [17,39]
IN> numeric.dotMMbig([[1,2],[3,4]],[[5,6],[7,8]])
OUT> [[19,22],
      [43,50]]
IN> numeric.dotMMsmall([[1,2],[3,4]],[[5,6],[7,8]])
OUT> [[19,22],
      [43,50]]
IN> numeric.dot([1,2,3,4,5,6,7,8,9],[1,2,3,4,5,6,7,8,9])
OUT> 285
The function numeric.dot() is "polymorphic" and selects the appropriate Matrix product:
IN> numeric.dot([1,2,3],[4,5,6])
OUT> 32
IN> numeric.dot([[1,2,3],[4,5,6]],[7,8,9])
OUT> [50,122]
Solving the linear problem Ax=b (Dan Huang):
IN> numeric.solve([[1,2],[3,4]],[17,39])
OUT> [5,6]
The algorithm is based on the LU decomposition:
IN> LU = numeric.LU([[1,2],[3,4]])
OUT> {LU:[[3     ,4     ],
          [0.3333,0.6667]],
       P:[1,1]}
IN> numeric.LUsolve(LU,[17,39])
OUT> [5,6]
The determinant:
IN> numeric.det([[1,2],[3,4]]);
OUT> -2
IN> numeric.det([[6,8,4,2,8,5],[3,5,2,4,9,2],[7,6,8,3,4,5],[5,5,2,8,1,6],[3,2,2,4,2,2],[8,3,2,2,4,1]]);
OUT> -1404
The matrix inverse:
IN> numeric.inv([[1,2],[3,4]])
OUT> [[   -2,    1],
      [  1.5, -0.5]]
The transpose:
IN> numeric.transpose([[1,2,3],[4,5,6]])
OUT> [[1,4],
      [2,5],
      [3,6]]
IN> numeric.transpose([[1,2,3,4,5,6,7,8,9,10,11,12]])
OUT> [[ 1],
      [ 2],
      [ 3],
      [ 4],
      [ 5],
      [ 6],
      [ 7],
      [ 8],
      [ 9],
      [10],
      [11],
      [12]]
You can compute the 2-norm of an Array, which is the square root of the sum of the squares of the entries.
IN> numeric.norm2([1,2])
OUT> 2.236
Computing the tensor product of two vectors:
IN> numeric.tensor([1,2],[3,4,5])
OUT> [[3,4,5],
      [6,8,10]]

Data manipulation

There are also some data manipulation functions. You can parse dates:
IN> numeric.parseDate(['1/13/2013','2001-5-9, 9:31']);
OUT> [1.358e12,9.894e11]
Parse floating point quantities:
IN> numeric.parseFloat(['12','0.1'])
OUT> [12,0.1]
Parse CSV files:
IN> numeric.parseCSV('a,b,c\n1,2.3,.3\n4e6,-5.3e-8,6.28e+4')
OUT> [[     "a",     "b",     "c"],
      [       1,     2.3,     0.3],
      [     4e6, -5.3e-8,   62800]]
IN> numeric.toCSV([[1.23456789123,2],[3,4]])
OUT> "1.23456789123,2
     3,4
     "
You can also fetch a URL (a thin wrapper around XMLHttpRequest):
IN> numeric.getURL('tools/helloworld.txt').responseText
OUT> "Hello, world!"

Complex linear algebra

You can also manipulate complex numbers:
IN> z = new numeric.T(3,4);
OUT> {x: 3, y: 4}
IN> z.add(5)
OUT> {x: 8, y: 4}
IN> w = new numeric.T(2,8);
OUT> {x: 2, y: 8}
IN> z.add(w)
OUT> {x: 5, y: 12}
IN> z.mul(w)
OUT> {x: -26, y: 32}
IN> z.div(w)
OUT> {x:0.5588,y:-0.2353}
IN> z.sub(w)
OUT> {x:1, y:-4}
Complex vectors and matrices can also be handled:
IN> z = new numeric.T([1,2],[3,4]);
OUT> {x: [1,2], y: [3,4]}
IN> z.abs()
OUT> {x:[3.162,4.472],y:}
IN> z.conj()
OUT> {x:[1,2],y:[-3,-4]}
IN> z.norm2()
OUT> 5.477
IN> z.exp()
OUT> {x:[-2.691,-4.83],y:[0.3836,-5.592]}
IN> z.cos()
OUT> {x:[-1.528,-2.459],y:[0.1658,-2.745]}
IN> z.sin()
OUT> {x:[0.2178,-2.847],y:[1.163,2.371]}
IN> z.log()
OUT> {x:[1.151,1.498],y:[1.249,1.107]}
Complex matrices:
IN> A = new numeric.T([[1,2],[3,4]],[[0,1],[2,-1]]);
OUT> {x:[[1, 2],
         [3, 4]],
      y:[[0, 1],
         [2,-1]]}
IN> A.inv();
OUT> {x:[[0.125,0.125],
         [  0.25,    0]],
      y:[[   0.5,-0.25],
         [-0.375,0.125]]}
IN> A.inv().dot(A)
OUT> {x:[[1,         0],
         [0,         1]],
      y:[[0,-2.776e-17],
         [0,         0]]}
IN> A.get([1,1])
OUT> {x: 4, y: -1}
IN> A.transpose()
OUT> { x: [[1, 3],
           [2, 4]],
       y: [[0, 2],
           [1,-1]] }
IN> A.transjugate()
OUT> { x: [[ 1, 3],
           [ 2, 4]],
       y: [[ 0,-2],
           [-1, 1]] }
IN> numeric.T.rep([2,2],new numeric.T(2,3));
OUT> { x: [[2,2],
           [2,2]],
       y: [[3,3],
           [3,3]] }

Eigenvalues

Eigenvalues:
IN> A = [[1,2,5],[3,5,-1],[7,-3,5]];
OUT> [[ 1,  2,  5],
      [ 3,  5, -1],
      [ 7, -3,  5]]
IN> B = numeric.eig(A);
OUT> {lambda:{x:[-4.284,9.027,6.257],y:},
      E:{x:[[ 0.7131,-0.5543,0.4019],
            [-0.2987,-0.2131,0.9139],
            [-0.6342,-0.8046,0.057 ]],
         y:}}
IN> C = B.E.dot(numeric.T.diag(B.lambda)).dot(B.E.inv());
OUT> {x: [[ 1,  2,  5],
          [ 3,  5, -1],
          [ 7, -3,  5]],
      y: }
Note that eigenvalues and eigenvectors are returned as complex numbers (type numeric.T). This is because eigenvalues are often complex even when the matrix is real.
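For reference, the real and imaginary parts of these numeric.T results live in their .x and .y fields; a minimal sketch of reading them back out (when the spectrum is real, as in the example above, the imaginary parts are empty):
[[
var B = numeric.eig([[1,2,5],[3,5,-1],[7,-3,5]]);
var realEigenvalues  = B.lambda.x;   // e.g. ≈ [-4.284, 9.027, 6.257]
var imagEigenvalues  = B.lambda.y;   // empty/undefined here: all eigenvalues are real
var realEigenvectors = B.E.x;        // the columns of this matrix are the eigenvectors
]]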

Singular value decomposition (Shanti Rao)

Shanti Rao kindly emailed me an implementation of the Golub and Reisch algorithm:
IN> A=[[ 22, 10,  2,  3,  7],
       [ 14,  7, 10,  0,  8],
       [ -1, 13, -1,-11,  3],
       [ -3, -2, 13, -2,  4],
       [  9,  8,  1, -2,  4],
       [  9,  1, -7,  5, -1],
       [  2, -6,  6,  5,  1],
       [  4,  5,  0, -2,  2]];
    numeric.svd(A);
OUT> {U: 
[[    -0.7071,    -0.1581,     0.1768,     0.2494,     0.4625],
 [    -0.5303,    -0.1581,    -0.3536,     0.1556,    -0.4984],
 [    -0.1768,     0.7906,    -0.1768,    -0.1546,     0.3967],
 [ -1.506e-17,    -0.1581,    -0.7071,    -0.3277,        0.1],
 [    -0.3536,     0.1581,  1.954e-15,   -0.07265,    -0.2084],
 [    -0.1768,    -0.1581,     0.5303,    -0.5726,   -0.05555],
 [ -7.109e-18,    -0.4743,    -0.1768,    -0.3142,     0.4959],
 [    -0.1768,     0.1581,  1.915e-15,     -0.592,    -0.2791]],
S: 
[      35.33,         20,       19.6,          0,          0],
V: 
[[    -0.8006,    -0.3162,     0.2887,    -0.4191,          0],
 [    -0.4804,     0.6325,  7.768e-15,     0.4405,     0.4185],
 [    -0.1601,    -0.3162,     -0.866,     -0.052,     0.3488],
 [  4.684e-17,    -0.6325,     0.2887,     0.6761,     0.2442],
 [    -0.3203,  3.594e-15,    -0.2887,      0.413,    -0.8022]]}

Sparse linear algebra

Sparse matrices are matrices that have a lot of zeroes. In numeric, sparse matrices are stored in the "compressed column storage" ordering. Example:
IN> A = [[1,2,0],
         [0,3,0],
         [2,0,5]];
    SA = numeric.ccsSparse(A);
OUT> [[0,2,4,5],
      [0,2,0,1,2],
      [1,2,2,3,5]]
The relation between A and its sparse representation SA is:
    A[SA[1][k]][j] = SA[2][k] with SA[0][j] ≤ k < SA[0][j+1]
In other words, SA[2] stores the nonzero entries of A; SA[1] stores the row numbers and SA[0] stores the offsets of the columns. See I. DUFF, R. GRIMES, AND J. LEWIS, Sparse matrix test problems, ACM Trans. Math. Soft., 15 (1989), pp. 1-14.
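As a sketch of this relation, the following loop reconstructs the dense matrix from its CCS form (ccsToFull is a name made up here; numeric.ccsFull() already provides this):
[[
function ccsToFull(SA) {
  var ncols = SA[0].length - 1;
  var nrows = 1 + Math.max.apply(null, SA[1]);
  var A = numeric.rep([nrows, ncols], 0);
  for (var j = 0; j < ncols; j++) {                  // walk column j
    for (var k = SA[0][j]; k < SA[0][j+1]; k++) {    // its entries in the sparse arrays
      A[SA[1][k]][j] = SA[2][k];                     // row SA[1][k], column j, value SA[2][k]
    }
  }
  return A;
}
// ccsToFull(numeric.ccsSparse([[1,2,0],[0,3,0],[2,0,5]])) returns [[1,2,0],[0,3,0],[2,0,5]].
]]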
IN> A = numeric.ccsSparse([[ 3, 5, 8,10, 8],
                          [ 7,10, 3, 5, 3],
                          [ 6, 3, 5, 1, 8],
                          [ 2, 6, 7, 1, 2],
                          [ 1, 2, 9, 3, 9]]);
OUT> [[0,5,10,15,20,25],
      [0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4],
      [3,7,6,2,1,5,10,3,6,2,8,3,5,7,9,10,5,1,1,3,8,3,8,2,9]]
IN> numeric.ccsFull(A);
OUT> [[ 3, 5, 8,10, 8],
      [ 7,10, 3, 5, 3],
      [ 6, 3, 5, 1, 8],
      [ 2, 6, 7, 1, 2],
      [ 1, 2, 9, 3, 9]]
IN> numeric.ccsDot(numeric.ccsSparse([[1,2,3],[4,5,6]]),numeric.ccsSparse([[7,8],[9,10],[11,12]]))
OUT> [[0,2,4],
      [0,1,0,1],
      [58,139,64,154]]
IN> M = [[0,1,3,6],[0,0,1,0,1,2],[3,-1,2,3,-2,4]];
    b = [9,3,2];
    x = numeric.ccsTSolve(M,b);
OUT> [3.167,2,0.5]
IN> numeric.ccsDot(M,[[0,3],[0,1,2],x])
OUT> [[0,3],[0,1,2],[9,3,2]]
We provide an LU=PA decomposition:
IN> A = [[0,5,10,15,20,25],
         [0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4],
         [3,7,6,2,1,5,10,3,6,2,8,3,5,7,9,10,5,1,1,3,8,3,8,2,9]];
    LUP = numeric.ccsLUP(A);
OUT> {L:[[0,5,9,12,14,15],
         [0,2,4,1,3,1,3,4,2,2,4,3,3,4,4],
         [1,0.1429,0.2857,0.8571,0.4286,1,-0.1282,-0.5641,-0.1026,1,0.8517,0.7965,1,-0.67,1]],
      U:[[0,1,3,6,10,15],
         [0,0,1,0,1,2,0,1,2,3,0,1,2,3,4],
         [7,10,-5.571,3,2.429,8.821,5,-3.286,1.949,5.884,3,5.429,9.128,0.1395,-3.476]],
      P:[1,2,4,0,3],
      Pinv:[3,0,1,4,2]}
IN> numeric.ccsFull(numeric.ccsDot(LUP.L,LUP.U))
OUT> [[ 7,10, 3, 5, 3],
      [ 6, 3, 5, 1, 8],
      [ 1, 2, 9, 3, 9],
      [ 3, 5, 8,10, 8],
      [ 2, 6, 7, 1, 2]]
IN> x = numeric.ccsLUPSolve(LUP,[96,63,82,51,89])
OUT> [3,1,4,1,5]
IN> X = numeric.trunc(numeric.ccsFull(numeric.ccsLUPSolve(LUP,A)),1e-15); // Solve LUX = PA
OUT> [[1,0,0,0,0],
      [0,1,0,0,0],
      [0,0,1,0,0],
      [0,0,0,1,0],
      [0,0,0,0,1]]
IN> numeric.ccsLUP(A,0.4).P;
OUT> [0,2,1,3,4]
The LUP decomposition uses partial pivoting and has an optional thresholding argument. With a threshold of 0.4, the pivots are [0,2,1,3,4] (only rows 1 and 2 have been exchanged) instead of the "full partial pivoting" order above which was [1,2,4,0,3]. Threshold=0 gives no pivoting and threshold=1 gives normal partial pivoting. Note that a small or zero threshold can result in numerical instabilities and is normally used when the matrix A is already in some order that minimizes fill-in. We also support arithmetic on CCS matrices:
IN> A = numeric.ccsSparse([[1,2,0],[0,3,0],[0,0,5]]);
    B = numeric.ccsSparse([[2,9,0],[0,4,0],[-2,0,0]]);
    numeric.ccsadd(A,B);
OUT> [[0,2,4,5],
      [0,2,0,1,2],
      [3,-2,11,7,5]]
We also have scatter/gather functions
IN> X = [[0,0,1,1,2,2],[0,1,1,2,2,3],[1,2,3,4,5,6]];
    SX = numeric.ccsScatter(X);
OUT> [[0,1,3,5,6],
      [0,0,1,1,2,2],
      [1,2,3,4,5,6]]
IN> numeric.ccsGather(SX)
OUT> [[0,0,1,1,2,2],[0,1,1,2,2,3],[1,2,3,4,5,6]]

Coordinate matrices

We also provide a banded matrix implementation using the coordinate encoding.

LU decomposition:
IN> lu = numeric.cLU([[0,0,1,1,1,2,2],[0,1,0,1,2,1,2],[2,-1,-1,2,-1,-1,2]])
OUT> {U:[[    0,    0,    1,      1,  2   ],
         [    0,    1,    1,      2,  2   ],
         [    2,   -1,  1.5,     -1, 1.333]],
      L:[[    0,    1,    1,      2,  2   ],
         [    0,    0,    1,      1,  2   ],
         [    1, -0.5,    1,-0.6667,  1   ]]}
IN> numeric.cLUsolve(lu,[5,-8,13])
OUT> [3,1,7]
Note that numeric.cLU() does not have any pivoting.

Solving PDEs

The functions numeric.cgrid() and numeric.cdelsq() can be used to obtain a numerical Laplacian for a domain.
IN> g = numeric.cgrid(5)
OUT> 
[[-1,-1,-1,-1,-1],
 [-1, 0, 1, 2,-1],
 [-1, 3, 4, 5,-1],
 [-1, 6, 7, 8,-1],
 [-1,-1,-1,-1,-1]]
IN> coordL = numeric.cdelsq(g)
OUT> 
[[ 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8],
 [ 1, 3, 0, 0, 2, 4, 1, 1, 5, 2, 0, 4, 6, 3, 1, 3, 5, 7, 4, 2, 4, 8, 5, 3, 7, 6, 4, 6, 8, 7, 5, 7, 8],
 [-1,-1, 4,-1,-1,-1, 4,-1,-1, 4,-1,-1,-1, 4,-1,-1,-1,-1, 4,-1,-1,-1, 4,-1,-1, 4,-1,-1,-1, 4,-1,-1, 4]]
IN> L = numeric.sscatter(coordL); // Just to see what it looks like
OUT> 
[[          4,         -1,           ,         -1],
 [         -1,          4,         -1,           ,         -1],
 [           ,         -1,          4,           ,           ,         -1],
 [         -1,           ,           ,          4,         -1,           ,         -1],
 [           ,         -1,           ,         -1,          4,         -1,           ,         -1],
 [           ,           ,         -1,           ,         -1,          4,           ,           ,         -1],
 [           ,           ,           ,         -1,           ,           ,          4,         -1],
 [           ,           ,           ,           ,         -1,           ,         -1,          4,         -1],
 [           ,           ,           ,           ,           ,         -1,           ,         -1,          4]]
IN> lu = numeric.cLU(coordL); x = numeric.cLUsolve(lu,[1,1,1,1,1,1,1,1,1]);
OUT> [0.6875,0.875,0.6875,0.875,1.125,0.875,0.6875,0.875,0.6875]
IN> numeric.cdotMV(coordL,x)
OUT> [1,1,1,1,1,1,1,1,1]
IN> G = numeric.rep([5,5],0); for(i=0;i<5;i++) for(j=0;j<5;j++) if(g[i][j]>=0) G[i][j] = x[g[i][j]]; G
OUT> 
[[ 0     , 0     , 0     , 0     , 0     ],
 [ 0     , 0.6875, 0.875 , 0.6875, 0     ],
 [ 0     , 0.875 , 1.125 , 0.875 , 0     ],
 [ 0     , 0.6875, 0.875 , 0.6875, 0     ],
 [ 0     , 0     , 0     , 0     , 0     ]]
IN> workshop.html('<img src="'+numeric.imageURL(numeric.mul([G,G,G],200))+'" width=100 />');
OUT> 


You can also work on an L-shaped or arbitrary-shape domain:
IN> numeric.cgrid(6,'L')
OUT> 
[[-1,-1,-1,-1,-1,-1],
 [-1, 0, 1,-1,-1,-1],
 [-1, 2, 3,-1,-1,-1],
 [-1, 4, 5, 6, 7,-1],
 [-1, 8, 9,10,11,-1],
 [-1,-1,-1,-1,-1,-1]]
IN> numeric.cgrid(5,function(i,j) { return i!==2 || j!==2; })
OUT> 
[[-1,-1,-1,-1,-1],
 [-1, 0, 1, 2,-1],
 [-1, 3,-1, 4,-1],
 [-1, 5, 6, 7,-1],
 [-1,-1,-1,-1,-1]]

Cubic splines

You can do some (natural) cubic spline interpolation:
IN> numeric.spline([1,2,3,4,5],[1,2,1,3,2]).at(numeric.linspace(1,5,10))
OUT> [ 1, 1.731, 2.039, 1.604, 1.019, 1.294, 2.364, 3.085, 2.82, 2]
For clamped splines:
IN> numeric.spline([1,2,3,4,5],[1,2,1,3,2],0,0).at(numeric.linspace(1,5,10))
OUT> [ 1, 1.435, 1.98, 1.669, 1.034, 1.316, 2.442, 3.017, 2.482, 2]
For periodic splines:
IN> numeric.spline([1,2,3,4],[0.8415,0.04718,-0.8887,0.8415],'periodic').at(numeric.linspace(1,4,10))
OUT> [ 0.8415, 0.9024, 0.5788, 0.04718, -0.5106, -0.8919, -0.8887, -0.3918, 0.3131, 0.8415]
Vector splines:
IN> numeric.spline([1,2,3],[[0,1],[1,0],[0,1]]).at(1.78)
OUT> [0.9327,0.06728]
Spline differentiation:
IN> xs = [0,1,2,3]; numeric.spline(xs,numeric.sin(xs)).diff().at(1.5)
OUT> 0.07302
Find all the roots:
IN> xs = numeric.linspace(0,30,31); ys = numeric.sin(xs); s = numeric.spline(xs,ys).roots();
OUT> [0, 3.142, 6.284, 9.425, 12.57, 15.71, 18.85, 21.99, 25.13, 28.27]

Fast Fourier Transforms

FFT and IFFT on numeric.T objects:
IN> z = (new numeric.T([1,2,3,4,5],[6,7,8,9,10])).fft()
OUT> {x:[15,-5.941,-3.312,-1.688, 0.941],
      y:[40, 0.941,-1.688,-3.312,-5.941]}
IN> z.ifft()
OUT> {x:[1,2,3,4,5],
      y:[6,7,8,9,10]}

Quadratic Programming (Alberto Santini)

The Quadratic Programming function numeric.solveQP() is based on Alberto Santini's quadprog, which is itself a port of the corresponding R routines.
IN> numeric.solveQP([[1,0,0],[0,1,0],[0,0,1]],[0,5,0],[[-4,2,0],[-3,1,-2],[0,0,1]],[-8,2,0]);
OUT> { solution:              [0.4762,1.048,2.095],
       value:                 [-2.381],
       unconstrained_solution:[     0,    5,    0],
       iterations:            [     3,    0],
       iact:                  [     3,    2,    0],
       message:               "" }

Unconstrained optimization

To minimize a function f(x) we provide the function numeric.uncmin(f,x0) where x0 is a starting point for the optimization. Here are some demos from Moré et al., 1981:
IN> sqr = function(x) { return x*x; }; 
    numeric.uncmin(function(x) { return sqr(10*(x[1]-x[0]*x[0])) + sqr(1-x[0]); },[-1.2,1]).solution
OUT> [1,1]
IN> f = function(x) { return sqr(-13+x[0]+((5-x[1])*x[1]-2)*x[1])+sqr(-29+x[0]+((x[1]+1)*x[1]-14)*x[1]); }; 
    x0 = numeric.uncmin(f,[0.5,-2]).solution
OUT> [11.41,-0.8968]
IN> f = function(x) { return sqr(1e4*x[0]*x[1]-1)+sqr(Math.exp(-x[0])+Math.exp(-x[1])-1.0001); }; 
    x0 = numeric.uncmin(f,[0,1]).solution
OUT> [1.098e-5,9.106]
IN> f = function(x) { return sqr(x[0]-1e6)+sqr(x[1]-2e-6)+sqr(x[0]*x[1]-2)}; 
    x0 = numeric.uncmin(f,[0,1]).solution
OUT> [1e6,2e-6]
IN> f = function(x) { 
       return sqr(1.5-x[0]*(1-x[1]))+sqr(2.25-x[0]*(1-x[1]*x[1]))+sqr(2.625-x[0]*(1-x[1]*x[1]*x[1]));
    }; 
    x0 = numeric.uncmin(f,[1,1]).solution
OUT> [3,0.5]
IN> f = function(x) { 
        var ret = 0,i; 
        for(i=1;i<=10;i++) ret+=sqr(2+2*i-Math.exp(i*x[0])-Math.exp(i*x[1]));
         return ret; 
    };
    x0 = numeric.uncmin(f,[0.3,0.4]).solution
OUT> [0.2578,0.2578]
IN> y = [0.14,0.18,0.22,0.25,0.29,0.32,0.35,0.39,0.37,0.58,0.73,0.96,1.34,2.10,4.39]; 
    f = function(x) { 
        var ret = 0,i; 
        for(i=1;i<=15;i++) ret+=sqr(y[i-1]-(x[0]+i/((16-i)*x[1]+Math.min(i,16-i)*x[2]))); 
        return ret; 
    }; 
    x0 = numeric.uncmin(f,[1,1,1]).solution
OUT> [0.08241,1.133,2.344]
IN> y = [0.0009,0.0044,0.0175,0.0540,0.1295,0.2420,0.3521,0.3989,0.3521,0.2420,0.1295,0.0540,0.0175,0.0044,0.0009]; 
    f = function(x) { 
        var ret = 0,i; 
        for(i=1;i<=15;i++) 
        ret+=sqr(x[0]*Math.exp(-x[1]*sqr((8-i)/2-x[2])/2)-y[i-1]); 
        return ret; 
    }; 
    x0 = numeric.div(numeric.round(numeric.mul(numeric.uncmin(f,[1,1,1]).solution,1000)),1000)
OUT> [0.399,1,0]
IN> f = function(x) { return sqr(x[0]+10*x[1])+5*sqr(x[2]-x[3])+sqr(sqr(x[1]-2*x[2]))+10*sqr(x[0]-x[3]); }; 
    x0 = numeric.div(numeric.round(numeric.mul(numeric.uncmin(f,[3,-1,0,1]).solution,1e5)),1e5)
OUT> [0,0,0,0]
IN> f = function(x) { 
        return (sqr(10*(x[1]-x[0]*x[0]))+sqr(1-x[0])+
                90*sqr(x[3]-x[2]*x[2])+sqr(1-x[2])+
                10*sqr(x[1]+x[3]-2)+0.1*sqr(x[1]-x[3])); }; 
    x0 = numeric.uncmin(f,[-3,-1,-3,-1]).solution
OUT> [1,1,1,1]
IN> y = [0.1957,0.1947,0.1735,0.1600,0.0844,0.0627,0.0456,0.0342,0.0323,0.0235,0.0246]; 
    u = [4,2,1,0.5,0.25,0.167,0.125,0.1,0.0833,0.0714,0.0625]; 
    f = function(x) { 
        var ret=0, i; 
        for(i=0;i<11;++i) ret += sqr(y[i]-x[0]*(u[i]*u[i]+u[i]*x[1])/(u[i]*u[i]+u[i]*x[2]+x[3])); 
        return ret; 
    }; 
    x0 = numeric.uncmin(f,[0.25,0.39,0.415,0.39]).solution
OUT> [     0.1928,     0.1913,     0.1231,     0.1361]
IN> y = [0.844,0.908,0.932,0.936,0.925,0.908,0.881,0.850,0.818,0.784,0.751,0.718,
         0.685,0.658,0.628,0.603,0.580,0.558,0.538,0.522,0.506,0.490,0.478,0.467,
         0.457,0.448,0.438,0.431,0.424,0.420,0.414,0.411,0.406]; 
    f = function(x) { 
        var ret=0, i; 
        for(i=0;i<33;++i) ret += sqr(y[i]-(x[0]+x[1]*Math.exp(-10*i*x[3])+x[2]*Math.exp(-10*i*x[4]))); 
        return ret; 
    }; 
    x0 = numeric.uncmin(f,[0.5,1.5,-1,0.01,0.02]).solution
OUT> [     0.3754,      1.936,     -1.465,    0.01287,    0.02212]
IN> f = function(x) { 
        var ret=0, i,ti,yi,exp=Math.exp; 
        for(i=1;i<=13;++i) { 
            ti = 0.1*i; 
            yi = exp(-ti)-5*exp(-10*ti)+3*exp(-4*ti); 
            ret += sqr(x[2]*exp(-ti*x[0])-x[3]*exp(-ti*x[1])+x[5]*exp(-ti*x[4])-yi); 
        } 
        return ret; 
    }; 
    x0 = numeric.uncmin(f,[1,2,1,1,1,1],1e-14).solution;
    f(x0)<1e-20;
OUT> true
There are optional parameters to numeric.uncmin(f,x0,tol,gradient,maxit,callback). The iteration stops when the gradient or step size is less than the optional parameter tol. The gradient() parameter is a function that computes the gradient of f(). If it is not provided, a numerical gradient is used. The iteration stops when maxit iterations have been performed. The optional callback() parameter, if provided, is called at each step:
IN> z = []; 
    cb = function(i,x,f,g,H) { z.push({i:i, x:x, f:f, g:g, H:H }); }; 
    x0 = numeric.uncmin(function(x) { return Math.cos(2*x[0]); },
                        [1],1e-10,
                        function(x) { return [-2*Math.sin(2*x[0])]; },
                        100,cb);
OUT> {solution:    [1.571],
      f:           -1,
      gradient:    [2.242e-11],
      invHessian:  [[0.25]],
      iterations:  6,
      message:     "Newton step smaller than tol"}
IN> z
OUT> [{i:0, x:[1    ], f:-0.4161, g: [-1.819   ]  , H:[[1     ]] },
      {i:2, x:[1.909], f:-0.7795, g: [ 1.253   ]  , H:[[0.296 ]] },
      {i:3, x:[1.538], f:-0.9979, g: [-0.1296  ]  , H:[[0.2683]] },
      {i:4, x:[1.573], f:-1     , g: [ 9.392e-3]  , H:[[0.2502]] },
      {i:5, x:[1.571], f:-1     , g: [-6.105e-6]  , H:[[0.25  ]] },
      {i:6, x:[1.571], f:-1     , g: [ 2.242e-11] , H:[[0.25  ]] }]

Linear programming

Linear programming is available:
IN> x = numeric.solveLP([1,1],                   /* minimize [1,1]*x                */
                        [[-1,0],[0,-1],[-1,-2]], /* matrix of inequalities          */
                        [0,0,-3]                 /* right-hand-side of inequalities */
                        );       
    numeric.trunc(x.solution,1e-12);
OUT> [0,1.5]
The function numeric.solveLP(c,A,b) minimizes dot(c,x) subject to dot(A,x) <= b. The algorithm used is very basic. For alpha>0, define the ``barrier function'' f0 = dot(c,x) - alpha*sum(log(b-A*x)). The function numeric.solveLP calls numeric.uncmin on f0 for smaller and smaller values of alpha. This is a basic ``interior point method''. We also handle infeasible problems:
IN> numeric.solveLP([1,1],[[1,0],[0,1],[-1,-1]],[-1,-1,-1])
OUT> { solution: NaN, message: "Infeasible", iterations: 5 }
Unbounded problems:
IN> numeric.solveLP([1,1],[[1,0],[0,1]],[0,0]).message;
OUT> "Unbounded"
With an equality constraint:
IN> numeric.solveLP([1,2,3],                      /* minimize [1,2,3]*x                */
                    [[-1,0,0],[0,-1,0],[0,0,-1]], /* matrix A of inequality constraint */
                    [0,0,0],                      /* RHS b of inequality constraint    */
                    [[1,1,1]],                    /* matrix Aeq of equality constraint */
                    [3]                           /* vector beq of equality constraint */
                    );
OUT> { solution:[3,1.685e-16,4.559e-19], message:"", iterations:12 }

Solving ODEs

The function numeric.dopri() is an implementation of the Dormand-Prince-Runge-Kutta integrator with adaptive time-stepping:
IN> sol = numeric.dopri(0,1,1,function(t,y) { return y; })
OUT> { x:    [     0,   0.1, 0.1522, 0.361, 0.5792, 0.7843, 0.9813,     1],
       y:    [     1, 1.105,  1.164, 1.435,  1.785,  2.191,  2.668, 2.718],
       f:    [     1, 1.105,  1.164, 1.435,  1.785,  2.191,  2.668, 2.718],
       ymid: [ 1.051, 1.134,  1.293,   1.6,  1.977,  2.418,  2.693],
       iterations:8,
       events:,
       message:""}
IN> sol.at([0.3,0.7])
OUT> [1.35,2.014]
The return value sol contains the x and y values of the solution. If you need to know the value of the solution between the given x values, use the function sol.at(), which uses the extra information contained in sol.ymid and sol.f to produce approximations between these points. The integrator is also able to handle vector equations. This is a harmonic oscillator:
IN> sol = numeric.dopri(0,10,[3,0],function (x,y) { return [y[1],-y[0]]; });
    sol.at([0,0.5*Math.PI,Math.PI,1.5*Math.PI,2*Math.PI])
OUT> [[         3,         0],
      [ -9.534e-8,        -3],
      [        -3,  2.768e-7],
      [   3.63e-7,         3],
      [         3, -3.065e-7]]
Van der Pol:
IN> numeric.dopri(0,20,[2,0],function(t,y) { return [y[1], (1-y[0]*y[0])*y[1]-y[0]]; }).at([18,19,20])
OUT> [[ -1.208,   0.9916],
      [ 0.4258,    2.535],
      [  2.008, -0.04251]]
You can also specify a tolerance, a maximum number of iterations and an event function. The integration stops if the event function goes from negative to positive.
IN> sol = numeric.dopri(0,2,1,function (x,y) { return y; },1e-8,100,function (x,y) { return y-1.3; });
OUT> { x:          [     0, 0.0181, 0.09051, 0.1822,  0.2624],
       y:          [     1,  1.018,   1.095,    1.2,     1.3],
       f:          [     1,  1.018,   1.095,    1.2,     1.3],
       ymid:       [ 1.009,  1.056,   1.146,  1.249],
       iterations: 5,
       events:     true,
       message:    "" }
Note that at x=0.1822, the event function value was -0.1001 while at x=0.2673, the event value was 6.433e-3. The integrator thus terminated at x=0.2673 instead of continuing until the end of the integration interval.

Events can also be vector-valued:
IN> sol = numeric.dopri(0,2,1,
                        function(x,y) { return y; },
                        undefined,50,
                        function(x,y) { return [y-1.5,Math.sin(y-1.5)]; });
OUT> { x:          [    0,   0.2, 0.4055],
       y:          [    1, 1.221,    1.5],
       f:          [    1, 1.221,    1.5],
       ymid:       [1.105, 1.354],
       iterations: 2,
       events:     [true,true],
       message:    ""}

Seedrandom (David Bau)

The object numeric.seedrandom is based on David Bau's seedrandom.js. This small library can be used to create better pseudorandom numbers than Math.random() which can furthermore be "seeded".
IN> numeric.seedrandom.seedrandom(3); numeric.seedrandom.random()
OUT> 0.7569
IN> numeric.seedrandom.random()
OUT> 0.6139
IN> numeric.seedrandom.seedrandom(3); numeric.seedrandom.random()
OUT> 0.7569
For performance reasons, numeric.random() uses the default Math.random(). If you want to use the seedrandom version, just do Math.random = numeric.seedrandom.random. Note that this may slightly decrease the performance of all Math operations.
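A short sketch of that swap (using the seed 3 from the example above):
[[
numeric.seedrandom.seedrandom(3);           // seed the generator
Math.random = numeric.seedrandom.random;    // numeric.random() now uses the seeded generator
console.log(numeric.random([2,2]));         // reproducible for a given seed
]]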

<a name=docA001>

2017-07-17-12-35
http://www.numericjs.com/lib/numeric-1.2.6.js
E:\wk\eigenvec_www.numericjs.com_numeric-1.2.6.js

2017-07-17-12-39
http://www.numericjs.com/lib/numeric-1.2.6.min.js
E:\wk\eigenvec_www.numericjs.com_numeric-1.2.6.min.js

2017-07-17-12-43 save
http://www.numericjs.com/documentation.html
as
E:\wk\eigenvec_www.numericjs.com.htm

2017-07-17-12-46
http://www.cs.bham.ac.uk/~pxc/web/jslib/

<a name=docA002>
2017-07-17-13-10
C:\$fm\js\other\eigenvec_www.numericjs.com_numeric-1.2.6.js

2017-07-17-13-22
C:\$fm\js\other\numericjs.com_numeric-1.2.6.txt

2017-07-17-16-18 save
C:\$fm\js\other\numericjs.com_numeric-1.2.6.txt
as
C:\$fm\upload\fm\learnj02.htm

2017-07-17-16-23 save
C:\$fm\js\other\numericjs.com_numeric-1.2.6.txt
as
C:\$fm\upload\fm\learnj03.htm

2017-07-17-17-21 wonderful work, but I do not know how to use it.

<a name=docA003>
2017-07-17-17-49
http://www.ma.hw.ac.uk/~loisel/
E:\wk\eigenvec_www.numericjs.com_www.ma.hw.ac.uk_~loisel.htm
[[
Sébastien Loisel
Address: Department of Mathematics, room
CMT18
Heriot-Watt University
Edinburgh EH14 4AS
United Kingdom
]]

<a name=docA004> Update 2017-07-23
2017-07-22-20-59
On 2017-07-21-10-44 uploaded the
Numeric Javascript numeric-1.2.6.js document
to http://freeman2.com/numjsdoc.htm
On 2017-07-21-15-53 did a test run of
[[
A = [[1,2,5],[3,5,-1],[7,-3,5]];
B = numeric.eig(A);
C = B.E.dot(numeric.T.diag(B.lambda)).dot(B.E.inv());
mat2str(C.x)
]]
<a name=docA005>
The numeric-1.2.6.js documentation says the output is
[[1,2,5],
[3,5,-1],
[7,-3,5]]
But numjsdoc.htm got
[[1,2,5],
[3,5,+1],
[7,-3,5]]
'-1' became '+1'; that is a big error.
<a name=docA006>

After debugging, bye09() was found to have an error.
The wrong code was if(nine0<0)nine0--;
The right code is if(in09<0)nine0--; //a607211645

Liu,Hsinhan built a utility to test javascript
functions, now uploaded for everyone to use:
http://freeman2.com/utility4.htm#testFuncDoc
2017-07-22-21-13

<a name=docA007>
2017-07-27-12-08
The 2017-07-27 update's main change is adding the examples in
http://www.numericjs.com/documentation.html to
http://freeman2.com/numjsdoc.htm as click buttons.
This allows the reader to get a documentation.html example with one
click and test run it to confirm the output is the same
as the documentation states. There are 69 click buttons
between Box81 and Box82. The page is designed to bring input
Box81 and output Box82 as close as possible. Ten click buttons
are displayed at a time; other buttons change which ten are
shown, or show all 69.
Clicking [sin] brings you to the Real number calculator.
Clicking an [example] button makes sample code show up in Box81;
clicking [answer] makes the result show up in Box82.
There are also buttons that supply the help document.
'▲' or '▼' indicates which box to display.
2017-07-27-12-32

<a name=docA008>
2017-07-29-14-09
The 2017-07-29 update modified the Examples code
to let the up and down buttons work better.
The 2017-07-29 update also added the base64up.png base64 image.
LiuHH hoped to use thick up and down arrows, but Unicode
has only thin up/down arrows ↑,↓ and a thick horizontal arrow
➜. On 2017-07-27-21-30 Liu,Hsinhan visited
https://stackoverflow.com/questions/8499633/how-to-display-base64-images-in-html
and learned that <img src="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAAUA
AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO
9TXL0Y4OHwAAAABJRU5ErkJggg==" alt="Red dot" />
generates a red dot on the page. LiuHH drew base64up.png and
base64dn.png and saved them to base64aa.htm. On 2017-07-28-12-22
base64aa.htm was opened and saved as base64aa.mht; the mht format
automatically changes a picture to a base64 text string. Attach the
picture string right after <img src="data:image/png;base64, and close
with " /> and it is done. Finally, Liu,Hsinhan did not change ▲,
▼ to images, because inside a <textarea> the page presents the code
string <img src="data:image/png;base64, ..... , not the picture, and
LiuHH cannot keep the <textarea> document consistent.
2017-07-29-15-03

<a name=docA009>
2017-08-21-21-30
The 2017-08-22 update added "What is eigenvector" and
added Box22 to Box26, which parallel Box11 to Box15. Both
examine Matrix*eigenvector ▬ eigenvector*eigenvalue = zero.
Box11 insists on eigenvectors, so Box15 must output 0 0.
Box22 allows the user to change an eigenvector to a non-eigenvector,
    so the Box26 output may or may not be 0 0.
For the Box22 input format please follow the examples
[t1], [t2], [t3], [o1] ; [o1] has one non-eigenvector.
Box22 can take all eigen/non-eigen vectors as input,
or just one eigen/non-eigen vector.
The comparison is output to Box26.
2017-08-21-21-45

<a name=docA010>
2017-08-28-17-24
The 2017-08-29 update changed the title. Let M be a square matrix,
λ an eigenvalue, V an eigenvector and U a non-eigenvector;
then λ × V = M × V , but λ × U ≠ M × U .
See the non-eigenvector example equation eqn.matRot001:
input [x,y,z] = [7.2, 8.5, 12.3],
output [a,b,c] = [509/60, 121/15, 229/20],
input/output component ratios 1.178 ≠ 0.9490 ≠ 0.9308;
in this case the eigenvalue λ is undefined.
Saying "use a non-eigenvector U to test λ × V = M × V" is a
nonsense statement. The statement that makes sense is: let V be an
eigenvector in λ × V = M × V; if one component has a small
change, the new vector v deviates from V just a little bit, and in this
case λ × v ≠ M × v makes sense.
The 2017-08-29 update has no code change; it just adds the term perturbation.
2017-08-28-17-50



<a name=JavascriptIndex>
The following is Liu,Hsinhan's work.
Liu,Hsinhan is not a programmer and not a mathematics major.
Liu,Hsinhan's work may contain errors; please pay attention.
If you suspect anything is wrong, please ask a math/programming
expert nearby. Liu,Hsinhan 2017-07-22-22-08

Javascript index
http://freeman2.com/jsindex2.htm   local
Save graph code to same folder as htm files.
http://freeman2.com/jsgraph2.js   local




doubly stochastic matrix calculator
http://freeman2.com/dbstochm.htm
dbstochm.htm first Upload 2017-06-13
(related material see tute0047.htm)
[5]   [3087/4028  1029/20140  294/5035  49/1007  0/2  4/53  ]   [6  ]
[3]   [1/16       15/16       0/2       0/2      0/2  0/2   ]   [2.8]
[3] = [1/16       1/240       14/15     0/2      0/2  0/2   ] × [2.8]
[2]   [7/152      7/2280      1/285     18/19    0/2  0/2   ]   [1.8]
[1]   [63/2014    21/10070    12/5035   2/1007   1/2  49/106]   [1.5]
[1]   [63/2014    21/10070    12/5035   2/1007   1/2  49/106]   [0.1]
Doubly stochastic matrix table is created by http://freeman2.com/dbstochm.htm
doublStochMat() Date/Time: 2017-07-20-19-48-02.152, elapse=0.111 sec.


2017-07-17-12-35 Liu,Hsinhan received numeric-1.2.6.js
http://www.numericjs.com/lib/numeric-1.2.6.js

2017-07-17-12-43 received this document page from
http://www.numericjs.com/documentation.html

numeric-1.2.6.js document
http://freeman2.com/numjsdoc.htm
First Upload 2017-07-21
Thank you for visiting Freeman's page. 
Freeman Liu,Hsinhan 劉鑫漢 2017-07-20-17-51


<a name=base64learn>
2017-07-27-21-30
https://stackoverflow.com/questions/8499633/how-to-display-base64-images-in-html
Red dot example image, alt="Red dot"
2017-07-28-06-08 start
base64up.png base64 image:
base64dn.png base64 image:
base64mt.png (black) base64 image:
base64mt.png (green) base64 image:
numeric-1.2.6.js example [r5] (deleted) base64 image: []
256 base64up.png
264 base64dn.png
281 base64mt.png
2017-07-28-07-20 stop