EULER supports statistical functions, such as distribution integrals and random variables.
First of all,
>random(N,M)
(or, as usual, random([N,M])) generates an NxM random matrix with uniformly distributed values in [0,1]. The function
>normal(N,M)
returns normally distributed random variables with mean value 0 and standard deviation 1. Scale and shift the result for other mean values or deviations.
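For example, a quick sketch (the measured values will vary, since the sample is random):
>x=10+2*normal(1,100);
generates 100 samples of a normal distribution with mean value 10 and standard deviation 2.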
You can set a seed for the random number generators with seed(x), where x is in (0,1). There are also fastnormal and fastrandom, which use the doubtful built-in generator of C and are about two times faster.
>shuffle(v)
shuffles the vector v (1xN vector) randomly.
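For instance, assuming the usual range notation 1:10 for the vector [1,...,10],
>shuffle(1:10)
returns the numbers 1 to 10 in a random order.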
Functions for the mean value and the standard deviation are implemented, but they can also easily be defined in EULER. E.g.,
>m=sum(x)/cols(x)
is the mean value of the vector x. However, the functions mean, dev, and meandev are provided. The latter returns the mean value and the deviation simultaneously.
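For example, a minimal sketch, assuming meandev returns its two values via the usual multiple assignment:
>x=normal(1,1000);
>{m,d}=meandev(x)
should give a mean value m close to 0 and a deviation d close to 1.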
Some distributions are implemented.
>normaldis(x)
returns the probability of a normally distributed random variable being less than x.
>invnormaldis(p)
is the inverse of the above function. These functions are not fully accurate. However, the accuracy is sufficient for practical purposes, and improved versions are contained in the file "xdis.e".
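A quick hedged check (the last digits may differ because of the limited accuracy mentioned above):
>invnormaldis(0.975)
should be close to 1.96, the two-sided 95% quantile of the standard normal distribution.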
Another distribution is
>tdis(x,n)
This is the T-distribution of x with n degrees of freedom; i.e., the probability that the mean of n normally distributed random variables, scaled with the measured standard deviation, is less than x.
>invtdis(p,n)
returns the inverse of this function.
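For example, a hedged check:
>invtdis(0.975,10)
should be close to 2.23, the two-sided 95% quantile of the T-distribution with 10 degrees of freedom.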
>chidis(x,n)
returns the chi^2 distribution; i.e., the distribution of the sum of the squares of n normally distributed random variables.
>fdis(x,n,m)
returns the f-distribution with n and m degrees of freedom.
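As a hedged illustration,
>1-chidis(11.07,5)
should be close to 0.05, since 11.07 is approximately the 95% quantile of the chi^2 distribution with 5 degrees of freedom.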
Other functions have been mentioned above, like bin, fak, count, etc., which may be useful for statistical purposes.
There is also the gamma function and the incomplete gamma function
>gamma(x) >gamma(x,a)
There is also the Beta function and its incomplete counterpart
>beta(a,b) >beta(x,a,b)
as well as the Bessel functions of the first and second kind, besselj and bessely, and the modified Bessel functions of the first and second kind, besseli and besselk, e.g.,
>besselj(x,a)
where a is the order. The parameter x must be positive, and the order must be non-negative.
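As a quick hedged check, assuming gamma and beta are the usual complete Gamma and Beta functions described above,
>gamma(5)
should return 24 (i.e., 4!), and
>beta(2,3)
should return about 0.0833 (i.e., 1/12).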
A discrete distribution is the binomial distribution
>binsum(i,n,p)
which returns the probability of i or fewer hits in n trials, if the chance for each hit is p. And
>hypergeomsum(i1,n1,i,n)
returns the probability of i1 or fewer hits, if you choose n1 balls from a bowl of n balls containing i good choices.
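For instance, a hedged sketch:
>binsum(3,10,0.5)
returns the probability of at most 3 heads in 10 fair coin tosses, which should be about 0.172.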
>normalsum(i,n,p)
is a fast approximation of binsum for large n and medium p, and
>invbinsum(x,n,p)
is the inverse of binsum. There is also a special function to plot ranges of data in a histogram style. Assume you have bounds of ranges r(1),...,r(n+1) and frequencies f(1),...,f(n). You may use
>xplotrange(r,v)
to plot these data.
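For example, a hedged sketch with four ranges and their frequencies:
>r=[0,1,2,3,4]; f=[5,8,12,3];
>xplotrange(r,f)
plots the frequencies 5, 8, 12, and 3 over the ranges [0,1], [1,2], [2,3], and [3,4] in histogram style.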
First of all, we show how to read data from a file. Suppose the file test.dat contains an unknown number (less than 1000) of data, separated by any non-digit characters. Then you can read the data with
>open("test.dat","r"); {a,n}=getvector(1000); close(); >a=a[1:n];
The utility function
>A=getmatrix(n,m,"filename");
reads a complete nxm matrix from the file, opening and closing the file properly. If the filename is empty, it works like getvector, and the user has to open and close the file himself.
To write a vector a to the file, you can use
>open("test.dat","w"); printformat(a'); close();
This will print the data formatted with the %g format of C. To get a longer output, use printformat with an additional parameter "%30.15f".
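For instance, a hedged sketch that writes the data with 15 digits after the decimal point, assuming the format string is passed as the second parameter of printformat:
>open("test.dat","w"); printformat(a',"%30.15f"); close();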
You will have to load the statist.e file for most of the functions described here. This is done with the command
>load statist
The first function computes the mean value
>mean(x) >mean(x,f)
where x is a row vector of data and f are optional frequencies for the data (multiplicities). Correspondingly,
>dev(x) >dev(x,f)
computes the standard deviation of a measured sample x. Having computed these values, you may test for a specific mean value, using the Student T-test
>ttest(m,d,n,mu)
where the mean m and the deviation d are measured and tested against having the mean mu, and n is the number of data. This function returns the probability that the true mean value is mu or more (assuming mu>m); i.e., the error you make if you reject the hypothesis that the measurement has mean mu or more. Note that the data must be normally distributed for this test to make sense. To make a two-sided test, you have to check with m=0 and use the doubled error probability. You may also test several samples of normally distributed data for equal mean with
>varanalysis(x,y,z,...)
where all parameters are row vectors. A small answer means that you make only a small error if you reject the hypothesis of equal means. Assume you have measurements which you expect to obey a discrete probability distribution p. Then
>chitest(x,p)
returns the error that you make if you reject the hypothesis that x obeys the distribution p. Note that you have to normalize both vectors to the same total before you use this test. E.g., assume you have 600 dice throws with certain results. Test for a false die with
>chitest([90,103,114,101,103,89],dup(100,6)')
This will again return the error probability of rejecting the hypothesis of a fair die. Small results indicate a false die. Another chi^2 test is the table test, which tests whether the entries of a matrix depend on its rows.
>tabletest(A)
will return the error that you make if you assume that the entries of A depend on the rows. The first non-parametric test in this file is the median test, which tests whether two samples have the same distribution.
>mediantest(a,b)
A low answer means that you may reject the hypothesis of equal distributions. This test uses only the order of the data. A sharper test is the rank test
>ranktest(a,b)
This test uses the sizes of the data to obtain sharper results. To test whether a comes from a distribution with a larger expected value than the distribution of b, use
>signtest(a,b)
or, if you want to include the sizes of the differences for a sharper test
>wilcoxon(a,b)
A special example is the comparison of two medical treatments, done on the same subject.
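For instance, a hedged sketch with hypothetical measurements a and b taken on the same six subjects under the two treatments:
>a=[1.2,2.1,1.8,2.5,1.9,2.2]; b=[1.0,1.9,1.9,2.1,1.7,2.0];
>signtest(a,b)
>wilcoxon(a,b)
A small result means that you make only a small error if you conclude that the first treatment yields larger values.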