mmcdara


MaplePrimes Activity


These are questions asked by mmcdara

Hi, 

When creating a user random variable, I would like to instantiate some of its attributes, for instance ParentName.
But it seems this is not always possible.

Is it a Maple limitation, or am I not doing things correctly?
Example:
 

restart:

with(Statistics):

U := RandomVariable(Uniform(0, 1)):

interface(warnlevel=0):

A := attributes(U)[3]

_ProbabilityDistribution

(1)

AllAttributes := with(A);

[CDF, Conditions, HodgesLehmann, InverseSurvivalFunction, MGF, MaximumLikelihoodEstimate, Mean, Median, Mode, PDF, Parameters, ParentName, Quantile, RandomSample, RandomSampleSetup, RandomVariate, RousseeuwCrouxSn, Support, Variance]

(2)

A:-ParentName

UniformDistribution

(3)

# Define a user random variable

v := Distribution(PDF = (t -> piecewise(0 <= t and t < 1, 1, 0))):
V := RandomVariable(v):
A := attributes(V)[3];
AllAttributes := with(A);
A:-Conditions;

_ProbabilityDistribution0

 

[Conditions, PDF]

 

[]

(4)

# its definition can be augmented by adding some recognized attributes...
# even if the result returned by Mean is strange

v := Distribution(PDF = (t -> piecewise(0 <= t and t < 1, 1, 0)), 'Mean'=1/Pi, 'Median'=exp(-1)):
V := RandomVariable(v):
A := attributes(V)[3];
AllAttributes := with(A);
[Median, Mean](V)

_ProbabilityDistribution1

 

[Conditions, Mean, Median, PDF]

 

[exp(-1), 1/Pi(_R1)]

(5)
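# The "strange" value returned by Mean above, 1/Pi(_R1), looks as if the supplied
# attribute had been applied as an operator to the random variable. A sketch under
# that assumption (only a guess, the expected form of these options is not
# documented here): pass Mean and Median as operators that ignore their argument.

v := Distribution(PDF = (t -> piecewise(0 <= t and t < 1, 1, 0)),
                  'Mean' = (x -> 1/Pi), 'Median' = (x -> exp(-1))):
V := RandomVariable(v):
[Median, Mean](V);   # expected: [exp(-1), 1/Pi]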

# but it seems that not all the recognized attributes can be instantiated:

v := Distribution(PDF = (t -> piecewise(a <= t and t < b, 1/(b-a), 0)), 'Parameters'=[a, b]);
v := Distribution(PDF = (t -> piecewise(a <= t and t < b, 1/(b-a), 0)), 'ParentNames'=MyDistribution);

Error, (in Statistics:-Distribution) invalid input: too many and/or wrong type of arguments passed to NewDistribution; first unused argument is Parameters = [a, b]

 

Error, (in Statistics:-Distribution) invalid input: too many and/or wrong type of arguments passed to NewDistribution; first unused argument is ParentNames = MyDistribution
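# A possible reading (only a sketch/guess, this behaviour is not documented): the
# distribution returned by Distribution is a module, so its list of exports is
# fixed when it is built; only the options Distribution itself recognizes become
# attributes, and nothing can be added to the module afterwards.

v := Distribution(PDF = (t -> piecewise(0 <= t and t < 1, 1, 0))):
V := RandomVariable(v):
A := attributes(V)[3]:
exports(A);   # only the exports created by Distribution, e.g. Conditions, PDF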

 

 

 


 

Download Attributes.mw

Hi, 

How do I change the definition of g to get the result g(2, 3)(x) = 2*x + 3?

f := a*x+b;
g := (a,b) -> x -> f;
g(a, b)(x);   # answer a*x+b
g(2, 3)(x);   # answer a*x+b

Thanks in advance
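
A possible direction (only a sketch, assuming the goal is g(2, 3)(x) = 2*x + 3): substitute the given values of a and b into f first, then rebuild the inner function with unapply; the parameter names A and B are chosen to avoid clashing with the global a and b.

f := a*x + b:
g := (A, B) -> unapply(subs(a = A, b = B, f), x):
g(a, b)(x);   # a*x + b
g(2, 3)(x);   # 2*x + 3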

 

No doubt that someone will provide a smarter solution: 

 

# the main idea

f := exp(a-b);
subs(B=-b, expand(subs(b=-B, f)));

exp(a-b)

 

exp(a)*exp(-b)

(1)

# ad hoc application
restart:

f := 2*gamma(t, r)-2*alpha(t, r)-2*beta(t, r);
c := op~(1, [op(f)]);
C := [seq(u[i], i=1..numelems(c))];

subs(C=~c, expand(subs(c=~C, exp(f))))
 

2*gamma(t, r)-2*alpha(t, r)-2*beta(t, r)

 

[2, -2, -2]

 

[u[1], u[2], u[3]]

 

exp(2*gamma(t, r))*exp(-2*alpha(t, r))*exp(-2*beta(t, r))

(2)
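
# The same idea wrapped in a small reusable procedure (only a sketch; SplitExp is a
# hypothetical name): hide the numeric coefficients behind fresh names, expand the
# exponential, then restore the coefficients.

SplitExp := proc(f)
  local c, C, u;
  c := op~(1, [op(f)]);                    # coefficient of each term
  C := [seq(u[i], i = 1 .. numelems(c))];  # placeholder names
  subs(C =~ c, expand(subs(c =~ C, exp(f))));
end proc:

SplitExp(2*gamma(t, r) - 2*alpha(t, r) - 2*beta(t, r));
         # exp(2*gamma(t, r))*exp(-2*alpha(t, r))*exp(-2*beta(t, r))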

 


 

Download exp.mw

Hi, 

The procedure Statistics:-ChiSquareSuitableModelTest returns wrong or stupid results in some situations.
The stupid answer can easily be avoided if the user is careful enough.
The wrong answer is more serious: the standard deviation (in the second case below) is not correctly estimated.

PS: the expression "CORRECT ANSWER" is short for "POTENTIALLY CORRECT ANSWER", given that what ChiSquareSuitableModelTest really does is not documented.
 

restart:

with(Statistics):

randomize():

N := 100:
S := Sample(Normal(0, 1), N):

infolevel[Statistics] := 1:

# 0 parameters to fit from the sample S  CORRECT ANSWER

ChiSquareSuitableModelTest(S, Normal(0, 1), level = 0.5e-1):
print():

Chi-Square Test for Suitable Probability Model
----------------------------------------------
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Bins:                    10
Degrees of freedom:      9
Distribution:            ChiSquare(9)
Computed statistic:      15.8
Computed pvalue:         0.0711774
Critical value:          16.9189774487099
Result: [Accepted]
This statistical test does not provide enough evidence to conclude that the null hypothesis is false

 

(1)

# 2 parameters (mean and standard deviation) to fit from the sample S  INCORRECT ANSWER

ChiSquareSuitableModelTest(S, Normal(a, b), level = 0.5e-1, fittedparameters = 2):


print():
# verification
m := Mean(S);
s := StandardDeviation(S);
t := sqrt(add((S-~m)^~2) / (N-1));

print():
error "the estimation of the StandardDeviation ChiSquareSuitableModelTest is not correct";
print():

Chi-Square Test for Suitable Probability Model

----------------------------------------------
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Model specialization:    [a = -.2143e-1, b = .8489]
Bins:                    10
Degrees of freedom:      7
Distribution:            ChiSquare(7)
Computed statistic:      3.8
Computed pvalue:         0.802504
Critical value:          14.0671405764057
Result: [Accepted]
This statistical test does not provide enough evidence to conclude that the null hypothesis is false

 

 

HFloat(-0.021425681632689854)

 

HFloat(0.8531979363682092)

 

HFloat(0.8531979363682094)

 

 

Error, the estimation of the StandardDeviation by ChiSquareSuitableModelTest is not correct

 

(2)
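
# A possible explanation of the value b = .8489 reported above (only a guess, since
# what ChiSquareSuitableModelTest really does is not documented): it looks like a
# 1/N (maximum-likelihood) estimate rather than the 1/(N-1) estimate returned by
# StandardDeviation.

sqrt(add((S -~ m)^~2) / N);   # ~ 0.8489 for this sample, i.e. s*sqrt((N-1)/N)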

# ONLY 1 parameter (mean OR standard deviation?) to fit from the sample S  STUPID ANSWER
#
# A stupid answer: since the parameter to fit is not declared, the procedure should
# return an error of the type "don't know which parameter to fit"
ChiSquareSuitableModelTest(S, Normal(a, b), level = 0.5e-1, fittedparameters = 1):


print():
WARNING("ChiSquareSuitableModelTest should return it can't fit a single parameter");
print():

Chi-Square Test for Suitable Probability Model

----------------------------------------------
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Model specialization:    [a = -.2143e-1, b = .8489]
Bins:                    10
Degrees of freedom:      8
Distribution:            ChiSquare(8)
Computed statistic:      3.8
Computed pvalue:         0.874702
Critical value:          15.5073130558655
Result: [Accepted]
This statistical test does not provide enough evidence to conclude that the null hypothesis is false

 

 

Warning, ChiSquareSuitableModelTest should report that it cannot tell which single parameter to fit

 

(3)

ChiSquareSuitableModelTest(S, Normal(a, 1), level = 0.5e-1, fittedparameters = 1):  #CORRECT ANSWER
print():

# verification
m := Mean(S);
print():

Chi-Square Test for Suitable Probability Model

----------------------------------------------
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Model specialization:    [a = -.2143e-1]
Bins:                    10
Degrees of freedom:      8
Distribution:            ChiSquare(8)
Computed statistic:      16.4
Computed pvalue:         0.0369999
Critical value:          15.5073130558655
Result: [Rejected]
This statistical test provides evidence that the null hypothesis is false

 

 

HFloat(-0.021425681632689854)

 

(4)

ChiSquareSuitableModelTest(S, Normal(0, b), level = 0.5e-1, fittedparameters = 1):  #CORRECT ANSWER

print():
# verification
s := sqrt(add((S -~ 0)^~2) / N);   # ML estimate of the standard deviation when the mean is known to be 0
print():

Chi-Square Test for Suitable Probability Model

----------------------------------------------
Null Hypothesis:
Sample was drawn from specified probability distribution
Alt. Hypothesis:
Sample was not drawn from specified probability distribution
Model specialization:    [b = .8492]
Bins:                    10
Degrees of freedom:      8
Distribution:            ChiSquare(8)
Computed statistic:      6.4
Computed pvalue:         0.60252
Critical value:          15.5073130558655
Result: [Accepted]
This statistical test does not provide enough evidence to conclude that the null hypothesis is false

 

 

HFloat(0.8491915633531496)

 

(5)

 


 

Download ChiSquareSuitableModelTest.mw

I found this strange result:
When the central moment of order 2 and the variance are computed on a sample of a random variable, they do not return the same value.

Let S be the sample, N its size and M its empirical mean. The difference comes from the fact that:

  • Variance(S)          = add( (S[n]-M)^2, n=1..N ) / (N-1)
  • CentralMoment(S, 2)  = add( (S[n]-M)^2, n=1..N ) / N

By definition the variance of a random variable X is its 2nd order central moment.
This definition should also apply when these statistics are calculated on a sample of X.

It seems to me this is a mistake.
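
A quick check of the two formulas above (only a sketch on a small sample; both differences should be at rounding level):

with(Statistics):
N := 10:
S := Sample(Normal(0, 1), N):
M := Mean(S):
add((S[n] - M)^2, n = 1 .. N)/(N - 1) - Variance(S);       # ~ 0
add((S[n] - M)^2, n = 1 .. N)/N - CentralMoment(S, 2);     # ~ 0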
 

restart:

with(Statistics):

N := 100:

# Example 1

U := RandomVariable(BetaDistribution(3, 2)):
CentralMoment(U, 2, numeric);
Variance(U, numeric);

print():
S := Sample(U, N):
CentralMoment(S, 2);
Variance(S);

0.4000000000e-1

 

0.4000000000e-1

 

 

HFloat(0.05053851854005207)

 

HFloat(0.05104900862631522)

(1)

# Example 2

V := RandomVariable(Normal(3, 2)):
CentralMoment(V, 2, numeric);
Variance(V, numeric);

print():

S := Sample(V, N):
CentralMoment(S, 2);
Variance(S)

4.

 

4.

 

 

HFloat(3.7514118336684517)

 

HFloat(3.7893048824933824)

(2)

# Examples revisited

S := Sample(U, N):
CentralMoment(S, 2);
Variance(S)*(N-1)/N;
print():
S := Sample(V, N):
CentralMoment(S, 2);
Variance(S)*(N-1)/N
 

HFloat(0.03346262243902275)

 

HFloat(0.03346262243902278)

 

 

HFloat(4.159851396769612)

 

HFloat(4.159851396769612)

(3)

 


 

Download StrangerThings.mw
