mmcdara

MaplePrimes Activity


These are replies submitted by mmcdara

@Otavio 

Sorry, I'd forgotten about you (mmcdara is my login when I'm at home).
I'm going to look at your new problem.

Do you have a piece of code I could use to get a better understanding of your problem?
If so, upload it by using the big green arrow at the top right.
 

@DustVoice 

I did not claim my file works with YOUR Maple version; but it does work with the version which appears in the first output (Maple 2015.2, APPLE UNIVERSAL OSX, Dec 20 2015, Build ID 1097895).
So I guess it could be a problem related to the version; hence my advice to write the command kernelopts(version); in your file and to upload it afterwards.

It works for me (see the attached file).

Two remarks:

  1. Upload your file (green up arrow in the menu bar above) so we can see exactly what you wrote.
    PS: Below the initial restart command, insert the command kernelopts(version) so we can see which version you are using.
  2. It's better to specify the plotting range, for instance plot(f(x), x=0..2*Pi), rather than plot(f(x)).
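
To make remark 2 concrete, here is a minimal sketch (the function f is just a placeholder of my own):

```maple
f := x -> sin(x):
plot(f(x), x = 0..2*Pi);   # explicit range: you control exactly what is shown
plot(f(x));                # default range -10..10: may not be what you expect
```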


plot2015.mw




 

@vv 

Why exclude it?
Just because "In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary." (from https://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics)).

You will not find any sane statistician to justify a non-integer value for the number of degrees of freedom.
And this has nothing to do with the concept of non-integer dimension, period.

 

I am probably arriving after the battle is over, but you may find here some elements in addition to those you have already received.
 

restart

# initial  question
# (an alternative to acer's reply)
 

k[1] := 16;
solve( { k[2]^2 = k[1], k[2] >=0 }, {k[2]})

16

 

{k[2] = 4}

(1)

# Your reply to Christian Wolinski...
#
# first point: look at the equation you want to solve

n:=3:
eq := sum(k_i*10^(n - i), i = 1 .. n)=123

111*k_i = 123

(2)

# then the solution

solve(eq)

 

41/37

(3)

# The first thing to do is to rewrite your equation this way

eq := add(k_||i*10^(n - i), i = 1 .. n)=123;

100*k_1+10*k_2+k_3 = 123

(4)

# now apply solve... and get this uninteresting result (see Carl Love's last reply)

solve(eq, indets(eq));

{k_1 = k_1, k_2 = k_2, k_3 = -100*k_1-10*k_2+123}

(5)
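
If the goal is digit-by-digit solutions, a hedged sketch of my own (not part of the worksheet above) is to restrict each k_i to 0..9 and filter by brute force; isolve gives the parametric integer solutions:

```maple
eq := 100*k_1 + 10*k_2 + k_3 = 123:
isolve(eq);   # parametric solutions over the integers

# keep only digit assignments: here the unique one is {k_1 = 1, k_2 = 2, k_3 = 3}
select(s -> evalb(eval(eq, s)),
       [seq(seq(seq({k_1=a, k_2=b, k_3=c}, c = 0..9), b = 0..9), a = 0..9)]);
```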

 


 

Download solve.mw

 

@acer 

Thank you for this (too) quick answer (I corrected my question in the meantime, which was probably not a smart thing to do).

 

@Carl Love 

Corrected

@Christian Wolinski 

Thanks for the tip

@tomleslie 

Your code leaves a non-hatched region around the minimum of the parabola.
To get a fully hatched region, look at the code I published 6 hours before yours.

I have no problem with people copying an answer previously given by others, provided the copy improves on the previous solution, which is unfortunately not the case here. So if you copy, at least copy correctly.

Here is your code with tighter hatching:

 

restart;
with(plots):
with(plottools):

sol := solve([x^2 - y = 0, y - (x+5) = 0], [x, y], explicit):

f := x -> line([rhs~(x[1]), rhs~(x[2])][], color = red):

display(
  [ inequal([x^2 - y < 0, y - (x+5) < 0], x = -4..4, y = 0..10, color = white),
    textplot([ [rhs~(sol[1])[], "A", align = right],
               [rhs~(sol[2])[], "B", align = left] ],
             font = [times, bold, 18]),
    seq(f(solve([x^2 - y = 0, (y+i) - (x+5) = 0], [x, y], explicit)),
        i = 0..5, 0.1)
  ],
  size = [600, 600]
);

 

 

 


 

Download Unhatched_region.mw

@acer 

Acer, I'm sorry for the delay; I had somewhat forgotten about you.

I kept working on a variant of  Statistics:-ChiSquareSuitableModelTest.


For now, given a sample S and any of Maple's "native" distributions with parameters P, I am able to perform a chi-square goodness-of-fit test whether these parameters are numeric or formal.
The number of degrees of freedom is set automatically, and the target distribution is either D(P) if all the Ps are numeric, or a particular instance of D(P) fitted by the maximum-likelihood method.

To perform the test I call the procedure Chi2GoF with two arguments (in fact it has been improved since our last exchange):

  1. the sample S
  2. a random variable R (for instance RandomVariable(Normal(a, b)) )
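
For comparison, the stock command this procedure generalizes can already be used this way when all parameters are numeric (the exact set of options may vary by Maple version; check the help page):

```maple
with(Statistics):
S := Sample(Normal(0, 1), 1000):
# null hypothesis: S is drawn from Normal(0, 1), at the 5% significance level
ChiSquareSuitableModelTest(S, Normal(0, 1), level = 0.05);
```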


The problem I'm facing now is to extend this strategy to user-defined random variables.
You saw an example with the generalized beta distribution (GBD for short).
For the moment I can create a GBD with four parameters (the classical a and b from the BetaDistribution, plus a shifting parameter and a scaling parameter).

But in my procedure I must do more than this.
One of the first steps is to extract some information from the random variable R. In particular I must extract its parameters (attribute Parameters of a RandomVariable) to recognize which are numeric and which are formal, in order to estimate the latter from the sample S.

And here is my problem: I can't declare the attribute Parameters, for Maple 2015.5 stubbornly returns an error message (please see the example below; I tried a lot of other syntaxes without more success).

Download Problem.mw

In the end, I would like the command Chi2Gof( S, RandomVariable(GBD(a, b, shift, scale)) ) to work (here GBD is just an example).

 

@minhhieuh2003 

Here is a simple way to hatch a rectangle.
This could serve as a basis for your real need; feel free to tell me if it doesn't suit you.
 

restart:

# hatched rectangle
#
# Simplest case: white hatches and non white rectangle

f := (x, z) -> x+z;

umin := solve(f(1, u) = 0, u);
umax := 2;

Rect := [[0, 0], [1, 0], [1, 2], [0, 2]]:
W    := [-1..2, -1..3];

plots:-display(
  PLOT(POLYGONS(Rect, COLOR(RGB, 1, 0, 0)), VIEW(op(W))),
  seq(plot(x+u, x=0..2, color=white, view=W), u in [seq(umin..umax, (umax-umin)/30)])
);

proc (x, z) options operator, arrow; x+z end proc

 

-1

 

2

 

[-1 .. 2, -1 .. 3]

 

 

# A priori a more complex case: non white hatches and white rectangle...
#

# But very simple if you play with the thickness of the white lines

plots:-display(
  PLOT(POLYGONS(Rect, COLOR(RGB, 1, 0, 0)), VIEW(op(W))),
  seq(plot(x+u, x=0..2, color=white, thickness=6, view=W), u in [seq(umin..umax, (umax-umin)/30)])
);

 

 


 

Download hatched_rectangle.mw

For this simple case (a parabola and a straight line) here is a simple solution:

You can easily customise the final plot as you want
 

restart:

# parameterize the straight line by z
# (yours corresponds to z=5)

f := unapply(x+z, (x,z)):
g := unapply(x^2, x):

# min value of u for which the line is tangent to the parabola

umin := solve(discrim(g(x)-f(x, a), x)=0, a);

# draw 31 hatches

umax    := 5;
Hatches := NULL:
for u in [seq(umin..umax, (umax-umin)/30)] do
  s       := [solve(g(x)=f(x, u), x)];
  P       := evalf(map(x -> [x, g(x)], s));
  Hatches := Hatches, CURVES(P, COLOR(RGB, 0.8$3)):
end do:

plots:-display(
  PLOT(Hatches),
  plot([g(x), f(x,5)], x = -5 .. 5, color = ["Blue"$2], view = [-5..5, -1..10], size = [500,500], scaling=constrained)
);

-1/4

 

5

 

 

 


 

Download hatches.mw

@vv 

I never said that the Abramowitz-Stegun formula 26.2.23 was incorrect, so I do not understand why you counter that "[you] find the Abramowitz-Stegun 26.2.23 correct". Could you elaborate, please?

You say that "[I] obtain some L^2 approximation": are you referring to the procedure NonLinearFit I used?
Abramowitz & Stegun write |eps(p)| < 4.5*10^(-4): unfortunately this says nothing about the way formula 26.2.23 was obtained (maybe by minimizing something in the least-squares sense? or maybe not?).
I suppose you are confusing two things: the norm used in fitting the coefficients and the norm of the representation error of the rational expression.

Given your reply I understand you know things I do not about the way formula 26.2.23 was obtained (the "fit" range, the "fitting" procedure that was used, ...): could you please enlighten me?
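
For reference, here is formula 26.2.23 as printed in the book (upper-tail quantile x_p for 0 < p <= 1/2, with |eps(p)| < 4.5*10^(-4)); any transcription error is mine:

```maple
AS_26_2_23 := proc(p)
  local t;
  t := sqrt(ln(1/p^2));   # i.e. sqrt(-2*ln(p))
  t - (2.515517 + 0.802853*t + 0.010328*t^2)
    / (1 + 1.432788*t + 0.189269*t^2 + 0.001308*t^3)
end proc:

AS_26_2_23(0.025);                                       # about 1.9604
Statistics:-Quantile(Normal(0, 1), 1 - 0.025, numeric);  # about 1.9600
```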

Last but not least, could you please be clearer about your phrase "You should simply give your alternative better form (without detours) in order to compare" (I could have found the terms in bold almost insulting).

I respectfully thank you for your future answer

 

Oh, I just saw you have modified the content of your reply. Could it be that you thought you had gone a little too far?
You could at least have stood by your words by keeping them here.

 

@Carl Love 

Either you misinterpreted me or I explained myself poorly, but there is indeed some disagreement.
What I wanted to say by writing "NormInv is not so fast" is, more precisely: "NormInv doesn't dramatically reduce the time needed to estimate quantiles, for it's barely twice as fast as Quantile".

Please return to my first post and look at the factor of 30 between what you called a "magic" approximation and Quantile.
Of course this speed comes at a price, which is that the approximations are less precise than the ones you obtain.
So, from the standpoint of fast approximations (and this term is very important, for it means "we accept an error of, let's say, 10^(-6)"), the "magic" approximation outperforms NormInv.

You can object that the most important point is accuracy, and that NormInv is about twice as fast as Quantile while being more precise, and I agree with that. But if you have, let's say, 10^6 or more quantiles to estimate, the "magic" method becomes really more interesting than NormInv.

You compute quantiles for probabilities as high as 1 - 10^(-20) and, I agree with that too, NormInv is by far better than the "magic" method there. But "in practice", that is in usual statistics, you never have to compute such extremely extreme quantiles.
Consider that in advanced technology the target reliability of systems is generally between 10^(-8) and 10^(-6); even smaller failure probabilities are usually obtained by using redundancy or by connecting such systems in some ad hoc way.
What you need is then to estimate probabilities of this magnitude or, conversely, to estimate quantiles which (in the normal case) do not exceed +/-6 (which is in fact the idea of the "6 sigma" method, popular in some circles).
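
To fix the orders of magnitude mentioned above, a quick check (the z values are mine; numbers rounded):

```maple
Phi := z -> Statistics:-CDF(Normal(0, 1), z, numeric):
Phi(-6.0);    # about 9.9e-10 : below the 10^(-8)..10^(-6) reliability range
Phi(-4.75);   # about 1.0e-6
```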

To set a final point to this debate:

  1. NormInv outperforms Quantile and could become an improvement of the latter in a future Maple version.
  2. Despite its qualities, NormInv doesn't meet the objective of much faster approximations of Quantile in the sense I tried to detail above.
  3. It may be a pity that you spent so much time developing, in a direction that was not what I expected, my initial post, whose main objective was to draw attention to an improvement of the Abramowitz-Stegun formula 26.2.23 (and more generally to point out that this book can contain small errors).
    I should have been clearer... but you quickly steered the debate in a particular direction by saying "I find it hard to believe that any of these "magic coefficients" methods are generally better than this simple Newton's method, which only took 10 minutes to write, can be understood immediately, and works for any setting of Digits or in evalhf mode".
    Looking back at the whole time spent, it was really much ado about nothing.


Thanks for your involvement.

Here is the last test performed.

 

Time and accuracy test of Newton's method for the inverse CDF of the normal distribution

restart:

kernelopts(version);

`Maple 2015.2, APPLE UNIVERSAL OSX, Dec 20 2015, Build ID 1097895`

(1)

#Symbolic steps:
Ncdf:= unapply(int(exp(-x^2/2)/sqrt(2*Pi), x= -infinity..X), X):
Newton:= unapply(simplify(x-(Ncdf(x)-Q)/D(Ncdf)(x)), (x,Q)):

#Purely numeric procedures:
NormInv_inner:= proc(Q, z0, eps)
local z:= z0, z1, e:= infinity, e1;
   do  
      z1:= evalf(Newton(z,Q));
      e1:= abs(z1-z);
      if e1 < eps or e1 >= e then return (z+z1)/2 fi;
      e:= e1;  
      z:= z1  
   od
end proc:
      
NormInv:= proc(Q::And(numeric, realcons, satisfies(Q-> 0 <= Q and Q <= 1)))
local r;
   if Q > 1/2 then return -thisproc(1-Q) fi;
   if Q = 0 then return -infinity fi;
   r:= evalhf(
      NormInv_inner(
         Q,
         (Q-0.5)*sqrt(2.*Pi),
         max(10.^(1-Digits), evalhf(DBL_EPSILON))
      )
   );
   if Digits <= evalhf(Digits) then r else NormInv_inner(Q, r, 10.^(1-Digits)) fi
end proc
:

Digits:= 20:

randomize(156606905209673);

156606905209673

(2)

S:= Statistics:-Sample(Uniform(0,1), 10^4):

N_Newton:= CodeTools:-Usage(NormInv~(S)):

memory used=0.64GiB, alloc change=364.01MiB, cpu time=6.24s, real time=6.07s, gc time=460.08ms

 

N_Quantile:= CodeTools:-Usage(Statistics:-Quantile~(Normal(0,1), S)):

memory used=1.41GiB, alloc change=128.00MiB, cpu time=15.00s, real time=14.35s, gc time=1.16s

 

Statistics commands tend to run faster if you first declare a RandomVariable:

N01:= Statistics:-RandomVariable(Normal(0,1)):

N_rv:= CodeTools:-Usage(Statistics:-Quantile~(N01, S, numeric)):

memory used=1.37GiB, alloc change=0 bytes, cpu time=13.29s, real time=12.51s, gc time=1.21s

 

Check the accuracy:

LinearAlgebra:-Norm(N_rv - N_Quantile, 1);

0.

(3)

Digits:= 30:

N_accurate:= CodeTools:-Usage(Statistics:-Quantile~(N01, S)):

memory used=0.89GiB, alloc change=0 bytes, cpu time=9.23s, real time=8.54s, gc time=1.02s

 

LinearAlgebra:-Norm(N_Quantile - N_accurate, 1);

0.114816126227654849290526416e-8

(4)

LinearAlgebra:-Norm(N_Newton - N_accurate, 1);

0.16152847737198934987933416e-9

(5)

14.61 / 5.32, 1.1e-9 / 1.6e-10;

2.74624060150375939849624060150, 6.87500000000000000000000000000

(6)

Conclusion: This particular Sample shows that using Newton's method (at Digits=20) gives about 6 times the accuracy while running at least 2 times as fast.

 


 

Download NormalInveseTest_mmcdara_reply.mw

@eslamelidy 

You wrote "I want to solve the systems of diff equation what's the problem"... maybe you should be more specific and say exactly what the "problem" you are facing is.
I took a look at your code; the first problem you should run into is related to the line R := ... : this line should generate an error message, because you try to assign an expression to R while R is protected (R is an export of the package CodeGeneration; to see this, execute with(CodeGeneration);).

If you want to avoid this first error you have 3 solutions:

  1. change the name of the variable R
  2. write with(CodeGeneration, Matlab); instead of with(CodeGeneration);
  3. do not load CodeGeneration and write CodeGeneration:-Matlab instead of Matlab (at the bottom of your worksheet)
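
Option 3, for instance, looks like this (the expression passed to the translator is just a placeholder of mine):

```maple
restart:
R := 8.314:                      # no error: CodeGeneration is not loaded, R is free
CodeGeneration:-Matlab(R*x^2);   # long-form call, no with(CodeGeneration) needed
```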


A second problem could come from the Matlab conversion: not all Maple expressions can be translated into Matlab code.
A simple example among many others:

restart:
with(CodeGeneration):
f := piecewise(x < 0, -x, x):
Matlab(f);
# observe the warning

So, could you explain precisely what "what's the problem" means to you?

Beyond all this, your worksheet is amazingly complex to read and thus to understand.
I think, and that was the purpose of my previous reply, that you would benefit from simplifying its content (I gave you an example for the first ode, but I'm not going to do the job for the 150 remaining ones).
