These are answers submitted by acer

It sounds to me as if you would like to compute the residuals and the sum of squares of the residuals, given a forced least-squares function (model). You could of course compute the sum of squares of the residuals easily in Maple: simply evaluate the model at each X value, square the difference with the corresponding Y value, and add those up. That's probably one or two lines of Maple code. But you asked whether the Statistics routines could be used for the purpose, in a canned way. You could specify a parameter that doesn't actually appear in your candidate model equation, and then use the solutionmodule output to get the extra details. Here's an example, based on one from the ?Statistics,Fit help-page. Notice that the variable `dummy` doesn't appear in the model, which is the basis of the trick. The model equation is thus fully supplied by the user, with no parameters actually to be estimated (except some incidental estimation of the dummy, which is of no consequence).
> X := Vector([1, 2, 3, 4, 5, 6], datatype=float):
> Y := Vector([2, 3, 4.8, 10.2, 15.6, 30.9], datatype=float):
> Digits:=trunc(evalhf(Digits)):

> # Here is the forced model.

> eq:=0.887576142919275224+0.606352318207692531*exp(0.649251558313310717*t):

> # Here are the simple commands to get the results,
> # without using Statistics.

> add( (eval(eq,t=X[i])-Y[i])^2, i=1..op(1,X) );

                               2.29446465108687
 
> seq( (eval(eq,t=X[i])-Y[i]), i=1..op(1,X) );

0.04819978094862, 0.10913477921538, 0.33987862305974, -1.17305895930964,
 
    0.8671971243695, -0.1913514547231

> # And here is Statistics:-Fit computing those results.

> sol := Statistics:-Fit( eq, X, Y, t, parameternames=[dummy], output=solutionmodule):

> sol:-Results(["residualsumofsquares","residuals"]);

2.29446465108658, [0.0481997809486180984, 0.109134779215386946,
 
    0.339878623059754581, -1.17305895930959636, 0.867197124369345929,
 
    -0.191351454723299952]
acer
Compare these two forms,
> 10 = 0 mod 5;

                                    10 = 0

> (10 = 0) mod 5;

                                     0 = 0
In the first of those, the `mod` operator acts only on the 0 and not on the 10, because `10 = 0 mod 5` parses as `10 = (0 mod 5)`. See the ?operators,precedence help-page, which shows the relative binding strengths of the various operators: mod binds more tightly than does =. The round-bracket delimiters in the second example above get around that higher precedence of mod over =. Your code could be written as follows.
printlevel:=2:
for n from 1 to 10 do
if ( (n=0) mod 5 ) then 0;
end if;
end do;
Note that the brackets around the conditional are not strictly necessary, so this should also work here.
printlevel:=2:
for n from 1 to 10 do
if (n=0) mod 5 then 0;
end if;
end do;
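Equivalently, since mod binds more tightly than =, you can compare the remainder directly, with no extra brackets at all. A small sketch of that variant:
printlevel:=2:
for n from 1 to 10 do
  if n mod 5 = 0 then   # parses as (n mod 5) = 0
    n;
  end if;
end do;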
acer
Remove the stray `(` round bracket between "if" and "isprime".
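Your exact line wasn't shown, but as a hypothetical illustration, something like `if (isprime(n) then` has an unbalanced bracket; the fix is simply
for n from 2 to 10 do
  if isprime(n) then   # no stray "(" after "if"
    print(n);
  end if;
end do;
acer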
See ?dsolve[classical] for details. Hopefully I got it right.
sys := { diff(y(t),t) = ((2-t)*(2-y(t)))/y(t), y(0)=1 }:

sol := dsolve(sys, numeric, method=classical[foreuler],
              stepsize=0.2, output=listprocedure):
fy[0.2] := eval( y(t), sol ):

sol := dsolve(sys, numeric, method=classical[foreuler],
              stepsize=0.1, output=listprocedure):
fy[0.1] := eval( y(t), sol ):

sol := dsolve(sys, numeric, method=classical[foreuler],
              stepsize=0.05, output=listprocedure):
fy[0.05] := eval( y(t), sol ):

fy[0.05](1.0);
fy[0.1](1.0);
fy[0.2](1.0);

plot( [fy[0.2], fy[0.1], fy[0.05]], 0..8,
      legend=[fy[0.2], fy[0.1], fy[0.05]] );
acer
Do you mean something like this?
m := Matrix([[a[1]*b[1],c[1]+d[1]*e[1]],[sin(a[1]),exp(a[1]*b[1])]]);

map(diff,m,a[1]);
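For reference, a quick check by hand: the entry c[1]+d[1]*e[1] has no dependence on a[1], so it differentiates to 0, and the result is

Matrix([[ b[1], 0 ], [ cos(a[1]), b[1]*exp(a[1]*b[1]) ]]);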
acer
Increasing the amount of memory used between garbage collections seems to help. Hopefully I transcribed your nested sum properly.
restart:

kernelopts(gcfreq=[3*10^7,0.1]);

# This next results in a double summation
# involving hypergeoms.
ee := sum(sum(sum((1-p)^i/(a1*i+a1+a2*j+a2+a3*k+a3),
                  i = 0 .. infinity)*(1-p)^j,
              j = 0 .. infinity)*(1-p)^k,
          k = 0 .. infinity);

evalf(subs(a1 = .4, a2 = .1, a3 = .5, p = 1/3, ee));
Using 64bit Linux Maple 11.00, the timing on my machine with the default gcfreq was about 64sec, and with the above gcfreq it was about 20sec. The memory allocated went up, from 12 million words to about 215 million words. You might gauge how high to set it according to how much physical memory your machine has. Using 32bit Linux Maple 11.00 on the same machine, I got a time of about 22sec using gcfreq=[10^8,0.1]. The total bytesalloc was 320 million words.

Now, this may be ill-advised in general, but for your particular example I got an answer correct to within 1 digit in the last place (at Digits=10) by using `add` in the final numerical instantiation, and it only took a couple of seconds. Of course, in general one might do better to rely on evalf/Sum's techniques, and one might not know in advance how many terms to add or how fast the excluded portion decays.
restart:

kernelopts(gcfreq=[3*10^8,0.1]):

ee := sum(sum(sum((1-p)^i/(a1*i+a1+a2*j+a2+a3*k+a3),
                  i = 0 .. infinity)*(1-p)^j,
              j = 0 .. infinity)*(1-p)^k,
          k = 0 .. infinity):

EE:=subs({sum=add,infinity=60},ee):
evalf(subs(a1 = .4, a2 = .1, a3 = .5, p = 1/3, EE));
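For reference, here is a sketch of the inert-Sum route alluded to above, which leaves the numerics entirely to evalf/Sum (a sketch only; I haven't compared its timing or convergence behaviour here):

S := Sum(Sum(Sum((1-p)^i/(a1*i+a1+a2*j+a2+a3*k+a3),
                 i = 0 .. infinity)*(1-p)^j,
             j = 0 .. infinity)*(1-p)^k,
         k = 0 .. infinity):

evalf(subs(a1 = .4, a2 = .1, a3 = .5, p = 1/3, S));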
acer
There may be other sneaky ways. This is in Maple 11.00 or 11.02.
> restart:
> ee:=p*(-1+p)*hypergeom([1, 1], [2], 1-p)*hypergeom([1], [], 2-p):
> convert(convert(convert(ee,MeijerG),`1F1`),hypergeom);

                        p hypergeom([1, 1], [2], 1 - p)

> restart:
> ee:=p*(-1+p)*hypergeom([1, 1], [2], 1-p)*hypergeom([1], [], 2-p):
> convert(ee,`0F1`);

                        p hypergeom([1, 1], [2], 1 - p)
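Either way, a quick numeric spot-check of the reduction (the point p=0.3 is arbitrary):

> evalf( eval( ee - p*hypergeom([1,1],[2],1-p), p=0.3 ) );  # zero, up to roundoff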

acer
This is the sort of problem where people will try to outdo each other. :) I'll shoot for simple.
p := proc(n::posint)
  local s;
  Digits := n;   # working precision for evalf(Pi)
  # convert/binary returns a float whose mantissa digits are the bits;
  # convert/base splits that integer into digits, least-significant first
  s := convert( op(1, convert( evalf(Pi), binary ) ), base, 10 );
  seq( s[nops(s)-i], i = 0 .. nops(s)-1 );   # reversed: leading bit first
end proc:

p(4);
p(17);
There's also this
q := proc(n::posint)
  op( ListTools:-Reverse(
        convert( op(1, convert( evalf[n](Pi), binary, n ) ), base, 10 ) ) );
end proc:
Doing it efficiently is another matter.
acer
zurflu14 := t -> A*exp(-k*t);
D(zurflu14);
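That returns the derivative as another operator, which can itself be applied. A small sketch (A and k are left as unassigned symbolic parameters):
dz := D(zurflu14);   # dz := t -> -A*k*exp(-k*t)
dz(3);               # the derivative evaluated at t = 3: -A*k*exp(-3*k)
acer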
This might help you produce them. Note that this won't get you Atomic Identifiers in 2D Input without using the mouse. But you can get them assigned to other variables, and use them, and they should appear subscripted in 2D Math output and be distinct from the similar-looking "table" members.
f := proc(x::evaln, y::evaln)
  # build the typeset-name (TypeMK) form `#msub(mi("x"),mi("y"))`,
  # i.e. the name for x subscripted by y
  convert( cat( "#msub(mi(\"",
                convert(x,string),
                "\"),mi(\"",
                convert(y,string),
                "\"))"),
           name);
end proc;

f(X,d);

f(x,g) - x[g];
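To see the raw name that gets constructed, lprint is handy:

lprint( f(X,d) );    # shows `#msub(mi("X"),mi("d"))`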
acer
It might be that the intended expression is actually,
> ( cos(x)-cos(11*x) )/( cos(3*x)-cos(7*x) );

                              cos(x) - cos(11 x)
                              -------------------
                              cos(3 x) - cos(7 x)
which is more interesting because the numerator and denominator both evaluate to 0 when x=0. If that's the case, and if you have to show your work, then here's a hint. Try applying L'Hopital's rule twice. You should get
> ( -cos(x)+121*cos(11*x) )/( -9*cos(3*x)+49*cos(7*x) );

                            -cos(x) + 121 cos(11 x)
                           -------------------------
                           -9 cos(3 x) + 49 cos(7 x)
So then you can take the limit of the above. Here's a harder way, in case you have not yet been taught that rule for limits of the form 0/0. Try using multiple-angle formulas to get the numerator and the denominator to contain only cos(x). In Maple, you can use expand() for that. If that results in polynomials in cos(x), then substitute cos(x)=y and try to factor those polynomials, so that numerator/denominator simplifies. The end result should be just this simple equivalent expression, whose limit at x=0 is easy.
                       16 cos(x)^4 - 16 cos(x)^2 + 3
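Here is a rough Maple sketch of those steps (the name `y` is just a convenient stand-in for cos(x)):

> ee := ( cos(x)-cos(11*x) )/( cos(3*x)-cos(7*x) ):
> num := subs( cos(x)=y, expand( numer(ee) ) ):   # a polynomial in y
> den := subs( cos(x)=y, expand( denom(ee) ) ):   # a polynomial in y
> normal( num/den );                   # returns 16*y^4 - 16*y^2 + 3
> eval( %, y=1 );                      # cos(0) = 1, so the limit is 3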
acer
The input form below worked ok for me in Maple 11.00, using the original operator p that was supplied.
> Optimization[Maximize](p, 1 .. 2,
>     method = branchandbound);
                 [0.600000000000001421, [1.69999999999999996]]
But these next ones did not, which looks like a bug,
> Optimization[Maximize](p(x), x=1 .. 2);
Error, (in Optimization:-NLPSolve) unable to convert

> Optimization[Maximize](p(x), x=1 .. 2,
>     method = branchandbound);
Error, (in Optimization:-NLPSolve) unable to convert
acer
A little more. The call

LinearAlgebra:-LA_Main:-MatrixScalarMultiply( temp, I*alpha[1],
    inplace=true, outputoptions=[datatype=complex[8]] ):

in the final loop could be replaced by

msm := LinearAlgebra:-LA_Main:-LA_External:-MatrixScalarMultiply:
msm( temp, I*alpha[1] ):   # safer is msm( temp, evalf(I*alpha[1]) )

where the assignment to `msm` is done right after that to `mmm`, before the loop begins.

Also, there is a call to LinearAlgebra:-LA_Main:-MatrixScalarMultiply inside `matexp`. This could be set up to directly call an external function, just as is done for addition, norm, etc. The following lines could be added in `matexp` in the relevant places, with ExtMSM declared as a new local.

ExtMSM := ExternalCalling:-DefineExternal('hw_f06jdf', extlib);
ExtMSM := ExternalCalling:-DefineExternal('sw_f06jdf', extlib);

Then the call in `matexp`

LinearAlgebra:-LA_Main:-MatrixScalarMultiply(a, M, 'inplace' = 'true',
    'outputoptions' = []);

could be replaced by

ExtMSM(n*n, M, a, 1);

Together those give another 10%-15% of time savings over the original at size 16x16. Keep in mind that all this deliberately bypasses a lot of sanity checks. The Matrices had better have datatype complex[8] with full rectangular storage, or else it will crash and burn.

It's not just garbage collection that slows Maple down for these computations. It's also the cost and overhead of Maple function calls, some of which have been avoided in the code I've posted on this example. Having Maple be a general system, capable of exact or arbitrary-precision floating-point computation, brings with it the overhead of smart runtime selection of modes. Systems like Matlab don't necessarily have that sort of overhead, if they are primarily pure hardware double-precision engines. There are alternative schemes for a general-purpose system like Maple. For example, on-the-fly generation of code tailored for just a single mode of computation (exact, hardware, arbitrary precision, etc.) is one possibility. Another is giving very low-level routines like the BLAS direct, individual interfaces at the user level.
acer
If I issue the Maple command plotsetup(cps); then my plot goes to the file "postscript.eps" and appears in colour. But with plotsetup(ps) it appears in greyscale. A problem with tex -> pdf (file not found), while tex -> dvi works, sounds a bit like a path issue in your tex setup.
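For example (the file name here is just illustrative), you can direct the output explicitly and then restore the default device:

plotsetup( cps, plotoutput="myplot.eps" ):
plot( sin(x), x = 0 .. 2*Pi );   # written to myplot.eps, in colour
plotsetup( default ):            # back to normal plot rendering
acer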
Interesting. When you strengthen any assumptions on beta, can you use subs() to replace any older expressions' "beta"s with the new one? To make doing that easier, you could assign beta to some new temporary name on each call to additionally(). A very crude example,
> restart:
> assume(beta::real):
> z := beta:
> A := <beta*Pi>:
> additionally(beta>0);
> z2 := beta:
> newA := subs(z=z2,A);
                       newA := [beta~ Pi]
 
> map(about,indets(%[1]));
Originally beta, renamed beta~:
  is assumed to be: RealRange(Open(0),infinity)
 
                                      {}
So, above, the result newA of the subs() call is a new Vector, a copy of A but with the new tighter beta.
> A-<beta*Pi>;
                             [beta~ Pi - beta~ Pi]
 
> newA-<beta*Pi>;
                                      [0]

acer