These are answers submitted by acer

Type cs\_e instead of cs_e to get it in 2D Math input.

acer

Check this for correctness. Since (3) is missing in the scan, some deduction was necessary. And sometimes I make mistakes.

The request is to make ETsolver accept a general function f of two arguments (which returns dy/dx). That's fine. But why embed into ETsolver the initial condition y[1]=a when that is only true for the particular supplied initial condition y(0)=0 ? And why not also let ETsolver accept a general endpoint b, instead of hard-coding it to only compute an approximation for y(1)?

> ETsolver := proc( f, a, y_a, h, b)
> local N,y,n;
> N := trunc( (b-a)/h );
> y[1]:=y_a:
> for n from 1 to N do
> y[n+1]:=y[n]+h/2 * ( f(a+(n-1)*h,y[n]) + f(a+n*h,y[n]+h*f(a+(n-1)*h,y[n])) );
> end do;
> return y[N+1];
> end proc:

> ETsolver( (x,y)->x+y, 0, 0, 0.1, 1);
0.7140808465

> ETsolver( (x,y)->x+y, 0, 0, 0.01, 1);
0.7182368623

> ETsolver( (x,y)->x+y, 0, 0, 0.001, 1);
0.7182813769
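For a quick sanity check on those numbers: the exact solution of y' = x + y with y(0) = 0 is y(x) = exp(x) - x - 1, so the values above should be approaching y(1) = exp(1) - 2.

> evalf( exp(1) - 2 );   # approximately 0.7182818285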

Like I mentioned before, the key to understanding this algorithm is to draw the curve of f (the exact derivative dy/dx), and the trapezoid with base from x[n] to x[n+1]=x[n]+h , and to see how the area of that trapezoid relates (as an approximation, via the Fundamental Theorem of Calculus) to the change y[n+1]-y[n].
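If it helps to literally draw that picture for this example, here is a rough sketch (the base point and width below are arbitrary, chosen just for the illustration). The exact derivative curve here is y'(x) = exp(x) - 1, and the polygon is one trapezoid over the base [xn, xn+h].

with(plots):
dy := x -> exp(x) - 1:       # exact y'(x) for y' = x + y, y(0) = 0
xn := 0.5:  h := 0.3:        # an arbitrary base, just for the picture
display(
    plot( dy, 0 .. 1, thickness = 2 ),
    polygonplot( [[xn, 0], [xn, dy(xn)], [xn + h, dy(xn + h)], [xn + h, 0]],
                 style = line, colour = blue )
);

In the method itself the right-hand height comes from the Euler predictor rather than from the exact derivative, but the picture conveys the same idea.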

acer

What have you got, so far, with this?

Do you expect complete solutions to your assignment's exercises, without making the major contribution yourself? If so, then please just post your instructor's email address so that we can mail in our solutions directly.

It's asking you to write a procedure that implements Euler's method for numerical integration.

That procedure will accept input function f, initial point a, and stepsize h. You could also make it accept final target point b, so as to more generally be able to approximate y(b) instead of y(1).

It refers to equation (3), which is not shown. It's pretty clear, though, that equation (3) is equivalent to (Maple syntax),

y(1) = Int( f(x), x=a..1 );

or

y(b) = Int( f(x), x=a..b );

The procedure could figure out how many steps of length h it will take to get from point a to point 1 (or b). It needs to run a loop, for each such step. Each time through the loop, it will compute y[n+1] using y[n]. It starts off with y[1]:=a .

Here's a hint. To get from the initial point x=a to the final point x=b you need to take N steps of size h = (b-a)/N . So you can deduce that N = (b-a)/h , and from that how many times to go through the loop.

Here's another hint. Try it without a loop, for practice. Try to get it to compute y[2] . From your equation (4) that should equal y[1] + h/2 * (F[1,n]+F[2,n]) .

Try to draw it by hand, as a picture on paper. What do the F[1,n] and F[2,n] mean? Where is the trapezoid? Is the area of the trapezoid almost equal to the area under the curve f between x=a and x=a+h ? Draw it all.
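As a very rough sketch of that second hint (the integrand and numbers below are placeholders of my own; check the form against your equations (3) and (4)):

f := x -> x^2:     # a stand-in integrand, just for practice
a := 0:  h := 0.1:
y[1] := a:                                   # the starting value, as described above
y[2] := y[1] + h/2 * ( f(a) + f(a+h) );      # one trapezoid over [a, a+h]

Here f(a) and f(a+h) play the roles of F[1,n] and F[2,n]. Compare the number you get with the area of your hand-drawn trapezoid.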

acer

What else can you say about the nonlinear equations? For example, are they (or their subsets that you created) multivariable polynomials?

Are your data sets exact or floating-point approximations, or formulas involving mixed data? Would you consider conversion of float data to exact rational prior to systematic separation of equations?

It sounds like a dimensional effect, as you describe it. For systems of equations in more than one variable, fsolve uses Newton's method. With many variables, and hence a much "larger" space, it often becomes more and more unlikely that a given starting point will converge. I don't think that fsolve adjusts its number of attempts (distinct initial points) according to the size of the system, and there's no option to specify the number of attempts. So, yes, as the size of the system gets large it might become less and less likely that fsolve will find an approximate root.

Your idea of separating and reducing the system sounds good, then. But, do you do it "by hand"? Maple's `solve` routine might help you automate it. Or you might be able to use Groebner, depending on the type of equations.
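For instance, with a made-up two-equation system (yours will differ), one variable can be split off with solve and eval, leaving a reduced equation for fsolve:

eq1 := x^2 + y^2 - 2 = 0:
eq2 := x - y = 0:
ysol := solve( eq2, y );             # y in terms of x
reduced := eval( eq1, y = ysol );    # a single equation in x alone
fsolve( reduced, x );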

It's possible that someone might be able to do something more with it, if you can upload the equations and some data values to this site.

acer

Please post the example, the actual polynomial. (If you'd like, you can lprint it and put that up here in a reply.)

The fsolve routine is at least sometimes capable of doing what you'd like. In Maple 8, 10, and 11 the example below returns a result with the multiple root 3.0 shown twice.

p := expand( (x-3)^2 * (x+4)^3 );
fsolve(p,x=2.5..3.5);

Maybe there's something trickier going on with your example. It might be hard for some here to pin down what that might be, if it is specific to Maple 8. Or it could just be something simple, such as all your roots getting put into a Maple set at some point (which would "uniquify" them).

Here's a routine that should return the roots of a univariate polynomial which lie in a complex box. The box is specified by lower left corner `a` and upper right corner `b`. In your case, a and b would be the endpoints of a range on the real line. This is similar to how Matlab's `roots` function works, I believe. It should show the multiplicity.

R := (p, a, b) -> select(
    x -> evalb(
        Re(x) <= Re(b) and Im(x) <= Im(b)
        and Re(a) <= Re(x) and Im(a) <= Im(x)),
    convert(LinearAlgebra:-Eigenvalues(
              evalf[trunc(evalhf(Digits))](
                  LinearAlgebra:-CompanionMatrix(p))),
            list));

That routine R won't round or polish the roots, remove small imaginary components, or remove duplicates, etc. That's on purpose, so that it can be you who judges the roots.

y:=randpoly(x,degree=100):
fsolve(y,x=-10..10);
R(y,-10,10);
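And as a check of the multiplicity behaviour on the earlier (x-3)^2*(x+4)^3 example: since numerically computed eigenvalues of a multiple root can pick up tiny imaginary parts, I give the box corners a small imaginary offset here so that those perturbed values still land inside the box.

q := expand( (x-3)^2 * (x+4)^3 ):
R( q, 2.5 - 1e-3*I, 3.5 + 1e-3*I );   # should show two values close to 3.0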

 You might also be interested in searching the web for Bezout's theorem, roots, and Companion matrix. Here's one link that might interest you.

acer

Please post your example, and maybe it could be improved.

acer
First, consider creating an operator which simplifies under your given equation.

use_assump := x -> simplify(simplify(x,{a^2+b^2=1})):

Now let's look at this Matrix,

m := Matrix([[a^2+b^2-1,1,0],[1-a^2-b^2,1,0],[1,0,1]]);
with(LinearAlgebra):
evals,evecs := Eigenvectors(m);

If we now apply the simplification to the eigenvectors Matrix (stored as columns of evecs) then it goes wrong. Presumably this is because the NullSpace computation accidentally used, as a pivot, something that was really zero (e.g. an element like 1-a^2-b^2). It fails to simplify the eigenvectors, because they have hidden zeros in the denominators.

map(use_assump, evecs);

One possibility is to apply the simplification to the Matrix m prior to computing the eigenvectors. That works here, but just by luck, I think; for another example it too could go wrong as above. But I give it next, anyway.

evals,evecs := Eigenvectors(map(use_assump,m));
map(use_assump, m.evecs - evecs.DiagonalMatrix(evals));

Now, what if the NullSpace computation itself could use the assumption, and so avoid selecting a "hidden" zero as a pivot? One might hope that NullSpace uses Testzero and Normalizer just as LUDecomposition does, and that this benefit carries over to Eigenvectors too.

Normalizer := use_assump:
evals,evecs := Eigenvectors(m);
map(use_assump, m.evecs - evecs.DiagonalMatrix(evals));
Normalizer := normal:

So, maybe that is an OK method.

acer
Jacques' reply got right to the centre of it, I thought. The important part for me is the statement which begins, "It is only various contexts which give an operational semantics to = ". Poor old = , it has too many uses in Maple. For example,
if i = 9 then  ...  end if;

seq( x, x = [1,3,5] );

subs( x = z/2 , cos(x) );

eqn := T = 13/Q - P;

LinearSolve( A, B, method = LU );

G:=table([x = 4]);
As pointed out, the operational semantics of those first three are determined by the context. And even in the fourth example, it is the context which gives a semantic meaning to = as an inert form (if that makes sense to say). I'd guess that there are other interesting examples of the use of = which I've missed.

Now for the fun part. What happens if you create a Maple package which exports, or overloads, = ? Which of the above examples should or should not be affected by rebinding = ? In which of the examples above is = in fact so rebound by loading such a package? Are there always workarounds, in the above examples, to force use of the (global, toplevel) prefix :-`=`() operator after such a rebinding of the = symbol?

acer
It sounds to me as if you would like to compute the residuals and the sum of squares of the residuals, given a forced least-squares function (model). You could of course compute the sum of squares of the residuals easily in Maple: simply evaluate the model at each X value, square the difference with the corresponding Y value, and add those up. That's probably one or two lines of Maple code.

But you asked whether the Statistics routines could be used for the purpose, in a canned way. You could specify a parameter that doesn't actually appear in your candidate model equation, and then use the solutionmodule output to get the extra details.

Here's an example, based on one from the ?Statistics,Fit help-page. Notice that the variable `dummy` doesn't appear in the model, which is the basis of the trick. The model equation is thus fully supplied by the user, with no parameters actually to be estimated (except some incidental estimation of the dummy, which is of no consequence).
> X := Vector([1, 2, 3, 4, 5, 6], datatype=float):
> Y := Vector([2, 3, 4.8, 10.2, 15.6, 30.9], datatype=float):
> Digits:=trunc(evalhf(Digits)):

> # Here is the forced model.

> eq:=0.887576142919275224+0.606352318207692531*exp(0.649251558313310717*t):

> # Here are the simple commands to get the results,
> # without using Statistics.

> add( (eval(eq,t=X[i])-Y[i])^2, i=1..op(1,X) );

                               2.29446465108687
 
> seq( (eval(eq,t=X[i])-Y[i]), i=1..op(1,X) );

0.04819978094862, 0.10913477921538, 0.33987862305974, -1.17305895930964,
 
    0.8671971243695, -0.1913514547231

> # And here is Statistics:-Fit computing those results.

> sol := Statistics:-Fit( eq, X, Y, t, parameternames=[dummy], output=solutionmodule):

> sol:-Results(["residualsumofsquares","residuals"]);

2.29446465108658, [0.0481997809486180984, 0.109134779215386946,
 
    0.339878623059754581, -1.17305895930959636, 0.867197124369345929,
 
    -0.191351454723299952]
acer
Compare these two forms,
> 10 = 0 mod 5;

                                    10 = 0

> (10 = 0) mod 5;

                                     0 = 0
In the first of those, the `mod` operator works only on the 0 and not on the 10. See the help-page ?operators,precedence which may help clarify the relative precedence of various operators. That page shows that mod has a higher binding strength than does =. The round-bracket delimiters in the second example above get around the higher precedence of mod over =. Your code could be written as follows.
printlevel:=2:
for n from 1 to 10 do
if ( (n=0) mod 5 ) then 0;
end if;
end do;
Note that the brackets around the conditional are not strictly necessary, so this should also work here.
printlevel:=2:
for n from 1 to 10 do
if (n=0) mod 5 then 0;
end if;
end do;
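Since mod binds more tightly than =, the divisibility test can also be written without applying mod to an equation at all. This is equivalent here:

printlevel:=2:
for n from 1 to 10 do
if n mod 5 = 0 then 0;
end if;
end do;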
acer
Remove the "(" round bracket between "if" and "isprime".

acer
See ?dsolve[classical] for details. Hopefully I got it right.

sys := {diff(y(t),t) = ((2-t)*(2-y(t)))/y(t), y(0)=1}:

sol := dsolve(sys,numeric,method=classical[foreuler],stepsize=0.2,output=listprocedure):
fy[0.2] := eval( y(t), sol ):
sol := dsolve(sys,numeric,method=classical[foreuler],stepsize=0.1,output=listprocedure):
fy[0.1] := eval( y(t), sol ):
sol := dsolve(sys,numeric,method=classical[foreuler],stepsize=0.05,output=listprocedure):
fy[0.05] := eval( y(t), sol ):

fy[0.05](1.0);
fy[0.1](1.0);
fy[0.2](1.0);

plot([fy[0.2],fy[0.1],fy[0.05]], 0..8, legend=[fy[0.2],fy[0.1],fy[0.05]]);

acer
Do you mean something like this?
m := Matrix([[a[1]*b[1],c[1]+d[1]*e[1]],[sin(a[1]),exp(a[1]*b[1])]]);

map(diff,m,a[1]);
acer
Increasing the amount of memory used between garbage collections seems to help. Hopefully I transcribed your nested sum properly.
restart:

kernelopts(gcfreq=[3*10^7,0.1]);

# This next results in a double summation
# involving hypergeoms.
ee := sum(sum(sum((1-p)^i/(a1*i+a1+a2*j+a2+a3*k+a3),
                  i = 0 .. infinity)*(1-p)^j,
              j = 0 .. infinity)*(1-p)^k,
          k = 0 .. infinity);

evalf(subs(a1 = .4, a2 = .1, a3 = .5, p = 1/3, ee));
Using 64bit Linux Maple 11.00, the timing on my machine with the default gcfreq was about 64sec, and with the above gcfreq it was about 20sec. The memory allocated went up, from 12 million words to about 215 million words. You might gauge how high to set it according to how much physical memory your machine has. Using 32bit Linux Maple 11.00 on the same machine I got a time of about 22sec using gcfreq=[10^8,0.1]; the total bytesalloc was 320 million words.

Now, this may be ill-advised in general, but for your particular example I got an answer correct to within 1 digit in the last place (at Digits=10) by using `add` in the final numerical instantiation. And it only took a couple of seconds. Of course, in general one might do better to rely on evalf/Sum's techniques, and one might not know how many terms to add or how fast the excluded portion decays.
restart:

kernelopts(gcfreq=[3*10^8,0.1]):

ee := sum(sum(sum((1-p)^i/(a1*i+a1+a2*j+a2+a3*k+a3),
                  i = 0 .. infinity)*(1-p)^j,
              j = 0 .. infinity)*(1-p)^k,
          k = 0 .. infinity):

EE:=subs({sum=add,infinity=60},ee):
evalf(subs(a1 = .4, a2 = .1, a3 = .5, p = 1/3, EE));
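For comparison, the inert route mentioned above would look like the following. I have not timed this, and evalf/Sum may or may not handle the nested infinite sums quickly, but it avoids having to guess a truncation point.

restart:

ff := Sum(Sum(Sum((1-p)^i/(a1*i+a1+a2*j+a2+a3*k+a3),
                  i = 0 .. infinity)*(1-p)^j,
              j = 0 .. infinity)*(1-p)^k,
          k = 0 .. infinity):

evalf(subs(a1 = .4, a2 = .1, a3 = .5, p = 1/3, ff));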
acer
There may be other sneaky ways. This is in Maple 11.00 or 11.02.
> restart:
> ee:=p*(-1+p)*hypergeom([1, 1], [2], 1-p)*hypergeom([1], [], 2-p):
> convert(convert(convert(ee,MeijerG),`1F1`),hypergeom);

                        p hypergeom([1, 1], [2], 1 - p)

> restart:
> ee:=p*(-1+p)*hypergeom([1, 1], [2], 1-p)*hypergeom([1], [], 2-p):
> convert(ee,`0F1`);

                        p hypergeom([1, 1], [2], 1 - p)
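
One more sneaky way, leaning on the fact that hypergeom([1], [], z) is just 1/(1-z): substitute that closed form directly and let normal cancel the (p-1) factor.

> restart:
> ee:=p*(-1+p)*hypergeom([1, 1], [2], 1-p)*hypergeom([1], [], 2-p):
> normal( subs( hypergeom([1], [], 2-p) = 1/(p-1), ee ) );  # again p*hypergeom([1, 1], [2], 1-p)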

acer