MaplePrimes Activity


These are answers submitted by acer

Would you accept using T::ppositive, where ppositive is a new type?

Something like this, perhaps.

TypeTools:-AddType( ppositive,
  x->evalb(type(x,positive) or is(x,positive)=true) );
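
For instance (just a sketch; the procedure g and its arguments below are made-up examples), the new type can then be used directly in a parameter declaration:

g := proc(T::ppositive) T^2 end proc:
g(3);        # accepted, since type(3, positive) is true
g(sqrt(2));  # accepted, since is(sqrt(2), positive) returns true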

acer

If one could copy & paste your ODE into Maple, as displayed in your post here on MaplePrimes, then it would be easier to help.

But the ODE appears as an image, in my Firefox, served from some MapleNet server.  Does anyone here know how to get that into a new Maple session, without having to enter it all by hand?

acer

Here's an example.

sleep := proc(delay::nonnegative,expr::uneval,$)
local x, endtime;
   print(expr);   # show the (still unevaluated) expression
   if kernelopts(':-platform') = "unix" then
      # hand the delay off to the operating system
      ssystem(sprintf("sleep %d",delay));
   else
      # otherwise, spin until `delay` more seconds have been used up
      endtime := time() + delay;
      while time() < endtime do end do;
   end if;
   # finally, force evaluation of the delayed expression and return it
   seq(eval(x), x in args[2..-1]);
end proc:
                                                                                
sleep(5,int(x,x));

It could be even nicer if the seq modifier described on the help-page ?parameter_modifiers allowed a sequence of uneval parameters. (Ambiguity could be resolved by having each argument taken separately and not as a sequence.)

Sidebar: if system/ssystem calls are disabled only via the Tools->Options dialog of the Maple 11 Standard GUI, then attempting to use them (like above, for Unix) results in a Warning but not necessarily an Error. But attempting to use system calls after -z is passed as an option to the Maple launcher (TTY or Standard) results in an Error.

acer

How about creating new procedures?

Sin := x->sin(Pi*x/180);
plot(Sin,0..360);

Or you could create and load a module which defines its own routines, so that the function names could appear the same.

my_trig := module()
option package;
export sin, cos;
  sin := x -> :-sin(Pi*x/180);
  cos := x -> :-cos(Pi*x/180);
end module:

with(my_trig);

plot(evalf@sin,0..360);
plot(sin(t),t=-180..180);
plot(sin,0..360); # bug, due to what??

You could add more routines to that module, even though above I only put in sin and cos. Simply define and export whichever you want.
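
For instance (just a sketch of the same pattern), a degree-based tan could be added alongside:

my_trig := module()
option package;
export sin, cos, tan;
  sin := x -> :-sin(Pi*x/180);
  cos := x -> :-cos(Pi*x/180);
  tan := x -> :-tan(Pi*x/180);
end module: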

If you feel really brave, you could also try the next trick below. I don't particularly recommend it, because apart from the danger of unprotecting sin, it doesn't gain you much. It even seems to suffer from the same problem as the module, for the last example.

restart:

Sin:=copy(sin):
unprotect(sin):
sin:=x->Sin(Pi*x/180):

plot(evalf@sin,0..360);
plot(sin(t),t=-180..180);
plot(sin,0..360); # again, why?

acer

The z in op(2,a) is an escaped local variable name.

You should be able to use convert(op(2,a),`global`) instead.

a:=FunctionAdvisor( sum_form, tan);
convert(tan(z),Sum) assuming convert(op(2,a),`global`);
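
To see what an escaped local is, here is a small illustration (the procedure p below is just a made-up example):

p := proc() local z; z; end proc:
v := p():                           # v now holds an escaped local named z
evalb( v = z );                     # false: it is not the global name z
evalb( convert(v, `global`) = z );  # true, after conversion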

acer

Your instincts are right. It can be done using seq(), and a loop which looks like,

S := S, ...

is to be avoided.
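
For instance (with j^2 as a stand-in for whatever you are actually building), instead of growing the sequence in a loop,

S := NULL:
for j to 3 do S := S, j^2 end do:
S;

you can produce the same thing in one shot, without the repeated copying,

S := seq( j^2, j=1..3 );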

But I ought to ask, do you really need to create all 10 operators, which act by evaluating a piecewise?

Or do you just need that f[i](j) returns the value in M[i,j], or possibly that f[i](y) returns M[i,trunc(y)]?

There are two parts to the above question. The first is this: do you really need 10 operators, f[1]..f[10] (where f was presumably just a Maple table), or would it suffice to have one procedure f which can be called in that indexed manner?

The second is this: since the operators (10 individual ones, or 1 indexed proc, no matter) are apparently only going to be used to evaluate a piecewise (in x) at x=y, do you really need to use piecewise constructs for that at all? They are, relatively speaking, expensive to evaluate at a specific point.

Some ideas follow, which you might find useful.

M := LinearAlgebra:-RandomMatrix(10,18):

f := proc(y)
#global M; # or by lexical scoping
local i;
  # when called as f[i](y), procname is the indexed name f[i],
  # so op(1,procname) recovers the row index i
  if type(procname,indexed) and type(op(1,procname),posint) then
    i:=op(1,procname);
  end if;
  M[i,trunc(y)];
end proc:

f[2](1);
f[2](1.3);
M[2,1];

Perhaps you really did need what you described, for reasons other than just the returned value M[i,j]. You noticed that this invocation below doesn't work, since seq() sees all the arguments flattened out (and hence invalid).

seq( j < x, MM[j,j]*`if`(j=1,0,M[i,j]), j=1..3 );

But maybe you could do something like this,

B := j < x, MM[j,j]*`if`(j=1,0,MM[i,j]);
seq( B, j=1..3 );

acer

Very nice, Axel.

But it seems to me that you have helped Maple out more than you should have had to, by "giving" it the FTOC result.

How about this: instead of,

D(F)(x): int(%,x=0..t) = F(t) - F(0);

you could have,

D(F)(x): Int(%,x=0..t) = int(%,x=0..t,continuous);

so that Maple does more of the work.

 acer

You forgot to define beta[1] as some value times Unit(deg).

By the way, I really prefer Units[Standard] to Units[Natural]. The Units[Natural] environment robs the global namespace of too many common symbols which one might want otherwise to use as variables.
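
A quick illustration of that symbol-grabbing (nothing here is from your worksheet):

> restart:
> with(Units[Natural]):
> m + m;   # m is now the metre, no longer available as a plain variable name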

> restart:

> with(Units[Standard]):

> alpha[1]:=4.25 * Unit(deg):
> theta[1]:=11.31 * Unit(deg):
> theta[2]:=78.69 * Unit(deg):
> alpha[2]:=-9.25 * Unit(deg):
> a:=1400.00:
> b:=509.90:

> beta[1]:=33.33 * Unit(deg): # I made that up.

> -b*sin(alpha[2]+alpha[1]-beta[1]-theta[2])+b*sin(beta[1]-theta[2]);

                              91.4313519

acer

Type cs\_e instead of cs_e to get it in 2D Math input.

acer

Check this for correctness. Since (3) is missing in the scan, some deduction was necessary. And sometimes I make mistakes.

The request is to make ETsolver accept a general function f of two arguments (which returns dy/dx). That's fine. But why embed into ETsolver the initial condition y[1]=a when that is only true for the particular supplied initial condition y(0)=0 ? And why not also let ETsolver accept a general endpoint b, instead of hard-coding it to only compute an approximation for y(1)?

> ETsolver := proc( f, a, y_a, h, b)
> local N,y,n;
> N := trunc( (b-a)/h );
> y[1]:=y_a:   # the initial value y(a)
> for n from 1 to N do
>   # one trapezoidal (improved Euler) step, from x = a+(n-1)*h to x = a+n*h
> y[n+1]:=y[n]+h/2 * ( f(a+(n-1)*h,y[n]) + f(a+n*h,y[n]+h*f(a+(n-1)*h,y[n])) );
> end do;
> return y[N+1];
> end proc:

> ETsolver( (x,y)->x+y, 0, 0, 0.1, 1);
0.7140808465

> ETsolver( (x,y)->x+y, 0, 0, 0.01, 1);
0.7182368623

> ETsolver( (x,y)->x+y, 0, 0, 0.001, 1);
0.7182813769
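
For comparison, those values are converging to the right thing. The exact solution of y' = x + y with y(0)=0 is y(x) = exp(x) - x - 1 (dsolve will confirm this), so y(1) = exp(1) - 2, approximately 0.7182818.

> dsolve( {diff(y(x),x) = x + y(x), y(0)=0}, y(x) );

> evalf( exp(1) - 2 );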

Like I mentioned before, the key to understanding this algorithm is to draw the exact curve, and the trapezoid with base from x[n] to x[n+1]=x[n]+h , and to see how it relates (as an approximation) to the fundamental theorem of Calculus.

acer

What have you got, so far, with this?

Do you expect complete solutions to your assignment's exercises, without making the major contribution yourself? If not, then please just post your instructor's email address so that we can mail in our solutions directly.

It's asking you to write a procedure that implements Euler's method for numerical integration.

That procedure will accept input function f, initial point a, and stepsize h. You could also make it accept final target point b, so as to more generally be able to approximate y(b) instead of y(1).

It refers to equation (3), which is not shown. It's pretty clear, though, that equation (3) is equivalent to (Maple syntax),

y(1) = Int( f(x), x=a..1 );

or

y(b) = Int( f(x), x=a..b );

The procedure could figure out how many steps of length h it will take to get from point a to point 1 (or b). It needs to run a loop, for each such step. Each time through the loop, it will compute y[n+1] using y[n]. It starts off with y[1]:=a .

Here's a hint. To get from the initial point x=a to the final point x=b you need to take N steps of size h = (b-a)/N. So you can deduce that N = (b-a)/h, and hence how many times to go through the loop.

Here's another hint. Try it without a loop, for practice. Try to get it to compute y[2] . From your equation (4) that should equal y[1] + h/2 * (F[1,n]+F[2,n]) .
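
For instance, a single hand-done step might look like the sketch below (f, a, and h are made up here, and F1 and F2 are my guesses at what your F[1,n] and F[2,n] denote -- the two slope evaluations in equation (4)):

f := (x,y) -> x + y:    # a made-up example of the slope function
a := 0:  h := 0.1:
y1 := a:                       # starting value, as in y[1]:=a above
F1 := f(a, y1):                # slope at the left endpoint x = a
F2 := f(a + h, y1 + h*F1):     # slope at the Euler-predicted point x = a + h
y2 := y1 + h/2 * (F1 + F2);    # this is y[1] + h/2*(F[1,n]+F[2,n])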

Try to draw it by hand, as a picture on paper. What do the F[1,n] and F[2,n] mean? Where is the trapezoid? Is the area of the trapezoid almost equal to the area under the curve f between x=a and x=a+h ? Draw it all.

acer

What else can you say about the nonlinear equations? For example, are they (or their subsets that you created) multivariable polynomials?

Are your data sets exact or floating-point approximations, or formulas involving mixed data? Would you consider conversion of float data to exact rational prior to systematic separation of equations?

It sounds like a dimensional effect, as you describe it. For systems of equations in more than one variable, fsolve uses Newton's method. With many variables, and hence a much "larger" space, it often becomes more and more unlikely that a given starting point will converge. I don't think that fsolve adjusts its number of attempts (distinct initial points) according to the size of the system, and there's no option to specify the number of attempts. So, yes, as the size of the system gets large it might become less and less likely that it will find an approximate root.

Your idea of separating and reducing the system sounds good, then. But, do you do it "by hand"? Maple's `solve` routine might help you automate it. Or you might be able to use Groebner, depending on the type of equations.
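
For instance, if the equations are polynomial then a lexicographic Groebner basis can sometimes do the separation automatically, producing a "triangular" set in which some members involve fewer variables. (A toy sketch; sys is made up, not your system.)

sys := [ x^2 + y^2 - 4, x*y - 1 ]:
Groebner:-Basis( sys, plex(x,y) );   # one basis element is a polynomial in y alone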

It's possible that someone might be able to do something more with it, if you can upload the equations and some data values to this site.

acer

Please post the example, the actual polynomial. (If you'd like, you can lprint it and put that up here in a reply.)

The fsolve routine is at least sometimes capable of doing what you'd like. In Maple 8, 10, and 11 the example below returns a result with the multiple root 3.0 shown twice.

p := expand( (x-3)^2 * (x+4)^3 );
fsolve(p,x=2.5..3.5);

Maybe there's something trickier going on with your example. It might be hard for some here to pin down what that might be, if it is specific to Maple 8. Or it could just be something simple, such as all your roots getting put into a Maple set at some point (which would "uniquify" them).
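
For example, this is all it takes for the repeated root to quietly disappear (using the same p as above):

sols := fsolve(p, x=2.5..3.5);   # the multiple root, listed twice
{sols};                          # as a set, it appears only once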

Here's a routine that should return the roots of a univariate polynomial which lie in a complex box. The box is specified by lower left corner `a` and upper right corner `b`. In your case, a and b would be the endpoints of a range on the real line. This is similar to how Matlab's `roots` function works, I believe. It should show the multiplicity.

R := (p, a, b) -> select(
    x -> evalb(
        Re(x) <= Re(b) and Im(x) <= Im(b)
        and Re(a) <= Re(x) and Im(a) <= Im(x)),
    convert(LinearAlgebra:-Eigenvalues(
              evalf[trunc(evalhf(Digits))](
                  LinearAlgebra:-CompanionMatrix(p))),
            list));

That routine R won't round or polish the roots, remove small imaginary components, or remove duplicates, etc. That's on purpose, so that it can be you who judges the roots.
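
If you later decide that you do want them tidied up, something along these lines might do (just a sketch; the slightly widened complex box and the 1.0e-7 tolerance are arbitrary choices of mine):

raw := R( p, 2.5 - 0.1*I, 3.5 + 0.1*I ):   # widen the box a little, since the
                                           # computed roots may have tiny
                                           # imaginary parts
map( z -> `if`( abs(Im(z)) < 1.0e-7, Re(z), z ), fnormal(raw) );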

y:=randpoly(x,degree=100):
fsolve(y,x=-10..10);
R(y,-10,10);

 You might also be interested in searching the web for Bezout's theorem, roots, and Companion matrix. Here's one link that might interest you.

acer

Please post your example, and maybe it could be improved.

acer

First, consider creating an operator which simplifies under your given equation.

use_assump := x -> simplify(simplify(x,{a^2+b^2=1})):

Now let's look at this Matrix,

m := Matrix([[a^2+b^2-1,1,0],[1-a^2-b^2,1,0],[1,0,1]]);

with(LinearAlgebra):
evals,evecs := Eigenvectors(m);

If we now apply the simplification to the eigenvectors Matrix (stored as columns of evecs) then it goes wrong. Presumably this is because the NullSpace computation accidentally used as a pivot something that was really zero (e.g. an element like 1-a^2-b^2). It fails to simplify the eigenvectors, because they have hidden zeros in the denominators.

map(use_assump, evecs);

One possibility is to apply the simplification to Matrix m prior to computing the eigenvectors. That works here, but just by luck, I think. For another example it too could go wrong as above. But I give it next, anyway.

evals,evecs := Eigenvectors(map(use_assump,m));
map(use_assump, m.evecs - evecs.DiagonalMatrix(evals));

Now, what if the NullSpace computation itself could use the assumption, and so avoid selecting a "hidden" zero as any pivot? One might hope that NullSpace uses Testzero & Normalizer just as LUDecomposition does, and that this benefit carries over to Eigenvectors too.

Normalizer := use_assump:
evals,evecs := Eigenvectors(m);
map(use_assump, m.evecs - evecs.DiagonalMatrix(evals));
Normalizer := normal:

So, maybe that is an OK method.

acer