MaplePrimes Activity


These are replies submitted by acer

It is interesting that verify gets this but the is command does not. It seems that verify gets it because signum gets it.

> signum( (X+w)^2 + Y^2)
>   assuming X::real, w>0, Y::real;
                                       1
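
For comparison, verify does succeed on the original unexpanded form:

> verify( (X+w)^2 + Y^2, 0, 'greater_equal' )
>   assuming X::real, w>0, Y::real;
                                     true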

I notice that both verify and signum fail on the expanded expression, likely due to the resulting 2*X*w cross term (whose sign is not determined by the assumptions on X and w alone). It's possible that the is command performs such an expansion internally.

> signum(expand((X+w)^2 + Y^2))
>   assuming X::real, w>0, Y::real;
                                 2            2    2
                         signum(X  + 2 X w + w  + Y )

> verify(expand((X+w)^2 + Y^2),0,'greater_equal')
>   assuming X::real, w>0, Y::real;
                                     FAIL

I have submitted this as a bug report.

acer

If you change the x-axis range to be 0..1000, to represent thousandths of a second, then you have to accommodate that in the plot somehow. You could scale the functions, or you could simply adjust the tickmark values. And the axis label could be changed to milliseconds.

f1,f2 := 3, 10: # frequencies, as cycles/second

# Approach 1: scale the argument, so that t runs over 0..1000 milliseconds
plot(sin(f1*t/1000*2*Pi), t=0..1000,
     labels=[typeset(Unit(ms)),cycle],
     legend=[typeset(f1*Unit(Hz))]);

plot(sin(f2*t/1000*2*Pi), t=0..1000,
     labels=[typeset(Unit(ms)),cycle],
     legend=[typeset(f2*Unit(Hz))]);

# Approach 2: keep t in seconds, but relabel the tickmarks in milliseconds
plot(sin(f1*t*2*Pi), t=0..1,
     labels=[typeset(Unit(ms)),cycle],
     legend=[typeset(f1*Unit(Hz))],
     tickmarks=[[seq(i/5=1000*i/5,i=1..5)],default]);

plot(sin(f2*t*2*Pi), t=0..1,
     labels=[typeset(Unit(ms)),cycle],
     legend=[typeset(f2*Unit(Hz))],
     tickmarks=[[seq(i/5=1000*i/5,i=1..5)],default]);

acer

Inspired by Axel's post; just a little shorter:

> expr := ln(x)*ln(1-x)^2:

> combine(convert(expr,Sum,dummy=k)) assuming x>0, x<=1:

> sum(int(op(1,%),x=0..1),op(2,%));

                                    2
                                  Pi
                             -6 + --- + 2 Zeta(3)
                                   3
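
For reference, the op calls just pick the summand and the range out of the Sum structure:

> S := Sum(a(k), k = 0 .. infinity):
> op(1, S), op(2, S);
                           a(k), k = 0 .. infinity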

It could look nicer without the op() calls, if SumTools had exports that acted similarly to IntegrationTools:-GetRange and friends.

acer

The difference should mostly be in terms of the memory left free for your applications (like Maple), rather than in terms of (cpu) system load. You can check it by experiment.

Boot the machine to console mode only (runlevel 2, say). Enter the command free and see how much memory is used/free. Then start X (with startx, or by rebooting to runlevel 5 or whichever runlevel starts xdm). Again, issue free in an xterm, and compare how much memory is still available. This gives you an idea of how much memory X and your window manager and/or desktop (gnome, kde) are using together.

You can also use top and uptime to gauge the cpu resources and system load, in both console mode and in an xterm. You'll likely discover that X itself doesn't use meaningful amounts of cpu, and unless you are running some spiffy piece of eye-candy with graphical effects (all the time) you may well not be able to detect a significant system load.

The bottom line is that (constantly running graphical eye-candy aside) there should not be much difference in the baseline system load. (It would be a disaster for Linux if running X alone involved some significant cpu overhead.)

If your Maple computation isn't memory intensive then commandline Maple should run pretty much the same in console mode as in an xterm. But if your Maple computation is huge and needs every last bit of physical memory you have (so as not to swap), then commandline Maple in console mode will do better, though only fractionally so, because X plus the desktop uses only a fraction of the total system memory. If that's the relevant case, then maybe consider running 64bit Maple and installing more RAM.

P.S. I used to run my symbolic Maple calculations in console mode, back when 8MB was a lot of RAM. Nowadays it doesn't make much difference.

acer

Quoting from the first paragraph of the Description section of the ?Optimization,NLPSolve help-page,

   Most of the algorithms used by the NLPSolve command assume
   that the objective function and the constraints are twice
   continuously differentiable. NLPSolve will sometimes succeed
   even if these conditions are not met.

So, yes, if the constraint is not continuous at the point in question, then there could be problems.

Keep in mind that these are numerical (here, floating-point) solvers working at a given precision. In rough terms, at any given fixed precision of floating-point arithmetic there is a non-zero quantity epsilon ("machine epsilon", for hardware double precision) for which x+epsilon is not arithmetically distinguishable from x. These issues are not specific to floating-point computation in Maple; they are quite general. See here and here for some detail. There may be other ways to implement numerical optimization (using interval computation or some "validated" scheme), but this is why the feasibility and optimality tolerances are options to NLPSolve: combined with the Digits setting, some programmatic control is available.
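
For example, a minimal sketch (feasibilitytolerance and optimalitytolerance are documented options of the Optimization package, though not every method makes use of both; the abs term makes this objective not twice continuously differentiable at the origin):

Digits := 15:
Optimization:-NLPSolve( (x-1)^2 + abs(x), x = -2..2,
                        feasibilitytolerance = 1e-8,
                        optimalitytolerance = 1e-9 );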

acer

The Maple "kernel", which does the computations, is quite separate from the interfaces. The choice of interface often doesn't really affect the computation except in the contention for memory and cpu cycles.

The Standard GUI itself is a Java application, and uses quite a bit of memory to start just itself. The commandline (TTY) interface is very lightweight. The Java GUI uses more memory and cpu cycles if many plots and lots of 2D Math output get displayed. If your symbolic computation is so large that there's not enough RAM to run both it and the interface without swapping, then things will run slowly.

However, if the computation's memory requirements even out reasonably over time (Maple's garbage collection working well), and if there is little typeset output (no plots, most output suppressed with full colons, little automatic scrolling, etc.), then there usually shouldn't be much difference at all in runtime between the two interfaces.
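
For example, a full colon suppresses the typeset display of a large result in either interface:

M := LinearAlgebra:-RandomMatrix(2000):  # colon, so the 4 million entries are never displayed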

acer

Hmm.  Wouldn't it be better if the default value of currentdir() were the same as the result of kernelopts(homedir) when Maple is started from some icon launcher which doesn't specify the working location?

There are other values that would also be much more sensible than the current default. The present default appears to be kernelopts(mapledir), which is not a good default at all. There are likely many Maple users who are inadvertently saving documents to the Maple installation folder under Program Files. That doesn't seem very wise. (A file or subfolder crucial for Maple's proper operation might too easily be clobbered, for example.)
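
One can always query and reset it manually. For example,

currentdir();                      # returns the current working directory
currentdir(kernelopts(homedir));   # sets it to the home directory, returning the previous value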

acer

I hope that these have the correct form and do what you intended. (Note: I haven't done anything special about the conditional cases that you had; in fact, I just commented that part out, since you say the particular function is not material and you mainly want to know how to use NLPSolve with procedures.)

# First, using "Matrix Form"
# note: This didn't work well, unless I provided the gradient of f.
#       Also, threw in the jacobian of the constraint proc, for fun.
 
f := proc(x::Array)
  #if (x[1]^2+x[2]+x[3]+x[4] < 15 and x[1]^2+x[2]+x[3]+x[4] > 10) then
     2*x[1]^2+3*x[2]+4*x[3]+2*x[4];
  #else
  #   1/x[1]+x[2]^2+x[3]+x[4]^2;
  #end if;
end proc:
 
fgrad := proc(x::Array, xd::Array)
  # Acts in-place on xd, so returns NULL
  #if (x[1]^2+x[2]+x[3]+x[4] < 15 and x[1]^2+x[2]+x[3]+x[4] > 10) then
    xd[1] := 4*x[1];  # gradient of 2*x[1]^2+3*x[2]+4*x[3]+2*x[4]
    xd[2] := 3;
    xd[3] := 4;
    xd[4] := 2;
  #else
  #  xd[1] := -1/x[1]^2;
  #  xd[2] := 2*x[2];
  #  xd[3] := 1;
  #  xd[4] := 2*x[4];
  #end if;
  NULL;
end proc:
 
pcons := proc(X::Vector, Y::Vector)
  # Acts in-place on Y, so returns NULL
  Y[1] := add(X[i],i=1..4) - 20;
  NULL;
end proc:
 
pconsjac := proc(X::Vector, J::Matrix)
  # Acts in-place on J, so returns NULL
  J[1,1] := 1; # del pcons_Y[1] / del X[1]
  J[1,2] := 1;
  J[1,3] := 1;
  J[1,4] := 1; # del pcons_Y[1] / del X[4]
  NULL;
end proc:
 
bds := [Vector([1,1,1,1]),Vector([5,5,5,5])]:
 
Optimization:-NLPSolve(4,f,1,pcons,[],bds,initialpoint=Vector([1,1,1,1]),
                       objectivegradient=fgrad,constraintjacobian=pconsjac,
                       maximize);
 
# Second, using "Procedure Form"
 
F := proc(x1,x2,x3,x4)
  #if (x1^2+x2+x3+x4 < 15 and x1^2+x2+x3+x4 > 10) then
     2*x1^2+3*x2+4*x3+2*x4;
  #else
  #   1/x1+x2^2+x3+x4^2;
  #end if;
end proc:
 
b1 := proc(x1,x2,x3,x4) x1+x2+x3+x4 - 20; end proc:
 
bds := 1..5,1..5,1..5,1..5:
 
Optimization:-NLPSolve(F,{b1},bds,initialpoint=[1,1,1,1],maximize);

acer

I could envision a new user falling into this kind of trap.

> f := proc(x) if signum(x)=1 then cos(x) else -cos(x) end if; end proc:

> evalf(Int(f,-Pi/2..Pi/2)); # correct
                                      0.
 
> int(f,-Pi/2..Pi/2); # wrong
                                      -2
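
What is likely happening here is that int evaluates the operator at a symbolic name, for which the test signum(x)=1 evaluates to false, so only the else branch is ever seen:

> f(x);
                                   -cos(x)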

It might be better if int(f,a..b) and value(Int(f,a..b)) issued errors.

Evaluating a procedure at a dummy name, and then computing with the resulting expression, is a poor way to mimic functionality over operators.

fsolve does something similar, and it too can be quite easily fooled. It'd be better to leave operators alone, and to treat them as black boxes for numerical computation.

acer
