These are replies submitted by acer

Too many new users go wrong with this. The goal may have been for implicit multiplication in 2D Math entry to follow some "natural" mode, but the clash between round brackets used as grouping delimiters and round brackets used for function application is a real issue.

There's a need for an explanation of the 2D Math implicit multiplication rules, on a help-page that is really easy to find (i.e. one that lots of useful aliased help-queries would lead to). The explanation should be as thorough as Doug's analysis. At present, the ?worksheet,documenting,2DMathDetails help-page is too hard to get to, and it is too thin on explanations of implicit multiplication in the presence of round brackets.

Maybe the system could detect some problematic instances and query the user as to the intention. Consider the separate case of function assignment: if one enters f(x):=x^2 in 2D Math mode, a dialogue pops up that lets the user specify whether a function definition or a remember-table assignment is intended. A similar approach could be implemented for problematic implicit multiplication situations, or the system could be made more robust, or the entire implicit multiplication scheme could be reconsidered altogether.

The mechanism offered by Typesetting:-Settings(numberfunctions = false) is too obscure. Also, it has an effect on 5.01(c) but not on (5.01)(c). It could also be more clearly documented that changing that setting doesn't affect copy-and-pasted expressions, which may not be re-parsed(?).
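
For anyone hunting for that mechanism, here is a minimal sketch of inspecting and toggling the setting. The name-only query form is my assumption about the Settings calling sequence, not something stated above.

> # current value (querying with just the option name is an assumed calling form)
> Typesetting:-Settings(numberfunctions);
> # have 5.01(c) parse as implicit multiplication rather than as application of
> # "5.01" to c; reportedly this does not affect copy-and-pasted expressions
> Typesetting:-Settings(numberfunctions = false):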

acer

From the ?proc help-page,

Implicit Local Variables
- For any variable used within a procedure without being explicitly mentioned
  in a local localSequence; or global globalSequence; the following rules are
  used to determine whether it is local or global:

  The variable is searched for amongst the locals and globals (explicit or
  implicit) in surrounding procedures, starting with the innermost. If the
  name is encountered as a parameter, local variable, or global variable of
  such a surrounding procedure, that is what it refers to.

  Otherwise, any variable to which an assignment is made, or which appears as
  the controlling variable in a for loop, is automatically made local.

  Any remaining variables are considered to be global.
An example to illustrate some of this follows,
> x := 3:

> f:= proc() local g, x;
>   x := 17;
>   g := proc() x; end proc;
>   :-x, x, g();
> end proc:

> f();
                                   3, 17, 17
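
The "automatically made local" rule can also be seen directly, since Maple warns when it applies. A small sketch (the exact warning wording is from memory and may vary by version):

> h := proc() y := 42; y; end proc:
Warning, `y` is implicitly declared local to procedure `h`

> h();
                                      42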

acer

It is interesting that verify gets this but the is command does not. It seems that verify gets it because signum gets it.

> signum( (X+w)^2 + Y^2)
>   assuming X::real, w>0, Y::real;
                                       1
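
For reference, the corresponding verify call (the one that "gets it") would be along these lines; the result shown is what one would expect, given the signum result above.

> verify( (X+w)^2 + Y^2, 0, 'greater_equal')
>   assuming X::real, w>0, Y::real;
                                     true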

I notice that both verify and signum fail on the expanded expression, likely because of the resulting 2*X*w cross term. It's possible that the is command is doing such an expansion internally.

> signum(expand((X+w)^2 + Y^2))
>   assuming X::real, w>0, Y::real;
                                 2            2    2
                         signum(X  + 2 X w + w  + Y )

> verify(expand((X+w)^2 + Y^2),0,'greater_equal')
>   assuming X::real, w>0, Y::real;
                                     FAIL
I have submitted this as a bug report.

acer

If you change the x-axis range to 0..1000, to represent thousandths of a second, then you have to accommodate that in the plot somehow. You could scale the functions, or you could simply adjust the tickmark values. In either case the axis label can be changed to milliseconds. The first pair of plots below scales the arguments of sin; the second pair keeps t in seconds and relabels the tickmarks.

f1,f2 := 3, 10: # frequencies, as cycles/second

plot(sin(f1*t/1000*2*Pi), t=0..1000,
     labels=[typeset(Unit(ms)), cycle],
     legend=[typeset(f1*Unit(Hz))]);

plot(sin(f2*t/1000*2*Pi), t=0..1000,
     labels=[typeset(Unit(ms)), cycle],
     legend=[typeset(f2*Unit(Hz))]);

plot(sin(f1*t*2*Pi), t=0..1,
     labels=[typeset(Unit(ms)), cycle],
     legend=[typeset(f1*Unit(Hz))],
     tickmarks=[[seq(i/5 = 1000*i/5, i=1..5)], default]);

plot(sin(f2*t*2*Pi), t=0..1,
     labels=[typeset(Unit(ms)), cycle],
     legend=[typeset(f2*Unit(Hz))],
     tickmarks=[[seq(i/5 = 1000*i/5, i=1..5)], default]);

acer

Inspired by Axel's post, just a little shorter,

> expr := ln(x)*ln(1-x)^2:

> combine(convert(expr,Sum,dummy=k)) assuming x>0, x<=1:

> sum(int(op(1,%),x=0..1),op(2,%));

                                    2
                                  Pi
                             -6 + --- + 2 Zeta(3)
                                   3

It could look nicer without the op() calls, if SumTools had exports that acted similarly to IntegrationTools:-GetRange and friends.
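
For comparison, here is a minimal sketch of the IntegrationTools accessors being alluded to, applied to an inert Int. Hypothetical SumTools analogues (say, a GetSummand and a GetRange) would let the op(1,%) and op(2,%) calls above be written more readably.

> J := Int(f(x), x = a .. b):

> IntegrationTools:-GetIntegrand(J);
                                     f(x)

> IntegrationTools:-GetRange(J);
                                    a .. b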

acer

The difference should be mostly in terms of memory free for your applications (like Maple), rather than in terms of (cpu) system load. You can check it out by experiment.

Boot the machine to console mode only (runlevel 2, say). Enter the command free and see how much memory is used/free. Then start X (startx, or reboot to runlevel 5 or whichever runlevel starts xdm). Again, issue free in an xterm, and compare how much memory is still available. This gives an idea of how much memory X and your window-manager and/or desktop (gnome, kde) are using together.

You can also use top and uptime to gauge the cpu resources and system load, in both console mode and in an xterm. You'll likely discover that X itself doesn't use meaningful amounts of cpu, and unless you are running some spiffy piece of eye-candy with graphical effects (all the time) you may well not be able to detect a significant system load.

The bottom line is that (constantly running graphical eye-candy aside) there should not be much difference in the baseline system load. (It would be a disaster for Linux if running X alone involved some significant cpu overhead.)

If your Maple computation isn't memory intensive, then commandline Maple should run pretty much the same in console mode as in an xterm. But if your Maple computation is huge and needs every last bit of physical memory you have (so as not to swap), then commandline Maple in console mode will do better. Only fractionally better, though, because X plus the desktop uses only a fraction of the total system memory. And if that is the relevant case, then maybe consider running 64-bit Maple and installing more RAM.

P.S. I used to run my symbolic Maple calculations in console mode, back when 8MB was a lot of RAM. Nowadays it doesn't make much difference.

acer

Quoting from the first paragraph of the Description section of the ?Optimization,NLPSolve help-page,

   Most of the algorithms used by the NLPSolve command assume
   that the objective function and the constraints are twice
   continuously differentiable. NLPSolve will sometimes succeed
   even if these conditions are not met.

So, yes, if the constraint is not continuous at the point in question, then there could be problems.

Keep in mind that these are "numerical" (here, floating-point) solvers working at a given precision. In rough terms, at any given fixed precision for floating-point arithmetic there is a non-zero quantity epsilon ("machine epsilon", for hardware double precision) for which x+epsilon is not arithmetically distinguishable from x. These issues are not specific to floating-point computation in Maple; they are more general. See here and here for some detail. There may be other ways to implement numerical optimization (using interval computation or some "validated" scheme). But this is why the feasibility and optimality tolerances are options to NLPSolve, so that, combined with the Digits setting, some programmatic control is available.
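
As a minimal sketch of that kind of programmatic control: the objective and constraint below are made up, and the tolerance option names are the ones I recall from the Optimization options help-page.

> Digits := 15:

> Optimization:-NLPSolve( (x-1)^2 + (y-2)^2, {x + y <= 2},
>                         feasibilitytolerance = 1e-10,
>                         optimalitytolerance = 1e-10 );
> # The expected result is a minimum of about 0.5, near x=0.5 and y=1.5.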

acer

The Maple "kernel", which does the computations, is quite separate from the interfaces. The choice of interface often doesn't really affect the computation, except through contention for memory and cpu cycles.

The Standard GUI itself is a Java application, and it uses quite a bit of memory just to start itself. The commandline (TTY) interface is very lightweight. The Java GUI uses more memory and cpu cycles if many plots and lots of 2D Math output get displayed. If your symbolic computation is so large that there isn't enough RAM to run both it and the interface without swapping, then things will run slowly.

However, if the computation's memory requirements even out reasonably over time (Maple's garbage collection working well) and there is little typeset output (no plots, most output suppressed by terminating statements with colons, little automatic scrolling, etc.), then there usually shouldn't be much difference at all between the runtimes in the two interfaces.

acer

Hmm.  Wouldn't it be better if the default value of currentdir() were the same as the result of kernelopts(homedir) when Maple is started from some icon launcher which doesn't specify the working location?

There are other values that would also be much more sensible than the current default. The present default appears to be kernelopts(mapledir), which is not a good default at all. There are likely many Maple users who are inadvertently saving documents to the Maple installation folder under Program Files. That doesn't seem very wise. (A file or subfolder crucial for Maple's proper operation might too easily be clobbered, for example.)
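
To see what a given session is doing, and to work around the default, something along these lines can be used (a sketch; the actual paths will of course differ per machine):

> kernelopts(homedir), kernelopts(mapledir);  # user home folder vs. the installation folder
> currentdir();                               # where saves currently land by default
> currentdir( kernelopts(homedir) );          # change it; returns the previous location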

acer
