acer

MaplePrimes Activity


These are answers submitted by acer

Here are some examples.

Adjust the various options as desired.  See also the Help page for the topic plot,options (in Maple's own Help, or online).

restart;

sol7:= dsolve({D(N)(t) = N(t), N(0)=1}, type=numeric):

 

plots:-display(
  plots:-textplot([4.4,120,N(t)]),
  plots:-odeplot( sol7, [t, N(t), color = red,
                  thickness = 2, linestyle = dot], 0 .. 5),
  labels = [t,N(t)], size=[475,400] );

plots:-odeplot( sol7, [t, N(t), color = red,
                thickness = 2, linestyle = dot], 0 .. 5,
                legend = N, size = [450,400] );

plots:-odeplot( sol7, [t, N(t), color = red,
                thickness = 2, linestyle = dot], 0 .. 5,
                labels = [t,N(t)], legendstyle = [location=right],
                legend = N(t), size = [575,400] );

plots:-display(
  plots:-textplot([4.4,120,N(t)]),
  plots:-odeplot( sol7, [t, N(t), color = red,
                  thickness = 2, linestyle = dot], 0 .. 5),
  labels = [t,""], size=[475,400] );

 

Download ode_leg_lab_ex1.mw

Here is a somewhat crude approach to your problem, using Maple 18.02.

There are several ways to make it fancier, or more general (but I only had 10 minutes for it).

I made very little effort to make it efficient.

restart;

f := z -> (z^2-1)^2;

proc (z) options operator, arrow; (z^2-1)^2 end proc

df := D(f);

proc (z) options operator, arrow; 4*(z^2-1)*z end proc

R := proc(re,im) local z;
   # return unevaluated on non-numeric input, so symbolic arguments pass through
   if not [re,im]::list(numeric) then
     return 'procname'(args);
   end if;
   z := re+I*im;
   # one Newton step for f(z)=0, done in fast hardware floats
   z := evalhf( z - f(z)/df(z) );
   Re(z),Im(z);
end proc:

T := proc(re,im,N::posint:=20,eps::float:=1e-5) local z;
   if not [re,im]::list(numeric) then
     return 'procname'(args);
   end if;
   # apply N Newton steps, then classify which root (if any) was approached
   z := (R@@N)(re,im);
   z := z[1] + I*z[2];
   if abs(z - 1.0) < abs(eps) then
     1.0;               # converged to the root +1
   elif abs(z + 1.0) < abs(eps) then
     -1.0;              # converged to the root -1
   else 0.0; end if;    # no convergence to either root
end proc:

#plot3d(T(x,y,30), x=-2..2, y=-2..2, grid=[151,151],
#       orientation=[-90,0,0], style=surface, lightmodel=none,
#       colorscheme=["zgradient",[green,black,red]]);

plots:-densityplot(T(x,y,30), x=-2..2, y=-2..2,
       grid=[151,151], style=surface,
       colorscheme=["zgradient",[green,black,red]]);

plots:-densityplot(T(x,y,30,1e-8), x=-2..2, y=-2..2,
       grid=[151,151], style=surface,
       colorscheme=["zgradient",[green,black,red]]);

 

Download simp_newt.mw

[edit] Here is some rough code that shades by varied intensity, according to how many iterations it took to get below a threshold value.

(It's not very robust as far as adjusting the maximum-iteration or tolerance values goes, but it seems OK for these values. Sorry, I don't have time to improve it.)

restart;

f := z -> (z^2-1)^2;

proc (z) options operator, arrow; (z^2-1)^2 end proc

df := D(f);

proc (z) options operator, arrow; 4*(z^2-1)*z end proc

R := proc(re,im) option remember;
   local z;
   if not [re,im]::list(numeric) then
     return 'procname'(args);
   end if;
   z := re+I*im;
   z := evalhf( z - f(z)/df(z) );
   Re(z),Im(z);
end proc:

Hfun := proc(re,im)
  local i, lre, lim;
  if not [re,im]::list(numeric) then
     return 'procname'(args);
  end if;
  (lre,lim) := re,im;
  for i from 1 to 30 do
    (lre,lim) := R(lre,lim);
  end do;
  if abs(lre+I*lim + 1.0) < 1e-8 then
    0.4;
  elif abs(lre+I*lim - 1.0) < 1e-8 then
    0.0;
  else 0.7; end if;
end proc:
Vfun := proc(re,im)
  local i, lre, lim;
  if not [re,im]::list(numeric) then
     return 'procname'(args);
  end if;
  (lre,lim) := re,im;
  for i from 1 to 30 do
    (lre,lim) := R(lre,lim);
    if abs(lre+I*lim + 1.0) < 1e-8
     or abs(lre+I*lim - 1.0) < 1e-8 then
      return abs(1-ln(i/19));
    end if;
  end do;
  return 0.0;
end proc:

plot3d(1, x=-2..2, y=-2..2, grid=[140,140],
       orientation=[-90,0,0], style=surface, lightmodel=none,
       color=[Hfun(x,y),1.0,Vfun(x,y),colortype=HSV]);

 

Download simp_newt_alt.mw

ps. Apart from not having much spare time, I apologize, but I'm not much inclined to make it more general for the requirement that the Maple version be 9 years and 9 major releases out of date.

Here are two ways:

restart;

eq := -((2*(alpha[3]+alpha[2]))*((1/2)*cos(Theta(z, t))^2-(1/2)*sin(Theta(z, t))^2)+(alpha[3]-alpha[2])*(sin(Theta(z, t))^2+cos(Theta(z, t))^2))*(diff(U(z, t), z))-(alpha[3]-alpha[2])*(2*sin(Theta(z, t))^2*(diff(Theta(z, t), t))+2*cos(Theta(z, t))^2*(diff(Theta(z, t), t)))+(diff(Theta(z, t), z, z))*k;

-(2*(alpha[3]+alpha[2])*((1/2)*cos(Theta(z, t))^2-(1/2)*sin(Theta(z, t))^2)+(alpha[3]-alpha[2])*(sin(Theta(z, t))^2+cos(Theta(z, t))^2))*(diff(U(z, t), z))-(alpha[3]-alpha[2])*(2*sin(Theta(z, t))^2*(diff(Theta(z, t), t))+2*cos(Theta(z, t))^2*(diff(Theta(z, t), t)))+(diff(diff(Theta(z, t), z), z))*k

simplify(simplify(eq),{coeff(eq,diff(U(z,t),z))=t1});

(diff(diff(Theta(z, t), z), z))*k+(2*alpha[2]-2*alpha[3])*(diff(Theta(z, t), t))+(diff(U(z, t), z))*t1

algsubs(coeff(eq,diff(U(z,t),z))=t1,simplify(eq));

(diff(diff(Theta(z, t), z), z))*k+(2*alpha[2]-2*alpha[3])*(diff(Theta(z, t), t))+(diff(U(z, t), z))*t1

Download subst_ex4.mw

ps. I removed this one from my worksheet, because it seemed unnecessarily complicated; the collect call is not needed.
   collect(simplify(eq),diff,
          u->simplify(u,{coeff(eq,diff(U(z,t),z))=t1}));

[edit] other possibilities include,
    simplify(subs(coeff(eq,diff(U(z,t),z))=t1,
                collect(eq,diff)));
  simplify(map[2](subs,coeff(eq,diff(U(z,t),z))=t1,
                  collect(eq,diff)));

The lhs of your equation (hidden input) is entered as,
   Int(E*r, r = 0 .. 2*sigma, varphi = 0 .. 2*Pi)
instead of as,
   Int(E*r, [r = 0 .. 2*sigma, varphi = 0 .. 2*Pi])

The GUI's Typesetting system is not pretty-printing the former in Maple 2023.0 and Maple 2022.2. Within Maple 2021.2 (and earlier, say 18.02) it does get pretty-printed as you're expecting.

Note that in Maple 2023.0, even when it is not pretty-printed as you'd like, value still works on it. The former is not actually one of the documented calling sequences of int, as far as I can see, even though it is accepted.

Fwiw, the command IntegrationTools:-CollapseNested seems to turn the former into the latter, programmatically.
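For instance, here is a small sketch following that observation (the final value is just the by-hand result, treating E and sigma as plain symbols):

restart;
J := Int(E*r, r = 0 .. 2*sigma, varphi = 0 .. 2*Pi):   # the form in your hidden input
K := IntegrationTools:-CollapseNested(J);              # the list-of-ranges form
value(K);                                              # 4*Pi*E*sigma^2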

There is a typical issue with nested numeric computations, where the outer process might not be able to attain some target tolerance because the inner process is not guaranteeing correspondingly adequate accuracy.

For example, suppose the inner process guarantees only 6 digits of accuracy while the outer process has a "convergence acceptance" criterion involving 8 places of accuracy. The effectively random 7th and 8th digits in the inner results then prevent the outer process from successfully converging.

This kind of thing can happen when you do fsolve of a dsolve process, Optimization of a dsolve process, fsolve of an evalf/Int process, evalf/Int of some other numeric process, and so on.

Even a rough understanding of the issue can help in such situations (as opposed to flailing around). But it can get tricky. It can get even more complicated if there are both relative and absolute accuracy tolerances in play.

Sometimes the simplest thing is just to crank up the working precision of the inner process, eg. Allan's suggestion to set Digits:=18 in ss. That may bring about enough improvement in the inner results' accuracy that the outer process can also meet its own tolerances. But sometimes this can make the whole computation too slow.

Sometimes unfortunate choices of tolerances (inner and outer) can still produce results successfully, just more slowly than with more fortunate choices.

Being able to relax the accuracy of the outer process while retaining, say, the compiled/evalhf hardware double precision speed of inner processes is often desirable.

For these reasons knowing the required accuracy of the outer process is a very good place to begin. Alternatively, if there is a cap on the working precision of the inner process (eg. it requires hardware double-precision speed, etc), then one might need to accept a less accurate outer result -- according to the given problem.

nb. There are also some situations where a final step of rounding the inner result to a specific number of digits (thereby removing all inaccurate digits) can additionally help with this kind of thing. I didn't do that here.
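As a tiny arithmetic sketch of that 6-versus-8 digit mismatch (the particular numbers here are only illustrative):

inner := evalf[6](Pi):                          # an "inner" result good to only 6 digits
evalb( abs(inner - evalf[15](Pi)) < 1e-8 );     # false: its error is already about 2.7e-6
evalb( abs(inner - evalf[15](Pi)) < 1e-5 );     # true: an outer acceptance test at 1e-5 can be met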

Here is an adjustment that produces results for your three cases. Experiment and adjust as necessary. You indicated that you'd rather not raise the working precision of the inner process (involving dsolve/numeric), so instead I've relaxed the optimalitytolerance of NLPSolve.

restart:

Digits:=15;

15

 

eqodes:=[diff(ca(t),t)=-(u+u^2/2)*1.0*ca(t),diff(cb(t),t)=1.0*u*ca(t)-0.1*cb(t)];

[diff(ca(t), t) = -1.0*(u+(1/2)*u^2)*ca(t), diff(cb(t), t) = 1.0*u*ca(t)-.1*cb(t)]

soln:=dsolve({op(eqodes),ca(0)=alpha,cb(0)=beta}, type=numeric,
             'parameters'=[alpha,beta,u],
              #abserr=1e-15,
              relerr=1e-11,
              compile=true, savebinary=true):

 

ss := proc(x)
  local z1, n1, i, c10, c20, dt, u;
  global soln, nvar;
  interface(warnlevel=0);
  #if type(x[1],numeric)
  if type(x, Vector) then
    # integrate the parametrized IVP over nvar subintervals of length dt,
    # using x[i] as the control value u on the i-th subinterval and feeding
    # the endpoint values back in as the next initial conditions
    dt := evalf(1.0/nvar);
    c10 := 1.0;  c20 := 0.0;
    for i from 1 to nvar do
      u := x[i];
      soln('parameters'=[c10, c20, u]);
      z1 := soln(dt);
      c10 := subs(z1, ca(t));  c20 := subs(z1, cb(t));
    end do;
    # return the negative of the final cb value (NLPSolve minimizes)
    -c20;
  else
    'procname'(args);
  end if;
end proc:

 

nvar:=3;#code works for nvar:=3, but not for nvar:=2

3

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(3, {(1) = .1, (2) = .1, (3) = .1})

ss(ic0);

HFloat(-0.09025791391609851)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(3, {(1) = 0., (2) = 0., (3) = 0.})

Vector[column](%id = 36893628421116063364)

infolevel[all]:=1:
infolevel[Optimization]:=3:

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-8);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 3

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-7

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 17

[-.531453352625634534, Vector(3, {(1) = .8114396007818414, (2) = 1.1894022100912927, (3) = 2.4277492065842683})]

nvar:=2;#code works for nvar:=3, but not for nvar:=2

2

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(2, {(1) = .1, (2) = .1})

 

ss(ic0);

HFloat(-0.09025792303565874)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(2, {(1) = 0., (2) = 0.})

Vector[column](%id = 36893628421005739900)

infolevel[all]:=1:
infolevel[Optimization]:=3:

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-8);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-7

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 12

[-.523901131526401165, Vector(2, {(1) = .9099315182152206, (2) = 1.9272308847323274})]

 

nvar:=5;#code works for nvar:=3, but not for nvar:=2

5

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(5, {(1) = .1, (2) = .1, (3) = .1, (4) = .1, (5) = .1})

 

ss(ic0);

HFloat(-0.09025790901672268)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(5, {(1) = 0., (2) = 0., (3) = 0., (4) = 0., (5) = 0.})

Vector[column](%id = 36893628421097096604)

infolevel[all]:=1:
infolevel[Optimization]:=3:

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-8);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 5

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-7

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 30

[-.537338841361480024, Vector(5, {(1) = .7435701450990695, (2) = .9013559679138707, (3) = 1.1487492967651152, (4) = 1.62908537333248, (5) = 3.222832756435604})]

 

Download test_VS1_ac.mw

The OP mentioned fdiff. If the internal method NLPSolve uses to compute numerical derivatives is not accurate enough here (it could be a weak link in the nesting of numeric processes), then NLPSolve can be passed a user-defined procedure for the objective gradient or constraint Jacobian. That might leverage dsolve/numeric itself as a more robust mechanism for that part of the computation. For convenience, sometimes the DE system can be augmented for more convenient and direct access to derivative results. A rough sketch follows.
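Here is a rough sketch of what that might look like for the worksheet above, under some assumptions: the name ssgrad and its central-difference contents are mine, and I'm relying on my recollection that NLPSolve's Matrix form accepts an objectivegradient procedure which receives the current point Vector and a Vector to fill with gradient values. A dsolve/numeric-based (or augmented-system) gradient could be slotted into the same place.

ssgrad := proc(V::Vector, W::Vector)
  local i, h, Vp, Vm;
  global nvar;
  h := 1e-6;   # finite-difference step; tune it to the accuracy of ss
  for i from 1 to nvar do
    Vp := copy(V);  Vm := copy(V);
    Vp[i] := Vp[i] + h;  Vm[i] := Vm[i] - h;
    W[i] := (ss(Vp) - ss(Vm))/(2*h);   # central difference of the objective
  end do;
  NULL;
end proc:

Optimization:-NLPSolve(nvar, ss, [], initialpoint=ic0, [bl,bu],
                       objectivegradient=ssgrad, optimalitytolerance=1e-8);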

Here is one way, moving from red to the end-color (here, blue) through HSV color-space.

I'll do it for your rho, and then also for another, slightly different formula.

For the first example, I've rotated the orientation to try and illustrate that the shading gradient is orthogonal to the plane defined by that original rho formula.

restart;

rho:=-4*x+4*y+4*z+3;

-4*x+4*y+4*z+3

minr := minimize(rho,x=0..4,y=0..4,z=0..4);

-13

maxr := maximize(rho,x=0..4,y=0..4,z=0..4);

35

p := ColorTools:-Color("HSV","Blue")[1];

.6666666667

adj := min(1,max(0,(rho-minr)/(maxr-minr))):

copt := colorscheme=["xyzcoloring",unapply(p*adj,[x,y,z])]:

plot3d([[0,a,b],[4,a,b],[a,0,b],[a,4,b],[a,b,0],[a,b,4]],
       a=0..4, b=0..4, copt, style=surface, labels=[x,y,z]);

 

Let's do another, for fun.

 

rho:=-4*x^2+4*y^2+4*z^2+3;

-4*x^2+4*y^2+4*z^2+3

minr := minimize(rho,x=0..4,y=0..4,z=0..4);

-61

maxr := maximize(rho,x=0..4,y=0..4,z=0..4);

131

p := ColorTools:-Color("HSV","Blue")[1];

.6666666667

adj := min(1,max(0,(rho-minr)/(maxr-minr))):

copt := colorscheme=["xyzcoloring",unapply(p*adj,[x,y,z])]:

plot3d([[0,a,b],[4,a,b],[a,0,b],[a,4,b],[a,b,0],[a,b,4]],
       a=0..4, b=0..4, copt, style=surface, labels=[x,y,z]);

Download cube_col_ac.mw

Here's another variant. This has the color go from red to blue through the RGB color-space, the proportion determined by where rho's value is between its min and max values (for the given ranges).

restart;

with(ColorTools):

rho:=-4*x+4*y+4*z+3;

-4*x+4*y+4*z+3

minr := minimize(rho,x=0..4,y=0..4,z=0..4);

-13

maxr := maximize(rho,x=0..4,y=0..4,z=0..4);

35

adj := min(1,max(0,(rho-minr)/(maxr-minr))):

spc,nspc := "RGB",RGB;
#spc,nspc := "HSV",HSV:

"RGB", RGB

C1 := ((1-eval(adj,z=0))*[Color(spc,"Red")[]] + eval(adj,z=0)*[Color(spc,"Blue")[]]):
C2 := ((1-eval(adj,z=4))*[Color(spc,"Red")[]] + eval(adj,z=4)*[Color(spc,"Blue")[]]):
C3 := ((1-eval(adj,y=0))*[Color(spc,"Red")[]] + eval(adj,y=0)*[Color(spc,"Blue")[]]):
C4 := ((1-eval(adj,y=4))*[Color(spc,"Red")[]] + eval(adj,y=4)*[Color(spc,"Blue")[]]):
C5 := ((1-eval(adj,x=0))*[Color(spc,"Red")[]] + eval(adj,x=0)*[Color(spc,"Blue")[]]):
C6 := ((1-eval(adj,x=4))*[Color(spc,"Red")[]] + eval(adj,x=4)*[Color(spc,"Blue")[]]):

plots:-display(
 plot3d([x,y,0],x=0..4,y=0..4,color=[C1[1], C1[2], C1[3], colortype=nspc]),
 plot3d([x,y,4],x=0..4,y=0..4,color=[C2[1], C2[2], C2[3], colortype=nspc]),
 plot3d([x,0,z],x=0..4,z=0..4,color=[C3[1], C3[2], C3[3], colortype=nspc]),
 plot3d([x,4,z],x=0..4,z=0..4,color=[C4[1], C4[2], C4[3], colortype=nspc]),
 plot3d([0,y,z],y=0..4,z=0..4,color=[C5[1], C5[2], C5[3], colortype=nspc]),
 plot3d([4,y,z],y=0..4,z=0..4,color=[C6[1], C6[2], C6[3], colortype=nspc]),
style=surface, labels=[x,y,z]);

Download cube_col_02.mw

That can also reproduce the previous result I showed, by switching which of the two lines that set the color-space names is commented out.

I didn't make that efficient.

You could also make this work with other color-spaces like Lab or Luv, but there would be additional conversion steps (to and from, since some plotting aspects only accept RGB or HSV).

You could also construct an arbitrary coloring action, according to some scheme you want, given this ability to compute the position between the min and max values. So, if you describe some other particular scheme in detail then I can show you how to use it with the above approach. (That could likely also be done using another colorscheme approach, possibly re-using colors generated from that option on a simple 2D curve, for convenience.)

In your previous Question the Answer by vv contained a link to an older thread.

   https://mapleprimes.com/posts/214347-Multibyte-Characters

In that older posting there is code for another routine by vv, named SS, which leverages vv's LEN procedure to get substring functionality.

So, your current attached worksheet could be easily revised as, say,

restart

LEN := proc (s::string) Python:-EvalFunction("len", s) end proc

SS := proc(s::string, mn::{integer, range({integer, identical()})})
   local m, n;
   if type(mn, integer) then
      m, n := `$`(mn, 2);
   else
      m := lhs(mn);  n := rhs(mn);
   end if;
   if m = NULL then m := 1 end if;
   if n = NULL then n := -1 end if;
   if n = -1 then n := LEN(s) elif n < 0 then n := n+1 end if;
   if m = -1 then m := LEN(s) elif m < 0 then m := m+1 end if;
   Python:-EvalString(cat("\"", s, "\"", "[", m-1, ":", n, "]"));
end proc

str := "tølkæi"

LEN(str) = 6

SS(str, 1) = "t"

SS(str, 2) = "ø"

SS(str, 3) = "l"

SS(str, 4) = "k"

SS(str, 5) = "æ"

SS(str, 6) = "i"

Download substrings_ac.mw

It offers additional functionality, eg.

SS(str, 1 .. ()), SS(str, 1 .. -3), SS(str, 1 .. 2), SS(str, -3 .. -1), SS(str, -1)

"tølkæi", "tølk", "tø", "kæi", "i"


It's not much of a stretch to imagine that "SS" stands for SubString.

Are you just trying to compute this?

   convert( p(x)/q(x), parfrac, x )

 

If that's the case then could it really be that you didn't try entering the term partialfraction into Maple's Help search box?
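For instance, with a small rational function of my own choosing (not your p(x)/q(x)):

convert( (3*x+5)/(x^2-1), parfrac, x );    # 4/(x-1) - 1/(x+1)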

1) It may work. But solving equations formed from the numerators can also produce solutions which don't satisfy the equations obtained from the original expressions, since some solutions of the new system could make both numerator and denominator (in the original system) be zero. You don't want solutions which make any of the denominators zero. That might be unlikely here; it's a general caveat. (See the small illustration at the end of this answer.)

2) Where did you get this command which you are trying to use but don't understand? (If from another of your several threads on a common problem, why not ask there and keep the content sensibly close?)

It's mostly a composition. I don't know why its author included the evala@Norm, since that does not have a major effect on this example. You could more quickly just use numer(Eqs), which produces a similar result as before if compared via normal. In that case you would not need the elementwise ~ here; the numer command gets mapped across the set of expressions automatically.

3) Your three expressions are sums of products. The numer command is performing expansion, when reforming over a common denominator. As illustration, notice for the following that the result from numer has greater length than the original,

expr := 1/(a+b)^5 + 1/(c+d);

1/(a+b)^5+1/(c+d)

foo := numer(expr);

a^5+5*a^4*b+10*a^3*b^2+10*a^2*b^3+5*a*b^4+b^5+c+d

length(expr), length(foo);

33, 75

normal(expr);

(a^5+5*a^4*b+10*a^3*b^2+10*a^2*b^3+5*a*b^4+b^5+c+d)/((a+b)^5*(c+d))

Download num_ex.mw

It's not clear why you want these numer results in more compact form, since the next step is to pass them on to some system solver which will need to manipulate/reform as it needs.
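Here is a minimal illustration of the caveat in point 1) above, using a small expression of my own rather than your system:

expr := sin(x)/x:
numer(expr);                  # sin(x)
solve(numer(expr) = 0, x);    # the solutions include x = 0
eval(denom(expr), x = 0);     # 0, so x = 0 makes the original denominator zero and is not a valid root of expr = 0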

Is this adequate for your goal?

L := diff(q1(t), t) + 2*diff(q2(t), t) + 3*q1(t);

diff(q1(t), t)+2*(diff(q2(t), t))+3*q1(t)

Physics:-diff(L, q1(t));

3

Download ph_diff_ex.mw

Here is a shorter example of the underlying error, which occurs during evaluation of a certain kind of piecewise at a certain kind of point.

restart;
piecewise(abs((HFloat(infinity)+HFloat(infinity)*I)*2^(1/2)
              + HFloat(infinity)+HFloat(infinity)*I) < 1,
          1, undefined);
Error, (in sprintf) too many levels of recursion

That error relates to this:

restart;
radnormal(abs((HFloat(infinity)+HFloat(infinity)*I)*2^(1/2)
              +HFloat(infinity)+HFloat(infinity)*I)-1);
Error, (in sprintf) too many levels of recursion

And that in turn tries the following:

`radnormal/ToRadical`(abs((infinity+infinity*I)*2^(1/2)+infinity+infinity*I)-1);
Error, (in sprintf) too many levels of recursion

And here is how that error can occur for your expression assigned to ff (the plotting internals will construct a procedure from the expression):

FF := unapply(ff, x):
FF(HFloat(1.9954019196875001));
Error, (in sprintf) too many levels of recursion

subs(x=HFloat(1.9954019196875001),ff):
eval(%);
Error, (in sprintf) too many levels of recursion

bug_plot_may_29_2023_pw_ac.mw


note. So far I'm seeing this problem in Maple 2019.2.1 and after, but not in 2018.2 and earlier.

I will submit a bug report.

A smooth plot can be obtained reasonably quickly in Maple 2023.0 with,
   plot(evala(ff),x=-3..3);
That skirts around the issues near x=2 that show up for, say,
   plot(X->eval(ff,x=X),-3..3);

I can see the rationalize computation run away for a long time, even in earlier versions.

If you need to get around this, how about one of,

   radnormal(expr,':-rationalized')

   simplify(radnormal(expr,':-rationalized'),size)

   simplify(radnormal(expr,':-rationalized'))

For example, in under 2 seconds on my Maple 2023.0,

restart;

kernelopts(version);

`Maple 2023.0, X86 64 LINUX, Mar 06 2023, Build ID 1689885`

 

expr:=1/(3*2^(2/3)*((2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)-8*u)/(1+(256*u^3+1)^(1/2))^(1/3))^(1/2)+3*(1/(1+(256*u^3+1)^(1/2))^(1/3)*((8*2^(1/6)*u-2^(1/2)*(1+(256*u^3+1)^(1/2))^(2/3))*((2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)-8*u)/(1+(256*u^3+1)^(1/2))^(1/3))^(1/2)-4*2^(1/6)*(1+(256*u^3+1)^(1/2))^(1/3))/((2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)-8*u)/(1+(256*u^3+1)^(1/2))^(1/3))^(1/2))^(1/2)*2^(7/12)-16*u):

 

mein := CodeTools:-Usage( simplify(radnormal(expr,':-rationalized')) );

memory used=147.31MiB, alloc change=70.00MiB, cpu time=1.74s, real time=1.58s, gc time=392.02ms

(1/8192)*(3*(96*u*((3/8)*u*2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)+(3/64)*2^(2/3)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(1/3)+u^2)*(((-(256*u^3+1)^(1/2)+1)*(1+(256*u^3+1)^(1/2))^(2/3)+32*2^(1/3)*u^2*(1+(256*u^3+1)^(1/2))^(1/3))/u^2)^(1/2)+9*2^(1/6)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(2/3)-1024*u^4*2^(5/6)-288*u^2*2^(1/2)*(1+(256*u^3+1)^(1/2))^(1/3))*((-64*u*((1/8)*u*2^(5/6)*(1+(256*u^3+1)^(1/2))^(2/3)+(1/32)*2^(1/6)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(1/3)+2^(1/2)*u^2)*(((-(256*u^3+1)^(1/2)+1)*(1+(256*u^3+1)^(1/2))^(2/3)+32*2^(1/3)*u^2*(1+(256*u^3+1)^(1/2))^(1/3))/u^2)^(1/2)+2^(2/3)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(2/3)-64*(1+(256*u^3+1)^(1/2))^(1/3)*u^2)/u^2)^(1/2)+3*(3*2^(1/2)*(256*u^3-3*(256*u^3+1)^(1/2)+3)*(1+(256*u^3+1)^(1/2))^(2/3)+4096*u^2*((3/128)*2^(5/6)*((256*u^3+1)^(1/2)+2)*(1+(256*u^3+1)^(1/2))^(1/3)+u^2*2^(1/6)))*(((-(256*u^3+1)^(1/2)+1)*(1+(256*u^3+1)^(1/2))^(2/3)+32*2^(1/3)*u^2*(1+(256*u^3+1)^(1/2))^(1/3))/u^2)^(1/2)-131072*u^5-13824*u^2)/(256*u^6-135*u^3)

Download nm_rdnm.mw

The maximize command can handle this example,

maximize( (x-4*y)^9+3*z, x=0..3, y=0..14, z=0..17 );

             19734

You could get that output if lessthan had been given its own alias, earlier.

In the sixth bullet point on the Help-page for topic alias, there is the sentence, "Finally, a sequence of all existing aliases is returned as the function value."

(You've added a .mw file Document since I answered. If I open and re-execute it then I don't see that same output. Perhaps you had an original with prior commands?)
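For reference, here is a small sketch of that documented behavior, using hypothetical alias names a and b:

restart;
alias(a = RootOf(x^2 - 2, x));    # returns:  a
alias(b = RootOf(x^3 - 5, x));    # returns the sequence of all existing aliases:  a, b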

Your code currently attempts it like this,

Grading:-Quiz("Calculer la limite :",
   proc(Resp, Ans)
      evalb(limit(Ans, x = 0) = Resp);
   end proc,
   proc() local r;
      r := rand(1 .. 7)();
      sin(r()*x)/(3 + r());
   end proc);

A serious weakness of that is that it doesn't actually typeset it as a limit, and it doesn't indicate the limit point (ie, x=0).

Perhaps you could start with this adjustment to your code,

Grading:-Quiz(
   "Calculer la limite :",
   proc(Resp, Ans)
      evalb(value(Ans) - Resp = 0);
   end proc,
   proc() local r;
      r := rand(1 .. 7)();
      Limit(sin(r()*x)/(3 + r()), x = 0);
   end proc)

I presume that you plan on trying to come up with more inspired examples whose answers are not all zero.

There is a tricky issue with many such quizzes: what are you willing to accept as the answer? Is it only the identical expression?

Suppose the answer is exact zero. Would you accept the float 0.00? What about 12^(1/2)/2 - 3^(1/2) ? What about sin(2)^2+cos(2)^2-1 ?

If the answer to another example were sqrt(3) would you accept 12^(1/2)/2 ?

If the answer to another example were x-1, would you accept (x^2-1)/(x+1) ?

There are more insidious examples of seemingly harmless variations that could get the wrong grade.

Note that it's not generally practical to attempt any simplification as strong as, say,
   evalb(simplify( limit(Ans, x = 0) = Resp ));
because for various classes of problem (integration, say) that could give the response "Correct" even if one merely entered the unsolved problem as the response!

Also unsatisfactory is, say,
   evalb(limit(Ans, x = 0) = simplify(Resp));
because there may be more than one acceptable form, and (more seriously) different acceptable forms might not simplify to the same simplified form.

Given a particular validation scheme, it's often possible to come up with an innocuous looking example for which the grading is not right.
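As a small sketch of how the choice of check plays out, using the sin(2)^2+cos(2)^2-1 and (x^2-1)/(x+1) forms mentioned above:

Resp1 := sin(2)^2 + cos(2)^2 - 1:      # mathematically zero
evalb( Resp1 = 0 );                    # false: a literal check rejects it
evalb( simplify(Resp1) = 0 );          # true: a normalizing check accepts it

Resp2 := (x^2 - 1)/(x + 1):            # mathematically x - 1
evalb( Resp2 = x - 1 );                # false
evalb( normal(Resp2 - (x - 1)) = 0 );  # true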

First 43 44 45 46 47 48 49 Last Page 45 of 336