These are answers submitted by acer

In such situations I usually use algsubs or simplify with side-relations to temporarily substitute a token for the target subexpression (subs would fail here, since the target is not structurally present).

For the token I sometimes use a double-underscore name, since that should ideally not be present otherwise. For a more programmatic approach I sometimes use freeze & thaw, which also lets me skip having to reverse the substitution equation afterwards.

restart;

 

eq :=c*Int(f(x), x) = something - 2*b*c*Int(f(x), x);

c*(Int(f(x), x)) = something-2*b*c*(Int(f(x), x))

# This will not work, since lhs(eq) is
# not structurally present as a subexpression
# in rhs(eq)

isolate(eq, lhs(eq));

c*(Int(f(x), x)) = something-2*b*c*(Int(f(x), x))

subs( __K=lhs(eq),
      isolate( algsubs( lhs(eq)=__K, eq ), __K ) );

c*(Int(f(x), x)) = something/(2*b+1)

thaw( isolate( algsubs( lhs(eq)=freeze(lhs(eq)), eq ),
               freeze(lhs(eq)) ) );

c*(Int(f(x), x)) = something/(2*b+1)

subs( __K=lhs(eq),
      isolate( simplify( eq, {lhs(eq)=__K} ), __K ) );

c*(Int(f(x), x)) = something/(2*b+1)

thaw( isolate( simplify( eq, {lhs(eq)=freeze(lhs(eq))} ),
               freeze(lhs(eq)) ) );

c*(Int(f(x), x)) = something/(2*b+1)

Download asi.mw

All of the above works in your older Maple 2015.2 version.

In current Maple 2023.0 you could alternatively use Physics:-Substitute (instead of algsubs or simplify-with-side-relations) for the first step of substituting a token for the target. But that doesn't make it easier.
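
For what it's worth, that might look something like this (untested here), with Physics:-Substitute playing the role that algsubs played above:

   subs( __K=lhs(eq),
         isolate( Physics:-Substitute( lhs(eq)=__K, eq ), __K ) );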

Something like the following might be useful (I'm making this up) as a means of handling this whole task directly,
   PDEtools:-Solve(eq, lhs(eq))

It seems that subs can handle your example in a straightforward manner,

In particular, you don't have to construct your eq_Dx equation by manual copy & paste, i.e. of a denominator in the rhs of the original. It can be formed programmatically.

restart;

eq_K1_m4 := K__1 = E__q0*(R__T*E__B*sin(delta) + X__Td*E__B*cos(delta))/(L__aqs*L__l + L__aqs*X__E + L__aqs*L__ads_p + L__l^2 + 2*L__l*X__E + L__l*L__ads_p + R__E^2 + 2*R__E*R__a + R__a^2 + X__E^2 + X__E*L__ads_p) + (X__q - X__dp)*i__q0*(X__Tq*E__B*sin(delta) - R__T*E__B*cos(delta))/(L__aqs*L__l + L__aqs*X__E + L__aqs*L__ads_p + L__l^2 + 2*L__l*X__E + L__l*L__ads_p + R__E^2 + 2*R__E*R__a + R__a^2 + X__E^2 + X__E*L__ads_p);

K__1 = E__q0*(R__T*E__B*sin(delta)+X__Td*E__B*cos(delta))/(L__aqs*L__l+L__aqs*X__E+L__aqs*L__ads_p+L__l^2+2*L__l*X__E+L__l*L__ads_p+R__E^2+2*R__E*R__a+R__a^2+X__E^2+X__E*L__ads_p)+(X__q-X__dp)*i__q0*(X__Tq*E__B*sin(delta)-R__T*E__B*cos(delta))/(L__aqs*L__l+L__aqs*X__E+L__aqs*L__ads_p+L__l^2+2*L__l*X__E+L__l*L__ads_p+R__E^2+2*R__E*R__a+R__a^2+X__E^2+X__E*L__ads_p)

subs(denom(rhs(eq_K1_m4))=Dx,eq_K1_m4)

K__1 = E__q0*(R__T*E__B*sin(delta)+X__Td*E__B*cos(delta))/Dx+(X__q-X__dp)*i__q0*(X__Tq*E__B*sin(delta)-R__T*E__B*cos(delta))/Dx


Your original worksheet already had this additional equation:

eq_Dx := Dx = L__aqs*L__l + L__aqs*X__E + L__aqs*L__ads_p + L__l^2 + 2*L__l*X__E + L__l*L__ads_p + R__E^2 + 2*R__E*R__a + R__a^2 + X__E^2 + X__E*L__ads_p;

Dx = L__aqs*L__l+L__aqs*X__E+L__aqs*L__ads_p+L__l^2+2*L__l*X__E+L__l*L__ads_p+R__E^2+2*R__E*R__a+R__a^2+X__E^2+X__E*L__ads_p

 

In that case, all you'd need is,

 

subs((rhs=lhs)(eq_Dx),eq_K1_m4)

K__1 = E__q0*(R__T*E__B*sin(delta)+X__Td*E__B*cos(delta))/Dx+(X__q-X__dp)*i__q0*(X__Tq*E__B*sin(delta)-R__T*E__B*cos(delta))/Dx

Download Q20230606_ac2.mw

That latter one is like Edgardo's, except that it uses subs instead of Physics:-Substitute. (That is, there's not much especially tricky about this example.)

This is not very attractive, but it agrees for real x (almost everywhere, leaving out x=1, say).

restart;

integrand := (ln(-x + sqrt(x^2 - 1))*x
              + sqrt(x^2 - 1))*x*sqrt(-x^2 + 1)/sqrt(x^2 - 1);

(ln(-x+(x^2-1)^(1/2))*x+(x^2-1)^(1/2))*x*(-x^2+1)^(1/2)/(x^2-1)^(1/2)

H:=Int(integrand,x):

A1 := simplify(evala(eval(value(IntegrationTools:-Change(H, s=-x+sqrt(x^2-1), s)),
                          s=-x+sqrt(x^2-1)))) assuming s::real;

((1/9)*I)*(3*ln(-x+(x^2-1)^(1/2))*x^3+4*(x^2-1)^(1/2)*x^2-(x^2-1)^(1/2))

simplify( diff(A1, x) - integrand ) assuming Or(x<=-1,x>=1);

0

simplify( diff(A1, x) + integrand ) assuming x>=-1,x<=1;

0

ans := piecewise(x>=-1 and x<=1, -A1, A1);

ans := piecewise(-1 <= x and x <= 1, -(I*(1/9))*(3*ln(-x+sqrt(x^2-1))*x^3+4*sqrt(x^2-1)*x^2-sqrt(x^2-1)), (I*(1/9))*(3*ln(-x+sqrt(x^2-1))*x^3+4*sqrt(x^2-1)*x^2-sqrt(x^2-1)))

Download ugly.mw

Here are some examples.

Adjust the various options, as wanted.  See also the Help page with topic plot,options (in Maple's own Help, or online).

restart;

sol7:= dsolve({D(N)(t) = N(t), N(0)=1}, type=numeric):

 

plots:-display(
  plots:-textplot([4.4,120,N(t)]),
  plots:-odeplot( sol7, [t, N(t), color = red,
                  thickness = 2, linestyle = dot], 0 .. 5),
  labels = [t,N(t)], size=[475,400] );

plots:-odeplot( sol7, [t, N(t), color = red,
                thickness = 2, linestyle = dot], 0 .. 5,
                legend = N, size = [450,400] );

plots:-odeplot( sol7, [t, N(t), color = red,
                thickness = 2, linestyle = dot], 0 .. 5,
                labels = [t,N(t)], legendstyle = [location=right],
                legend = N(t), size = [575,400] );

plots:-display(
  plots:-textplot([4.4,120,N(t)]),
  plots:-odeplot( sol7, [t, N(t), color = red,
                  thickness = 2, linestyle = dot], 0 .. 5),
  labels = [t,""], size=[475,400] );

 

Download ode_leg_lab_ex1.mw

Here is a somewhat crude approach to your problem, using Maple 18.02.

There are several ways to make it fancier, or more general (but I only had 10 minutes for it).

I made very little effort to make it efficient.

restart;

f := z -> (z^2-1)^2;

proc (z) options operator, arrow; (z^2-1)^2 end proc

df := D(f);

proc (z) options operator, arrow; 4*(z^2-1)*z end proc

R := proc(re,im) local z;
   if not [re,im]::list(numeric) then
     return 'procname'(args);
   end if;
   z := re+I*im;
   z := evalhf( z - f(z)/df(z) );
   Re(z),Im(z);
end proc:

T := proc(re,im,N::posint:=20,eps::float:=1e-5) local z;
   if not [re,im]::list(numeric) then
     return 'procname'(args);
   end if;
   z := (R@@N)(re,im);
   z := z[1] + I*z[2];
   if abs(z - 1.0) < abs(eps) then
     1.0;
   elif abs(z + 1.0) < abs(eps) then
     -1.0;
   else 0.0; end if;
end proc:

#plot3d(T(x,y,30), x=-2..2, y=-2..2, grid=[151,151],
#       orientation=[-90,0,0], style=surface, lightmodel=none,
#       colorscheme=["zgradient",[green,black,red]]);

plots:-densityplot(T(x,y,30), x=-2..2, y=-2..2,
       grid=[151,151], style=surface,
       colorscheme=["zgradient",[green,black,red]]);

plots:-densityplot(T(x,y,30,1e-8), x=-2..2, y=-2..2,
       grid=[151,151], style=surface,
       colorscheme=["zgradient",[green,black,red]]);

 

Download simp_newt.mw

[edit] Here is some rough code that shades by varied intensity, according to how many iterations it took to get below a threshold value.

(It's not very robust as far as adjusting the maximum-iteration or tolerance values goes, but it seems ok for these values. Sorry, I don't have time to improve it.)

restart;

f := z -> (z^2-1)^2;

proc (z) options operator, arrow; (z^2-1)^2 end proc

df := D(f);

proc (z) options operator, arrow; 4*(z^2-1)*z end proc

R := proc(re,im) option remember;
   local z;
   if not [re,im]::list(numeric) then
     return 'procname'(args);
   end if;
   z := re+I*im;
   z := evalhf( z - f(z)/df(z) );
   Re(z),Im(z);
end proc:

Hfun := proc(re,im)
  local i, lre, lim;
  if not [re,im]::list(numeric) then
     return 'procname'(args);
  end if;
  (lre,lim) := re,im;
  for i from 1 to 30 do
    (lre,lim) := R(lre,lim);
  end do;
  if abs(lre+I*lim + 1.0) < 1e-8 then
    0.4;
  elif abs(lre+I*lim - 1.0) < 1e-8 then
    0.0;
  else 0.7; end if;
end proc:
Vfun := proc(re,im)
  local i, lre, lim;
  if not [re,im]::list(numeric) then
     return 'procname'(args);
  end if;
  (lre,lim) := re,im;
  for i from 1 to 30 do
    (lre,lim) := R(lre,lim);
    if abs(lre+I*lim + 1.0) < 1e-8
     or abs(lre+I*lim - 1.0) < 1e-8 then
      return abs(1-ln(i/19));
    end if;
  end do;
  return 0.0;
end proc:

plot3d(1, x=-2..2, y=-2..2, grid=[140,140],
       orientation=[-90,0,0], style=surface, lightmodel=none,
       color=[Hfun(x,y),1.0,Vfun(x,y),colortype=HSV]);

 

Download simp_newt_alt.mw

ps. Apart from not having much spare time, I apologize but I'm not so inclined to make this more general, given that the Maple version in question is 9 years and 9 major releases out of date.

Here are two ways:

restart;

eq := -((2*(alpha[3]+alpha[2]))*((1/2)*cos(Theta(z, t))^2-(1/2)*sin(Theta(z, t))^2)+(alpha[3]-alpha[2])*(sin(Theta(z, t))^2+cos(Theta(z, t))^2))*(diff(U(z, t), z))-(alpha[3]-alpha[2])*(2*sin(Theta(z, t))^2*(diff(Theta(z, t), t))+2*cos(Theta(z, t))^2*(diff(Theta(z, t), t)))+(diff(Theta(z, t), z, z))*k;

-(2*(alpha[3]+alpha[2])*((1/2)*cos(Theta(z, t))^2-(1/2)*sin(Theta(z, t))^2)+(alpha[3]-alpha[2])*(sin(Theta(z, t))^2+cos(Theta(z, t))^2))*(diff(U(z, t), z))-(alpha[3]-alpha[2])*(2*sin(Theta(z, t))^2*(diff(Theta(z, t), t))+2*cos(Theta(z, t))^2*(diff(Theta(z, t), t)))+(diff(diff(Theta(z, t), z), z))*k

simplify(simplify(eq),{coeff(eq,diff(U(z,t),z))=t1});

(diff(diff(Theta(z, t), z), z))*k+(2*alpha[2]-2*alpha[3])*(diff(Theta(z, t), t))+(diff(U(z, t), z))*t1

algsubs(coeff(eq,diff(U(z,t),z))=t1,simplify(eq));

(diff(diff(Theta(z, t), z), z))*k+(2*alpha[2]-2*alpha[3])*(diff(Theta(z, t), t))+(diff(U(z, t), z))*t1

Download subst_ex4.mw

ps. I removed this one from my worksheet, because it seemed unnecessarily complicated; the collect call is not needed.
   collect(simplify(eq),diff,
          u->simplify(u,{coeff(eq,diff(U(z,t),z))=t1}));

[edit] other possibilities include,
    simplify(subs(coeff(eq,diff(U(z,t),z))=t1,
                collect(eq,diff)));
  simplify(map[2](subs,coeff(eq,diff(U(z,t),z))=t1,
                  collect(eq,diff)));

The lhs of your equation (hidden input) is entered as,
   Int(E*r, r = 0 .. 2*sigma, varphi = 0 .. 2*Pi)
instead of as,
   Int(E*r, [r = 0 .. 2*sigma, varphi = 0 .. 2*Pi])

The GUI's Typesetting system is not pretty-printing the former in Maple 2023.0 and Maple 2022.2. Within Maple 2021.2 (and earlier, say 18.02) it does get pretty-printed as you're expecting.

Note that in Maple 2023.0, even when it is not pretty-printed as you'd like, value still works on it. As far as I can see, the former is not actually one of the documented calling sequences of int, even if it is accepted.

Fwiw, the command IntegrationTools:-CollapseNested seems to turn the former into the latter, programmatically.
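
For example, a quick sketch (untested here) of both points, following the behavior described above:

   J := Int(E*r, r = 0 .. 2*sigma, varphi = 0 .. 2*Pi):
   value(J);                             # still computes, despite the display issue
   IntegrationTools:-CollapseNested(J);  # should give Int(E*r, [r = 0 .. 2*sigma, varphi = 0 .. 2*Pi])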

There is a typical issue with nested numeric computations, where the outer process might not be able to attain some target tolerance because the inner process is not guaranteeing correspondingly adequate accuracy.

For example, suppose the inner process guarantees only 6 digits of accuracy, while the outer process has a "convergence acceptance" criterion involving 8 places of accuracy. Essentially random 7th and 8th digits in the inner results then prevent the outer process from successfully converging.

This kind of thing can happen when you do fsolve of a dsolve process, Optimization of a dsolve process, fsolve of an evalf/Int process, evalf/Int of some other numeric process, and so on.

Even a rough understanding of the issue can help in such situations (as opposed to flailing around). But it can get tricky. It can get even more complicated if there are both relative and absolute accuracy tolerances in play.

Sometimes the simplest thing is just to crank up the working precision of the inner process, e.g. Allan's suggestion to set Digits:=18 in ss. That may bring about enough improvement in the inner results' accuracy that the outer process can also meet its own tolerances. But sometimes this can make the whole computation too slow.

Sometimes unfortunate choices of tolerances (inner and outer) still produce results successfully, just more slowly than with more fortunate choices.

Being able to relax the accuracy of the outer process while retaining, say, the compiled/evalhf hardware double precision speed of inner processes is often desirable.

For these reasons, knowing the required accuracy of the outer process is a very good place to begin. Alternatively, if there is a cap on the working precision of the inner process (e.g. it requires hardware double-precision speed), then one might need to accept a less accurate outer result -- according to the given problem.

nb. There are also some situations where a final step of rounding the inner process's result to a specific number of digits (thereby removing all inaccurate digits) can additionally help with this kind of thing. I didn't do that here.
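
As a hypothetical illustration of that rounding idea (the name inner_result below is a placeholder, and I did not do this in the worksheet that follows): if the inner result were trusted to only, say, 10 digits, then the value handed back to the outer process could be rounded off as,

   evalf[10]( inner_result );   # inner_result is a made-up placeholder name

thereby discarding the untrustworthy trailing digits.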

Here is an adjustment that produces results for your three cases. Experiment and adjust as necessary. You indicated that you'd rather not raise the working precision of the inner process (involving dsolve/numeric), so instead I've relaxed the optimalitytolerance of NLPSolve.

restart:

Digits:=15;

15

 

eqodes:=[diff(ca(t),t)=-(u+u^2/2)*1.0*ca(t),diff(cb(t),t)=1.0*u*ca(t)-0.1*cb(t)];

[diff(ca(t), t) = -1.0*(u+(1/2)*u^2)*ca(t), diff(cb(t), t) = 1.0*u*ca(t)-.1*cb(t)]

soln:=dsolve({op(eqodes),ca(0)=alpha,cb(0)=beta}, type=numeric,
             'parameters'=[alpha,beta,u],
              #abserr=1e-15,
              relerr=1e-11,
              compile=true, savebinary=true):

 

ss := proc(x)
  local z1, n1, i, c10, c20, dt, u;
  global soln, nvar;
  interface(warnlevel=0);
  #if type(x[1],numeric)
  if type(x, Vector) then
    dt := evalf(1.0/nvar);
    c10 := 1.0;  c20 := 0.0;
    # integrate over nvar equal subintervals, resetting the parameters
    # (initial values and control u) at the start of each one
    for i from 1 to nvar do
      u := x[i];
      soln('parameters'=[c10, c20, u]);
      z1 := soln(dt);
      c10 := subs(z1, ca(t));  c20 := subs(z1, cb(t));
    end do;
    -c20;               # minimize -cb, i.e. maximize cb at the final time
  else
    'procname'(args);   # return unevaluated for non-numeric input
  end if;
end proc:

 

nvar:=3;#code works for nvar:=3, but not for nvar:=2

3

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(3, {(1) = .1, (2) = .1, (3) = .1})

ss(ic0);

HFloat(-0.09025791391609851)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(3, {(1) = 0., (2) = 0., (3) = 0.})

Vector[column](%id = 36893628421116063364)

infolevel[all]:=1:
infolevel[Optimization]:=3:

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-8);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 3

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-7

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 17

[-.531453352625634534, Vector(3, {(1) = .8114396007818414, (2) = 1.1894022100912927, (3) = 2.4277492065842683})]

nvar:=2;#code works for nvar:=3, but not for nvar:=2

2

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(2, {(1) = .1, (2) = .1})

 

ss(ic0);

HFloat(-0.09025792303565874)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(2, {(1) = 0., (2) = 0.})

Vector[column](%id = 36893628421005739900)

infolevel[all]:=1:
infolevel[Optimization]:=3:

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-8);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-7

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 12

[-.523901131526401165, Vector(2, {(1) = .9099315182152206, (2) = 1.9272308847323274})]

 

nvar:=5;#code works for nvar:=3, but not for nvar:=2

5

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(5, {(1) = .1, (2) = .1, (3) = .1, (4) = .1, (5) = .1})

 

ss(ic0);

HFloat(-0.09025790901672268)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(5, {(1) = 0., (2) = 0., (3) = 0., (4) = 0., (5) = 0.})

Vector[column](%id = 36893628421097096604)

infolevel[all]:=1:
infolevel[Optimization]:=3:

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-8);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 5

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-7

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 30

[-.537338841361480024, Vector(5, {(1) = .7435701450990695, (2) = .9013559679138707, (3) = 1.1487492967651152, (4) = 1.62908537333248, (5) = 3.222832756435604})]

 

Download test_VS1_ac.mw

The OP mentioned fdiff. If the internal methods that NLPSolve uses to compute numerical derivatives are not accurate enough here (they could be a weak link in the nesting of numeric processes), then NLPSolve could be passed a user-defined procedure for the objective gradient or constraint Jacobian. That might leverage dsolve/numeric itself as a more robust mechanism for that part of the computation. Sometimes the DE system can even be augmented to give more convenient, direct access to derivative results.
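
As a rough, untested sketch of that idea (my own names; plain central differences stand in here for whatever derivative mechanism you actually prefer, e.g. an augmented dsolve/numeric system), a user-supplied gradient in operator form might look like,

   ssgrad := proc(V::Vector, W::Vector)
     local i, n, h, Vp, Vm;
     n := LinearAlgebra:-Dimension(V);
     h := 1e-6;
     for i from 1 to n do
       # central difference in the i-th direction, reusing the objective ss
       Vp := copy(V);  Vm := copy(V);
       Vp[i] := Vp[i] + h;  Vm[i] := Vm[i] - h;
       W[i] := (ss(Vp) - ss(Vm))/(2*h);
     end do;
     NULL;
   end proc:

   Optimization:-NLPSolve(nvar, ss, [], initialpoint=ic0, [bl,bu],
                          objectivegradient=ssgrad,
                          optimalitytolerance=1e-8);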

Here is one way, moving from red to the end-color (here, blue) through HSV color-space.

I'll do it for your rho, and then also for another, slightly different formula.

For the first example, I've rotated the orientation to try and illustrate that the shading gradient is orthogonal to the plane defined by that original rho formula.

restart;

rho:=-4*x+4*y+4*z+3;

-4*x+4*y+4*z+3

minr := minimize(rho,x=0..4,y=0..4,z=0..4);

-13

maxr := maximize(rho,x=0..4,y=0..4,z=0..4);

35

p := ColorTools:-Color("HSV","Blue")[1];

.6666666667

adj := min(1,max(0,(rho-minr)/(maxr-minr))):

copt := colorscheme=["xyzcoloring",unapply(p*adj,[x,y,z])]:

plot3d([[0,a,b],[4,a,b],[a,0,b],[a,4,b],[a,b,0],[a,b,4]],
       a=0..4, b=0..4, copt, style=surface, labels=[x,y,z]);

 

Let's do another, for fun.

 

rho:=-4*x^2+4*y^2+4*z^2+3;

-4*x^2+4*y^2+4*z^2+3

minr := minimize(rho,x=0..4,y=0..4,z=0..4);

-61

maxr := maximize(rho,x=0..4,y=0..4,z=0..4);

131

p := ColorTools:-Color("HSV","Blue")[1];

.6666666667

adj := min(1,max(0,(rho-minr)/(maxr-minr))):

copt := colorscheme=["xyzcoloring",unapply(p*adj,[x,y,z])]:

plot3d([[0,a,b],[4,a,b],[a,0,b],[a,4,b],[a,b,0],[a,b,4]],
       a=0..4, b=0..4, copt, style=surface, labels=[x,y,z]);

Download cube_col_ac.mw

Here's another variant. This has the color go from red to blue through the RGB color-space, the proportion determined by where rho's value is between its min and max values (for the given ranges).

restart;

with(ColorTools):

rho:=-4*x+4*y+4*z+3;

-4*x+4*y+4*z+3

minr := minimize(rho,x=0..4,y=0..4,z=0..4);

-13

maxr := maximize(rho,x=0..4,y=0..4,z=0..4);

35

adj := min(1,max(0,(rho-minr)/(maxr-minr))):

spc,nspc := "RGB",RGB;
#spc,nspc := "HSV",HSV:

"RGB", RGB

C1 := ((1-eval(adj,z=0))*[Color(spc,"Red")[]] + eval(adj,z=0)*[Color(spc,"Blue")[]]):
C2 := ((1-eval(adj,z=4))*[Color(spc,"Red")[]] + eval(adj,z=4)*[Color(spc,"Blue")[]]):
C3 := ((1-eval(adj,y=0))*[Color(spc,"Red")[]] + eval(adj,y=0)*[Color(spc,"Blue")[]]):
C4 := ((1-eval(adj,y=4))*[Color(spc,"Red")[]] + eval(adj,y=4)*[Color(spc,"Blue")[]]):
C5 := ((1-eval(adj,x=0))*[Color(spc,"Red")[]] + eval(adj,x=0)*[Color(spc,"Blue")[]]):
C6 := ((1-eval(adj,x=4))*[Color(spc,"Red")[]] + eval(adj,x=4)*[Color(spc,"Blue")[]]):

plots:-display(
 plot3d([x,y,0],x=0..4,y=0..4,color=[C1[1], C1[2], C1[3], colortype=nspc]),
 plot3d([x,y,4],x=0..4,y=0..4,color=[C2[1], C2[2], C2[3], colortype=nspc]),
 plot3d([x,0,z],x=0..4,z=0..4,color=[C3[1], C3[2], C3[3], colortype=nspc]),
 plot3d([x,4,z],x=0..4,z=0..4,color=[C4[1], C4[2], C4[3], colortype=nspc]),
 plot3d([0,y,z],y=0..4,z=0..4,color=[C5[1], C5[2], C5[3], colortype=nspc]),
 plot3d([4,y,z],y=0..4,z=0..4,color=[C6[1], C6[2], C6[3], colortype=nspc]),
style=surface, labels=[x,y,z]);

Download cube_col_02.mw

That can also reproduce the previous result I showed, by switching which of the two lines that set the color-space names is commented out.

I didn't make that efficient.

You could also make that work with other color-spaces like Lab or Luv, but there'd be additional conversion steps (to and from, since some plotting aspects only accept RGB or HSV).

You could also implement an arbitrary coloring action, according to some scheme you want, given this ability to compute the position between the min and max values. So, if you describe some other particular scheme in detail then I can show you how to use it with the above approach. (That could likely also be done using another colorscheme approach, possibly re-using colors generated from that option on a simple 2D curve, for convenience.)
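
As a small sketch of those extra conversion steps (my own illustration, untested): a color specified in "Lab" can be converted with ColorTools to "RGB" form before being fed to the coloring options above,

   with(ColorTools):
   c := Color("Lab", [50, 20, -30]):   # an arbitrary Lab color, just for illustration
   Convert(c, "RGB");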

In your previous Question the Answer by vv contained a link to an older thread.

   https://mapleprimes.com/posts/214347-Multibyte-Characters

In that older posting there is code for another routine by vv, named SS, which leverages vv's LEN procedure to get substring functionality.

So, your current attached worksheet could be easily revised as, say,

restart

LEN := proc (s::string) Python:-EvalFunction("len", s) end proc

SS := proc(s::string, mn::{integer, range({integer, identical()})})
  local m, n;
  # accept either a single index or a range (with possibly open endpoints)
  if type(mn, integer) then
    m, n := `$`(mn, 2);
  else
    m := lhs(mn);  n := rhs(mn);
  end if;
  if m = NULL then m := 1 end if;
  if n = NULL then n := -1 end if;
  if n = -1 then n := LEN(s) elif n < 0 then n := n+1 end if;
  if m = -1 then m := LEN(s) elif m < 0 then m := m+1 end if;
  # delegate the actual slicing to Python, which indexes by character
  Python:-EvalString(cat("\"", s, "\"", "[", m-1, ":", n, "]"));
end proc:

str := "tølkæi"

LEN(str) = 6

SS(str, 1) = "t"

SS(str, 2) = "ø"

SS(str, 3) = "l"

SS(str, 4) = "k"

SS(str, 5) = "æ"

SS(str, 6) = "i"

Download substrings_ac.mw

It offers additional functionality, e.g.

SS(str, 1 .. ()), SS(str, 1 .. -3), SS(str, 1 .. 2), SS(str, -3 .. -1), SS(str, -1)

"tølkæi", "tølk", "tø", "kæi", "i"


It's not much of a stretch to imagine that "SS" stands for SubString.

Are you just trying to compute this?

   convert( p(x)/q(x), parfrac, x )

 

If that's the case then could it really be that you didn't try entering the term partialfraction into Maple's Help search box?
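
For instance, with a made-up p(x) and q(x) (not your actual data),

   p := x^2 + 1:   q := (x - 1)*(x + 2)^2:
   convert( p/q, parfrac, x );
   # 2/(9*(x-1)) + 7/(9*(x+2)) - 5/(3*(x+2)^2), up to ordering of terms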

1) It may work. However, solving the equations formed from the numerators could also produce some solutions which don't satisfy the equations formed from the original expressions, since some solutions of the new system could make both the numerator and the denominator (in the original system) zero. You don't want solutions which make any of the denominators zero. That may be unlikely here; it's a general caveat.

2) Where did you get this command which you are trying to use but don't understand? (If from another of your several threads on a common problem, why not ask there and keep the content sensibly close?)

It's mostly a composition. I don't know why its author included the evala@Norm, since that does not have a major effect on this example. You could more quickly just use numer(Eqs), which produces a similar result as before if compared via normal. In that case you would not need the elementwise ~ here; the numer command gets mapped across the set of expressions automatically.

3) Your three expressions are sums of products. The numer command performs expansion when reforming over a common denominator. As an illustration, notice in the following that the result from numer has greater length than the original,

expr := 1/(a+b)^5 + 1/(c+d);

1/(a+b)^5+1/(c+d)

foo := numer(expr);

a^5+5*a^4*b+10*a^3*b^2+10*a^2*b^3+5*a*b^4+b^5+c+d

length(expr), length(foo);

33, 75

normal(expr);

(a^5+5*a^4*b+10*a^3*b^2+10*a^2*b^3+5*a*b^4+b^5+c+d)/((a+b)^5*(c+d))

Download num_ex.mw

It's not clear why you want these numer results in a more compact form, since the next step is to pass them on to some system solver, which will manipulate/reform them as it needs.

Is this adequate for your goal?

L := diff(q1(t), t) + 2*diff(q2(t), t) + 3*q1(t);

diff(q1(t), t)+2*(diff(q2(t), t))+3*q1(t)

Physics:-diff(L, q1(t));

3

Download ph_diff_ex.mw

Here is a shorter example of the underlying error, which occurs during evaluation of a certain kind of piecewise at a certain kind of point.

restart;
piecewise(abs((HFloat(infinity)+HFloat(infinity)*I)*2^(1/2)
              + HFloat(infinity)+HFloat(infinity)*I) < 1,
          1, undefined);
Error, (in sprintf) too many levels of recursion

That error relates to this:

restart;
radnormal(abs((HFloat(infinity)+HFloat(infinity)*I)*2^(1/2)
              +HFloat(infinity)+HFloat(infinity)*I)-1);
Error, (in sprintf) too many levels of recursion

And that in turn tries the following:

`radnormal/ToRadical`(abs((infinity+infinity*I)*2^(1/2)+infinity+infinity*I)-1);
Error, (in sprintf) too many levels of recursion

And here is how that error can occur for your expression assigned to ff, (the plotting internal process will construct a procedure from the expression)

FF := unapply(ff, x):
FF(HFloat(1.9954019196875001));
Error, (in sprintf) too many levels of recursion

subs(x=HFloat(1.9954019196875001),ff):
eval(%);
Error, (in sprintf) too many levels of recursion

bug_plot_may_29_2023_pw_ac.mw


note. So far I'm seeing this problem in Maple 2019.2.1 and after, but not in 2018.2 and earlier.

I will submit a bug report.

A smooth plot can be obtained reasonably quickly in Maple 2023.0 with,
   plot(evala(ff), x=-3..3);
That skirts the issues near x=2 that arise for, say,
   plot(X->eval(ff,x=X), -3..3);

I can see that rationalize computation go off and run for a long time, even in earlier versions.

If you need to get around this, how about one of,

   radnormal(expr,':-rationalized')

   simplify(radnormal(expr,':-rationalized'),size)

   simplify(radnormal(expr,':-rationalized'))

For example, in under 2sec on my Maple 2023.0,

restart;

kernelopts(version);

`Maple 2023.0, X86 64 LINUX, Mar 06 2023, Build ID 1689885`

 

expr:=1/(3*2^(2/3)*((2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)-8*u)/(1+(256*u^3+1)^(1/2))^(1/3))^(1/2)+3*(1/(1+(256*u^3+1)^(1/2))^(1/3)*((8*2^(1/6)*u-2^(1/2)*(1+(256*u^3+1)^(1/2))^(2/3))*((2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)-8*u)/(1+(256*u^3+1)^(1/2))^(1/3))^(1/2)-4*2^(1/6)*(1+(256*u^3+1)^(1/2))^(1/3))/((2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)-8*u)/(1+(256*u^3+1)^(1/2))^(1/3))^(1/2))^(1/2)*2^(7/12)-16*u):

 

mein := CodeTools:-Usage( simplify(radnormal(expr,':-rationalized')) );

memory used=147.31MiB, alloc change=70.00MiB, cpu time=1.74s, real time=1.58s, gc time=392.02ms

(1/8192)*(3*(96*u*((3/8)*u*2^(1/3)*(1+(256*u^3+1)^(1/2))^(2/3)+(3/64)*2^(2/3)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(1/3)+u^2)*(((-(256*u^3+1)^(1/2)+1)*(1+(256*u^3+1)^(1/2))^(2/3)+32*2^(1/3)*u^2*(1+(256*u^3+1)^(1/2))^(1/3))/u^2)^(1/2)+9*2^(1/6)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(2/3)-1024*u^4*2^(5/6)-288*u^2*2^(1/2)*(1+(256*u^3+1)^(1/2))^(1/3))*((-64*u*((1/8)*u*2^(5/6)*(1+(256*u^3+1)^(1/2))^(2/3)+(1/32)*2^(1/6)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(1/3)+2^(1/2)*u^2)*(((-(256*u^3+1)^(1/2)+1)*(1+(256*u^3+1)^(1/2))^(2/3)+32*2^(1/3)*u^2*(1+(256*u^3+1)^(1/2))^(1/3))/u^2)^(1/2)+2^(2/3)*((256*u^3+1)^(1/2)-1)*(1+(256*u^3+1)^(1/2))^(2/3)-64*(1+(256*u^3+1)^(1/2))^(1/3)*u^2)/u^2)^(1/2)+3*(3*2^(1/2)*(256*u^3-3*(256*u^3+1)^(1/2)+3)*(1+(256*u^3+1)^(1/2))^(2/3)+4096*u^2*((3/128)*2^(5/6)*((256*u^3+1)^(1/2)+2)*(1+(256*u^3+1)^(1/2))^(1/3)+u^2*2^(1/6)))*(((-(256*u^3+1)^(1/2)+1)*(1+(256*u^3+1)^(1/2))^(2/3)+32*2^(1/3)*u^2*(1+(256*u^3+1)^(1/2))^(1/3))/u^2)^(1/2)-131072*u^5-13824*u^2)/(256*u^6-135*u^3)

Download nm_rdnm.mw
