acer


These are answers submitted by acer

Here are two ways.

The first way is similar to just,

Matrix(40, 151, {op(op(eval(T)))});

but it also figures out the dimensions for you.

restart;


T := table([(25, 1) = -39, (16, 151) = 32, (33, 1) = -54, (1, 1) = 29, (13, 1) = 32, (31, 101) = -7,
            (6, 51) = -10, (11, 101) = -1, (28, 151) = -39, (18, 51) = -65, (4, 151) = 29,
            (8, 151) = -10, (23, 101) = 23, (34, 51) = -54, (40, 151) = 87, (36, 151) = -54,
            (9, 1) = -1, (37, 1) = 87, (21, 1) = 23, (14, 51) = 32, (22, 51) = 23, (20, 151) = -65,
            (27, 101) = -39, (3, 101) = 29, (19, 101) = -65, (24, 151) = 23, (32, 151) = -7,
            (30, 51) = -7, (38, 51) = 87, (7, 101) = -10, (10, 51) = -1, (29, 1) = -7,
            (35, 101) = -54, (17, 1) = -65, (26, 51) = -39, (15, 101) = 32, (12, 151) = -1,
            (39, 101) = 87, (5, 1) = -10, (2, 51) = 29]);

table( [( 22, 51 ) = 23, ( 26, 51 ) = -39, ( 24, 151 ) = 23, ( 6, 51 ) = -10, ( 34, 51 ) = -54, ( 33, 1 ) = -54, ( 5, 1 ) = -10, ( 1, 1 ) = 29, ( 18, 51 ) = -65, ( 38, 51 ) = 87, ( 20, 151 ) = -65, ( 25, 1 ) = -39, ( 17, 1 ) = -65, ( 40, 151 ) = 87, ( 14, 51 ) = 32, ( 19, 101 ) = -65, ( 4, 151 ) = 29, ( 35, 101 ) = -54, ( 16, 151 ) = 32, ( 3, 101 ) = 29, ( 39, 101 ) = 87, ( 7, 101 ) = -10, ( 31, 101 ) = -7, ( 28, 151 ) = -39, ( 12, 151 ) = -1, ( 10, 51 ) = -1, ( 13, 1 ) = 32, ( 8, 151 ) = -10, ( 32, 151 ) = -7, ( 21, 1 ) = 23, ( 36, 151 ) = -54, ( 27, 101 ) = -39, ( 29, 1 ) = -7, ( 15, 101 ) = 32, ( 2, 51 ) = 29, ( 30, 51 ) = -7, ( 23, 101 ) = 23, ( 37, 1 ) = 87, ( 11, 101 ) = -1, ( 9, 1 ) = -1 ] )


# One way.

indM := Matrix([indices(T)]):

M := Matrix((min..max)(indM[1..-1,1]), (min..max)(indM[1..-1,2]), {op(op(eval(T)))}):


# Another way.

indT := indices(T):
indM := Matrix([indT]):

M2 := Matrix((min..max)(indM[1..-1,1]), (min..max)(indM[1..-1,2])):

for i from 1 to nops([indT]) do
  M2[op(indT[i])] := T[op(indT[i])];
end do:
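For comparison across languages, here is the same idea as a small Python sketch (not Maple): a sparse table keyed by (i,j) pairs becomes a dense matrix whose dimensions are read off from the maximum indices. The handful of entries is just an excerpt of the table above.

```python
# Python analogue of the Maple idea: Maple's Matrix(m, n, {...}) call
# reads the (i, j) = value entries straight from the table's operands.
T = {(1, 1): 29, (2, 51): 29, (25, 1): -39, (40, 151): 87}  # excerpt of T above

rows = max(i for i, j in T)
cols = max(j for i, j in T)

M = [[0] * cols for _ in range(rows)]   # dense, zero-filled
for (i, j), v in T.items():
    M[i - 1][j - 1] = v                 # Maple indices are 1-based

assert (rows, cols) == (40, 151)
assert M[0][0] == 29 and M[39][150] == 87
```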


# If you knew it would have this structure you could have
# constructed the Matrix directly.

interface(rtablesize=200):
(M^%T)[[1,51,101,151],1..-1];
interface(rtablesize=10):

(4 x 40 Matrix output omitted)


# Gets you back to the table.

 table([op(rtable_elems(M))]);

table( [( 22, 51 ) = 23, ( 26, 51 ) = -39, ( 24, 151 ) = 23, ( 6, 51 ) = -10, ( 34, 51 ) = -54, ( 33, 1 ) = -54, ( 5, 1 ) = -10, ( 1, 1 ) = 29, ( 18, 51 ) = -65, ( 38, 51 ) = 87, ( 20, 151 ) = -65, ( 25, 1 ) = -39, ( 17, 1 ) = -65, ( 40, 151 ) = 87, ( 14, 51 ) = 32, ( 19, 101 ) = -65, ( 4, 151 ) = 29, ( 35, 101 ) = -54, ( 16, 151 ) = 32, ( 3, 101 ) = 29, ( 39, 101 ) = 87, ( 7, 101 ) = -10, ( 31, 101 ) = -7, ( 28, 151 ) = -39, ( 12, 151 ) = -1, ( 10, 51 ) = -1, ( 13, 1 ) = 32, ( 8, 151 ) = -10, ( 32, 151 ) = -7, ( 21, 1 ) = 23, ( 36, 151 ) = -54, ( 27, 101 ) = -39, ( 29, 1 ) = -7, ( 15, 101 ) = 32, ( 2, 51 ) = 29, ( 30, 51 ) = -7, ( 23, 101 ) = 23, ( 37, 1 ) = 87, ( 11, 101 ) = -1, ( 9, 1 ) = -1 ] )


rtable_num_elems(M, NonZero);

40


nops(select(u->u<>0,[entries(T)]));

40

 

Download table_to_Matrix.mw

[Note: issues with LPSolve and 64bit Windows are known, in the sense that multiple reports have been made previously. Sometimes it manifests as a crash, sometimes as a spurious failure to find a feasible point, and sometimes as a spurious message about the problem being unbounded. I might first suspect a mismatch in the width of hardware integers (or arrays of such) being passed externally (including to the MKL). But it's very curious: machines either exhibit the problem, or they don't. I've heard of 64bit Windows 7 machines that show it, but my three 64bit Windows 7 machines don't. It'd be nice to see this resolved.]
[Note: everything below is in 64bit Linux, which does not exhibit the crash, etc.]

The method being used by Optimization:-Maximize here is the method=activeset of the Optimization:-LPSolve command.

We know that a global optimum ought to be found (if the problem is bounded, and it is...), as this is LP. It seems that the solver is simply not recognizing that we'd like it to attain more accuracy (more digits). I'll justify this supposition.

The solver being used is NAG's e04mfa, and that function in the NAG library does in fact accept an optimality tolerance option. In general the NAG library's optimization functions (chapter E04) use that to control how fine/coarse to compute an extreme point. But the Maple Library interface to that compiled external function, via LPSolve, does not admit and pass on any optimalitytolerance option.

If I use the Maple debugger to examine the istate and clambda arrays passed to e04mfa then, following the hardware double-precision call for which the return value is ifail=1, some of the clambda values are small but of the wrong sign according to their corresponding istate values. The NAG documentation for e04mfa describes, in its Section 6, a return value of 1 as indicating a "weak local" optimum, where it has assessed the Lagrange multiplier values as sufficiently negligible. See also its Section 5 descriptions of the return values for ifail, istate, and clambda.

So, I believe what's happening is that the NAG solver thinks its current solution is optimal, because it's optimal up to the small magnitude and (potentially violating) sign of the Lagrange multipliers of the active variables. Now, down in Section 11.1 that documentation describes an Optimality Tolerance option, which "defines the tolerance used to determine if the bounds and general constraints have the right ‘sign’ for the solution to be judged to be optimal."

If I intercede where Maple is setting options for the call out to e04mfa, using an additional Maple call out to e04mha to set its "Optimality Tolerance" to 1e-9 then I get the expected global optimum -1.07676e-7 as the final objective and the expected solution point.

Moreover, if I use a bogus objective like OB+1e30  (where OB was the original objective) then I get a solution with one of Sw[4] or Sw[6] as 0.11 instead of the optimal 0.1, and that spurious -1.08216e-7 as the final objective value. But this bogus objective behaves like a constant. And the NAG solver reports taking just 2 major iteration steps, just like it does for the default double precision call that the OP is reporting to us. This is more evidence that the OP's example is illustrating that the external solver is returning immediately as soon as it finds a tight feasible point.

An ideal way to prod the external solver to continue iterating would be to allow an optimality tolerance to get passed through. But that's awkward to do.

Another way to get the better result from LPSolve is to increase Maple's Digits. The NAG documentation says that the default for "Optimality Tolerance" is 0.8*eps, where eps relates to the machine epsilon (or its sqrt) in the case of hardware double precision. In the case of software precision the external solver uses 10.0^(-Digits) for that eps. This is why setting Digits higher (apart from avoiding a crash) allows the solver to find the true optimum: it lowers the default optimality tolerance. On my 64bit Linux I get the better result if I set Digits>=19 .

Yet another way to get the better result from LPSolve is simply to scale the problem so that the objective's difference (that we care about) is greater than the default optimality tolerance. On my 64bit Linux I only had to scale by roughly two orders of magnitude to get the solver to obtain the better result. (This jibes with the fact that the optimum's magnitude is about 1e-7 and forcing the optimality tolerance to 1e-9 also worked.)
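To make the tolerance arithmetic concrete, here is a toy Python check. This is not the NAG code, just the 0.8*eps bookkeeping described above, with a hypothetical small wrong-signed multiplier standing in for the clambda values:

```python
import math

# Toy stopping test: a multiplier "violation" is judged negligible once it
# falls below the optimality tolerance, so the solver declares optimality.
eps_hw = 2.0 ** -52                   # hardware machine epsilon
tol_hw = 0.8 * math.sqrt(eps_hw)      # ~1.2e-8, the double-precision default
violation = 1.0e-8                    # hypothetical small wrong-signed multiplier

assert violation < tol_hw             # judged "optimal": the premature stop
assert 100.0 * violation > tol_hw     # rescaling by ~2 orders of magnitude fixes it

tol_sw = 0.8 * 10.0 ** (-19)          # software precision with Digits = 19
assert violation > tol_sw             # raising Digits also fixes it
```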

I don't really understand the workings of the external solver used by LPSolve when method=interiorpoint. But my attachment also shows that it can be successfully prodded using either modest rescaling and some modest adjustment to the feasibilitytolerance option (which means something quite different to the solver), or by rescaling greatly.

Here's a sheet showing close to the bare changes to obtain the better solution from the Optimization package, in several of the ways described above (except the forced extra call to e04mha). It runs the same for me in Maple 17.02 as 2017.3, on 64bit Linux.
  LP_example_scaling.mw

If you call beta(0.4) outside of any plotting call, the result is not a number but rather an expression containing the unassigned name b__sca, which is just a single name.

The operator you assign to d is,

d := z -> t*b__sca/(1-z)

Perhaps you meant,

d := z -> t*b__sc*a/(1-z)

or,

d := z -> t*b__sc/(1-z)

If I understand you, then the name for that (or something effectively close to that) is "balancing".

That is offered by the command EigenConditionNumbers in the LinearAlgebra package, in the sense that it allows you to force it off or on. That command allows you to compute and return eigenvalues and associated condition numbers (and similarly for eigenvectors). Since that Maple command calls out to LAPACK function dgeevx this note may be useful.

See also this description. Note that in their inlined example the diagonal elements become 1.0 after balancing.

[edit] Technically, balancing as mentioned above consists of permuting and scaling. And what you described is akin to the scaling part.
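For the curious, here is a minimal Python sketch of just the scaling half: a single Parlett-Reinsch-style sweep. The actual LAPACK routine also permutes, and repeats the sweeps until convergence, so this is only an illustration of the idea.

```python
def balance(a, radix=2.0):
    # One sweep of diagonal scaling D^-1 . A . D: for each i, scale row i
    # by 1/f and column i by f (f a power of the radix) until the
    # off-diagonal 1-norms of row i and column i roughly match.
    # Eigenvalues are preserved, since this is a similarity transform.
    n = len(a)
    a = [row[:] for row in a]
    for i in range(n):
        r = sum(abs(a[i][j]) for j in range(n) if j != i)
        c = sum(abs(a[j][i]) for j in range(n) if j != i)
        if r == 0.0 or c == 0.0:
            continue
        f = 1.0
        while c < r / radix:
            c *= radix
            r /= radix
            f *= radix
        while c > r * radix:
            c /= radix
            r *= radix
            f /= radix
        for j in range(n):
            a[i][j] /= f
            a[j][i] *= f
    return a

b = balance([[1.0, 10000.0], [1.0e-4, 1.0]])
assert b[0][0] == 1.0 and b[1][1] == 1.0      # diagonal untouched
assert abs(b[0][1] * b[1][0] - 1.0) < 1e-9    # a01*a10 invariant under scaling
```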

The code to construct such Matrices can be quite short, using modular arithmetic.

restart;

M := n -> Matrix(n, (i,j)->binomial(n,i-j mod n)):

M(6);

(6 x 6 Matrix output omitted)

LinearAlgebra:-Determinant( M(6) );

0

M(7);

(7 x 7 Matrix output omitted)

LinearAlgebra:-Determinant( M(7) );

6835648

seq( LinearAlgebra:-Determinant( M(k) ), k = 1 .. 20 );

1, -3, 28, -375, 3751, 0, 6835648, -1343091375, 364668913756, -210736858987743, 101832157445630503, 0, 487627751563388801409591, -4875797582053878382039400448, 58623274842128064372315087290368, -1562716604740038367719196682456673375, 37007295445873520405974435690486750278679, 0, 430324720699761775272469526758735052055558206841367, -211862614423426992024716584401218892949998378753662109375
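The construction and the determinant sequence are easy to check outside Maple too; here is a Python sketch of the same modular-binomial circulant, using exact integer (Bareiss fraction-free) elimination for the determinant:

```python
from math import comb

def M(n):
    # entry (i, j) is binomial(n, (i - j) mod n); 0-based here,
    # matching Maple's 1-based (i-j) mod n construction above
    return [[comb(n, (i - j) % n) for j in range(n)] for i in range(n)]

def det(m):
    # Bareiss fraction-free elimination: exact integer determinant
    a = [row[:] for row in m]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:
            for r in range(k + 1, n):      # pivot on a nonzero row
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0                   # whole column zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return sign * a[-1][-1]

assert [det(M(k)) for k in range(1, 7)] == [1, -3, 28, -375, 3751, 0]
```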

 


Download detmod.mw

You don't need to load the deprecated linalg package. Matrix is different from matrix. The newer LinearAlgebra package is for Matrix, while the deprecated linalg package is for deprecated matrix or the deprecated array.

Maple is a case-sensitive programming language.

Submitting the word Determinant into Maple's Help system's search mechanism (search box) allows you to find the help page for the LinearAlgebra:-Determinant command, and that includes notes on how to use it, as well as examples.

You could consider looking through the basic user manual, available in Maple's Help system by entering UserManual into the Help search box.

Please submit your future queries here as Questions and not as Posts.

If your working precision is too small then not all real roots of your degree 28 polynomial (that determinant, a polynomial in P) get their imaginary components resolved sufficiently close to zero to be recognized as purely real.

With Digits set only to 20, only one real root gets resolved sufficiently (perhaps somewhat randomly).

You can see this a little more clearly by using fsolve on a complex box (rather than a real interval) and playing with the Digits setting.

At Digits=100 I get 28 real roots returned. Here I used Maple 2016.2, to which you can upgrade a valid Maple 2016.0 for free.
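The phenomenon is easy to reproduce with any numerical root-finder; here is a Python sketch (a Durand-Kerner iteration on a small cubic, not Maple's fsolve) in which genuinely real roots come back as complex numbers whose imaginary parts are only numerically zero, so a tolerance has to decide which roots count as real:

```python
def poly_roots(coeffs, iters=100):
    # Durand-Kerner: simultaneous iteration for all roots of a monic
    # polynomial given by coefficients in descending powers.
    n = len(coeffs) - 1
    def p(x):
        v = 0j
        for c in coeffs:
            v = v * x + c
        return v
    rs = [(0.4 + 0.9j) ** k for k in range(n)]   # standard starting points
    for _ in range(iters):
        nxt = []
        for i, r in enumerate(rs):
            d = 1 + 0j
            for j, s in enumerate(rs):
                if j != i:
                    d *= r - s
            nxt.append(r - p(r) / d)
        rs = nxt
    return rs

# x^3 - 6x^2 + 11x - 6 has exactly the real roots 1, 2, 3 ...
rs = poly_roots([1.0, -6.0, 11.0, -6.0])
# ... but the computed roots are complex values whose imaginary parts are
# only *numerically* zero; classifying them as real requires a tolerance.
real = sorted(r.real for r in rs if abs(r.imag) < 1e-8)
assert [round(x) for x in real] == [1, 2, 3]
```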

Stability_a.mw

I don't know of any 2D Input syntax that allows you to enter a Vector within a typeset radical and use the tilde to make that act elementwise.

I don't see that loading Units:-Standard is the problem here. What you had originally won't work even without that package loaded, AFAIK.

But you can do these:

restart

interface(imaginaryunit = I); with(Units[Standard]); with(plots)

`~`[sqrt](Vector(2, {(1) = 16., (2) = 9.}))

Vector[column]([4., 3.])

`~`[proc (x) options operator, arrow; sqrt(x) end proc](Vector(2, {(1) = 16., (2) = 9.}))

Vector[column]([4., 3.])


Download Elementwise_Square_Root_a.mw

Eigenvectors, Eigenvalues, and Determinant, etc are commands in the LinearAlgebra package.

It sounds like you may not be calling the commands properly. In order to access them you have to either load the package by first doing

with(LinearAlgebra):

or call them by their long-form fully qualified name, eg,

LinearAlgebra:-Eigenvalues(M70);

and so on.

In contrast, issuing M70^(-1) or 1/M70 dispatches directly to LinearAlgebra:-MatrixInverse.

Another possibility is that you got a result but since the inputs weren't floats then the result contained RootOf placeholders. I suspect that's not actually the problem, but it's not possible to know for sure because you haven't attached a worksheet/document that exhibits the problem.

Have you tried,

setmaple('a', 10)
instead of just your,
a=10

You cannot get output like what you've described (ie. a mixture of upright text and 2D Output, and without separating commas) using the print command.

But you can still generate a procedure to assemble something which displays as output.

restart

kernelopts(version);

`Maple 2015.2, X86 64 LINUX, Dec 20 2015, Build ID 1097895`

H:=proc()
  local oldlevel, temp;
  uses Typesetting;
  try
    oldlevel:=interface('typesetting'=':-extended');
    temp:=Typeset(EV(args));
    temp:=eval(temp,[mo("&comma;")=NULL,ms=mn,
          mstyle(mo("and"),'mathvariant'="bold")=mn(" and "),
          mo("&and;",msemantics="inert")=mn(" and ")]);
    temp:=subsindets(temp,And('specfunc'(mfenced),
                              satisfies(u->member(msemantics="realrange",
                                                  [op(u)]))),
                     u->mfenced(mrow(op(1,u),mo(";"),op(2,u)),
                                op(3..,u)));
  catch:
    return NULL;
  finally
    interface('typesetting'=oldlevel);
  end try;
  return temp;
end proc:

bt1 := 0 < x and x < (1/2)*Pi;

0 < x and x < (1/2)*Pi

f(x) = x-sin(x)

`%>`(f(x), f(0))

f(0) = 0

`%>`(x, sin(x))

RealRange(Open(0), Open((1/2)*Pi))

 

A fun idea is to get it to display the same regardless of the typesetting level.

 

interface(typesetting = standard):

"So, for 0<x<Pi/2 we have f(x)=x-sin(x)  and  `%>`(f(x),f(0))  and  f(0)=0, or `%>`(x,sin(x)), on the interval (0;Pi/2)"

interface(typesetting = extended):

"So, for 0<x<Pi/2 we have f(x)=x-sin(x)  and  `%>`(f(x),f(0))  and  f(0)=0, or `%>`(x,sin(x)), on the interval (0;Pi/2)"

 

# Ideally the unwanted `and` could be removed more carefully,
# for just the particular cases and without need for `` blanks.
H2:=proc()
  local oldlevel, temp;
  uses Typesetting;
  try
    oldlevel:=interface('typesetting'=':-extended');
    temp:=Typeset(EV(args));
    temp:=eval(temp,[mo("&comma;")=NULL,ms=mn,
          mstyle(mo("and"),'mathvariant'="bold")=NULL,
          mo("&and;",msemantics="inert")=mn("and")]);
    temp:=subsindets(temp,And('specfunc'(mfenced),
                              satisfies(u->member(msemantics="realrange",
                                                  [op(u)]))),
                     u->mfenced(mrow(op(1,u),mo(";"),op(2,u)),
                                op(3..,u)));
  catch:
    return NULL;
  finally
    interface('typesetting'=oldlevel);
  end try;
  return temp;
end proc:

ct1 := 0 < x and x < (1/2)*Pi;

0 < x and x < (1/2)*Pi

f(x) = x-sin(x)

`%>`(``, f(0))

`` = 0

`%>`(x, sin(x))

RealRange(Open(0), Open((1/2)*Pi))

H2("So, for ", ct1, " we have ", `and`(`and`(ct2, ct3), ct4), ", or ", ct5, ", on the interval ", ct6);

"So, for 0<x<Pi/2 we have f(x)=x-sin(x)  `%>`(,f(0))  =0, or `%>`(x,sin(x)), on the interval (0;Pi/2)"

 

Download typeset_math_text.mw

 

kernelopts(version);
         Maple 9.51, IBM INTEL LINUX, Aug 9 2004 Build ID 163356

eliminate({x[5] = x[2] / x[1], x[6] = x[3] / x[2], x[7] = x[1] / x[4],
           x[8] = (2 * x[2] + x[4]) / (2 * x[1] + x[3] + x[4])},
           {x[1], x[2], x[3], x[4], x[8]}):

lprint(%);
[{x[1] = x[7]*x[4], x[8] = (2*x[5]*x[7]+1)/(2*x[7]+x[6]*x[5]*x[7]+1),
  x[2] = x[5]*x[7]*x[4], x[3] = x[6]*x[5]*x[7]*x[4], x[4] = x[4]}, {}]

Between versions 9.5.1 and 10.00 the procedure `eliminate/recursive` changed quite a bit.

kernelopts(version);
         Maple 10.00, X86 64 LINUX, May 13 2005 Build ID 190196

eliminate({x[5] = x[2] / x[1], x[6] = x[3] / x[2], x[7] = x[1] / x[4],
           x[8] = (2 * x[2] + x[4]) / (2 * x[1] + x[3] + x[4])},
           {x[1], x[2], x[3], x[4], x[8]});
                           NULL
kernelopts(version);
         Maple 2017.3, X86 64 LINUX, Sep 27 2017, Build ID 1265877

solve({x[5] = x[2] / x[1], x[6] = x[3] / x[2], x[7] = x[1] / x[4],
       x[8] = (2 * x[2] + x[4]) / (2 * x[1] + x[3] + x[4])},
       {x[1], x[2], x[3], x[4], x[8]}):

lprint(%);
{x[1] = x[7]*x[4], x[2] = x[5]*x[7]*x[4], x[3] = x[6]*x[5]*x[7]*x[4],
 x[4] = x[4], x[8] = (2*x[5]*x[7]+1)/(x[5]*x[6]*x[7]+2*x[7]+1)}

The last time that it calls itself on that example in Maple 2017.3, `eliminate/recursive` is passing the empty set {} as its first argument. Perhaps it ought to be doing some additional/modified check before doing that?  I have not studied it recently.

The PolyhedralSets package was introduced in Maple 2015, the major release right after Maple 18.

You could either convert those inequalities to RealRange (using the convert command).

Or you could call solve without the braces around the second argument. For example,

restart;

S1 := `union`( solve( x^2-3*x+2 <= 0, x ) );

RealRange(1, 2)

S2 :=`union`( solve( x^2-3*x+2 >= 0, x ) );

`union`(RealRange(2, infinity), RealRange(-infinity, 1))

lprint(S1);

RealRange(1, 2)

lprint(S2);

RealRange(2, infinity) union RealRange(-infinity, 1)

is( 3/2 in S1 );

true

is( 3/2 in S2 );

false
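A rough Python analogue of the first example (solving the quadratic for an interval and testing membership) may make the RealRange results above more concrete:

```python
import math

# Solve x^2 - 3x + 2 <= 0 for an interval, mirroring RealRange(1, 2).
a, b, c = 1.0, -3.0, 2.0
d = math.sqrt(b * b - 4.0 * a * c)
r1, r2 = (-b - d) / (2.0 * a), (-b + d) / (2.0 * a)
S1 = (r1, r2)              # the parabola opens upward, so it is <= 0 between the roots

def member(x, interval):
    lo, hi = interval
    return lo <= x <= hi

assert S1 == (1.0, 2.0)
assert member(1.5, S1)     # like is(3/2 in S1) -> true
assert not member(2.5, S1) # 5/2 lies in the complement, S2
```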

 


Download RealRange_solved.mw

 

I believe that the parametrizing approach that vv has shown is very likely best, for simplicity and overall quality.

But for fun I'll show another approach, using the range functionality of the usual cartesian parameters.

It's key to note that the surfaces in question can be split up into (at least two) pieces, where each is the plotted surface of a function. I mean function in the usual mathematical sense, where each (x,y) point produces just one z-value. Hence instead of using implicitplot3d we can just stitch together those various pieces.

I'm going to split each of the sphere and the ellipsoid into four such pieces. I'm pretty sure that it could be done in just two (using conditionals, or piecewise, or min/max calls in the ranges) but maybe it's not worth the extra effort.

One of the downsides to having to stitch together various pieces is that often either the seam shows (slightly darker due to minuscule overlap) or very tiny pinhole gaps show under manual rotation. Even the parametric approach shows a slight seam. But here -- with the radicals that come from solving these conics for particular variables -- any slight amount of numeric error can make these effects noticeably worse.

The reason both the parametric and this approach work is that they both generate MESH plotting structures, where the (x,y) data points can be arbitrary and irregular. And so they can follow a 3D surface edge smoothly. In contrast, the regularly spaced GRID plotting structure can display with ragged edges and gaps where some evenly spaced (x,y) vertex fails to produce a real z-value (because the value there is undefined or imaginary).

Anyway, without much clean-up, and a few small numeric adjustments to keep the radicals behaved: Intersecting_surfaces_a3.mw
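As a toy Python illustration of the splitting idea (using the unit sphere rather than the actual surfaces in the worksheet): each piece is the graph of a single-valued function of (x,y), with a clamp playing the role of the "small numeric adjustments" that keep the radicals behaved near the edge.

```python
import math

# The unit sphere x^2 + y^2 + z^2 = 1, split into two single-valued
# pieces z = +/- sqrt(1 - x^2 - y^2).  The clamp guards against tiny
# negative radicands produced by roundoff near the boundary circle.
def z_upper(x, y):
    t = 1.0 - x * x - y * y
    return math.sqrt(max(t, 0.0))

def z_lower(x, y):
    return -z_upper(x, y)

assert z_upper(0.0, 0.0) == 1.0                         # north pole
assert abs(z_upper(0.6, 0.8)) < 1e-7                    # on the equator, up to roundoff
assert abs(z_upper(0.5, 0.5) - math.sqrt(0.5)) < 1e-12
```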

I've seen a few requests like this.

A similar weakness appears in the plots:-matrixplot command (when creating surfaces, not using its histogram features), but the alternative in that case is to use plots:-surfdata which allows you to specify the ranges over which to interpret the data. There is no easy alternative for plots:-listcontplot that I'm aware of.

But you can use the plottools:-transform command to rescale. It's a little work to create the transformer, but once you see the construction of trans below you'll get the idea, which is not complicated and is straightforward to modify for other examples.

I also have a modified listcontplot which does this automatically. It also allows for interpolated refinement of the data. I might dig that up.

restart;

Porig:=plots:-contourplot(sin(y)*x^2, x=0.3 .. 0.7, y=2.2 .. 3.7, style=point):
 

M:=Matrix(20,50,
          (i,j)->evalf( (0.3+(i-1)*(0.7-0.3)/(20-1))^2*sin((2.2+(j-1)*(3.7-2.2)/(50-1))) ) ):

 

You are starting with a numeric Matrix M.

 

Since you've asked for it to be displayed over [0.3..0.7]x[2.2..3.7], presumably you want M[1,..] to be interpreted as x=0.3 and M[20,..] to be interpreted as x=0.7, and something analogous for y.

The Porig above was generated only for visual verification.

 

P1:=plots:-listcontplot( M ):
P1;

trans:=plottools:-transform((x,y)->[0.3+(x-1)*(0.7-0.3)/(20-1), 2.2+(y-1)*(3.7-2.2)/(50-1)]):
Pnew:=trans(P1):
Pnew;

Does Pnew agree with Porig? Let's see:

plots:-display(Porig, Pnew);
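In case the construction of trans isn't clear, it's just an affine map from 1-based Matrix indices onto the data ranges; a Python rendering of the same map (not part of the worksheet):

```python
# Affine map from a 1-based grid index k in 1..n onto the range [lo, hi],
# exactly the formula used inside trans above.
def idx_to_coord(k, n, lo, hi):
    return lo + (k - 1) * (hi - lo) / (n - 1)

assert abs(idx_to_coord(1, 20, 0.3, 0.7) - 0.3) < 1e-12   # first row -> x = 0.3
assert abs(idx_to_coord(20, 20, 0.3, 0.7) - 0.7) < 1e-12  # last row  -> x = 0.7
assert abs(idx_to_coord(50, 50, 2.2, 3.7) - 3.7) < 1e-12  # last col  -> y = 3.7
```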

 


listcontplot_ranges.mw

 
