acer

Ontario, Canada

Social Networks and Content at Maplesoft.com

These are answers submitted by acer

Maple's GE (and GJE) sometimes switches around the rows, but not the columns, so it's unclear what you mean by "...the columns and see where they are after..."

The row switches can be seen by the pivot Vector/Matrix, which is one of the outputs of LUDecomposition (LU is GE in mild disguise). This can be "applied" after the fact to a Vector of names.
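For instance, here is a minimal sketch (the Matrix M and the names row1, row2 are just illustrative):

```
M := Matrix([[0, 1], [2, 3]]):   # GE must swap rows here, since M[1,1] = 0
P, L, U := LinearAlgebra:-LUDecomposition(M):
# With Maple's convention M = P . L . U, the rows of L . U are the rows
# of M reordered by P^%T, so apply that to a Vector of names to see
# where each original row ended up after pivoting.
LinearAlgebra:-Transpose(P) . Vector([row1, row2]);
```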

acer

Here is a short example which may help you see the relationship between the algebraic form and the Matrix form accepted by LPSolve.

There is no exposed conversion utility (likely because one of the central purposes of the Matrix form is to never form the large expressions in the algebraic form at all). But you might experiment with the internal routine which I use below, in order to see what happens when there are no strict equality constraints, or no inequality constraints, or no simple variable bounds, and so on.

> restart:

> problem := -x-2*y, {7 >= 3*x+5*y, 113 >= -11*x-13*y, 17*x+19*y=23}, x=-29..31, y=-37..41:

> Optimization:-LPSolve(problem);

     [-2.92857142857143, [x = -0.642857142857142, y = 1.78571428571429]]

> kernelopts(opaquemodules=false):

> problem_matrix_form := Optimization:-Convert:-AlgebraicForm:-LPToMatrix(problem)[2..4];

        problem_matrix_form := [-1., -2.], 

          [[-11.  -13.]                               ]                            
          [[          ], [113., 7.], [17.  19.], [23.]], [[-29., -37.], [31., 41.]]
          [[  3.    5.]                               ]                            

> Optimization:-LPSolve(problem_matrix_form);

                  [                   [-0.642857142857142]]
                  [-2.92857142857143, [                  ]]
                  [                   [  1.78571428571429]]

[edited] I might as well just do it. The first two places represent variables ur and dr, and the remaining N places represent w[i], i=1..N.

restart:
randomize():

with(Optimization):
with(Statistics):
with(LinearAlgebra):

N := 500:
R := RandomMatrix(N, outputoptions = [datatype = float[8]]):
ER := Vector[column]([seq(ExpectedValue(Column(R, j)), j = 1 .. N)]):

# algebraic form
st:=time():
W := Vector(N, symbol = w):
S := Vector(N, fill = 1, datatype = float[8]):
z := Multiply(R, Matrix(W)):
con1 := Transpose(W).S = 1:
con4 := seq(z[i][1] - dr >= 0, i = 1 .. N):
con5 := expand(Transpose(W).ER) - ur >= 0:
sol1:=LPSolve(ur+dr, {con1, con4, con5}, seq(w[i]=0..1,i=1..N), maximize = true):
sol1[1];
                        2.68825855741790
time()-st;
                             6.296

# Matrix form
st:=time():
A:=Matrix(N+1,N+2,datatype=float[8]):
A[2..-1,3..-1]:=R:
A[2..-1,1]:=Vector(N,fill=-1,datatype=float[8]):
A[1,3..-1]:=ER:
A[1,2]:=-1:
MatrixScalarMultiply(A,-1,inplace):
b:=Vector(N+1,datatype = float[8]):
#A,b;
Aeq:=Matrix(1,N+2,fill=1,datatype=float[8]):
Aeq[1,1..2]:=Vector(2,datatype=float[8]):
beq:=Vector(1,[1],datatype=float[8]):
#Aeq,beq;
c:=Vector(N+2,[1,1],datatype=float[8]):
#c;
bl:=Vector(N+2,datatype=float[8]):
bl[1..2]:=Vector(2,fill=-infinity):
bu:=Vector(N+2,fill=1,datatype=float[8]):
bu[1..2]:=Vector(2,fill=infinity,datatype=float[8]):
#bl,bu;
sol2:=Optimization:-LPSolve(c,[A,b,Aeq,beq],[bl,bu],maximize = true):
sol2[1];
                        2.68825855741790
time()-st;
                             3.735

So, half the time, for N=500. There are also memory savings roughly on the order of the size of all the constraints as symbolic expressions, which I didn't bother to measure. A little more time might be saved by using ArrayTools routines such as BlockCopy and Fill.

acer

Obviously we cannot be completely sure about the cause of the problem, if we cannot see the full worksheet that exhibits the problem.

But I suspect that Georgios K. is on the right tack, when looking at the incomplete worksheet containing just the final formulae.

I suspect roundoff as the underlying cause because, as Georgios mentions, the plots stabilize when computed at a higher working precision. I can actually get `function1` to plot in what seems to be a reliable and accurate manner with Digits set only to 33, provided that I first manipulate `function1` with the `evalc` command. That command will treat `theta` as purely real and, under that assumption, will simplify the formula enough to remove all merely ostensible occurrences of `I`, the imaginary unit. Not only does evalc(function1) appear to plot more accurately without such a high Digits setting as `function1` requires, but it also does so measurably faster.

The other reason I suspect roundoff is that the order of terms in PROD and SUM dags still depends on memory address. As I tried to illustrate in this old blog post, the order in which multiplicands are stored internally in a product data structure can greatly affect floating-point evaluation. The same is true of sums (which I hope to blog about soon, though I must apologize, for I have said that before...). The upshot is that session-dependent ordering of arithmetic terms can magnify numerical difficulties at a given working precision. How high to raise Digits, the working precision, in order to overcome this is yet another tough topic.

Again, without the Original Poster's full worksheets, we cannot investigate and discover the underlying cause in a definitive way. But some of the familiar bells that are ringing seem to include roundoff and memory-based ordering of terms.

To trlewis: please feel free to contact me if you would like me to investigate your worksheets confidentially.

Since simplification (including using evalc, say) need not necessarily improve the numeric robustness of a formula evaluated in floating-point, for now I'd simply concur with Georgios: increase Digits.

Example_alt01.mw Roundoff may be more severe in your original full worksheets than in this modified sheet, depending on term ordering issues. It seems like a possibility.

I do not know what, if anything, MuPAD might try to do, automatically, to accommodate numerical issues for such examples.

acer

The seven curves are functions of x alone, each for some fixed value of A. It's natural to interpolate, to obtain values for A between the seven curves.

But you might find that linear interpolation gives too sharp a change, for some `x`, as `A` varies across some of the seven given curves' values. A higher order interpolation scheme may smooth over those changes while still allowing the interpolated surface to coincide with all seven curves.

Below, I try to show how you can produce surfaces using both lower and higher order interpolation, with a visual difference (discernible upon closer inspection) in the smoothness in the A-direction. Hopefully my indexing and bookkeeping are correct, but that can be checked. Also, if you feel lucky, you can extrapolate down from A=5 to A=1.

I uploaded only small replicas of the surfaces plotted with glossiness and light models. This may cause the inlined images to appear a little splotchy when viewed on MaplePrimes. Here's the full sheet: surf_eq_seven_curve.mw

restart;
# A=5
R5 := (38.269-14.640*x+5.1792*x^2+0.74587e-2*x^3)/(1.-.42387*x+.86530*x^2):
# A=10
R10 := (37.653+47.812*x+4.3844*x^2+.48174*x^3)/(1.+.67811*x+2.9990*x^2):
# A=15
R15 := (36.927+70.867*x+16.401*x^2+.31751*x^3)/(1.+.93439*x+4.4198*x^2):
# A=20
R20 := (37.373+22.213*x+28.413*x^2-.25340*x^3)/(1.-0.54937e-1*x+3.5737*x^2):
# A=25
R25 := (38.018-9.2287*x+28.549*x^2-.28952*x^3)/(1.-.56866*x+2.5447*x^2):
# A=30
R30 := (38.575-22.618*x+25.073*x^2-.22292*x^3)/(1.-.70178*x+1.8478*x^2):
# A=35
R35 := (39.005-27.849*x+21.729*x^2-.16576*x^3)/(1.-.69438*x+1.4182*x^2):

# Form an Array of sample points, along the 7 regularly spaced (in A) curves.
# The more points used, the smoother the surface will be in the x-direction.
# (Further below, we see how we may smooth the surface in the A-direction.)

N := floor((27.0-0.1)/(0.1)):

Q := Array(1 .. 7, 1 .. N,
          [seq([seq(eval(r, x = 0.1+j*0.1), j=0..N-1)],
               r = [seq(cat(R, 5*i), i = 1 .. 7)])], datatype = float[8]):

# Vectors of the sample points (each in 1 dimension only).

P := Vector(7,i->5*i):

T := Vector(N,i->(i)*0.1):

# Form a point plot of the sample points, only for illustration purposes.

plot_pts := plots:-pointplot3d([seq(seq([P[i],T[j],Q[i,j]],
                                       j=1..N),i=1..7)],
                              'symbolsize'=10,'color'='red'):

# First, a surface plot with low order interpolation between the curves.
# Notice that it matches the sample points exactly.
# Notice some sharp changes where A crosses the seven curves, due to low
# order interpolation, eg. at A=20, x=27.0.

P1:=plots:-surfdata(Q, 5..35, 0.1..27.0, orientation=[74,77,-11], labels=['A','x',``],
                axes=box,color=gold,glossiness=0.9,lightmodel=light4):

plots:-display(P1, plot_pts, orientation=[74,77,-11]);

# It would be nice to have a smoother surface, which still matches the sample
# points but without the sharp changes (as A crosses any of the seven curves).
# So create a plot'able procedure, with higher order spline interpolation.
# See ?CurveFitting:-ArrayInterpolation for details on the available schemes.

B := (a,b) -> CurveFitting:-ArrayInterpolation(
                               [P,T], Q,
                               Array(1..1, 1..1, 1..2, [[[a,b]]]),
                               'method' = 'spline')[1,1]:

# Plot procedure B. Notice that it too matches the sample points exactly.
# Notice the surface is smoother for fixed x (eg. along x=27.0, esp. at A=20)

P2 := plot3d(B, 5..35, 0.1..27.0, orientation=[74,77,-11], labels=['A','x',``],
             axes=box,color=RGB(0.0,0.3,0.3),glossiness=0.9,lightmodel=light4):

plots:-display(P2, plot_pts, orientation=[74,77,-11]);

# Show the surfaces without gridlines or the sampled data points.

plots:-display(Array(1..2,[P1,P2]),style=patchnogrid,orientation=[88,70,1]);

# It is also easy to extrapolate the plot of `B`, from A=5 down to A=1.

plot3d(B, 1..35, 0.1..27.0, orientation=[74,77,-11], labels=['A','x',``],
             axes=box,color=RGB(0.0,0.3,0.3),glossiness=0.9,lightmodel=light4);

 

acer

You'd be better off using Quantile's facility for purely numeric computation. Also, you have it prematurely evaluating procedure `t` at symbolic (nonnumeric) `n`.

with(Statistics):

t:=(p,df)->Quantile(RandomVariable(StudentT(df)),p,numeric):

s:=0.5: beta:=0.95: delta:=0.14:

st:=time():

fsolve(n->t((beta+1)/2,n-1)*s/sqrt(n)-delta,2..infinity);


                          51.43599587
time()-st;

                             1.061


delta:=0.11:

st:=time():

fsolve(n->t((beta+1)/2,n-1)*s/sqrt(n)-delta,2..infinity);

                          81.80057685

time()-st;

                             1.498

In your version here's what you were actually passing to fsolve.

restart:
with(Statistics):
t:=(p,df)->Quantile(RandomVariable(StudentT(df)),p):
s:=0.5: beta:=0.95: delta:=0.11:
t((beta+1)/2,n-1)*s/sqrt(n)=delta; # this is what you were passing to fsolve

  0.5*RootOf(40*hypergeom([1/2, (1/2)*n], [3/2], -_Z^2/(n-1))*_Z*GAMMA((1/2)*n)
      - 19*(Pi*(n-1))^(1/2)*GAMMA((1/2)*n - 1/2))/n^(1/2) = 0.11

In contrast, Statistics:-Quantile knows how to compute a purely numeric result much faster than the above mess can be computed numerically for some values of n.

acer

For one thing, you should be using round brackets like () instead of square brackets [] in the definition of w(r).

Square brackets are the list constructor. But you want round brackets as expression delimiters there.
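A minimal illustration (the formula here is made up):

```
w := r -> 3*[r + 1]:   # wrong: [...] builds a list, so w(2) is a list, not a number
w := r -> 3*(r + 1):   # right: (...) merely groups the subexpression
w(2);                  # 9
```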

acer

You should be using `eval` instead of `subs`, for the first way. In other words, eval(F,x=0) instead of subs(x=0,F) for example.

For your other way, you haven't actually performed an "unapply" action. You've just written down an operator manually. That will not work, because the formal parameter of the operator (the x in x->) is not the same as the global x in the expression f. To get this way to work, you should do G:=unapply(lhs(f),x) instead.
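A small sketch of both points, assuming (as in your lhs(f)) that f is an equation with the formula of interest on its left-hand side; the particular equation below is made up:

```
f := x^2 + 1 = y:          # illustrative equation, formula on the lhs
eval(lhs(f), x = 2);       # 5; preferred over subs(x = 2, lhs(f))
G := x -> lhs(f):          # wrong: the parameter x is not the global x in f
G(2);                      # still x^2 + 1; the 2 never reaches the formula
H := unapply(lhs(f), x):   # right: unapply binds the global x as the parameter
H(2);                      # 5
```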

acer

Here is a bit more.

2p81_final_modif2.mw

Perhaps you could make a backup copy.

The last version of updtsrc (suitable for getting Maple V source code files to run in Maple 6) can be obtained using the "HTTP" links from this page (which itself can be arrived at via a link from the ?updtsrc help-page).

But isn't updtsrc a utility for upgrading plaintext source code only? I mean, it does language syntax conversion, right? Does it also handle .mws? I thought not. If so, would it work to open the old R4 .mws sheets in modern Maple's Classic GUI (or in Maple V R4's GUI, if you can), and first export (or copy & paste) the source code to plaintext .mpl files?

Where it gets sticky is with sheets older than R4, if I recall. There were some versions of updtsrc which could do R2->R3 conversion (and another for R3->R4). But the last one, linked above, might not handle those earlier conversions as well. I could be wrong about that. I have a hazy recollection of needing a multi-step process to make R2 or R3 sheets usable in Maple 7 or modern Maple. If anyone remembers this kind of detail, it'd probably be Thomas Richard.

If the version of updtsrc linked to from ?updtsrc doesn't fully work for the source code in your MapleV R4 sheets then let us know.

acer

First off, I think that one might need to set the Manipulator to "Click and Drag" so that the Action code is executed upon subsequent left-clicks, after the first animation. (I mention it just in case anyone is having trouble running your example.)

If I am understanding you, then the issue is that the "current" frame of the old animation is displayed while the upcoming animation is being computed by the kernel and prepared for insertion by the GUI. I saw this more clearly when I used your example with frames=400. Then I ran it, stopped part way through, slid the playbar back to the beginning or one side, and then left-clicked to trigger the Action. What I saw was that the "last" frame of the previous animation was displayed while the upcoming animation was being prepared.

If I've understood rightly, then I should ask: what do you want displayed during that pause? The last shown frame of the previous animation, or a blank plot, or perhaps the first frame of the upcoming animation? I think any of these can be accomplished by adding another line before the insertion of the upcoming animation, e.g. uncommenting the one you want,

use DocumentTools, plots in

if cc = red then cc:= blue else cc:= red fi;

# quickly insert first frame of coming animation
#SetProperty(Plot0,value,plot(eval(sin(x+t),x=0),t=0..Pi,colour=cc),refresh=true);

# quickly insert blank frame
#SetProperty(Plot0,value,plot([[0,undefined]]),refresh=true);

# quickly insert something else
#...

SetProperty(Plot0,value,
            animate(plot,[sin(x+t),t=0..Pi,colour=cc],x=0..Pi,frames=400),
            refresh=true);
SetProperty(Plot0,play,true,refresh=true)

end use;

This is one of several reasons why I don't like using the traditional animation data structure inside Plot components. Another problem (for me) is that the animation runs asynchronously, so any commands that come in the same code but after the play command get performed and then overridden by the animation, which might still be playing asynchronously in its own thread. I was unable to find a reliable way to promptly apply an action only when the animation finished. The other main problem with traditional animations is, of course, their heavy use of memory in computing all frames up front. I believe that Maple is now fast enough that most "animations" can be done in a frame-by-frame, computed-on-the-fly way, where all the controls would be handled via library-level & Component code. (Joe's comment about reusing a mere reference to an rtable inside a PLOT data structure may be extremely important here, for performance.)

acer

I'm not quite sure what you're asking. Are you unsure how to use the printf modifiers?

> s1:=0.5555552487:
> f(s1):=0.2542365647:

> printf( "s1 has value %.9e, and f(s1) has value %.10f\n", s1, f(s1) );

s1 has value 5.555552487e-01, and f(s1) has value 0.2542365647

See the printf help-page for more about the various modifiers (e, f, g, E, etc).

What do you mean by, "get it published"? If you want it printed to a file then use fprintf instead.
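For example (the filename here is just an illustration):

```
fd := fopen("results.txt", WRITE):
fprintf(fd, "s1 has value %.9e, and f(s1) has value %.10f\n", s1, f(s1));
fclose(fd);
```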

acer

Newton's method is what fsolve does for multivariate problems. You can see the source code for that interpreted Library routine in the usual ways, eg. by issuing the command,

    showstat(`fsolve/sysnewton`);

The source is a bit verbose, as it is adding guard digits, trying to be clever in the absence of a passed pair of stopping tolerances, handling both expression and operator form, etc.
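For comparison, here is a bare-bones multivariate Newton iteration for the same system (no guard digits, no stopping tolerance, and a made-up starting point), just to show the core step that fsolve/sysnewton automates:

```
F := V -> <sin(V[1] + V[2]), V[1] - V[2]>:
# Jacobian of F, written out by hand for this 2x2 system
J := V -> <<cos(V[1] + V[2]) | cos(V[1] + V[2])>, <1 | -1>>:
V := <0.5, -0.3>:
for k to 10 do
    # Newton step: solve J(V) . dV = F(V), then update V := V - dV
    V := V - LinearAlgebra:-LinearSolve(J(V), F(V));
end do:
V;   # a root of {sin(a+b) = 0, a - b = 0}
```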

> restart:

> f:=(a,b)->sin(a+b):
> g:=(a,b)->a-b:

> infolevel[fsolve]:=2:

> sol := fsolve({f,g});

fsolve/sysnewton: trying multivariate Newton iteration
fsolve/sysnewton:
guess vector [6.5647247568434774, -9.1035796650957585]
fsolve/sysnewton: norm of errors: 16.235204335176419
fsolve/sysnewton: new norm: 0.85321881380602701e-1
fsolve/sysnewton: iter = 1 |incr| = 15.668 new values _S01 = -1.6135 _S02 = -1.6135
fsolve/sysnewton: new norm: 0.20840812946897378e-3
fsolve/sysnewton: iter = 2 |incr| = 0.85634e-1 new values _S01 = -1.5707 _S02 = -1.5707
fsolve/sysnewton: new norm: 0.30173615373566167e-11
fsolve/sysnewton: iter = 3 |incr| = 0.20840e-3 new values _S01 = -1.5708 _S02 = -1.5708
fsolve/sysnewton: new norm: 0.38462643383279503e-16
fsolve/sysnewton: iter = 4 |incr| = 0.30174e-11 new values _S01 = -1.5708 _S02 = -1.5708
              sol := [-1.5707963267948966, -1.5707963267948966]

> f( op(sol) );

                                            -10
                              4.102067615 10   

> g( op(sol) );

                                     0.

kernelopts(version);
          Maple 13.02, X86 64 WINDOWS, Oct 12 2009, Build ID 436951

acer

You could have a look at this kind of thing (and also the various links at the bottom of that Comment, and others' Answers).

acer

The first argument, in Matrix form, will be like [-EV,2*Cov].

An example, with only simple bounds as constraints,

restart:
with(LinearAlgebra):
with(Optimization):

N:=600:
U:=RandomMatrix(N,outputoptions=[datatype=float[8],shape=triangular[upper]]):
H:=Matrix(U.U^%T,shape=symmetric):

EV:=RandomVector(N,outputoptions=[datatype=float[8]]):

bl:=Vector(N,fill=0,datatype=float[8]):
bu:=Vector(N,fill=1,datatype=float[8]):

sol1 := CodeTools:-Usage( QPSolve([-EV,2*H],[],[bl,bu]) ):
memory used=77.41MiB, alloc change=72.99MiB, cpu time=2.59s, real time=2.26s

W:=Vector(N,symbol=w):
sol2 := CodeTools:-Usage( QPSolve(W^%T.H.W-W^%T.EV,seq(W[i]=bl[i]..bu[i],i=1..N)) ):
memory used=243.44MiB, alloc change=50.99MiB, cpu time=5.05s, real time=5.06s

Norm(Vector(sol1[2])-Vector(eval(W,sol2[2])));
                               0.

# run them in reverse order, to be fair
restart:
with(LinearAlgebra):
with(Optimization):

N:=600:
U:=RandomMatrix(N,outputoptions=[datatype=float[8],shape=triangular[upper]]):
H:=Matrix(U.U^%T,shape=symmetric):

EV:=RandomVector(N,outputoptions=[datatype=float[8]]):

bl:=Vector(N,fill=0,datatype=float[8]):
bu:=Vector(N,fill=1,datatype=float[8]):

W:=Vector(N,symbol=w):
sol2 := CodeTools:-Usage( QPSolve(W^%T.H.W-W^%T.EV,seq(W[i]=bl[i]..bu[i],i=1..N)) ):
memory used=272.45MiB, alloc change=102.11MiB, cpu time=5.29s, real time=4.91s

sol1 := CodeTools:-Usage( QPSolve([-EV,2*H],[],[bl,bu]) ):
memory used=49.22MiB, alloc change=10.25MiB, cpu time=2.28s, real time=2.28s

Norm(Vector(sol1[2])-Vector(eval(W,sol2[2])));
                               0.

So the Matrix form runs about twice as fast for this example, and (comparing each form's first-run alloc change) it seems to allocate only about 70% of the extra memory during the solving call. (Maple 15.01, 64bit Windows 7, Intel i7.)

acer

The short answer is that under `evalhf`, coth is being evaluated at arguments outside the range for which it computes accurately (without overflow, say). Internally, `plot` attempts evaluation with `evalhf` by default. You work around that problem when you wrap in `evalf` (or raise Digits above 15).

Examples of interest:

evalf(coth(355+I));

                                           -309  
               1.000000000 - 8.140551093 10     I

evalhf(coth(355+I));

                               1.

evalf(coth(356+I));

                                           -309  
               1.000000000 - 1.101703788 10     I

evalhf(coth(356+I)); # mildly interesting

             Float(undefined) + Float(undefined) I

evalf(1/(exp(354+I)));

                         -155                 -154  
           9.826304709 10     - 1.530356286 10     I

evalhf(1/(exp(354+I)));

                         -155                         -154  
   9.82630470844021329 10     - 1.53035628577376246 10     I

evalf(1/(exp(355+I)));

                         -155                 -155  
           3.614895484 10     - 5.629866151 10     I

evalhf(1/(exp(355+I))); # more interesting

                               0.

evalhf(1/(exp(355)*1/(exp(I))));

                         -155                         -155  
   3.61489548492129760 10     + 5.62986615203655732 10     I

acer
