acer


These are answers submitted by acer

I'm not sure what you're intending for 'intercept', or if I've misunderstood you on that. I've assumed that you want 'intercept' to be a parameter, where a single value for it is to be found. But if, instead, it's supposed to be yet another Vector of independent data, then that can be accommodated below (if you supply it).

But Statistics:-Fit can handle your multiple independent data.

I guess you know that Vector GPA has only 48 entries, while the others have 50.

STARTING_SALARY := <29555, 27958, 27230, 31070, 27577, 30007, 28988, 32655, 29310,
 29145, 26142, 29460, 28390, 28574, 28760, 29665, 27481, 29186, 29302, 28917, 32839, 27277,
 26531, 32537, 28594, 28268, 29249, 28525, 26105, 30694, 33102, 30201, 26863, 31556, 29006,
 27054, 31677, 32430, 34023, 32786, 34857, 34151, 35086, 33517, 36865, 33110, 32859, 33181,
 34847, 34384>:

GPA := <2.74, 3.88, 2.70, 3.36, 2.57, 3.81, 2.70, 3.29, 2.97, 2.16, 2.84, 2.53, 2.79, 3.81,
 2.18, 2.25, 3.44, 3.04, 3.78, 2.03, 2.15, 3.89, 2.66, 3.07, 2.34, 3.89, 3.13, 3.40, 3.56, 2.74,
 2.04, 3.61, 2.62, 2.32, 2.35, 2.70, 3.89, 2.52, 3.35, 3.11, 3.72, 2.26, 2.23, 2.59, 2.19, 2.73,
 2.96, 2.66>:

METRICS := <0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1>:

SEX := <1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1,
 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1>:

full := Statistics:-Fit( intercept + sex*B0 + B1*gpa + B2*metrics,
                 <GPA | METRICS[1..48] | SEX[1..48]>,
                 STARTING_SALARY[1..48],
                 [gpa, metrics, sex],
                 output=[leastsquaresfunction, parametervalues,
                         residualmeansquare, residuals] ):


form:=full[1]; # leastsquaresfunction

 59.004139262009176 sex - 694.6349921938285 gpa

    + 3978.586988002064 metrics

    + 31175.179243138406

full[2];

           [B0 = 59.004139262009176, 

             B1 = -694.6349921938285, 

             B2 = 3978.586988002064, 

             intercept = 31175.179243138406]

Vector(48, (i)->eval(form,
                     [sex=SEX[i],gpa=GPA[i],metrics=METRICS[i]])
                - STARTING_SALARY[i]): # this is just the residuals

LinearAlgebra:-Norm( full[4]^%T + % );

                                         -11
                   5.40012479177676141 10   

full[3]; # residualmeansquare

                                      6
                        3.838359311 10 

acer

This error message mostly occurs when one tries to assign to a formal parameter of a procedure, inside that procedure, when its value is not assignable.

Consider this example,

> restart;
> f := proc(x) x:=13; end proc:

> f([7]);
Error, (in f) illegal use of a formal parameter

What's happened is that Maple received a statement to assign 13 not to the name x but rather to the value of x. It was told to assign 13 to the list [7]. But that's not legal, as [7] is not assignable. Since x is a formal parameter of the procedure, that somewhat cryptic error message ensues. (...and it only makes sense to those whom it doesn't really help.)

There are a few ways to work around this. Usually, the attempt was a mistake in the first place, and a local variable should be used instead. If one really does want to assign to the name of the inbound argument (to replace its value, say) then this can be accomplished by passing in an uneval-quoted name, or by declaring that parameter as type uneval. Continuing the example above,

> z := [7]:

> f('z');

                               13

> z;

                               13

> #...or,

> f := proc(x::uneval) x:=13; end proc:

> z := [7]:

> f(z):

> z;

                               13

The above behaviour is sometimes described as the procedure having a side-effect on its argument (here, on name z).

There are a few other errors that are related to this case above,

> [7] := 13;
Error, illegal use of an object as a name

> 7 := 13;
Error, invalid left hand side of assignment

The first of those two has an entry in the Error Message Guide. All these errors should have such explanatory descriptions. They come up quite often. Yours just came up on Nov 21 on stackoverflow.

In my opinion, none of these error messages is very useful to the beginner. The error messages in the first of this last pair of examples above and in your own example are unhelpful because they don't make it crystal clear that the "illegal use" is the attempted act of assignment. The error message of the second of the last examples above is unclear because it does not make crystal clear what is invalid about the left-hand side of the assignment (i.e. the fact that it is not assignable).

Banal as it might sound, perhaps better might be, "Error, assignment attempted to non-assignable object" for all three examples. That lets one know precisely what act was illegal, and what was invalid about the object.

Error message guide pages should show one the workarounds for the task at hand. If they have to explain the very wording and terms of the error message then the message is partly at fault.

acer

Are you saying that a function call has been assigned to the name `expr`?

> expr:=F(sin(a)-exp(b));
                          expr := F(sin(a) - exp(b))

> type(expr,function);   
                                     true

> op(0,expr);            
                                       F

> op(1,expr);            
                                sin(a) - exp(b)

Or, for an indexed function call,

> expr:=F[foo](sin(a)-exp(b));
                        expr := F[foo](sin(a) - exp(b))

> op(0,expr);                 
                                    F[foo]

> type(expr,function) and type(op(0,expr),'indexed');
                                     true

> op(0,op(0,expr));
                                       F

> op(op(0,expr));  
                                      foo

acer

If efficiency and speed matter here (ie. if the Matrix is large, or if this is critical code) then it's better to build up the results using successive multiplication than it is to generate all the powers separately.

If efficiency isn't a concern for you, then you can sensibly ignore this.

Even using a smart "binary powering" technique for generating each higher power of an exact Matrix, it takes about 75-76 individual Matrix-Matrix multiplications in order to compute all powers from A^1 through to A^20. But of course it only takes 19 such individual multiplications in order to build them all up one at a time.

restart:

N:=10:
A:=LinearAlgebra:-RandomMatrix(N):

ba,st:=kernelopts(bytesalloc),time():
current:=LinearAlgebra:-IdentityMatrix(N):
for n from 1 to 20 do
   current:=A . current;
   #print('A'^n = current);
end do:
kernelopts(bytesalloc)-ba,time()-st;

                         1310480, 0.016

restart:

N:=10:
A:=LinearAlgebra:-RandomMatrix(N):

ba,st:=kernelopts(bytesalloc),time():
for n from 1 to 20 do
   current := A^n;
   #print('A'^n = current);
end do:
kernelopts(bytesalloc)-ba,time()-st;

                         2883056, 0.031

As N gets larger, the difference in costs may become more pronounced.

(sorry, folks, but I'm on a Horner's scheme kick right now...)

acer

One way to test equivalence of two expressions is to test whether their difference (subtraction) is equivalent (or even directly equal) to zero.

You could look at the help-pages evalb , is , simplify/details , testeq , and verify.

The command `Testzero` is underdocumented. By default `Testzero` uses `Normalizer`.

Some examples, which might be more clear after scanning those pages:

expr1:=sin(alpha+beta):
expr2:=sin(alpha)*cos(beta) + cos(alpha)*sin(beta):

evalb(simplify(expr1-expr2,'trig')=0);

                              true

Normalizer:=t->simplify(t,trig):

Testzero(expr1-expr2);

                              true

evalb(23423*2^5=32);

                             false

eqn:=sqrt(2)*sqrt(3)=6^(1/2);

                      (1/2)  (1/2)    (1/2)
                     2      3      = 6     

evalb(eqn);

                             false
is(eqn);

                              true

acer

See the documentation of the NAG Fortran Library function f08mef. This is like CLAPACK's dbdsqr function. Its documentation mentions the (QR-based, or differential qd-based) algorithms used.

You can see which NAG routine is used by LinearAlgebra, from userinfo messages.

> infolevel[LinearAlgebra]:=1:

> LinearAlgebra:-SingularValues( <<1., 2.> | <3., 4.>> ):
SingularValues: calling external function
SingularValues: NAG hw_f08kef
SingularValues: NAG hw_f08mef

Maple also ships with precompiled external libraries (shared objects clapack.dll or libclapack.so) containing the double-precision real and complex (dxxxx and zxxxx) routines from version 3.0 of CLAPACK. If you want another routine from that library, you can call it yourself using the external-calling mechanism. (See here for an example of such user-defined wrapperless external-calling to CLAPACK from Maple.)

You can use these notes to help decide which CLAPACK function you'd like to use for singular value computation. Note that function dbdsdc, for the divide-and-conquer algorithm, is already available in version 3.0. So you could call that directly from stock, shipped Maple, using define_external.

If you are interested in getting more state-of-the-art, then you could consider LAPACK 3.2.x or higher. See these release notes, and in particular section 2 for items 2.5 and 2.10 and the relevant references [5],[6]. Note that v3.2.1 is already available as CLAPACK (which you can obtain as source, or maybe even precompiled). If you build such a more recent version of (C)LAPACK yourself (compiling, linking as a shared dynamic library with appropriate exports, etc.) then you would again be able to call its functions directly using the wrapperless forms of define_external.

acer

At higher-than-double precision, special techniques (hypergeom, if I recall) may be used.

For double-precision, techniques like those provided by the numapprox package allow one to compute an approximating function which will be sufficiently accurate over a desired range (and which could be exported as compilable C source). Usual range reduction techniques can then be used to map an arbitrary point into that range.
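Here is a hedged sketch of that range-reduction step (the procedure name reduced_sin is just illustrative, and x is assumed numeric): map the argument into [0, Pi/2] using the periodicity and oddness of sin, then apply an approximant that is valid on that range.

reduced_sin := proc(S, x)
  local r;
  r := frem(x, 2*evalf(Pi));                      # r lies in roughly (-Pi, Pi]
  if r < 0 then return -procname(S, -r); end if;  # sin is odd
  if r > evalf(Pi/2) then
    r := evalf(Pi) - r;                           # sin(Pi - r) = sin(r)
  end if;
  S(r);                                           # approximant valid on [0, Pi/2]
end proc:

With an approximant S valid on 0..Pi/2 (such as the one constructed below), reduced_sin(S, 100.0) should then agree with sin(100.0) to roughly the approximant's own accuracy.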

Here's a crude example. Note that, while constructed approximant S does reasonably well, it is not quite as accurate as whatever evalhf uses (which is a longer story for another time...). The plots show the absolute error of each.

restart:

Digits:=50:

S:=numapprox:-chebyshev(sin(x), x=0..evalf(Pi/2), 10^(-28)):

S:=convert(eval(S, T=orthopoly[T]),horner):

S:=unapply(evalf[18](S),x); # something that fits in double-precision code

                           -30   /                      
x -> 1.44062452733909128 10    + \1.00000000000000000 + 

  /                      -26   /                        
  \9.06753070710926319 10    + \-0.166666666666666667 + 

  /                      -23   /                         
  \7.94570910169261477 10    + \0.00833333333333333333 + 

  /                      -21   /                           
  \9.26674469373075778 10    + \-0.000198412698412698471 + 

  /                      -19   /                            
  \2.66256453318299289 10    + \0.00000275573192239766239 + 

  /                      -18   /                       -8   
  \2.49025786553588515 10    + \-2.50521083906818366 10   + 

  /                      -18   /                      -10   
  \8.71393429557927929 10    + \1.60590426859692446 10    + 

  /                      -17   /                       -13   
  \1.20819897391039123 10    + \-7.64726420587879460 10    + 

  /                      -18   /                      -15   
  \6.56111046671949372 10    + \2.80814547640300074 10    + 

  /                      -18   /                       -18
  \1.25828982570811892 10    + \-8.56441515436031830 10   

     /                      -20                         -20  \  \  
   + \6.18404812488481967 10    + 1.37434690531914668 10    x/ x/ x

  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  \  
  / x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x/ x

f:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(S(x))-eval(sin(x));
  log[10](abs(t));
end proc:

F:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(sin(x))-eval(sin(x));
  log[10](abs(t));
end proc:

plot([f,F,-16],0..Pi/4,numpoints=5000,
     view=[0..Pi/4,-17..-15.5],color=[red,green,gold]);

T15 and T17 can be thrown into the above mix. T17 is almost as accurate as S above, on (0,Pi/4), but T15 is not accurate to as many digits on that crucial range.

T15:=unapply(evalf(convert(convert(taylor(sin(x),x=0,15),polynom),horner)),x):

T17:=unapply(evalf(convert(convert(taylor(sin(x),x=0,17),polynom),horner)),x):

G15:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(T15(x))-eval(sin(x));
  log[10](abs(t));
end proc:

G17:=proc(x)
  local t;
  []; Digits:=40;
  t:=evalhf(T17(x))-eval(sin(x));
  log[10](abs(t));
end proc:

plot([G15,G17,f,F,-16],0..Pi/4,numpoints=1000,
     view=[0..Pi/4,-17..-13.5],color=[cyan,blue,red,green,gold]);

I have quite a few sheets that investigate this stuff, for hardware precision. I keep meaning to find spare time to blog it here.

acer

You asked what was the reason for the observed behaviour. The cause can be explained by printlevel.

But that is only the cause. Usually, inserting explicit `print` calls is a more convenient and well-aimed solution than adjusting printlevel is.
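For example (a minimal sketch, assuming default settings): statements inside a top-level loop execute at nesting level 2, so their results do not display at the default printlevel=1, but do after raising it.

> restart;
> for i to 3 do i^2 end do:   # nothing displayed at default printlevel=1

> printlevel := 2:            # display results up to nesting level 2
> for i to 3 do i^2 end do:
                               1
                               4
                               9

An explicit print(i^2) inside the loop shows the same values, without globally changing what else gets displayed.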

acer

Just for fun,

> cat(nprintf~("%a",L)[]);
                        1ax^2aBbac+f(x)

Part of the jape seems to be whether the spaces between entries and commas are wanted.

acer

Your worksheet contains an erroneous space between sin and (5 x) which is causing Maple to think that you have a product of terms sin*(5*x) instead of the desired function application sin(5*x).

Once that erroneous space is removed, then the faulty implicit multiplication instance is removed and your example ought to work as expected.

Please note that, for involved expressions, evalf[5](...) is not a good way to approximate a final result accurate to 5 decimal places. What it does do is make the internal numeric computation occur with 5 digits of working precision (possibly with some guard digits for atomic operations). But that is not the same as ensuring 5 places of accuracy for involved, compound expressions. A better approach is to use evalf at normal (or higher) working precision, and then apply evalf[5] only to the final floating-point result. This is a common misunderstanding. Your particular example may not be involved enough for this distinction between precision and accuracy to have manifested itself.
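For instance (a small illustration, with cancellation deliberately provoked):

> evalf[5](Pi - 3.1415);      # 5 working digits: Pi becomes 3.1416
                               0.0001

> evalf[5]( evalf[15](Pi - 3.1415) );  # compute accurately, round afterwards
                             0.000092654

The first call loses nearly all its significant figures to cancellation; the second retains 5 correct places.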

acer

Sometimes there is some symbolic result possible for the differentiation, for symbolic n (but not a formal power series to n=infinity).  ...and sometimes this will fail miserably.

But it's sort of fun when it works. The adventure is in what gets assigned to `q` below.

restart:

symbolictaylor:=proc(expr,x,x0,n)
                 Sum(1/p!*eval(diff(expr,x$p),x=x0)*(x-x0)^p,p=0..n);
                end proc:

symbolictaylor(f(x),x,x0,n);

              n                                    
            -----                                  
             \                                    p
              )   (diff(f(x0), [x0 $ p])) (x - x0) 
             /    ---------------------------------
            -----           factorial(p)           
            p = 0      
                            
symbolictaylor(exp(A*x),x,x0,n) assuming x::real, x0::real, n::posint;

                    n                         
                  -----                       
                   \     p                   p
                    )   A  exp(A x0) (x - x0) 
                   /    ----------------------
                  -----      factorial(p)     
                  p = 0    
                   
t1 := value(symbolictaylor(exp(A*x),x,x0,3));

                                     1  2                   2
  exp(A x0) + A exp(A x0) (x - x0) + - A  exp(A x0) (x - x0) 
                                     2                       

       1  3                   3
     + - A  exp(A x0) (x - x0) 
       6    
                   
t2 := convert(taylor(exp(A*x),x=x0,4),polynom);

                                     1  2                   2
  exp(A x0) + A exp(A x0) (x - x0) + - A  exp(A x0) (x - x0) 
                                     2                       

       1  3                   3
     + - A  exp(A x0) (x - x0) 
       6    
                   
t2-t1;

                               0

q := simplify(value(symbolictaylor(exp(A*x),x,x0,n)));

               exp(A x) GAMMA(n + 1, A (x - x0))
               ---------------------------------
                         GAMMA(n + 1)    
       
simplify(t1 - eval(q,n=3));

                               0

eval(q,{x=4,x0=0,A=2});

                     exp(8) GAMMA(n + 1, 8)
                     ----------------------
                          GAMMA(n + 1)   
  
evalf(eval(%,n=3));

                          126.3333333

evalf(eval(t2,{x=4,x0=0,A=2}));

                          126.3333333

t1 := value(symbolictaylor(sin(A*x),x,x0,3));

                                     1            2         2
  sin(A x0) + cos(A x0) A (x - x0) - - sin(A x0) A  (x - x0) 
                                     2                       

       1            3         3
     - - cos(A x0) A  (x - x0) 
       6   
                    
t2 := convert(taylor(sin(A*x),x=x0,4),polynom);

                                     1            2         2
  sin(A x0) + cos(A x0) A (x - x0) - - sin(A x0) A  (x - x0) 
                                     2                       

       1            3         3
     - - cos(A x0) A  (x - x0) 
       6       
                
t2-t1;

                               0

q := simplify(value(symbolictaylor(sin(A*x),x,x0,n)))
       assuming x::real, x0::real, n::posint;


     1       /1                                                    
------------ |- I (n + 1) (GAMMA(n + 1, -I A x + I A x0) exp(-I A x
GAMMA(n + 2) \2                                                    

                                              \
  ) - GAMMA(n + 1, I A x - I A x0) exp(I A x))|
                                              /

simplify(convert(t1 - eval(q,n=3),expln)) assuming x::real, x0::real, n::posint;

                               0

simplify(combine(convert(eval(q,{x=4,x0=0,A=2}),expln))) assuming n::posint;

1                                                              
- I (GAMMA(n + 1, -8 I) exp(-8 I) - GAMMA(n + 1, 8 I) exp(8 I))
2                                                              
---------------------------------------------------------------
                         GAMMA(n + 1)     
                     
evalf(eval(%,n=3));

                      -77.33333333 - 0. I

evalf(eval(t1,{x=4,x0=0,A=2}));

                          -77.33333333

Please don't feel a need to give examples where it fails; they're too easy to find.

acer

Have I understood you rightly?

Uncomment the plot commands, if wanted.

restart:

f:=x+y:
g:=x*y:

stage2:=maximize(g,x=0..1);

                             stage2 := max(0, y)

stage1:=maximize(eval(f,x=stage2),y=0..1);

                                 stage1 := 2

stage2:=maximize(g,x=0..1,location);

              stage2 := max(0, y), {[{x = 0}, 0], [{x = 1}, y]}

stage1:=maximize(eval(f,x=stage2[1]),y=0..1,location);

                         stage1 := 2, {[{y = 1}, 2]}

# Let's try a harder example.
restart:

f:=sin(x*y):
g:=exp(x+y)^((x-6/10)^2):

# Let's first try and do as much as we can symbolically.
stage2:=maximize(g,y=0..1):

#plot(stage2,x=0..1);

fopt:=eval(f,y=stage2);

                /     /        /         2\              /         2\\\
                |     |        \(x - 3/5) /              \(x - 3/5) /||
     fopt := sin\x max\(exp(x))            , (exp(x + 1))            //

stage1:=maximize(fopt,x=0..1,location);

                    /        (4/25)\   /[         /        (4/25)\]\ 
       stage1 := sin\(exp(2))      /, { [FAIL, sin\(exp(2))      /] }
                                       \                           / 

evalf(%);

                    0.9813047879, {[FAIL, 0.9813047879]}

Optimization:-NLPSolve(fopt,x=0..1,maximize); # This agrees.

                        [0.981304787948013, [x = 1.]]

# Now let's try and do it purely numerically.

v := proc(X)
      if not type(X,numeric) then 'procname'(args);
      else Optimization:-NLPSolve(eval(g,x=X),y=0..1,maximize)[1];
      end if;
     end proc:

#plot(v(x),x=0..1);

st2num:=Optimization:-NLPSolve(v,0..1,maximize,method=branchandbound)[1];

                         st2num := 1.43332941456034

H:=proc(X)
    if not type(X,numeric) then 'procname'(args);
    else eval(f,{x=X,y=v(X)});
    end if;
   end proc:

#plot(H(x),x=0..1);

Optimization:-NLPSolve(H(x),{x>=0,1>=x},maximize,method=sqp);

Warning, no iterations performed as initial point satisfies first-order conditions
                      [0.981304787948013124, [x = 1.]]

H(1.0);

                              0.981304787948013

objf := proc(V::Vector)
  H(V[1]);
end proc:

objfgradient := proc(X::Vector,G::Vector)
  G[1] := fdiff( H, [1], [X[1]] );
  NULL;
end proc:

Optimization:-NLPSolve( 1, objf, [Matrix([[1],[-1]]),Vector([1,0])],
                       'objectivegradient'=objfgradient,
                       'initialpoint'=Vector([0.3]), 'method'='sqp',
                       'maximize' );

                        [0.981304787948013124, [1.]]

Using other methods, and set up with simple bounds instead of linear constraints, or not in Matrix form as in the last call above, I got some inconsistent results from NLPSolve (as if it were not re-entrant!?).

Have you considered a global optimization approach, such as using DirectSearch or GlobalOptimization?

acer

Try putting them into an Array of plots.

restart:

p1:=plots:-animate(plots:-sphereplot,
                   [exp(6*sin(t))-1, theta=0..2*Pi, phi=0..Pi],
                   t=0..3, frames=100):

p2:=plots:-animate(plots:-implicitplot,
                   [(i)^2 + (j)^2= (t)^2, i = -10..10, j = -10..10 ],
                   t=0..3, frames=100):

plots:-display(Array([p1, p2]));

Or, if you really want them stacked vertically,

plots:-display(Array([[p1],[p2]]));

acer

That's an interesting question. I know of none, as of yet.

Recent additions and enhancements to LAPACK seem (to me) to lean more toward lowering the algorithmic complexity than to attaining higher accuracy in the eigen-solving areas.

Is there a concrete reason why QR (or divide-and-conquer) based approaches such as LAPACK & Maple's LinearAlgebra provide are not satisfactory for your problems? Is the accuracy inadequate? Or do you have very large sparse Matrices? Or...?

acer

The first one seems simple to get, while the second shows a wrinkle. I suppose that the minor adjustment used to get the second one could be automated (possibly for both connections).

If these points arise from (numerically computed?) solutions of DEs then I'd guess that all this pain might be avoided by using Maple's dedicated facilities for plotting such solutions.

restart:

ch1 := [[0.1e-1, -.56], [0.25e-1, -.57], [0.35e-1, -.555], [.115, -.43], [.16, -.3],
        [.195, -.24], [.28, -.155], [.315, -0.25e-1], [.32, -0.85e-1], [.41, .19],
        [.425, .11], [.425, .425], [.425, .43], [.425, .44], [.43, .12], [.43, .265],
        [.43, .405], [.43, .445], [.435, .295], [.435, .38], [.465, .44], [.5, .29],
        [.505, .31], [.505, .395], [.51, .37]]:

ch2 := [[0.3e-1, -.5], [0.5e-1, .575], [.11, .495], [.15, -.295], [.19, -.33],
        [.23, -.155], [.23, .475], [.255, .55], [.265, -.205], [.27, .465],
        [.3, -0.25e-1], [.305, -0.15e-1], [.31, .54], [.32, 0.15e-1], [.325, .445],
        [.335, -0.85e-1], [.335, .44], [.365, .115], [.38, -0.5e-2], [.405, .36],
        [.41, 0.5e-1], [.485, .2], [.5, .235], [.505, .465], [.51, .46], [.525, .305]]:

ch11 := simplex[convexhull](ch1,output=[hull]):
plots:-display( plot(ch11), plot(ch1,style=point) );

ch1b:={op(ch1)} minus {op(ch11)}:
ch1b1:=simplex[convexhull](ch1b,output=[hull]):
plots:-display( plot([op(ch11),op(ListTools:-Reverse(ch1b1)),ch11[1]]),
                plot(ch1,style=point) );

ch22 := simplex[convexhull](ch2,output=[hull]):
plots:-display( plot(ch22), plot(ch2,style=point) );

ch2b:={op(ch2)} minus {op(ch22)}:
ch2b2:=simplex[convexhull](ch2b,output=[hull]):
plots:-display( plot([op(ch22),ch2b2[1],op(ListTools:-Reverse(ch2b2[2..-1])), ch22[1]]),
                plot(ch2,style=point) );

acer
