acer

MaplePrimes Activity


These are replies submitted by acer

@hirnyk I don't think so. (Digits defaults to 10, so it isn't special to "raise" it to 10.) In your worksheet, you didn't evaluate {x=0,y=0,z=0,w=0} at R. You used some other point.

If I append the following to your worksheet, the invalid results still appear.

eval({w = 0, x = 0, y = 0, z = 0}, R)

seq(eval([x, y, z, w], eval(Q[n], [_Z2 = 0, _Z4 = 0, _Z6 = 0, _Z8 = 0])), n = 1 .. nops([Q]));

seq(eval([x, y, z, w], eval(qq, [_Z2 = 0, _Z4 = 0, _Z6 = 0, _Z8 = 0])), qq in Q);

I guess that's incentive to upgrade versions.

Oh, hang on...

> restart:

> x := 2*(-sin(2*a)+sin(2*(a+b))+2*cos(2*a+2*b+2*c+d)*sin(d)):
> y := -2+4*cos(2*a)-4*cos(2*(a+b))+4*cos(2*(a+b+c))-4*cos(2*(a+b+c+d)):
> z := 2*(sin(a)-sin(a+b)+sin(a+b+c)-sin(a+b+c+d)):
> w := -1+2*cos(a)-2*cos(a+b)+2*cos(a+b+c)-2*cos(a+b+c+d):

> sol := solve({w = 0, x = 0, y = 0, z = 0}, AllSolutions = true):# a long output

> Q:=evalf(allvalues(sol[2])):

> Q[5];

             {a = 0.3419031877 + 6.283185308 _Z2, 
               b = 0.6383343061 + 6.283185308 _Z4, 
               c = 0.7556269538 + 6.283185308 _Z6, 
               d = 0.4422007045 + 6.283185308 _Z8}

> map(about,indets(Q[5],name) minus {a,b,c,d});

Originally _Z2, renamed _Z2~:
  is assumed to be: integer

Originally _Z4, renamed _Z4~:
  is assumed to be: integer

Originally _Z6, renamed _Z6~:
  is assumed to be: integer

Originally _Z8, renamed _Z8~:
  is assumed to be: integer

> # OK, so those _Z? parameters have to be integer valued.
> # So let's try the value 0 for them all.

> eval(Q[5],map(`=`,indets(Q[5],name) minus {a,b,c,d},0));

    {a = 0.3419031877, b = 0.6383343061, c = 0.7556269538, d = 0.4422007045}

> # Now, evaluating the original equations at this point
> # should produce small values. (...but it doesn't, and
> # one can show that it's not just a roundoff issue.)

> eval([x,y,z,w],%);

    [-0.639506680, 0.231347804, -0.6603026126, 0.5832003498]

This is not a new issue: solve with its AllSolutions option can return too many results (some invalid) for trigonometric examples. It looks like a tough subject.

One has to check which results are valid. The following should produce results with small magnitude, but only 8 of the 48 do. That indicates incorrect results in allvalues(sol[2]), i.e., more than just a roundoff/precision issue. It looks like sol[2] needs additional qualification.

Digits:=500: # high working precision, to rule out mere roundoff
for X in evalf(allvalues(sol[2])) do
   # substitute 0 for each integer _Z? parameter, then evaluate the residuals
   eval([x,y,z,w],eval(X,map(`=`,indets(X,name) minus {a,b,c,d},0)));
   print(evalf[5](%));
end do:
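For completeness, here is a rough sketch (untested, and the 1e-490 cutoff is merely an assumption suited to Digits=500) of programmatically retaining only the candidates whose residuals are tiny:

Digits := 500:
valid := select(X -> max(op(map(abs, eval([x,y,z,w],
            eval(X, map(`=`, indets(X,name) minus {a,b,c,d}, 0)))))) < 1e-490,
            [evalf(allvalues(sol[2]))]):
nops(valid); # 8, matching the count mentioned above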

acer

Changing printlevel like that is overkill when one could instead terminate the `end do` with a full colon.
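
A minimal illustration:

> for i to 3 do
>    i^2;
> end do;   # semicolon: each iteration's result is displayed
                               1
                               4
                               9

> for i to 3 do
>    i^2;
> end do:   # full colon: all of that display is suppressed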

acer

Using fnormal is not a trap if one is trying to eliminate spurious nonreal artefacts (which is the case here) and if one uses it properly. But one does have to realize that, by default, it respects Digits (unless overridden).

And that makes sense: a small value like 10^(-13) may be considered an "artefact" at 10 digits of working precision but not at 15 digits. Likewise, 10^(-13) may be considered an artefact when compared against a threshold of 10^(-12) but not against 10^(-13). And the two conditions, working precision and comparative size, can interact. That is just what the extra 'digits' and 'epsilon' options of the fnormal command are there for.

The fine controls are there so that one can discriminate between what is to be taken as insignificant and what is not, since whether a given magnitude is significant depends on the situation.

> q:=1e-13:

> fnormal(q,Digits,1e-12);
                               0.

> fnormal(q,Digits,1e-13);
                            1. 10^(-13)
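
And, as a sketch of the artefact-removal usage mentioned above (the follow-up simplify call with its 'zero' option is one common way to then strip the residual float zero):

> v := 2.3 + 4.5e-13*I:

> fnormal(v); # at the default Digits=10, the tiny imaginary part is an artefact
                           2.3 + 0. I

> simplify(fnormal(v), 'zero');
                               2.3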

The default behaviour won't suit everybody, and that too is to be expected; the same holds for anything which warrants an option.

acer

This posted answer is not the only decent way to do it. Like many things in Maple, it's one of several viable alternative approaches.

The posted mechanism processes and assigns to all the exports, treated alike, and it does them all upon the very first reference to the package name in each subsequent session. If one has a very large number of such exports, then the delay upon issuing with() might be undesirable. Some alternative approaches could spread that cost by separating the individual define_external calls, so that each define_external is done only when its export is first referenced.

For example, one could code the module so that each export is defined as a procedure, doing away with the ModuleLoad entirely. Each export procedure could make its own relevant define_external call each time it is invoked for a computation. E.g.,

ASTEM97:=module()
option package;
export psat;
local dllloc;
   dllloc := "c:/Windows/system/astemdll.dll";
   psat := proc(x)
   local extern_fcn;
      extern_fcn := define_external('psat97mc', 'FORTRAN',
                           'arg1'::('float[8]'),
                           'RETURN'::('float[8]'),
                           'LIB' = dllloc);
      extern_fcn(x);
   end proc;
   # repeat for other exports   
end module:

with(ASTEM97);
                             [psat]

psat(4); # I have no such .dll
Error, (in psat) external linking: error loading external
library c:/Windows/system/astemdll.dll: The specified module could not be found.

The above approach works even without savelib and restart (though, naturally, one could still do that, for easy re-use in subsequent sessions!). One drawback of the above is that define_external is called each and every time the export psat is called. Improvements can be had here too: assign the object returned by define_external to an additional module local, and check on each invocation whether it is already assigned. Or the define_external calls can be replaced by calls to a proc (a module local, say) which has `option remember`. There are lots of variations; a sketch of the cached-local variant appears below.
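
Here is one hedged sketch of that cached-local variant (same hypothetical dll path and external name as above):

ASTEM97:=module()
option package;
export psat;
local dllloc, psat_ext;
   dllloc := "c:/Windows/system/astemdll.dll";
   psat := proc(x)
      # call define_external only once, on the first invocation,
      # and cache the returned call_external proc in a module local
      if not assigned(psat_ext) then
         psat_ext := define_external('psat97mc', 'FORTRAN',
                              'arg1'::('float[8]'),
                              'RETURN'::('float[8]'),
                              'LIB' = dllloc);
      end if;
      psat_ext(x);
   end proc;
   # repeat for other exports
end module: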

Or things could be even fancier, and each export could do the define_external *the first time* and then immediately redefine itself to the call_external (which is the proc that define_external actually returns). See here for a little on that technique.
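
A minimal sketch of that self-redefining variant (same assumptions as above):

ASTEM97:=module()
option package;
export psat;
local dllloc;
   dllloc := "c:/Windows/system/astemdll.dll";
   psat := proc(x)
      # on the first call, replace the export itself with the
      # call_external proc that define_external returns
      psat := define_external('psat97mc', 'FORTRAN',
                           'arg1'::('float[8]'),
                           'RETURN'::('float[8]'),
                           'LIB' = dllloc);
      psat(x);
   end proc;
end module: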

acer

Is improvement to the rendering (inlined into Mapleprimes posts, not in Maple itself) of typeset "2D" math a part of this plan?

If so, is it an item of higher or lower priority?

acer

See the help-page ?printlevel

acer

If your Matrix is badly conditioned, then computation of its determinant would require higher working precision in order to be accurate. There is nothing surprising about that, and it isn't all that unusual. (I'm talking about the "usual" condition number that relates to linear system solving, since you mentioned difficulty with Matrix inversion. It's not clear why you bring eigenvalues into it, unless you already have a deep understanding of numerical linear algebra.)

You might see the help-page ConditionNumber, or read a numerical linear algebra text for details on the connection between accuracy and precision (though most texts assume fixed precision, not variable precision like Maple's), and maybe reread the other answers.
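
To illustrate (a small sketch, using the Hilbert matrix as a standard ill-conditioned example, not your actual Matrix):

> with(LinearAlgebra):
> Digits := 10:
> H := evalf(HilbertMatrix(10)):
> ConditionNumber(H); # on the order of 10^13, so roughly 13 digits are at risk
> Determinant(H); # unreliable at only 10 Digits
> Digits := 30:
> Determinant(evalf(HilbertMatrix(10))); # now close to the exact rational value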

acer

@roman_pearce What Roman says is quite true, especially about certain subsets of computation. My response related specifically to general symbolic computation, for which there is likely no all-round "silver bullet".
