acer

32373 Reputation

29 Badges

19 years, 334 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I wonder, is there a real root somewhere "near" x=1.59e7 or so? I had some trouble with it, due to scaling issues I suppose.

acer

@PatD The procedure Cproc[5] evaluates the 5th expression from your `C` (for the fixed a1 and a2 values you gave); it was produced from that expression as an optimized procedure. When Cproc[5] is called, its arguments are used as values for the remaining variables (given that a1 and a2 have been fixed).

The procedure f[5] also accepts the same kind of arguments as Cproc[5]. In fact, all that f[5] does is raise the working precision (Digits) and then pass the arguments on to a call to Cproc[5].

Digits is an environment variable, and as such will be inherited in the call to Cproc[5] done within f[5]. So f[5] is just a slick way to get Cproc[5] to compute at higher working precision without having to raise Digits at the top level.

I gave an example, where one can see that f[5] and Cproc[5] return results that differ in something like the 3rd decimal digit. This shows what you said, that the expressions are sensitive to working precision. Hopefully the result from f[5] is accurate enough.

And similarly, for all the f[i] and Cproc[i] for i=1..9.

I created such f[i] because of what Carl mentioned about fsolve. When you call fsolve it has to figure out its stopping/acceptance criteria. It bases that on the number of variables and on Digits. If you want your expressions to get evaluated at high working precision (b/c of roundoff error or numerical instability) then the temptation is to just raise Digits high at the level from which you call fsolve. But raising Digits high at that level will cause fsolve to use a much tighter accuracy/acceptance tolerance, which again may not get met. It's a push-me/pull-me dichotomy. What fsolve does not offer are options to raise its working precision to a user-specified value while also forcibly keeping the accuracy requirements low. A problematic scenario is one in which no value of Digits at the level where fsolve is called provides a working precision high enough for the expression evaluations to allow the accuracy tolerance of fsolve (which is based on that same Digits value) to be met.

And that's where the f[i] come in. Using them as replacements for the Cproc[i], we can cause the expressions to be evaluated numerically at high working precision while allowing fsolve to still use the lower Digits setting (at the outer level at which it's called), and thus make far fewer demands for acceptance of a root.

The idea is to leave Digits as it is, say at the default value of 10. Then call fsolve using the f[i]. Internally, fsolve will try to meet an accuracy acceptance tolerance based on Digits=10. And that will likely never succeed unless the individual expressions can be numerically evaluated to something near 10 accurate digits or so. And such accuracy for the numerical evaluations of the expressions may require a very high working precision indeed. (You may wish to experiment with the formulation of the f[i], to see just how high they have to locally set Digits.)

This all assumes that fsolve is being called with its first argument as a set of procedures, rather than a set of expressions. Hence the parameter ranges are supplied like [...,1..900,...] instead of [...,c3=1..900,...] etc. This is similar to how `plot` and `Optimization` routines differ for ranges, according to input in procedure or expression form.
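For concreteness, here is a minimal sketch of how such wrappers could be formed. The local Digits value of 100 is just an assumption to experiment with; substitute whatever the expressions turn out to need.

```maple
# Each f[i] raises the working precision and passes its arguments on
# to Cproc[i]. Since Digits is an environment variable, the higher
# value is in effect only for this call and the calls made within it.
for i from 1 to 9 do
    f[i] := subs(__i = i, proc()
                Digits := 100;     # assumed value; adjust as needed
                Cproc[__i](args);  # forward the arguments unchanged
            end proc);
end do:
```

A call like fsolve({seq(f[i],i=1..9)}, [...,1..900,...]) would then use these wrappers in procedure form, with fsolve's acceptance tolerance still based on the outer Digits setting.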

 

I don't know whether it's useful here, but one can write and use an extension to the programmatic convert mechanism for this.  It might not be useful for this particular case, since the right-click context-menu acts nicely in place on a 2D Math table-reference input. Also (except in Maple 16 where it's weird?) there is the subliteral entry on the Layout palette which allows one to enter such atomic subscripted names.

Anyway, using this code,

 

`convert/identifier`:=proc(x)
  cat(`#`,convert(convert(:-Typesetting:-Typeset(x),
                           `global`),
                   name));
end proc:

T:=convert(H[deg],identifier);

`#msub(mi("H"),mi("deg"))`

lprint(T);

`#msub(mi("H"),mi("deg"))`

T - H[deg];

`#msub(mi("H"),mi("deg"))`-H[deg]

 

 

Download convertidentifier.mw

@Carl Love But that is not a comprehensive solution that can be easily applied automatically. It's easy to accommodate any given dependency/assignment chain, but that does not mean that all unknown chains will so easily be handled automatically.

And grouping variables in a set does not help with the posted question's request for handling "specific variables". Knowing about memory used by sets of names such as {a,b,c,d} doesn't tell us which is which in an assignment chain, say, or which are/is primarily responsible for the allocation.

We still don't know what the Asker really wants, though, and why.

@AppOptGrp The issue is not with the so-called pronumerals. It is with subscripted names.

Let's take the simpler example of base name `H`. When you enter the subscripted H[deg] in 2D Math input the underlying object H[deg] is, by default, a table reference.

Your issue is like this, then,

H:=4;

                               4

H[deg]:=3;

                               3
H;

                               H

By assigning to H[deg] you bring about a reassignment to `H` itself, which clobbers the previous assignment of 4 to `H`. The new assignment to just `H` is that of a table.

eval(H);

                           table([deg = 3])

In the table-reference form of subscripting, the subscript index is mutable. You can change it. The next example shows 1D output, but in the Standard GUI the output will appear with subscripted names.

foo:=Y[x];

                              Y[x]

subs(x=77,foo);

                             Y[77]

A workaround to your issue is to instead use another form of subscripting, in which the entire subscripted object is just one big name. It can be an immutable name, by which I mean that the subscript is unchangeable. It is a new, unique name all to itself. This new name is called an Atomic Identifier -- Atomic because it's a whole unchanging thing, and Identifier because it's a name.

Underneath, this name is a complicated thing, all between single left-quotes (name quotes). Enter the following, and you can see the markup, as the output gets rendered as a subscripted thing.

bar := `#msub(mi("Y"),mi("x"))`;

subs(x=77,bar);

Notice that `subs` did nothing to it, just as subs(x=77,vwxyz) wouldn't output vw77yz either!

The context-menu action described in my Answer above converts table reference Y[x] into such a complicated name, behind the scenes. It is a new name which happens to get typeset nicely, as a subscripted thing.

That complicated Atomic Identifier is unrelated to the base name `Y` of the earlier table, and assigning to the Atomic Identifier will not cause any new assignment to the table's base name `Y`, and so won't clobber any previous assignment to `Y`.
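That independence can be seen directly. Here is a small sketch (the names are just for demonstration):

```maple
restart:
Y[x] := 3:                        # table reference: creates/assigns the table Y
`#msub(mi("Y"),mi("x"))` := 5:    # atomic identifier: a separate name altogether
eval(Y);                          # still table([x = 3]); the table is untouched
```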

It's not really clear what the Asker hopes to ascertain with such measurements.

Matlab was mentioned, and perhaps all that's needed is measurement of hardware float rtables' memory allocations.

If symbolic objects' memory storage sizes are the desired knowledge, then is it important to avoid duplicate counting? In the example below, the "size" of `c` is counted twice, once when measuring `c` and once again when measuring `a`. But we know that the long list is only being stored once in Maple, due to its uniquification process.

> a:=b+c:

> c:=[seq(cat(x,i),i=1..10^4)]:

> length(a);
                                     58905

> length(c);
                                     58897

> length(sprintf("%m", eval(a)));
                                     78910

> length(sprintf("%m", eval(c)));
                                     78899

Maybe the Asker was originally trying to do something else, such as optimize code with respect to memory use (in which case using procedures and Maple's profiling tools might be a better all round approach).
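As a rough illustration of that profiling approach (the procedure p here is just a made-up example):

```maple
p := proc() [seq(cat(x,i), i=1..10^4)] end proc:
CodeTools:-Usage( p() ):
# prints a summary of resources consumed by the call,
# including memory used and the change in allocation
```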

We may need the Asker to clarify what's really wanted.

acer

Comments which relate to the programming or mathematics of such escape fractals are welcome as followups to this post.

But comments with little or no relevant substance, such as simple links to galleries of images with some mathematical background, are not welcome, and I may delete them.

acer

Here is a higher resolution version of one of those, produced at 1600 pixels in width (34sec on my machine, compiled and multithreaded) and scaled down to 800 pixels wide for insertion here.

acer

Are you running 16.02, and do you have the ability to run (or reinstall) 16.01? See here and here.

For this particular example, the curve can be revealed with,

  subsindets(A,[undefined,undefined],z->NULL);

but that (and related simple attempts at replacements) won't be a general workaround (as it may wrongly connect points).

acer
