MaplePrimes Activity


These are replies submitted by acer

@Markiyan Hirnyk The popular explanations in that cited MathOverflow thread discuss numeric ODE and quadrature solvers in general, and several of them discuss particular classes of badly behaved integrands. That really is different from claims about the behaviour of a particular solver (Maple's) on integrands in general. I don't think that your citation necessarily provides a contradiction.

Also, that MathOverflow question was about using ODE solvers for general integrands (which presumably can be evaluated as accurately as required). But in this thread the question was about an integrand which is specifically the numeric solution of an ODE system. That adds all kinds of extra coupling between the ODE solver and the attainable accuracy of the integrand itself.

Anyway, let's work through a simple example of trying to integrate up close to a singularity, just to see what kind of difficulties one might encounter while trying such approaches in Maple. The solution x(t) -> infinity as t -> 5 in a straightforward way, with no special tricks or oscillations.

The brave might run with infolevel set high, to try and glean additional insight.
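
For example, something along these lines (just a sketch of how one might switch the diagnostic messages on; if these particular index names don't catch everything of interest, the catch-all infolevel[all] will):

infolevel[dsolve] := 3:        # userinfo messages from dsolve/numeric
infolevel[`evalf/int`] := 3:   # userinfo messages from the numeric quadrature routines
# or, more bluntly,
# infolevel[all] := 3: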

Again: this is just one example, and is not supposed to prove anything as a general rule. But the difficulties that happen here can happen elsewhere: how high should Digits be set in order to get convergence? What method of integration should be used? A specific method, which could be fast but might fail more easily? Or no specified method, which might get bogged down in repeated domain splitting?

Of course, the first thing to note is that the attainable accuracy of the quadrature (evalf/Int) result will depend heavily on the accuracy of the approximated integrand X(t). So, while dsolve/numeric is computing the integral of X (as another variable in the expanded system) to the same relative accuracy as it computes X itself, evalf/Int clearly cannot match that accuracy in its own numeric integration. So we have to let evalf/Int off the hook and allow it to aim for slightly less accuracy -- a concession. Fine. But can evalf/Int keep the pace as the requested accuracy is made tighter? As the required working precision goes above what the compiled NAG methods can handle? Let's see.

Things start off ok for evalf/Int. But as the accuracy request gets tighter it starts to falter -- first getting about 100 times slower, and then failing to attain a result. Meanwhile, dsolve/numeric has hardly begun to break a sweat.

restart:
de := diff(x(t),t) = 2*t/(5-t)+t^2/(5-t)^2: deaug := x(t) = diff(p(t),t):
IC := x(0)=0: ICaug := p(0)=0:
sol := dsolve({de, deaug, IC, ICaug}, output=listprocedure,
              numeric, relerr=1e-7):
X:=eval(x(t),sol): P:=eval(p(t),sol):

CodeTools:-Usage( P(4.9999999) );
memory used=64.59KiB, alloc change=63.99KiB, cpu time=0ns, real time=0ns
                              405.688337959965

(%-405.688340054625523236)/405.688340054625523236;
                                               -9
                           -5.21131615638382 10  

CodeTools:-Usage( evalf(Int(X, 0..4.9999999, epsilon=1e-7)) );
memory used=3.52MiB, alloc change=2.69MiB, cpu time=140.00ms, real time=140.00ms
                              405.688333682607

(%-405.688340054625523236)/405.688340054625523236;
                                               -8
                           -1.57547752621667 10  

restart: Digits:=15:
de := diff(x(t),t) = 2*t/(5-t)+t^2/(5-t)^2: deaug := x(t) = diff(p(t),t):
IC := x(0)=0:ICaug := p(0)=0:
sol := dsolve({de, deaug, IC, ICaug}, output=listprocedure,
               numeric, relerr=1e-9):
X:=eval(x(t),sol): P:=eval(p(t),sol):

CodeTools:-Usage( P(4.9999999) ); # satisfies the request
memory used=65.26KiB, alloc change=63.99KiB, cpu time=15.00ms, real time=14.00ms
                              405.688339959059

(%-405.688340054625523236)/405.688340054625523236;
                                               -10
                           -2.35566566253453 10   

CodeTools:-Usage( evalf(Int(proc(t) []; X(t); end proc, 0..4.9999999,
                            epsilon=1e-9, method=_d01ajc)) ); # 100 times slower
memory used=6.95MiB, alloc change=6.50MiB, cpu time=1.39s, real time=1.39s
                              405.688339983143

(%-405.688340054625523236)/405.688340054625523236;
                                          -10
                               -1.76201 10   

restart: Digits:=30:
de := diff(x(t),t) = 2*t/(5-t)+t^2/(5-t)^2: deaug := x(t) = diff(p(t),t):
IC := x(0)=0:ICaug := p(0)=0:
sol := dsolve({de, deaug, IC, ICaug}, output=listprocedure,
               numeric, maxfun=infinity, relerr=1e-18):
X:=eval(x(t),sol): P:=eval(p(t),sol):

CodeTools:-Usage( P(4.9999999) ); # This turns out to have rel.err. < 1e-18
memory used=261.07MiB, alloc change=31.24MiB, cpu time=2.64s, real time=2.64s
                       405.688340054625523584756774239

(%-405.688340054625523236030110897705748985792044)/
     405.688340054625523236030110897705748985792044;
                                             -19
                             8.59592521922 10   

# This next doesn't complete in a short timeframe, taking 11 minutes just
# to attain < 1e-12 relative accuracy.
# I really don't see how to force Maple to get the integral with a
# relative accuracy of 1e-16 (say) by using numerical quadrature
# against the results from X(t), without having to raise Digits much higher
# and wait a very long time.

CodeTools:-Usage( evalf(Int(X, 0..4.9999999, epsilon=1e-12)) );
memory used=61.99GiB, alloc change=1.69MiB, cpu time=10.91m, real time=10.91m
                       405.688340037701495153700680162

(%-405.688340054625523236030110897705748985792044)/
     405.688340054625523236030110897705748985792044;
                                                 -11
                        -4.1716821538550078939 10   

CodeTools:-Usage( evalf(Int(X, 0..4.9999999, epsilon=1e-13)) );
memory used=77.75GiB, alloc change=63.99KiB, cpu time=13.53m, real time=13.53m
                        Int(X, 0. .. 4.9999999)

restart: Digits:=76:
de := diff(x(t),t) = 2*t/(5-t)+t^2/(5-t)^2: deaug := x(t) = diff(p(t),t):
IC := x(0)=0:ICaug := p(0)=0:
sol := dsolve({de, deaug, IC, ICaug}, output=listprocedure,
               numeric, maxfun=infinity, relerr=1e-38):
X:=eval(x(t),sol): P:=eval(p(t),sol):

CodeTools:-Usage( P(4.9999999) );
memory used=379.93MiB, alloc change=34.24MiB, cpu time=3.49s, real time=3.50s
405.6883400546255232360301108977057489857920448359671233018307746349243537220

@Markiyan Hirnyk So, you wrapped fdiff in an operator, and think that answers an old question from 2006.

So mention that there, not here.

It doesn't lend any credibility to fdiff as a robust and useful answer to this topic of computing higher derivatives for dsolve/numeric results.

acer

@Mac Dude Yes, that is why I asked before, for clarification. Augmenting with specific nonzero data can also be done easily (but in other ways).

Note that your question still has a few vague aspects. To augment from 4x4 to 7x7 and fill in all the new entries with specific nonzero data entails more than just one new Matrix. (...unless you intend to mask the old, onto the new.)

Here is one way, by calling the Matrix() constructor. This is just one possibility (involving three new submatrices of the final result) amongst several, because of the remaining ambiguity as to your precise intent.

M:=Matrix(4,4,(i,j)->orig);

                             [orig  orig  orig  orig]
                             [                      ]
                             [orig  orig  orig  orig]
                        M := [                      ]
                             [orig  orig  orig  orig]
                             [                      ]
                             [orig  orig  orig  orig]

T1:=Matrix(4,3,(i,j)->new1):
T2:=Matrix(3,4,(i,j)->new2):
T3:=Matrix(3,3,(i,j)->new3):

Matrix([[M,T1],[T2,T3]]);

                 [orig  orig  orig  orig  new1  new1  new1]
                 [                                        ]
                 [orig  orig  orig  orig  new1  new1  new1]
                 [                                        ]
                 [orig  orig  orig  orig  new1  new1  new1]
                 [                                        ]
                 [orig  orig  orig  orig  new1  new1  new1]
                 [                                        ]
                 [new2  new2  new2  new2  new3  new3  new3]
                 [                                        ]
                 [new2  new2  new2  new2  new3  new3  new3]
                 [                                        ]
                 [new2  new2  new2  new2  new3  new3  new3]

You could also use the angle-bracket constructors to get the same result, and so use a terser input command. You could also create the new 7x7 Matrix as empty, and then use indexing operations to write all the submatrices into it.
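
For instance, either of the following should produce the same 7x7 result (a quick sketch, reusing M, T1, T2, and T3 from above; the name A is just for illustration):

A := < <M|T1>, <T2|T3> >:      # terser, via the angle-bracket constructors

A := Matrix(7,7):              # or: start from an empty (zero-filled) 7x7 Matrix...
A[1..4,1..4] := M:             # ...and write each block in by indexed assignment
A[1..4,5..7] := T1:
A[5..7,1..4] := T2:
A[5..7,5..7] := T3: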

I don't recall who wrote that Strip procedure. But I believe that its original purpose was for stripping off trailing zeroes, no?

I felt I should mention that, lest anyone wrongly imagine that it was written to fix the behaviour that you're talking about in this thread.

The Strip proc does turn 1.0 and 1. into exact integer 1, and it turns 3.0 and 3. into exact 3 as well. I'd call that a bug in Strip, if it was intended just for stripping off trailing zeroes!

Strip := Z -> subsindets( Z, {float}, z -> parse(sprintf("%g",z)) ):

x := -1.*2.0^b;

                                            b
                                x := -1. 2.0 

Strip(x); # not appropriate!

                                       b
                                     -2 
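
Just as an illustration, here is a quick variant of my own (a sketch, not the original Strip) which keeps the stripped values as floats, so that 1.0 becomes 1. rather than the exact integer 1. It inherits the "%g" formatting of the original, so it is similarly limited to about six significant digits:

StripZ := Z -> subsindets( Z, {float}, z -> evalf(parse(sprintf("%g",z))) ):

StripZ(x);    # the float coefficients now survive as floats, i.e. -1.*2.^b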

acer

@ambitiousad The eight equations in s1 are explicit equations with the members of [Ana, Anm, Anr, Ans, Bna, Bnm, Bno, Bns] on the left-hand sides.

If you can solve for the (introduced) alpha[i] variables, then their explicit solutions can be resubstituted back into s1.
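
In other words, something along these lines (just a sketch with hypothetical names: alphaeqs stands for whatever equations determine the alpha[i], and n for how many of them there are):

alphasol := solve( alphaeqs, {seq(alpha[i], i=1..n)} ):
s1new := eval( s1, alphasol ):    # s1 with the alpha[i] replaced by their explicit solutions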

@mc123 

In Maple 16 the thickness of the gridlines can be set independently of the thickness of the axes.

plot(sin(x),x=0..4,axis=[gridlines=[thickness=1],thickness=4]);

 

Tickmarks are another issue, however.

acer

It's not generally well known, these days, just how much Georg Ohm looked like Rudolf Diesel.

@herclau Try this (Linux/Unix file/directory paths, similar to your code),

with(ImageTools):

plotsetup(default);

img := Matrix(ToGrayscale(Read(cat(kernelopts(datadir),
                                   "/images/FingerPrint.jpg")))):

vsize,hsize:=op(1,img);

P:=plots:-textplot([1, .5, "take from A"], align = {above, right},
                   font = [TIMES,ROMAN,16], axes = none):

plotsetup('jpeg','plotoutput'="text1.jpg",
          'plotoptions'=cat("height=",vsize+6+4,",width=",hsize+5+5));

P; # Wait until this gets exported and the file is actually written! How long? It depends...

Threads:-Sleep(3): # This waits long enough, on my host. YMMV.

T := Read("text1.jpg"):

correctedT := rtable_redim(T[1+6 .. (-1)-4,
                           1+5 .. (-1)-5, 1 .. -1],
                           1 .. vsize, 1 .. hsize, 1 .. 3):

View(correctedT);

newQ := ImageTools:-Mask(img, ToGrayscale(correctedT)):

View(newQ);

I made a change to mask with ToGrayscale(correctedT), because your example uses a grayscale image for the background.
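
And if you want that masked result saved back to disk as well, something like this should do it (the filename here is just a placeholder):

ImageTools:-Write("masked_result.jpg", newQ):   # write the masked image out as a JPEG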
