acer

MaplePrimes Activity

These are replies submitted by acer

It is recursive because the procedure bisection calls itself.

It calls itself differently, according to whether f(c) has the same sign as f(a). (I just used your logic there. You could improve it.)

When it calls itself, it replaces either argument a or b by c, according to your logic as mentioned.

That whole process, of bisection calling itself with new arguments, over and over, happens until the tolerance eps is met.

You can first issue trace(bisection) before calling it with example inputs, to see more printed detail of what it does when it runs. See the ?trace help-page.
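
For illustration, here is a minimal sketch of that recursive structure. This is a generic version, not your actual procedure -- the names f, a, b, c, and eps just follow the description above, and the product test f(c)*f(a) > 0 stands in for whatever sign logic you have:

> bisection := proc(f, a, b, eps)
>   local c;
>   c := (a + b)/2;
>   if abs(b - a) < eps then              # tolerance eps is met: stop recursing
>     return c;
>   elif f(c)*f(a) > 0 then               # f(c) has the same sign as f(a)
>     return bisection(f, c, b, eps);     # replace a by c
>   else
>     return bisection(f, a, c, eps);     # replace b by c
>   end if;
> end proc:

> trace(bisection):                       # prints each recursive call as it runs
> bisection(x -> x^2 - 2, 1.0, 2.0, 1e-6);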

acer

yankyank (with its extra simplify) isn't actually needed here; yank will do. Indeed, if you hit it with a final combine then you get pretty much what I posted above, i.e. I did what yank does.

acer


I am often surprised that floating-point accuracy (as opposed to working precision!) does not come up here on MaplePrimes as a topic of intense discussion.

Routines such as evalf/Int and Optimization's exports have numerical tolerance parameters. And "atomic" arithmetic operations, single calls to trig functions, and some special functions compute results accurate to within so many ulps. But other than that, for compound expressions, all bets are off. The expression might be badly conditioned for floating-point evaluation.

Now, Maple has Digits as a control over working precision, but it has no more general specifier of requested accuracy. Compare with Mathematica, which claims to have both. So, in Maple one must either do the analysis or raise Digits to some problem-specific mystery value. The evalr command (and its less well-known friend, shake) is not really strong enough to help much. An argument that I have sometimes heard against making any progress in this area is that it's an open field of study, and partial, fuzzily-bounded coverage is to be avoided. If we all accepted that sort of thinking all the time, Maple might have normal but no radnormal, evala/Normal, or simplify/trig, let alone simplify proper.
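
As a small illustration of the distinction, here is an example of my own choosing (not tied to any particular thread), picked for its catastrophic cancellation:

> x := 1e-8:
> Digits := 10:
> evalf( (1 - cos(x))/x^2 );    # returns 0. -- cancellation destroys every digit
> Digits := 30:
> evalf( (1 - cos(x))/x^2 );    # returns a value very near 0.5, the correct result

The answer at Digits=10 is not merely imprecise; it is completely wrong, and nothing in the output warns you of that.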

acer


This is shorter than what I had before, but still multiplies and divides by exp(t). Of course, it also does ln-of-exp as another "arbitrary" operation-with-its-inverse.

> ee:=ln(cosh(t)):

> combine(simplify(expand(
>   ln(exp(t)*expand(exp(-t)*(exp(convert(ee,exp)))))
>                         ))) assuming real;
                          ln(1/2 + 1/2 exp(-2 t)) + t

> # Or, using 1/exp(t), to show no reliance, at least, upon that particular exp "form".
> combine(simplify(expand(
>   ln(exp(t)*expand(1/exp(t)*(exp(convert(ee,exp)))))
>                         ))) assuming real;
                          ln(1/2 + 1/2 exp(-2 t)) + t

acer


You've started with lots of the interesting bits already done, such as gracefully pulling out that exp(t) factor while keeping exp(-2*t) instead of 1/exp(t)^2.

So, one can do it by first multiplying by exp(-t), then multiplying shortly after by exp(t). But I consider that a kludge since it's not fully automatic -- it sorta depends on fortuitously choosing that particular factor of exp(t) to introduce. And still there can be the problem of multiplying through by the exp(-t) and getting 1/exp(2*t) rather than exp(-2*t).

Below, the intermediate conversion back to trigh is there merely to allow the exp(-t) factor to survive multiplication by exp(t). Quite ugly, all told.

> ee:=ln(cosh(t)):

> exp(t)*convert(exp(-t)*(exp(convert(ee,exp))),trigh);
                                     2
                      exp(t) (cosh(t)  - sinh(t) cosh(t))
 
> combine(simplify(convert(expand(ln(%)),exp))) assuming real;
                          t + ln(1/2 + 1/2 exp(-2 t))

acer


Hi Doug,

Thanks for the interesting write-up.

I was wondering what you might be able to report about two items that you have yourself mentioned here in the past.

The first relates to the inability to download .mw files using Microsoft Outlook's webmail access, which you reported as a serious restriction for your university's use of Maple in courses.

The second relates to a post you made about using Maplets vs. Embedded Components, in which you mentioned the clear lack of programmatic control of Embedded Components.

May I ask, did you bring up these subjects?

acer

What I was trying to get at before is that testing whether the difference of two expressions is zero is a more practical general strategy than trying to simplify one form into the other with a general manipulation rule.

Nobody is going to argue -- sensibly, I suppose -- about the "simplest" form of an expression that is exactly equal to zero. So rather than wondering what the "simplest" form of either the RHS or the LHS of your (here, trigonometric) equation is, you can instead subtract them and test for zero.
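
For instance, with a generic trigonometric identity of my own choosing, just to show the pattern:

> eq := sin(t)^2 = 1 - cos(t)^2:
> simplify( lhs(eq) - rhs(eq) );    # 0 confirms the two sides agree
                                  0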

acer


For fun, and to show how very many ways there are,

<Matrix(5,shape=scalar[1])>;

The extra <..> is to get a Matrix without any indexing function.
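
A quick way to see that, assuming I have MatrixOptions' behavior right (M and N are just my own names here):

> M := Matrix(5, shape=scalar[1]):
> MatrixOptions(M, shape);    # reports the scalar[1] indexing function
> N := <M>:
> MatrixOptions(N, shape);    # reports no indexing function at all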

Sorry, it hasn't been clear to me whether you wanted f(i,i) or merely introduced `f` as some way to try to get the value 1 along the diagonal.

acer


That is not quite right. It's not just a question of replacing the _Inert_LOCALSEQ call. The locals have to be changed from _Inert_NAME to _Inert_LOCAL, and the relevant argument changed from a string to an ordinal. And subsop(2=...) doesn't serve, as that is only allowed when the number of locals stays the same, I believe.
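
For reference, a small experiment (a toy procedure of my own, distinct from the procedure f mentioned below) that makes the distinction visible:

> g := proc(x) local y; y := x + z; y end proc:
> lprint( ToInert(eval(g)) );
> # In the printed inert form the declared local is listed inside _Inert_LOCALSEQ,
> # but each use of it in the body is _Inert_LOCAL(1) -- an ordinal reference --
> # while the global z appears as _Inert_NAME("z").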

(Perhaps see my code above, or try your code on my example procedure f therein.)

It seems that Doug's guess was correct in any event, and that instead of blowing lexical scoping out of the water the OP merely wanted to suppress the warnings due to automatic/implicit local declaration. I guess that using a term like "variable" in a Maple discussion is a bit like using the term "germ" in a conversation with an epidemiologist or microbiologist.

acer
