acer

32722 Reputation

29 Badges

20 years, 86 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

@fzfbrd

Change your,

CrssOfVds := VDS -> ArrayInterpolation(Vds, Crss, V)

to,

CrssOfVds := VDS -> ArrayInterpolation(Vds, Crss, VDS)

And change,

Q := evalf(Int(CrssOfVds, Vdson .. Charge_Vds))

to,

Q := evalf(Int(CrssOfVds, Vdson .. Charge_Vds, epsilon = 0.1e-6))

That gets me a result of 2.96204101072752*10^(-9).

I found that at your default working precision of Digits=10 the epsilon tolerance for the evalf/Int call had to be at least about 1e-8. You can experiment with the working precision and that epsilon tolerance. (I don't know how accurate you want the result to be.)

Alternatively you could (and probably should) change that call to ArrayInterpolation to include the method=spline option. (See my Answer below.) That allows a result to be attained even without using the epsilon option of evalf/Int, at default Digits=10.

I.e., on my 64-bit Linux version of Maple 2015.0 I am seeing,

CrssOfVds := VDS -> ArrayInterpolation(Vds, Crss, VDS, method = spline):

Q := evalf(Int(CrssOfVds, Vdson .. Charge_Vds));

                 2.96479729742643*10^(-9)

Note that there is a distinction between obtaining a highly accurate numeric estimation of a crude (linear, say) interpolation, versus obtaining a numeric estimation of a higher degree (and likely more correct) interpolation. There's not much point in obtaining many digits of a crude linear interpolation. You'd be better off with a modest number of accurate digits of a higher degree (better fitting) interpolation. Hope that makes sense.

But of course if you are working with experimental data that is likely subject to noise then it may well be that only a few digits can even be meaningful. This might also be a good time to mention the distinction between interpolating the data and smoothing (or fitting) it. The interpolation schemes discussed here will pass directly through the data points. But experimental data may merely approximate some physical process, in which case an interpolation which passes directly through the data points may not actually give the best approximation of the area under the abstract curve represented by the actual physical process. It may turn out that a smoothing of the data (of which a numeric "fit" is one example) gives a curve whose area better represents that of the actual physical process. In practice the difference may be indiscernible, up to the degree of noise present. Sorry if this is all obvious. You may well be (already) only expecting a few decimal digits to be trusted in any numerical estimate.

Here is my edited worksheet. (...you'll have to change back the definition of ThisDocumentPath, to run it.) 

Fet_Cap_modif.mw

I have seen a similar problem when I have a running worksheet that is minimized to the desktop tray (MS-Windows), and if I have the GUI set up to always ask whether to use a new or shared kernel for each new opened document.

In this situation, when I double-click the Maple launch icon the kernel-query-popup can be present and waiting but suppressed from view.

In this situation I can sometimes simply hit the Enter key and so clear the waiting query-popup.

acer

How did you obtain the Arrays of values, btw?

I ask because if you obtained them from calling dsolve(...,numeric) then we can often do better than ArrayInterpolation (or any other after-the-fact quadrature approach) by using dsolve itself. That flavour of this question comes up quite a bit.

The choice of most efficient method will depend on how many elements there are in your Arrays, and how many times you need to compute a numeric integral. For example, a spline interpolation approach (from CurveFitting, say) can get somewhat slow as the Array length gets very large, partly because of the cost of forming the piecewise (once) and partly on account of the cost of evaluating the piecewise (each time).
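As a sketch of the dsolve idea (using a made-up system purely for illustration): if the data came from dsolve/numeric then the integral can be computed alongside the solution by augmenting the system with an extra unknown, letting dsolve control the accuracy rather than interpolating stored data after the fact.

```
# Hypothetical sketch: Q(t) accumulates the integral of y(t) as the
# system is solved, so no after-the-fact quadrature is needed.
sol := dsolve({diff(y(t), t) = -y(t), y(0) = 1,
               diff(Q(t), t) = y(t), Q(0) = 0}, numeric):
sol(2.0);  # the Q(2.0) entry approximates Int(y(t), t = 0 .. 2)
```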

acer

@Carl Love Please pardon me while I make a clarifying comment, just in the interest of being clear to the OP:

The first element of the solution returned by DirectSearch:-DataFit is the minimal achieved value of the objective function used for the particular fitting method. And different fitting methods may use a different objective formula as their respective measure.

So it is not generally sensible to compare results obtained from the various fitting methods simply by comparing the magnitude of the first element in each returned solution. The minimal values from different objectives, evaluated at their different optimal parameter points, are not directly comparable. So one cannot just pick the solution which gives the smallest first element.

It thus becomes up to the user to decide what measure of optimality (i.e. what objective) may produce a "better" fit.

How does your supervisor feel about an interactive solution using Embedded Components?

acer

@Carl Love Yes, that kind of deferment (userinfo, etc) is part of the unfortunate way that Document Blocks are implemented (the pair of execution groups, one with output suppressed and the other with input hidden...).

But the OP mentioned printf, and if I recall correctly that particular kind of i/o can usually be obtained asynchronously even in a Document Block. So that's one reason why I asked for more details.

[edit] I must correct myself. In a Document Block it seems that even printf display is deferred. dbdefer.mw

However, the OP has now clarified: this is about a Worksheet, so the above is not relevant.

More details might help here. What OS? Is this in a Worksheet and an Execution Group or a Document and a paragraph (Document Block)?

How much output are we talking about? Very many short-ish lines?

Is there any natural amount of output which you actually would like to see, in a block? Or do you want every line? Or only one last line is useful?

Do you have a sample of code that demonstrates the problem, that we can work with?

Have you tried sprintf and redirecting that to a TextArea embedded component? (Just an idea... may not be suitable here, depending on the details above.)
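A minimal sketch of that sprintf idea, assuming (hypothetically) that the document contains a TextArea component named "TextArea0":

```
# Hypothetical: format the output as a string and push it into an
# embedded TextArea component named "TextArea0".
str := sprintf("step %d: residual = %g", 5, 1.2e-3):
DocumentTools:-SetProperty("TextArea0", value, str);
```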

acer

So your problem is with the final call to `solve`, when epsilon is greater than 0?

If so, then you could try using `fsolve` (repeatedly, with the avoid option built up) or `DirectSearch` (a 3rd-party add-on from the Application Center) rather than `solve`.
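For example, here is a sketch of building up the avoid option across repeated `fsolve` calls, using a made-up transcendental equation purely for illustration:

```
# Illustrative equation with several real roots; each fsolve call
# avoids the roots already found.
expr := sin(x) - x/4:
r1 := fsolve(expr, x):
r2 := fsolve(expr, x, avoid = {x = r1}):
r3 := fsolve(expr, x, avoid = {x = r1, x = r2}):
```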

Let us know if you need all the roots, or just one real root, or all the real roots, etc.

acer

assumptions, internal remember tables... if the OP wants a clean slate then it seems sensible to restart. The premise that a restart should be avoided because package initialization is onerous seems faulty to me.

If you are loading packages using the mouse and the menubar Tools->Load Package... then I suggest that instead you make the package loading be done explicitly by code in the Document/Worksheet.

You can even paste all the package loading commands (calls to the `with` command) on the same line, so that you can execute them all in one keystroke.
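For example (with some common packages chosen purely for illustration), a single execution group containing:

```
with(plots): with(LinearAlgebra): with(CurveFitting):
```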

You can even make the Standard GUI insert the actual command for you, the first time you load the packages in the worksheet. See the menubar item Tools->Options->Display and the checkbox "Expose commands inserted from Load/Unload Package menus". If you check that then subsequent use of Tools->Load Package from the menubar will embed the `with` command call in the document.

acer

@MTata94 You can define the "subroutine" inside the outer procedure, but you don't have to.

One disadvantage of defining the subroutine `q` inside the body of the calling procedure `p` is that it incurs the cost of defining `q` each time `p` is called. That cost may be negligible for simple examples, though.

For example, here I define q at the top-level, alongside p. Note that `q` is no longer declared local inside p.

restart:

q:=proc(s) s^2 end proc:

p:=proc(x::numeric)
  local k,r;
  r:=x;
  for k from 1 to 5 do
    r:=q(r)
  end do;
  evalf(sin(r))
end proc:

p(2);

                                -0.4619865795

If you wish to organize things, and so manage the name-space a little nicer, you can also make use of modules. For example,

restart:

p:=module()
  local ModuleApply, q;
  ModuleApply:=proc(x::numeric)
  local k,r;
  r:=x;
  for k from 1 to 5 do
    r:=q(r)
  end do;
  evalf(sin(r))
  end proc:
  q:=proc(s) s^2 end proc:
end module:

p(2);
                                -0.4619865795

q(2); # note that now q is not assigned at the higher level.
                                    q(2)

See the Programming Manual ([1], [2]) for more details and examples.

@Markiyan Hirnyk Just to be clear, I meant as b->1 from below, while d->infinity.

@Markiyan Hirnyk Are you sure about the case of b < -1 ? Why not b/(b-1)?

And for the case of b>=-1 and b<0, why not undefined?

@dbachito Another way is to keep the Spline data (your e3) exact and differentiate the piecewise Spline before applying evalf. That works, but depending on the original function the process can become very slow as N gets large.
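A sketch of that approach, with made-up exact data standing in for the OP's e3:

```
# Illustrative exact data points; keeping them rational keeps the
# spline exact, so it can be differentiated before any evalf.
data := [[0, 0], [1, 2], [2, 3], [3, 1]]:
S := CurveFitting:-Spline(data, v):   # exact piecewise spline in v
dS := diff(S, v):                     # differentiate symbolically
evalf(eval(dS, v = 3/2));             # only now apply evalf
```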

There are three real roots close to 0.085786 which give difficulty. Note that applying evalf to P before passing to fsolve will actually solve a different problem if Digits is not large enough to encapsulate the original exact data. And there is also the matter of the accuracy of the results.

The following is on Maple 2015.1 on a 64bit Linux system. With Digits<21 as the working precision I see at least two of those three problematic roots as being returned with nonzero and nonnegligible imaginary parts, when sending evalf(P) to fsolve. And with Digits=21 I see those particular three real roots' estimates as still having about 1e-9 absolute error. And when repeated (clean session or carefully with forget(evalf)) at even higher working precision those three roots get computed this way with about 15 fewer accurate digits than the others.

This is indeed a bug. Internally the candidate approximations to the problematic roots should be found wanting earlier (the bug is that `fsolve/refine2` is being passed candidates with Float(undefined) in them) and skipped so that `fsolve/polyill` can tackle them more robustly. I submitted a bug report.

(I see also that robust RootFinding:-Isolate is never used when the complex option is passed. But it might help in such situations, perhaps making life easier on `fsolve/polyill`.)
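For reference, a sketch of calling RootFinding:-Isolate on a simple made-up polynomial; it returns certified approximations of the real roots at a requested number of digits:

```
# Illustrative cubic with one real root; Isolate certifies the
# isolation of each real root at the requested precision.
RootFinding:-Isolate(x^3 - 2*x - 5, x, digits = 20);
```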
