## dharr

Dr. David Harrington

University of Victoria
Professor or university staff


I am a professor of chemistry at the University of Victoria, BC, Canada, where my research areas are electrochemistry and surface science. I have been a user of Maple since about 1990.

## Bespoke approximation...

Simple and reasonably accurate

 > restart;
 > f := u -> RootOf(8*_Z^4 + 12*u*_Z^3 + (5*u^2 - 4)*_Z^2 - 4*u*_Z - u^2): f0 := 1/sqrt(2): finf := 1/sqrt(5): Df0 := -1/4:
 > fapprox := (f0-finf)*exp(-x/8)/(1+(5/6)*x) + finf; # 5/6 could be 0.837 for D(fapprox)(0) = -0.25002
 > evalf(eval(diff(fapprox,x),x=0));
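As a quick numerical cross-check of the worksheet above (a Python sketch of the same formula; the constants are the ones defined there):

```python
import math

f0 = 1 / math.sqrt(2)    # value at u = 0 (root of 8*z^4 - 4*z^2)
finf = 1 / math.sqrt(5)  # limit at infinity (leading terms give 5*z^2 - 1 = 0)
c1 = 5 / 6               # replacing 5/6 by 0.837 moves the slope at 0 to -0.25002

def fapprox(x):
    return (f0 - finf) * math.exp(-x / 8) / (1 + c1 * x) + finf

print(fapprox(0.0))   # exactly f0 by construction
print(fapprox(1e6))   # -> finf
h = 1e-6
print((fapprox(h) - fapprox(-h)) / (2 * h))  # (f0 - finf)*(-(1/8) - c1) ~ -0.249
```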

 > plot([f(x),fapprox],x=0..20,0..0.8,color=[red,blue]);

Maximum relative error from 0 to infinity is < 2%.

 > plot((fapprox-f(x))/f(x),x=0..20);

 >

@MaPal93 I remain confused about what you actually want:

1. a simple function (a few parameters) that gives good agreement for the function at 0 and infinity and for the derivative at 0, but is not necessarily that accurate.

2. an accurate approximation over a limited range.

3. an accurate approximation over the full range.

My objective was (1), because you asked for a 1-line function and you seemed to want to interpret it easily. That was also what I did in that paper. But now you are comparing as though you want (2), which you did ask for at some point and @acer gave a nice answer to. For (3) you can just use the function itself.

Since the comparison you provided is for (2), a fair comparison requires the same number of parameters, which is 8 here in each case (though maybe 9 for the rational function; I'm not sure). (That's more than I was originally thinking of for (1).) Now plot them out to 200 and you will see that mine is OK at infinity (by design), but has the oscillation. The second one is a polynomial and will become large at infinity. The rational function can go to a constant value at infinity if the degrees of the numerator and denominator polynomials are the same, and @acer chose [4,4], presumably for this reason. As @acer pointed out, the rational functions are usually better than the polynomials anyway, and you see that here.

The numapprox routines are focussed on accuracy over the whole range and they do well for that. They should also do well for derivatives at zero since they are based on some series expansions around that point. I'm assuming they are based on evaluating the function at evenly spaced points across the chosen interval, or some criterion that treats the whole interval evenly. If you get to choose the interpolation points then you can do better than if they are evenly spaced. In Gaussian quadrature, you optimize this, so (from memory) an (n/2)-degree polynomial with optimally chosen points can do as well as an n-degree polynomial at evenly spaced points. This is not quadrature, so I don't think the Laguerre points are optimal, but the basic idea is that they should spread out with more near the origin (where the function is changing) and fewer at large values (where it isn't changing much). But that again is assuming you want to approximate out to infinity. And then why not (3)?

## DirectSearch...

@MaPal93 Nonlinear equations can be tricky. DirectSearch, an external package from the Maple Applications Centre, can solve this. You need some interpolation points closer to the origin, and then spreading out with fewer later where the function is featureless. One possibility is to use roots of Laguerre polynomials, which are spread out in this way, and are used in Gauss-Laguerre quadrature for functions on a semi-infinite range; see the end of the file. But that is just a guess.
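To see how Laguerre roots spread out (many near the origin, few far out), here is a small self-contained sketch (plain Python rather than Maple, using the three-term recurrence plus bisection; degree 5 is just an illustrative choice):

```python
def laguerre(n, x):
    # three-term recurrence: (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}
    p0, p1 = 1.0, 1.0 - x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1 - x) * p1 - k * p0) / (k + 1)
    return p1

def laguerre_roots(n, steps=20000):
    # all n roots of L_n lie in (0, 4n+2); bracket sign changes, then bisect
    lo, hi = 0.0, 4.0 * n + 2.0
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if laguerre(n, a) * laguerre(n, b) < 0:
            for _ in range(60):
                m = (a + b) / 2
                if laguerre(n, a) * laguerre(n, m) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
    return roots

# degree-5 nodes: more points near the origin, fewer far out
print(laguerre_roots(5))  # ~[0.2636, 1.4134, 3.5964, 7.0858, 12.6408]
```

These are the same nodes Gauss-Laguerre quadrature uses, which is why they are a natural guess for interpolation points on a semi-infinite range.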

Approx_new_DS.mw

## misc...

@MaPal93 That's a nice fit (0.2%) over that range. Df0 looks good. If you want more than the probe accuracy, you can use numapprox:-infnorm. You have oscillations outside the cutoff. If you want it to work over a longer range, you will have to spread the interpolation points out. High-degree polynomials can oscillate - you can likely remove the oscillations with a lower degree polynomial and still get reasonable accuracy. But maybe it is fine as it is.

Approx_new.mw

1. Should I consider looking into [15,16,17] to try and find a similarly accurate and simple approximation for my function? Would that be challenging yet worthy? Would you help?
Those refs are about fitting to experimental data and are not relevant. I developed that approximation by playing around with series and asympt and some intuition. In fact, a referee asked me to explain how I had "derived it", but I was unable to answer that question, and so only the vague description is in the paper. For your case the series/asympt expressions are complicated, so I don't think they will help; another approach will be required. Note that the approximation was not that accurate (errors up to 4%); the errors only had to be small compared to the experimental errors. I'll take a look to see if anything is obvious.
2. You also mention "relative errors (with respect to Eq. (2))" and "systematic error in the parameters can be estimated by individually varying the parameters to find the minimum in the residual sum of squares". I think it could be interesting to quantify the errors for my approximation as well.
The relative error is just from a plot of (fapprox-fexact)/fexact. The other errors are related to the fit to experimental data.
3. Talking about interpolation instead, you mention "Exact value and derivative at zero preclude any of the things that fit (as opposed to interpolate) arbitrary functions unless they are carefully designed not to disrupt the exact values" and "c1*x term will mess up the derivative at zero". Which replacement term would preserve the derivative at 0?
The simplest would be to replace the Df0 exponential decay constant with a parameter a, differentiate the whole thing and set its value at zero equal to Df0 as another equation to be solved.
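Concretely, with the fapprox form used earlier: replace the 1/8 decay constant by a parameter a and impose the slope condition at 0, which in this case can be solved in closed form (a Python sketch; keeping the 5/6 coefficient fixed is an assumption):

```python
import math

f0, finf, Df0 = 1 / math.sqrt(2), 1 / math.sqrt(5), -0.25
c1 = 5 / 6  # the fixed coefficient from the fapprox form above

# slope of (f0 - finf)*exp(-a*x)/(1 + c1*x) + finf at x = 0 is
# (f0 - finf)*(-a - c1); setting it equal to Df0 gives a directly
a = -Df0 / (f0 - finf) - c1

def fapprox(x):
    return (f0 - finf) * math.exp(-a * x) / (1 + c1 * x) + finf

h = 1e-6
print(a)                                     # ~0.1286, close to the original 1/8
print((fapprox(h) - fapprox(-h)) / (2 * h))  # ~ -0.25 = Df0 by construction
```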

## nice...

@Preben Alsholm Vote up. Nice analysis.

## approximations...

@MaPal93 That is more or less what I meant, but you interpolated the polynomial, not the overall function fapprox. However, I should have given it a bit more thought before suggesting it. I suggested the exp form with P(x) = 1 because I wanted something that has the correct behaviour at infinity. I don't like the cutoff idea for that reason, and like to hack things that work well at both ends (for a real example see Eq. (13) in doi:10.1016/j.elecom.2019.106566). I also wanted a good derivative at 0. Exact value and derivative at zero preclude any of the things that fit (as opposed to interpolate) arbitrary functions unless they are carefully designed not to disrupt the exact values. Other methods work hard to match values and derivatives at one point (like the numapprox routines with polynomials) but need a cutoff. Since a decaying exponential dominates a polynomial at infinity, I suggested multiplying the exp by a polynomial. I made the constant 1 to keep the value at 0 correct, but forgot that the c1*x term will mess up the derivative at zero.

So there are tradeoffs about what you want to work well, the simplicity of the function and the ease of implementation.

## fdiff and evalf/D...

@acer But evalf(D(f)(1e-9)); and fdiff(f, [1], [1e-9]); gave very different values; it seems more is going on here.

1. Just change MakeFunction to unapply and it should work in older versions.

2. Correct.

3. Divide the expression by sigma__v*gamma, then it becomes D(f)(x)+f(x)/x, where x= Gamma. Plot this for x=0..10 and it is seen to be always positive, i.e., the second term is always larger than the first.

4. Consider (f(0) - f(infinity))*exp(D(f)(0)*x)*P(x) + f(infinity). For P(x) = 1, this has the correct values at 0 and infinity and the correct slope at 0. It's not great. But write P(x) = 1 + b*x + c*x^2. If you evaluate this at two x values and set it equal to the numerical value of f at those values, you get two equations in b and c that fsolve can solve. This might be better. You can always add more terms to the polynomial and make it fit at more places. It might improve.

Edit: I think you want something simple like the above function, so you can see what is going on? But the derivatives are unlikely to be accurate, and oscillations can occur with interpolation. So you can do much better as @acer showed, but then why not just evaluate the function (or its derivative) numerically?
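To make point 4 concrete: since b and c enter linearly, the two interpolation conditions form a 2x2 linear system (a Python sketch; the interpolation points x1, x2 are arbitrary choices, and the relevant root is found by bisection instead of Maple's RootOf):

```python
import math

def fexact(u):
    # the relevant root of 8 z^4 + 12 u z^3 + (5 u^2 - 4) z^2 - 4 u z - u^2:
    # the polynomial is negative near z = 0+ and equals 4 (1 + u)^2 > 0 at z = 1,
    # so bisection on (0, 1] finds the branch running from 1/sqrt(2) to 1/sqrt(5)
    def p(z):
        return 8*z**4 + 12*u*z**3 + (5*u**2 - 4)*z**2 - 4*u*z - u**2
    a, b = 0.01, 1.0
    for _ in range(80):
        m = (a + b) / 2
        if p(a) * p(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

f0, finf, Df0 = 1 / math.sqrt(2), 1 / math.sqrt(5), -0.25
x1, x2 = 2.0, 8.0  # arbitrary interpolation points

# (f0 - finf)*exp(Df0*x)*(1 + b*x + c*x**2) + finf = fexact(x) rearranges to
# b*x + c*x**2 = rhs(x), which is linear in b and c
def rhs(x):
    return (fexact(x) - finf) / ((f0 - finf) * math.exp(Df0 * x)) - 1

# solve the 2x2 system by Cramer's rule
det = x1 * x2**2 - x2 * x1**2
b = (rhs(x1) * x2**2 - rhs(x2) * x1**2) / det
c = (x1 * rhs(x2) - x2 * rhs(x1)) / det

def g(x):
    return (f0 - finf) * math.exp(Df0 * x) * (1 + b*x + c*x**2) + finf

print(g(x1) - fexact(x1), g(x2) - fexact(x2))  # ~0 at the interpolation points
```

Adding more polynomial terms just enlarges the linear system in the same way.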

## gtetrahedron...

geom3d:-gtetrahedron can be used to make this solid.

## conductances and resistances...

@Mike Mc Dermott Here's one way that one might think about this, but you run into the same problem. If the rules of the game are that you are not allowed to know the network structure, then I think you cannot easily do this.

 >

Take an equivalent resistance expression.

 >

Define the conductances corresponding to the resistances

 >

Is this a parallel connection, i.e., a sum of terms involving conductances? First convert to an admittance in terms of conductances - we find four parallel connections (not the two that we know are the right answer if we are allowed to know the network structure).

 >

Take the first term and go back to resistances - find 4 series connections, but two of them are not going to simplify further

 >

What if we tried a series connection first, i.e., a sum of terms involving resistances?

 >

Convert first term to admittance and conductance - we have the same problem.

 >

 >
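Since the worksheet input is hidden here, the resistance/conductance duality being used can be sketched with exact rational arithmetic (a Python illustration with hypothetical resistor values, not the Maple worksheet's actual code):

```python
from fractions import Fraction as F

def series(*rs):
    # series resistances add
    return sum(rs)

def parallel(*rs):
    # parallel: conductances (1/R) add, then convert back to a resistance
    return 1 / sum(1 / r for r in rs)

R1, R2, R3, R4 = map(F, (1, 2, 3, 4))  # hypothetical values

# resistance view: two series branches in parallel
Req = parallel(series(R1, R2), series(R3, R4))

# admittance view: the same network as a sum of conductances
G = 1 / series(R1, R2) + 1 / series(R3, R4)

print(Req, 1 / G)  # both give 21/10
```

The two views agree numerically, but going from an unstructured symbolic expression back to one of these forms is exactly the hard step described above.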

## Knowing the geometry......

@Mike Mc Dermott Once you go from the network/graph to the effective resistance, it's hard to go back, unless you have a well-defined step-by-step algorithm.

On the other hand, if you are allowed to know about the graph, then it is easier. Is the expression (R1 + R2)||(R3 + R4) at the beginning of the second example deliberately the resistance of the first network without RB, or could it potentially have something to do with some completely different network?

If the main part of the exercise is combining series and parallel resistors successively to simplify a network, that could be done something like the following hack, which is intended just to give the flavor of what might be done. But I don't know how you would finish off the second example, because n3/d3 doesn't seem obviously related to a known network.

See mapleprimes

First example:

I1 0 1
R1 1 2
R2 2 0
R3 1 3
R4 3 0

 >
 >

From Carl Love:

 >

Construct graph from nodes and resistances (edges). Assume all edges have different resistance names.

 >

Find equiv resistance (routine in startup code)

 >

Simplify series connections (pairwise for now). Find vertices of degree 2 that aren't the in or out vertices

 >

Eliminate these vertices and edges and add new edges with resistance the sum of these.
(DeleteEdges not properly handled here for a multigraph.)

 >

Update the graph - the 2 indicates 2 parallel edges

 >

Now parallel cases.
Just do this one edge manually

 >

 >
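The series/parallel reduction steps above can be sketched outside Maple as well (a Python illustration of the same pairwise rules, with hypothetical resistor values; the Maple worksheet's actual routines are not shown here):

```python
from collections import defaultdict
from fractions import Fraction as F

def reduce_network(edges, s, t):
    """edges: list of (u, v, R); apply series/parallel rules until none fires."""
    changed = True
    while changed:
        changed = False
        # parallel rule: duplicate (u, v) pairs combine as R1*R2/(R1+R2)
        seen, out = {}, []
        for u, v, r in edges:
            key = frozenset((u, v))
            if key in seen:
                i = seen[key]
                ru = out[i][2]
                out[i] = (out[i][0], out[i][1], ru * r / (ru + r))
                changed = True
            else:
                seen[key] = len(out)
                out.append((u, v, r))
        edges = out
        # series rule: a non-terminal vertex of degree 2 merges its two edges
        deg = defaultdict(list)
        for i, (u, v, r) in enumerate(edges):
            deg[u].append(i)
            deg[v].append(i)
        for n, idxs in deg.items():
            if n not in (s, t) and len(idxs) == 2:
                i, j = idxs
                (u1, v1, r1), (u2, v2, r2) = edges[i], edges[j]
                a = u1 if v1 == n else v1
                b = u2 if v2 == n else v2
                if a != b:  # leave parallel pairs through n to the parallel rule
                    edges = [e for k, e in enumerate(edges) if k not in (i, j)]
                    edges.append((a, b, r1 + r2))
                    changed = True
                    break
    return edges

# first example netlist: current source between nodes 0 and 1
R1, R2, R3, R4 = map(F, (1, 2, 3, 4))  # hypothetical values
net = [(1, 2, R1), (2, 0, R2), (1, 3, R3), (3, 0, R4)]
print(reduce_network(net, 0, 1))  # one edge left: (R1+R2)||(R3+R4) = 21/10
```

For the second example this gets stuck, just as described: RB makes the bridge neither series nor parallel reducible.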

Second example:
I1 0 1
R1 1 2
R2 2 0
R3 1 3
R4 3 0
RB 2 3

 >

Find equiv resistance (routine in startup code)

 >

Divide by first example

 >

 >

This limits to 1, but the numerator and denominator each tend to infinity, so the coefficients of RB in the numerator and denominator must be the same

 >

 >

 >

 >
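The equivalent-resistance routine from the startup code isn't shown, but one standard way to compute it is nodal analysis on the netlist, which also lets the RB limit be checked numerically (a Python sketch with exact fractions and hypothetical values):

```python
from fractions import Fraction as F

def req_bridge(R1, R2, R3, R4, RB):
    # nodal analysis for the second netlist: unit current source from 0 to 1,
    # node 0 grounded; unknown node voltages v1, v2, v3; Req = v1
    g1, g2, g3, g4, gB = (1 / F(R) for R in (R1, R2, R3, R4, RB))
    A = [[g1 + g3, -g1, -g3],
         [-g1, g1 + g2 + gB, -gB],
         [-g3, -gB, g3 + g4 + gB]]
    b = [F(1), F(0), F(0)]
    # exact Gaussian elimination with fractions
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            b[j] -= f * b[i]
    v = [F(0)] * 3
    for i in (2, 1, 0):
        v[i] = (b[i] - sum(A[i][k] * v[k] for k in range(i + 1, 3))) / A[i][i]
    return v[0]

# hypothetical values; RB -> infinity recovers (R1+R2)||(R3+R4) = 21/10,
# while RB -> 0 merges nodes 2 and 3, giving (R1||R3) + (R2||R4) = 25/12
print(req_bridge(1, 2, 3, 4, 10**9) / F(21, 10))  # ratio -> 1 as RB grows
print(req_bridge(1, 2, 3, 4, F(1, 10**9)))        # -> 25/12
```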

## agreed...

@MaPal93 Good catch!

## some tests...

@MaPal93 You should be able to rely on limits directly from RootOf, and it is disappointing that some are not correct. Converting to radicals is only possible in some cases and if it is possible, the expression may be too complicated to get the limit, as you found. In general, Maple is not always reliable in limits of complicated expressions, so one can always do a numerical check. So it looks like various tests show the correct limits in the end - in one case the limit at infinity is a constant even if it looked initially like zero.

caseAcaseB.mw

## comment...

@MaPal93 I agree you can't tell much from those "instabilities", especially on infinity plots where successive x-axis values can be quite different. The plot routine uses hardware precision and the radical expression is complicated, so at extreme values higher accuracy (high Digits) may be needed for good numerics.
